ziqihuangg/Collaborative-Diffusion
[CVPR 2023] Collaborative Diffusion
Employs a "dynamic diffuser" architecture that learns spatial-temporal influence functions to selectively weight contributions from pre-trained uni-modal diffusion models, enabling coordinated multi-modal control (text, segmentation masks, sketches) during the reverse diffusion process. Supports both face generation from multi-modal conditions and editing of real images while preserving identity, with implementations at 256×256 and 512×512 resolutions built on latent diffusion. Compatible with enhancement techniques such as FreeU and integrates with the PyTorch / Hugging Face transformers ecosystem.
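To make the weighting idea concrete, below is a minimal PyTorch sketch of how a dynamic diffuser could produce per-pixel, per-timestep influence maps that blend noise predictions from several pre-trained uni-modal models during the reverse process. The module layout, layer sizes, the softmax normalization, and the `collaborative_noise_prediction` helper are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicDiffuser(nn.Module):
    """Hypothetical sketch: predicts a per-pixel influence logit for one
    uni-modal collaborator, conditioned on the noisy latent and the timestep."""
    def __init__(self, in_channels: int = 4, cond_dim: int = 64):
        super().__init__()
        self.time_embed = nn.Linear(1, cond_dim)
        self.net = nn.Sequential(
            nn.Conv2d(in_channels + cond_dim, 32, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # one influence logit per spatial location
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast a simple timestep embedding over the spatial grid.
        emb = self.time_embed(t.float().view(-1, 1))            # (B, cond_dim)
        emb = emb[:, :, None, None].expand(-1, -1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, emb], dim=1))           # (B, 1, H, W)

def collaborative_noise_prediction(x_t, t, collaborators, diffusers, conditions):
    """Weight each pre-trained uni-modal model's noise prediction by its
    normalized influence map (illustrative of the collaboration idea;
    collaborators are assumed callables m(x_t, t, cond) -> (B, C, H, W))."""
    logits = torch.cat([d(x_t, t) for d in diffusers], dim=1)   # (B, M, H, W)
    weights = F.softmax(logits, dim=1)                          # normalize across modalities
    eps = torch.stack(
        [m(x_t, t, c) for m, c in zip(collaborators, conditions)], dim=1
    )                                                           # (B, M, C, H, W)
    return (weights.unsqueeze(2) * eps).sum(dim=1)              # (B, C, H, W)
```

Normalizing the influence maps across collaborators at every denoising step is what lets different modalities dominate different spatial regions and stages of the reverse process; the softmax here simply stands in for whatever normalization the repository uses.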
438 stars. No commits in the last 6 months.
Stars: 438
Forks: 38
Language: Python
License: —
Category: —
Last pushed: Oct 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ziqihuangg/Collaborative-Diffusion"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
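For programmatic access, a Python equivalent of the curl call might look like the following (assuming the endpoint returns JSON):

```python
import requests

# Fetch the repo quality data from the public endpoint shown above.
# No API key is required for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/ziqihuangg/Collaborative-Diffusion"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # response format is assumed to be JSON
```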
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...