ziqihuangg/Collaborative-Diffusion

[CVPR 2023] Collaborative Diffusion

Score: 43 / 100 (Emerging)

Employs a "dynamic diffuser" architecture that learns spatially and temporally varying influence functions to weight the contributions of pre-trained uni-modal diffusion models at each step of the reverse diffusion process, enabling coordinated multi-modal control (text, segmentation masks, sketches). Supports both face generation from multi-modal conditions and identity-preserving editing of real images, with 256×256 and 512×512 implementations built on latent diffusion. Compatible with enhancement techniques such as FreeU and integrates with the PyTorch and Hugging Face Transformers ecosystem.
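The weighting idea described above can be sketched in a few lines: at every spatial location, a softmax over per-model logits produces influence maps that sum to one, so the fused noise prediction is a convex combination of the uni-modal predictions. This is a minimal illustration, not the repository's implementation; the function names, NumPy arrays, and pixel-space shapes are assumptions (the actual method operates on latents, with influence functions that also depend on the timestep).

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = 0) -> np.ndarray:
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def collaborative_step(eps_preds: np.ndarray, influence_logits: np.ndarray) -> np.ndarray:
    """Fuse noise predictions from M uni-modal diffusion models.

    eps_preds:        (M, H, W) noise predictions, one per modality.
    influence_logits: (M, H, W) logits from a (hypothetical) dynamic diffuser.

    A softmax across the M models at each pixel yields influence maps that
    sum to one, so every spatial location receives a convex combination of
    the uni-modal predictions.
    """
    weights = softmax(influence_logits, axis=0)   # (M, H, W), sums to 1 over M
    return (weights * eps_preds).sum(axis=0)      # (H, W) fused prediction
```

Because the logits would themselves be predicted from the noisy sample and timestep, the balance between modalities can shift over the course of sampling (e.g., masks dominating early layout, text refining details later).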

438 stars. No commits in the last 6 months.

Stale (6 months) · No package · No dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 438
Forks: 38
Language: Python
License:
Last pushed: Oct 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ziqihuangg/Collaborative-Diffusion"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
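The same endpoint can be called from Python with only the standard library. The path layout (ecosystem/owner/repo) is inferred from the URL shown above, and the response schema is not documented here, so any field names in the returned JSON are unknown; this sketch only builds the URL and decodes the body.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (path layout inferred)."""
    return f"{API}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and JSON-decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("diffusion", "ziqihuangg", "Collaborative-Diffusion")
```

Calling `fetch_quality(...)` performs the same request as the curl command; inspect the returned dict to discover the actual field names.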