G-U-N/Phased-Consistency-Model

[NeurIPS 2024] Boosting the performance of consistency models with PCM!

Quality score: 37 / 100 (Emerging)

Phased Consistency Models (PCM) partition the diffusion ODE trajectory into sub-trajectories, enabling efficient multi-step image generation with 2-4 inference steps while maintaining flexibility for classifier-free guidance and negative prompt conditioning. The approach distills pre-trained diffusion models (SD 1.5, SDXL, SD3) into lightweight LoRA adapters, addressing limitations of prior work (LCM) including stochastic sampling error accumulation and insensitivity to guidance parameters. Training requires O(N) objectives rather than CTM's O(N²), making it more practical for adapting existing text-to-image models on HuggingFace.
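The sub-trajectory partition described above can be sketched in a few lines. This is an illustrative sketch only: equal-width phases are an assumption, and `sample`/`model` are hypothetical placeholders, not the repository's actual API.

```python
def phase_boundaries(num_train_steps=1000, num_phases=4):
    # Split the full diffusion timestep range [0, T] into `num_phases`
    # sub-trajectories; each phase has its own consistency target at the
    # sub-trajectory's endpoint (equal-width split assumed for illustration).
    edges = [round(i * num_train_steps / num_phases) for i in range(num_phases + 1)]
    return list(zip(edges[:-1], edges[1:]))

def sample(model, x_T, boundaries):
    # Hypothetical multi-step sampler: within each sub-trajectory, the
    # distilled consistency model jumps deterministically to that phase's
    # endpoint, avoiding the stochastic re-noising between steps that
    # accumulates error in LCM-style multi-step sampling.
    x = x_T
    for t_lo, t_hi in reversed(boundaries):
        x = model(x, t_hi, t_lo)  # `model` is a placeholder for the distilled PCM
    return x

phase_boundaries(1000, 4)  # -> [(0, 250), (250, 500), (500, 750), (750, 1000)]
```

With `num_phases=1` this degenerates to a standard one-step consistency model; 2-4 phases trade a few extra steps for quality, matching the inference budget quoted above.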

514 stars. No commits in the last 6 months.

Flags: Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 514
Forks: 19
Language: Python
License: Apache-2.0
Last pushed: Dec 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/G-U-N/Phased-Consistency-Model"

Open to everyone: 100 requests/day with no key; get a free key for 1,000/day.
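For scripted access, the same endpoint can be called from Python. The path layout below is read directly off the curl example above; the response shape and any key-passing mechanism are not documented here, so the fetch line is left commented as an assumption.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Mirror the curl example: /api/v1/quality/<category>/<owner>/<repo>
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("diffusion", "G-U-N", "Phased-Consistency-Model")
# data = json.load(urllib.request.urlopen(url))  # uncomment to fetch (100 req/day without a key)
```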