kwonminki/One-sentence_Diffusion_summary

The repo for studying and sharing diffusion models.

Quality score: 33 / 100 (Emerging)

Provides curated summaries of recent diffusion model papers with technical implementation details—covering text-to-image generation, image editing via inversion and exemplar guidance, video synthesis, and architectural innovations like ControlNet for conditional control. Organized chronologically with Korean explanations of key mechanisms (e.g., CLIP space direction manipulation, cross-attention layer-specific prompting, temporal shift blocks for video). Targets the computer vision community studying diffusion-based generation and editing workflows.

429 stars. No commits in the last 6 months.

Flags: No License · Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 429
Forks: 36
Language: —
License: —
Last pushed: Aug 08, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/kwonminki/One-sentence_Diffusion_summary"

Open to everyone: 100 requests/day with no key required; a free API key raises the limit to 1,000/day.
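The same lookup can be scripted. Below is a minimal Python sketch that builds the endpoint URL shown above from its path segments (category, owner, repo) and fetches it with the standard library; the URL pattern is taken from the page, but the response schema is not documented here, so the parsed JSON is left uninterpreted.

```python
import json
import urllib.request

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("diffusion", "kwonminki", "One-sentence_Diffusion_summary")

# Anonymous access is limited to 100 requests/day (per the page).
# Uncomment to perform the request; the JSON structure is an assumption
# and should be inspected before relying on specific fields.
# data = json.load(urllib.request.urlopen(url))
```

A key, once obtained, would presumably be passed as a header or query parameter; the page does not specify the mechanism, so that detail is omitted here.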