adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
Enables efficient multi-concept customization by fine-tuning only the cross-attention key/value projections, reducing per-concept storage to 75MB while completing training in ~6 minutes on 2 A100 GPUs. Supports composing multiple learned concepts (objects, styles) through joint training or optimization-based weight merging. Integrated into the Hugging Face Diffusers library, with support for Stable Diffusion v1.4 and SDXL models.
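The description notes that only the cross-attention key/value projections are fine-tuned. A minimal sketch of how such parameters might be selected by name, assuming the Diffusers naming convention where cross-attention layers are called `attn2` with `to_k`/`to_v` projections (the helper function and sample names here are illustrative, not from the repo):

```python
# Sketch: select only cross-attention key/value projection parameters.
# Assumes Diffusers-style names: cross-attention = "attn2", projections
# = "to_k" / "to_v". The helper and the sample name list are hypothetical.

def select_trainable(param_names):
    """Return only the cross-attention key/value projection names."""
    return [n for n in param_names
            if "attn2.to_k" in n or "attn2.to_v" in n]

names = [
    "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.weight",
    "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.weight",
    "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_v.weight",
    "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_q.weight",
    "mid_block.attentions.0.transformer_blocks.0.attn2.to_v.weight",
]
trainable = select_trainable(names)
# Only the attn2.to_k / attn2.to_v entries remain; self-attention (attn1)
# and query projections (to_q) are excluded, which is why the per-concept
# checkpoint stays small.
```

In a real training loop, the same name filter would decide which parameters keep `requires_grad=True`.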
Stars: 1,971
Forks: 142
Language: Python
License: —
Category: —
Last pushed: Dec 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/adobe-research/custom-diffusion"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
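The curl command above follows a `/quality/{category}/{owner}/{repo}` pattern. A minimal Python sketch of building the same endpoint URL for any repo (the response schema is not documented here, so the actual fetch is left commented out):

```python
# Sketch: build the quality-endpoint URL shown in the curl example above.
# The path pattern /quality/{category}/{owner}/{repo} is inferred from
# that one example; the response format is not documented here.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Return the API URL for a repo's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("diffusion", "adobe-research", "custom-diffusion")
# To actually fetch (schema unknown, so parse defensively):
# import urllib.request, json
# data = json.load(urllib.request.urlopen(url))
```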
Related models
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...
HorizonWind2004/reconstruction-alignment
[ICLR 2026] Official repo of paper "Reconstruction Alignment Improves Unified Multimodal...