vicgalle/stable-diffusion-aesthetic-gradients
Personalization for Stable Diffusion via Aesthetic Gradients 🎨
Implements aesthetic gradients: during generation, the prompt's CLIP text embedding is optimized for a few gradient steps toward a frozen aesthetic embedding that encodes the desired visual characteristics, steering output style without prompt engineering. The number of optimization steps (`aesthetic_steps`) and the learning rate are tunable. Ships pre-computed embeddings derived from curated datasets (LAION, SAC, artist styles) and provides `gen_aesthetic_embedding.py` to build custom embeddings from arbitrary image collections.
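The two pieces described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the function names, the mean-of-features scheme for building an embedding, and the cosine-similarity ascent on the text features are assumptions about how such a method typically works.

```python
import numpy as np

def aesthetic_embedding(image_features):
    """Assumed scheme: a custom aesthetic embedding is the L2-normalized
    mean of CLIP image features extracted from a curated image collection
    (what gen_aesthetic_embedding.py is described as producing)."""
    e = image_features.mean(axis=0)
    return e / np.linalg.norm(e)

def personalize(z, e, steps=10, lr=0.1):
    """Sketch of the personalization step: gradient ascent on the cosine
    similarity between the prompt's CLIP text features z and a fixed,
    unit-norm aesthetic embedding e. `steps` plays the role of the
    tunable aesthetic_steps parameter."""
    z = z.astype(float).copy()
    for _ in range(steps):
        nz = np.linalg.norm(z)
        cos = float(z @ e) / nz          # |e| == 1 by construction
        z += lr * (e / nz - cos * z / nz ** 2)  # analytic d(cos)/dz
    return z
```

In the real implementation this optimization runs against the text encoder's output inside the diffusion pipeline (autograd rather than an analytic gradient), but the effect is the same: the conditioning vector drifts toward the aesthetic target before denoising proceeds.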
741 stars. No commits in the last 6 months.
Stars: 741
Forks: 62
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Oct 21, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/vicgalle/stable-diffusion-aesthetic-gradients"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
neggles/animatediff-cli
A CLI utility/library for AnimateDiff Stable Diffusion generation
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
SyntheticAutonomicMind/ALICE
Artificial Latent Image Composition Engine