victordibia/peacasso
A UI for experimenting with multimodal (text, image) models (Stable Diffusion).
Peacasso provides both a web-based UI and a Python API for Stable Diffusion workflows, supporting text-to-image, image-to-image, and inpainting modes, along with latent space interpolation and capture of intermediate images during diffusion sampling. It is built on HuggingFace's Diffusers library with configurable model selection from the HuggingFace Hub, and its human-AI interaction design, informed by communication theory, aims to streamline experimentation with diffusion parameters.
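Peacasso wraps these operations behind its own Python API; rather than guess at its exact signatures, the sketch below shows the underlying text-to-image workflow with intermediate image capture written against Diffusers directly, which peacasso builds on. The model id, prompt, and the callback-based capture are illustrative assumptions; the callback/callback_steps arguments belong to the Diffusers 0.x releases contemporary with this repo (newer releases replace them with callback_on_step_end).

# Minimal sketch: text-to-image with intermediate image capture via
# HuggingFace Diffusers (the library peacasso builds on). Assumes a CUDA
# device and a Diffusers 0.x release that accepts callback/callback_steps.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative Hub model id; any compatible one works
    torch_dtype=torch.float16,
).to("cuda")

intermediates = []

def capture(step, timestep, latents):
    # Decode the current latents so the image can be inspected as it forms
    # during sampling, mirroring peacasso's intermediate-image capture.
    with torch.no_grad():
        decoded = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
    intermediates.append(decoded.cpu())

result = pipe(
    "a watercolor painting of a lighthouse at dusk",  # illustrative prompt
    num_inference_steps=50,
    guidance_scale=7.5,
    callback=capture,
    callback_steps=10,
)
result.images[0].save("lighthouse.png")

Image-to-image and inpainting follow the same pattern through StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline respectively.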
368 stars and 1,179 monthly downloads. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Stars: 368
Forks: 42
Language: Jupyter Notebook
License: MIT
Category: diffusion
Last pushed: Aug 16, 2023
Monthly downloads: 1,179
Commits (30d): 0
Dependencies: 10
Reverse dependents: 1
Get this data via API:
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/victordibia/peacasso"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
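The same endpoint can be queried from Python; a minimal sketch (the response schema is not documented here, so the payload is simply printed, and no API-key header is shown since its name is not given):

import requests

# Fetch the quality data for this package and print the JSON payload as-is.
url = "https://pt-edge.onrender.com/api/v1/quality/diffusion/victordibia/peacasso"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())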
Related models
sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
neggles/animatediff-cli
A CLI utility/library for AnimateDiff Stable Diffusion generation
carefree0910/carefree-drawboard
🎨 Infinite Drawboard in Python
Teriks/dgenerate
dgenerate is a scriptable command line tool (and library) for generating images and animation...
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model