wolverinn/stable-diffusion-multi-user
Stable Diffusion multi-user Django server code with multi-GPU load balancing
Implements a distributed architecture separating GPU inference servers from a stateless load-balancing coordinator using Django and Apache, with request affinity to ensure multi-step generation cycles stay on the same GPU. Exposes full webUI-compatible APIs (txt2img, img2img, model switching, LoRA/Civitai support) with per-user request queuing and concurrent model instances on single GPUs. Provides multiple deployment options including self-hosted, Runpod Serverless with autoscaling, and Replicate integration.
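The request-affinity routing described above can be sketched as a sticky-session load balancer: new sessions are assigned to GPU backends round-robin, and later requests from the same session go to the same backend so a multi-step generation cycle stays on one GPU. This is a minimal illustrative sketch, not the repository's actual implementation; the class and backend names are hypothetical.

```python
import itertools


class GpuLoadBalancer:
    """Hypothetical sketch: round-robin dispatch with sticky session affinity.

    New session ids get the next backend in round-robin order; repeat
    requests for a known session id are routed to the backend already
    assigned to it, so multi-step generation stays on one GPU.
    """

    def __init__(self, backends):
        self.backends = list(backends)
        self._rr = itertools.cycle(self.backends)   # round-robin iterator
        self._affinity = {}                         # session_id -> backend

    def route(self, session_id):
        backend = self._affinity.get(session_id)
        if backend is None:
            backend = next(self._rr)                # first request: assign next GPU
            self._affinity[session_id] = backend    # remember for affinity
        return backend


# Example: two hypothetical webUI backends behind the coordinator
lb = GpuLoadBalancer(["http://gpu-0:7860", "http://gpu-1:7860"])
first = lb.route("user-42")
assert lb.route("user-42") == first  # later steps hit the same GPU
```

A real coordinator would also need health checks and a way to expire affinity entries when a session ends; the sketch only shows the routing invariant.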
318 stars. No commits in the last 6 months.
Stars: 318
Forks: 62
Language: Python
License: GPL-3.0
Last pushed: Mar 14, 2024
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wolverinn/stable-diffusion-multi-user"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
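The same data can be fetched from Python. The URL pattern below is inferred from the single curl example above; whether other category or repo paths exist is an assumption, and the `quality_api_url` helper is hypothetical.

```python
def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build the quality API URL (pattern inferred from the curl example)."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"


url = quality_api_url("diffusion", "wolverinn", "stable-diffusion-multi-user")
# Fetch it with, e.g.: requests.get(url, timeout=10).json()
print(url)
```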
Higher-rated alternatives
LykosAI/StabilityMatrix
Multi-Platform Package Manager for Stable Diffusion
AbdBarho/stable-diffusion-webui-docker
Easy Docker setup for Stable Diffusion with user-friendly UI
ashleykleynhans/stable-diffusion-docker
Docker image for Stable Diffusion WebUI with ControlNet, After Detailer, Dreambooth, Deforum and...
runpod-workers/worker-a1111
Automatic1111 serverless worker.
eleiton/ollama-intel-arc
Make use of Intel Arc Series GPU to Run Ollama, StableDiffusion, Whisper and Open WebUI, for...