wolverinn/stable-diffusion-multi-user

Stable Diffusion multi-user Django server with multi-GPU load balancing

Quality score: 41 / 100 (Emerging)

Implements a distributed architecture separating GPU inference servers from a stateless load-balancing coordinator using Django and Apache, with request affinity to ensure multi-step generation cycles stay on the same GPU. Exposes full webUI-compatible APIs (txt2img, img2img, model switching, LoRA/Civitai support) with per-user request queuing and concurrent model instances on single GPUs. Provides multiple deployment options including self-hosted, Runpod Serverless with autoscaling, and Replicate integration.
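The request-affinity idea described above can be sketched as a sticky-routing rule: hash a session identifier to pick a backend, so every step of one generation cycle hits the same GPU. This is an illustrative sketch only, not the repo's actual coordinator code; the backend addresses and function names are assumptions.

```python
import hashlib

# Assumed backend list; in the real deployment these would be the
# configured GPU inference servers behind the coordinator.
GPU_BACKENDS = ["http://gpu-0:7860", "http://gpu-1:7860"]

def pick_backend(session_id: str) -> str:
    """Deterministically map a session to one GPU server so that all
    steps of a multi-step generation cycle (e.g. txt2img followed by
    img2img on the result) are served by the same backend."""
    digest = hashlib.sha256(session_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(GPU_BACKENDS)
    return GPU_BACKENDS[index]
```

Because the mapping is a pure function of the session id, a stateless coordinator needs no shared session store to keep affinity consistent across its own replicas.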

318 stars. No commits in the last 6 months.

Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 22 / 25


Stars: 318
Forks: 62
Language: Python
License: GPL-3.0
Last pushed: Mar 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wolverinn/stable-diffusion-multi-user"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
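For scripted access, the endpoint above can be wrapped in a small helper that builds the per-repository URL. A sketch, assuming the path layout `quality/<category>/<owner>/<repo>` inferred from the example URL (the `diffusion` category segment and the response schema are not documented here):

```python
# Base path taken from the example curl command above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for one repository.

    The category segment (e.g. "diffusion") is assumed from the
    example URL; other categories may or may not exist.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

# Fetching the data (requires network; response fields undocumented):
# import json, urllib.request
# url = quality_url("diffusion", "wolverinn", "stable-diffusion-multi-user")
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```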