stable-diffusion-webui and web-stable-diffusion

These are complements serving different deployment contexts: the first is a locally hosted server application with a traditional web UI, while the second is a client-side, browser-based alternative that eliminates the server requirement by running inference directly in the browser via WebGPU and a WebAssembly/JavaScript runtime.

|                | stable-diffusion-webui   | web-stable-diffusion     |
| -------------- | ------------------------ | ------------------------ |
| Score          | 60 (Established)         | 56 (Established)         |
| Maintenance    | 10/25                    | 0/25                     |
| Adoption       | 10/25                    | 13/25                    |
| Maturity       | 16/25                    | 25/25                    |
| Community      | 24/25                    | 18/25                    |
| Stars          | 161,689                  | 3,714                    |
| Forks          | 30,155                   | 233                      |
| Downloads      |                          | 17                       |
| Commits (30d)  | 0                        | 0                        |
| Language       | Python                   | Jupyter Notebook         |
| License        | AGPL-3.0                 | Apache-2.0               |
| Flags          | No Package, No Dependents | Stale 6m, No Dependents |

About stable-diffusion-webui

AUTOMATIC1111/stable-diffusion-webui

Stable Diffusion web UI

Built on Gradio, this interface supports advanced generation techniques including inpainting, outpainting, prompt editing mid-generation, and textual inversion embeddings—all trainable on consumer GPUs. It integrates multiple post-processing models (GFPGAN, RealESRGAN, LDSR) for upscaling and face restoration, plus an API for programmatic access. The architecture includes cross-model checkpoint merging, hypernetworks, LoRAs, and composable multi-prompt generation with weighted control over attention and token sequences.
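The programmatic access mentioned above goes through the web UI's built-in REST API. A minimal sketch, assuming a local instance launched with the `--api` flag at the default address (`127.0.0.1:7860`); the `build_txt2img_payload` helper and `TinyBlock`-style defaults are illustrative, not part of the project:

```python
# Sketch of calling the stable-diffusion-webui REST API from Python.
# Assumes a local instance started with the --api flag; field names follow
# the /sdapi/v1/txt2img endpoint. Adjust API_URL to your setup.
import base64
import json
from urllib import request

API_URL = "http://127.0.0.1:7860"  # default local address (assumption)

def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> dict:
    """Assemble the JSON body for a txt2img generation request."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(prompt: str, **kwargs) -> bytes:
    """POST a generation request and decode the first returned image (base64 PNG)."""
    body = json.dumps(build_txt2img_payload(prompt, **kwargs)).encode()
    req = request.Request(f"{API_URL}/sdapi/v1/txt2img", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        images = json.load(resp)["images"]
    return base64.b64decode(images[0])

# Example usage against a running instance (note the weighted-attention
# syntax "(sunset:1.3)" in the prompt):
#   png = txt2img("a watercolor lighthouse, (sunset:1.3)", steps=25)
#   open("out.png", "wb").write(png)
```

The payload builder is separated from the network call so request construction can be tested and extended (e.g. with sampler or seed fields) without a running server.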

About web-stable-diffusion

mlc-ai/web-stable-diffusion

Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support.

Leverages Apache TVM Unity's machine learning compilation to transform Hugging Face Stable Diffusion models through TorchDynamo/Torch FX capture, then automatically optimizes them using TensorIR and MetaSchedule for WebGPU execution. The workflow enables Python-first model development that compiles to WebAssembly and JavaScript runtimes, with careful memory management to fit large diffusion models within browser constraints. Integrates with PyTorch, Hugging Face diffusers, Rust tokenizers, and WebGPU for hardware-accelerated inference across diverse client environments.
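The first stage of that pipeline, capturing a PyTorch module as a compiler-friendly graph, can be sketched with Torch FX. This is a minimal illustration only: `TinyBlock` is a hypothetical stand-in for a diffusion sub-module, and the subsequent TVM Unity/TensorIR optimization and WebGPU codegen steps are omitted:

```python
# Minimal sketch of graph capture with Torch FX. In the real pipeline, such
# captured graphs are handed to TVM Unity for TensorIR/MetaSchedule
# optimization and WebGPU code generation; only the capture step is shown.
import torch
import torch.fx as fx

class TinyBlock(torch.nn.Module):
    """Hypothetical stand-in for a small diffusion sub-module."""
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(8, 8)

    def forward(self, x):
        # Residual connection, a pattern ubiquitous in diffusion UNets.
        return torch.relu(self.proj(x)) + x

# symbolic_trace records the forward pass as a graph of operations,
# the intermediate representation a compiler pass can inspect and rewrite.
graph_module = fx.symbolic_trace(TinyBlock())
print(graph_module.graph)
```

The captured `GraphModule` still runs like the original module, which is what lets a Python-first workflow hand the same model to an ahead-of-time compiler targeting WebAssembly and WebGPU.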

Scores updated daily from GitHub, PyPI, and npm data.