xororz/local-dream
Run Stable Diffusion on Android Devices with Snapdragon NPU acceleration. Also supports CPU/GPU inference.
Leverages the Qualcomm QNN SDK for W8A16 quantized NPU inference on Snapdragon chips, while MNN provides flexible CPU/GPU fallbacks with dynamic W8 quantization across multiple resolutions. Supports txt2img, img2img, and inpainting with custom SD1.5 model imports, LoRA weights, prompt emphasis syntax (Automatic1111-compatible), and built-in upscalers. Users can convert their own models via the included NPU conversion guide or download pre-quantized versions from HuggingFace.
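The prompt emphasis syntax mentioned above follows Automatic1111 conventions: `(text)` boosts attention by 1.1x, `[text]` reduces it by 0.9x, and `(text:1.5)` sets an explicit weight. A minimal Python sketch of how such a prompt could be tokenized into (text, weight) pairs; this is an illustration only, not local-dream's actual implementation (the app is written in Kotlin), and it ignores nesting:

```python
import re

# Flat (non-nested) Automatic1111-style emphasis, illustration only:
#   (text:1.5) -> explicit weight 1.5
#   (text)     -> weight 1.1
#   [text]     -> weight 0.9
TOKEN = re.compile(
    r"\(([^():]+):([0-9.]+)\)"   # (text:weight)
    r"|\(([^()]+)\)"             # (text)
    r"|\[([^\[\]]+)\]"           # [text]
    r"|([^()\[\]]+)"             # plain text
)

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    """Return (text, weight) pairs for a flat, non-nested prompt."""
    out = []
    for m in TOKEN.finditer(prompt):
        if m.group(1):                   # explicit weight
            out.append((m.group(1), float(m.group(2))))
        elif m.group(3):                 # parentheses boost
            out.append((m.group(3), 1.1))
        elif m.group(4):                 # brackets de-emphasis
            out.append((m.group(4), 0.9))
        elif m.group(5).strip():         # plain text, default weight
            out.append((m.group(5).strip(), 1.0))
    return out

print(parse_emphasis("a (cozy:1.3) cabin, [blurry] background"))
# -> [('a', 1.0), ('cozy', 1.3), ('cabin,', 1.0), ('blurry', 0.9), ('background', 1.0)]
```

Real implementations also handle nested parentheses (multiplying 1.1 per level) and escaped brackets, which are omitted here for brevity.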
1,831 stars. Actively maintained with 2 commits in the last 30 days.
Stars: 1,831
Forks: 114
Language: Kotlin
License: —
Category: diffusion
Last pushed: Mar 05, 2026
Commits (30d): 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xororz/local-dream"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
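The curl example above can also be scripted. A minimal Python sketch using only the standard library; the URL path layout is taken from the curl command, but the response's JSON field names are not documented here, so the payload is returned as-is rather than unpacked into assumed keys:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo, mirroring the curl example
    (category/owner/repo path segments)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repo. No API key is
    needed for up to 100 requests/day."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example usage (performs a network request):
#   data = fetch_quality("diffusion", "xororz", "local-dream")
#   print(json.dumps(data, indent=2))
```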
Related models
leejet/stable-diffusion.cpp
Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++
xlite-dev/lite.ai.toolkit
🛠 A lite C++ AI toolkit: 100+ models with MNN, ORT and TRT, including Det, Seg, Stable-Diffusion,...
MochiDiffusion/MochiDiffusion
Run Stable Diffusion on Mac natively
ssube/onnx-web
web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
DarthAffe/StableDiffusion.NET
C# Wrapper for StableDiffusion.cpp