xororz/local-dream

Run Stable Diffusion on Android Devices with Snapdragon NPU acceleration. Also supports CPU/GPU inference.

Quality score: 56 / 100 (Established)

Leverages the Qualcomm QNN SDK for W8A16 quantized NPU inference on Snapdragon chips, while MNN powers flexible CPU/GPU fallbacks with dynamic W8 quantization across multiple resolutions. Supports txt2img, img2img, and inpainting with custom SD1.5 model imports, LoRA weights, prompt emphasis syntax (Automatic1111-compatible), and built-in upscalers. Users can convert their own models via the included NPU conversion guide or download pre-quantized versions from HuggingFace.
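To illustrate the W8A16 scheme mentioned above (8-bit weights, 16-bit activations), here is a minimal, hypothetical sketch of symmetric per-tensor quantization in plain Python. This is not local-dream's or QNN's actual code; the `quantize` helper and the dot-product rescaling are illustrative only.

```python
import random

def quantize(xs, n_bits):
    """Symmetric per-tensor quantization to signed n_bits integers.

    Returns the integer values and the float scale needed to recover
    approximate real values via q * scale.
    """
    qmax = 2 ** (n_bits - 1) - 1              # 127 for int8, 32767 for int16
    peak = max(abs(v) for v in xs)
    scale = peak / qmax if peak > 0 else 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in xs]
    return q, scale

random.seed(0)
w = [random.gauss(0, 1) for _ in range(256)]  # one row of a weight matrix
a = [random.gauss(0, 1) for _ in range(256)]  # an activation vector

qw, sw = quantize(w, 8)                       # W8: 8-bit weights
qa, sa = quantize(a, 16)                      # A16: 16-bit activations

# Integer dot product, rescaled to float only once at the end --
# the pattern that lets NPUs run the inner loop in integer arithmetic.
y = sum(qi * qj for qi, qj in zip(qw, qa)) * (sw * sa)
y_ref = sum(wi * ai for wi, ai in zip(w, a))
print(y, y_ref)
```

The 16-bit activations keep quantization error dominated by the 8-bit weights, so the integer result stays close to the float reference while the weights shrink to a quarter of their fp32 size.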

1,831 stars. Actively maintained with 2 commits in the last 30 days.

No package published; no dependents.
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 1,831
Forks: 114
Language: Kotlin
License: (not specified)
Last pushed: Mar 05, 2026
Commits (30d): 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xororz/local-dream"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.