Aatricks/llmedge

An Android-native AI inference library that brings GGUF models and Stable Diffusion inference to Android devices, powered by llama.cpp and stable-diffusion.cpp.

Quality score: 45 / 100 (Emerging)

Integrates **Whisper.cpp** and **Bark.cpp** for speech inference, with optional GPU acceleration via OpenCL/Vulkan backends; provides native KV-cache optimization and streaming text generation via JNI bindings. Supports multimodal workflows including on-device RAG with PDF indexing, vision models (LLaVA-style), Stable Diffusion with LoRA, and video generation (Wan 2.1), all coordinated through the instance-based `LLMEdge` facade for explicit resource management.
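To make the facade pattern concrete, here is a minimal, self-contained Kotlin sketch of what an instance-based session with streaming token callbacks and explicit resource management might look like. This is purely illustrative: `LlmSession`, `generate`, and `collectTokens` are hypothetical names invented for this sketch, not llmedge's real API, and the "generation" is faked by splitting the prompt.

```kotlin
import java.io.Closeable

// Hypothetical stand-in for a JNI-backed inference facade. In a real library
// this would hold a native handle to the loaded GGUF model and its KV cache.
class LlmSession(private val modelPath: String) : Closeable {
    private var loaded = true

    // Streams generated tokens to the callback as they are produced.
    // Here generation is faked by echoing the prompt word by word.
    fun generate(prompt: String, onToken: (String) -> Unit) {
        check(loaded) { "session already closed" }
        prompt.split(" ").forEach { onToken(it) }
    }

    override fun close() {
        // A real implementation would free the native model and KV cache here.
        loaded = false
    }
}

// Collects streamed tokens into a list; `use` guarantees close() runs,
// mirroring the explicit resource management the facade is described as providing.
fun collectTokens(prompt: String): List<String> {
    val tokens = mutableListOf<String>()
    LlmSession("model.gguf").use { session ->
        session.generate(prompt) { tokens.add(it) }
    }
    return tokens
}

fun main() {
    println(collectTokens("hello on-device world").joinToString(" "))
}
```

The key design point the description implies is that resource lifetime is owned by an instance the caller must release, rather than by global state, which is why `Closeable` plus Kotlin's `use` block fits naturally.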

No package published; no dependents.

Maintenance: 13 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 10 / 25


Stars: 37
Forks: 4
Language: Kotlin
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Aatricks/llmedge"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.