CuzImSlymi/Apertis-LLM
Apertis LLM. Clean. Fast. Built Different. Custom LLM architecture designed to be dead simple, insanely efficient, and easy to run—even without monster GPUs. Powered by Selective Linear Attention, Adaptive Experts, and a Unified Multimodal Core. No BS, just raw performance you can actually use.
No commits in the last 6 months.
Stars: 16
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Aug 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/CuzImSlymi/Apertis-LLM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
- thu-pacman/chitu: High-performance inference framework for large language models, focusing on efficiency,...
- NotPunchnox/rkllama: Ollama alternative for Rockchip NPU: An efficient solution for running AI and Deep learning...
- sophgo/LLM-TPU: Run generative AI models in sophgo BM1684X/BM1688
- Deep-Spark/DeepSparkHub: DeepSparkHub selects hundreds of application algorithms and models, covering various fields of...
- tomdyson/microllama: The smallest possible LLM API