anviit/llm-inference-serving
Production LLM inference stack — 28ms TTFT, 39 tok/s, 81% cache hit rate on a 6GB GPU
Stars: —
Forks: —
Language: Python
License: —
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API:

    curl "https://pt-edge.onrender.com/api/v1/quality/transformers/anviit/llm-inference-serving"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
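For programmatic use, the same request can be made from Python. The sketch below uses the requests library; the endpoint URL is copied from the curl example above, but the "X-API-Key" header name and the JSON response shape are assumptions, not documented behavior.

    import requests

    API_URL = (
        "https://pt-edge.onrender.com/api/v1/quality/"
        "transformers/anviit/llm-inference-serving"
    )

    def fetch_quality(api_key=None):
        """Fetch the quality record for this repo.

        Passing an API key raises the limit from 100 to 1,000 requests/day.
        The "X-API-Key" header name is a guess; check the API docs for the
        actual authentication scheme.
        """
        headers = {"X-API-Key": api_key} if api_key else {}
        resp = requests.get(API_URL, headers=headers, timeout=10)
        resp.raise_for_status()  # anonymous access is rate-limited
        return resp.json()

    if __name__ == "__main__":
        print(fetch_quality())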
Higher-rated alternatives
OpenNMT/CTranslate2
Fast inference engine for Transformer models
mechramc/Orion
Local AI runtime for training & running small LLMs directly on Apple Neural Engine (ANE). No...
Pomilon/LEMA
LEMA (Layer-wise Efficient Memory Abstraction): A hardware-aware framework for fine-tuning LLMs...
dilbersha/llm-inference-benchmarking-3080
A production-grade telemetry-aware suite for benchmarking LLM inference performance on NVIDIA RTX 3080.
Yuan-ManX/infera
Infera — A High-Performance Inference Engine for Large Language Models.