dilbersha/llm-inference-benchmarking-3080
A production-grade telemetry-aware suite for benchmarking LLM inference performance on NVIDIA RTX 3080.
Stars: 1
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/dilbersha/llm-inference-benchmarking-3080"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
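The curl command above can also be called from code. The sketch below is a minimal Python helper, assuming only what the example shows: the endpoint path format (`/api/v1/quality/transformers/{owner}/{repo}`) and that it returns JSON. The function names `quality_url` and `fetch_quality` are hypothetical, not part of any published client.

```python
# Hypothetical helper for the pt-edge quality API shown above.
# The endpoint path and JSON response are assumptions inferred from
# the example curl command; adjust if the real API differs.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("dilbersha", "llm-inference-benchmarking-3080"))
```

Unauthenticated use is limited to 100 requests/day, so batch lookups should be throttled or run with a key.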
Higher-rated alternatives
OpenNMT/CTranslate2: Fast inference engine for Transformer models
mechramc/Orion: Local AI runtime for training & running small LLMs directly on Apple Neural Engine (ANE). No...
Pomilon/LEMA: LEMA (Layer-wise Efficient Memory Abstraction): A hardware-aware framework for fine-tuning LLMs...
Yuan-ManX/infera: Infera, a high-performance inference engine for large language models.
gxcsoccer/alloy: Hybrid SSM-Attention language model on Apple Silicon with MLX, interleaving Mamba-2 and...