G-B-KEVIN-ARJUN/runtime-inference
"Faster AI: Accelerating Qwen 2.5 from 7 t/s to 82 t/s on a single RTX 4060 using Llama.cpp and ONNX" a comparative analysis of LLM inference runtimes (PyTorch, ONNX, Llama.cpp) on consumer hardware. Benchmarking throughput, latency, and quantization trade-offs to optimize local deployment.
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/G-B-KEVIN-ARJUN/runtime-inference"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
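The same request from Python, as a minimal sketch (it assumes the endpoint returns JSON; how an API key is passed for the higher-rate tier is not documented here, so only the keyless tier is shown):

```python
import requests

# Same endpoint as the curl example above; no API key required on the free tier.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/G-B-KEVIN-ARJUN/runtime-inference"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())
```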
Higher-rated alternatives
vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang: SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN: MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference: Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero: TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...