e1n00r/tinyserve
30 tok/s for 20B MoE on 8 GB VRAM. Flat throughput to 32K context. Native MXFP4 + GGUF Q4_K/Q5_K/Q6_K via ggml CUDA kernels — zero dequant. Expert offloading for models that don't fit in GPU memory.
Stars: 5
Forks: 2
Language: Python
License: MIT
Last pushed: Apr 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/e1n00r/tinyserve"
Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
Higher-rated alternatives
campfirein/byterover-cli
ByteRover CLI (brv) - The portable memory layer for autonomous coding agents (formerly Cipher)
mistralai/client-python
Python client library for Mistral AI platform
openai/openai-python
The official Python library for the OpenAI API
pydantic/pydantic
Data validation using Python type hints
milla-jovovich/mempalace
The highest-scoring AI memory system ever benchmarked. And it's free.