underneathall/pinferencia
Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
545 stars and 74 monthly downloads. No commits in the last 6 months. Available on PyPI.
Stars: 545
Forks: 82
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 14, 2023
Monthly downloads: 74
Commits (last 30 days): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/underneathall/pinferencia"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
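
The same request can be made from Python. The sketch below is only an illustration: it assumes the endpoint returns JSON and that an API key, if you have one, is sent as a bearer token; neither detail is documented on this page, so adjust it to the actual API behavior.

# Minimal sketch: fetch this project's quality data from the API above.
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/underneathall/pinferencia"

def fetch_quality_data(api_key: str | None = None) -> dict:
    """Call the quality endpoint; pass an API key for the higher rate limit."""
    # Assumption: the key is sent as a bearer token. The page does not say how
    # keys are passed, so confirm this against the API docs before relying on it.
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    response = requests.get(API_URL, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Unauthenticated call, subject to the 100 requests/day limit.
    print(fetch_quality_data())
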
Related models
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
sgl-project/sglang
SGLang is a high-performance serving framework for large language models and multimodal models.
alibaba/MNN
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering...
xorbitsai/inference
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source,...
tensorzero/tensorzero
TensorZero is an open-source stack for industrial-grade LLM applications. It unifies an LLM...