puneethkotha/Falcon
Production ML inference platform. Multi-worker · Nginx load balancing · idempotency · exponential backoff · Prometheus metrics. Reduced p95 latency by 30%.
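The description lists exponential backoff among the platform's features. As a generic illustration of that technique (not Falcon's actual retry code, which this page does not show), a minimal sketch:

```python
# Generic sketch of exponential backoff with optional full jitter.
# Illustrative only -- Falcon's own retry parameters may differ.
import random


def backoff_delays(retries: int, base: float = 0.5, cap: float = 30.0,
                   jitter: bool = False):
    """Yield the sleep duration (seconds) before each retry attempt."""
    for attempt in range(retries):
        delay = min(cap, base * 2 ** attempt)  # double per attempt, capped
        # Full jitter spreads retries out to avoid thundering herds.
        yield random.uniform(0, delay) if jitter else delay


# Without jitter the delays double until the cap:
# list(backoff_delays(5)) -> [0.5, 1.0, 2.0, 4.0, 8.0]
```

In practice the jittered variant is preferred for clients retrying against a shared service, since synchronized retries can overload it.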
Stars: —
Forks: —
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Mar 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/puneethkotha/Falcon"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
basetenlabs/truss
The simplest way to serve AI/ML models in production
tensorflow/serving
A flexible, high-performance serving system for machine learning models
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution