A-SHOJAEI/adaptive-inference-router-with-cascade-serving
A research-grade adaptive inference router that uses multi-objective reinforcement learning to dynamically dispatch incoming requests across a cascade of model variants (quantized, pruned, distilled, full-precision), based on predicted query difficulty, SLA constraints, and real-time cluster load.
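The core idea can be sketched as a greedy cascade policy: pick the cheapest variant whose capability covers the predicted difficulty and whose load-adjusted cost still fits the latency budget. This is a minimal illustrative sketch, not code from the repository — all names (`ModelVariant`, `route`, the cost/capability numbers) are hypothetical, and the actual project learns this policy with reinforcement learning rather than hand-coding it.

```python
# Hypothetical sketch of difficulty-based cascade routing.
# All class/function names and numbers are illustrative assumptions,
# not taken from the adaptive-inference-router repository.
from dataclasses import dataclass


@dataclass
class ModelVariant:
    name: str
    cost: float        # relative latency/compute cost
    capability: float  # highest query difficulty it handles reliably


# Cascade ordered from cheapest (quantized) to most expensive (full precision).
CASCADE = [
    ModelVariant("int8-quantized", cost=1.0, capability=0.3),
    ModelVariant("pruned",         cost=2.0, capability=0.5),
    ModelVariant("distilled",      cost=3.0, capability=0.7),
    ModelVariant("full-precision", cost=8.0, capability=1.0),
]


def route(difficulty: float, latency_budget: float, load: float) -> ModelVariant:
    """Return the cheapest variant that can handle the predicted difficulty
    within the latency budget, penalizing cost under high cluster load."""
    for variant in CASCADE:
        effective_cost = variant.cost * (1.0 + load)  # crude queueing penalty
        if variant.capability >= difficulty and effective_cost <= latency_budget:
            return variant
    return CASCADE[-1]  # no variant fits: fall back to the strongest model


easy = route(difficulty=0.2, latency_budget=4.0, load=0.5)
hard = route(difficulty=0.9, latency_budget=20.0, load=0.1)
```

A learned router would replace the fixed thresholds with a policy trained against multiple objectives (accuracy, latency, cost), but the dispatch structure stays the same.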
Stars: —
Forks: —
Language: Python
License: —
Category:
Last pushed: Feb 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/A-SHOJAEI/adaptive-inference-router-with-cascade-serving"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
basetenlabs/truss
The simplest way to serve AI/ML models in production
Lightning-AI/LitServe
A minimal Python framework for building custom AI inference servers with full control over...
deepjavalibrary/djl-serving
A universal scalable machine learning model deployment solution
labmlai/labml
🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱