Aryan-202/cookbooks
An intelligent optimization engine that dynamically adjusts LLM selection, context size, and token limits based on real-time hardware telemetry to maximize inference efficiency and prevent resource bottlenecks.
Stars: —
Forks: 4
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Aryan-202/cookbooks"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
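If you query this endpoint for several repositories, you can build the URL programmatically. This is a minimal sketch that assumes the `/api/v1/quality/transformers/{owner}/{repo}` path pattern shown in the curl example above generalizes to other owner/repo pairs; the service's actual routing is not documented here.

```python
# Base path taken from the curl example above; the owner/repo pattern
# is an assumption, not documented behavior of the service.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

print(quality_url("Aryan-202", "cookbooks"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/Aryan-202/cookbooks
```

Pass the resulting URL to `curl` or any HTTP client; within the free tier, no authentication header is needed.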
Higher-rated alternatives
radlab-dev-group/llm-router
LLM Router is a service that can be deployed on‑premises or in the cloud. It adds a layer...
yonahgraphics/openevalkit
Production-grade Python framework for evaluating LLM and agentic systems with traditional...
squishai/squish
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama...
wesleyscholl/squish
🤖🗜️⚡️ Compress local LLMs once, run them forever at sub-second load times. OpenAI + Ollama...
Yu-amd/Multiverse
Lightweight model inference playground