TensorOpsAI/LLMstudio
Framework to bring LLM applications to production
Provides a unified proxy layer across OpenAI, Anthropic, and Google LLMs plus local models via Ollama, with smart routing and fallback mechanisms for reliability. Includes a web-based prompt playground UI, Python SDK, request monitoring/logging, and LangChain compatibility for seamless integration into existing projects. Supports batch calling and deploys as a server with separate proxy and tracker APIs.
371 stars and 563 monthly downloads. Available on PyPI.
Stars: 371
Forks: 39
Language: Python
License: MPL-2.0
Category: Prompt engineering
Last pushed: Feb 05, 2026
Monthly downloads: 563
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/TensorOpsAI/LLMstudio"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management,...
Arize-ai/phoenix
AI Observability & Evaluation
Mirascope/mirascope
The LLM Anti-Framework
Helicone/helicone
🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM...