jthiruveedula/llmops-evaluation-framework
Production LLMOps platform with automated evaluation, A/B testing, prompt versioning, cost tracking, and model drift detection for enterprise GenAI deployments.
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/jthiruveedula/llmops-evaluation-framework"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
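As a sketch, the same lookup can be scripted in Python, the repository's own language. The base path is taken from the curl command above; the helper names `quality_url` and `fetch_quality` are hypothetical, and the response's JSON schema is not documented here, so the code only builds and fetches the URL without assuming any fields:

```python
from urllib.parse import quote
from urllib.request import urlopen  # used only by the optional fetch helper

# Endpoint path copied from the curl example; "quality_url" and
# "fetch_quality" are illustrative helper names, not part of the API.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> bytes:
    """Fetch the raw payload (subject to the 100 requests/day free limit)."""
    with urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return resp.read()

print(quality_url("jthiruveedula", "llmops-evaluation-framework"))
```

Calling `fetch_quality` counts against the daily quota, so building the URL is kept separate from performing the request.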
Higher-rated alternatives
kserve/kserve: Standardized Distributed Generative and Predictive AI Inference Platform for Scalable,...
omegaml/omegaml: MLOps simplified. One-stop AI delivery platform, all the features you need.
awslabs/aiops-modules: AIOps modules is a collection of reusable Infrastructure as Code (IaC) modules for Machine...
GoogleCloudDataproc/dataproc-ml-python: Library to simplify running distributed ML workloads with Apache Spark
jina-ai/serve: ☁️ Build multimodal AI applications with cloud-native stack