sochaty/llm-governance-engine
A robust LLM Governance & ROI Evaluation platform designed to benchmark frontier models against local open-source models. Built with an enterprise microservices architecture and cloud-ready for Kubernetes, this tool helps organizations optimize AI spend by calculating the accuracy-vs-cost tradeoff of local versus cloud inference.
Stars: —
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/sochaty/llm-governance-engine"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
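The curl command above can also be issued programmatically. Below is a minimal Python sketch using only the standard library; the endpoint path is taken from the page, but the shape of the JSON response is an assumption (it is not documented here), so the fetch helper simply decodes whatever JSON object comes back.

```python
import json
import urllib.request

# Base path as shown in the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"

def quality_endpoint(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    NOTE: the response schema is an assumption; inspect the
    returned dict before relying on specific keys.
    """
    with urllib.request.urlopen(quality_endpoint(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_endpoint("sochaty", "llm-governance-engine"))
```

Without an API key this is subject to the 100 requests/day limit mentioned above, so cache responses rather than polling.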
Higher-rated alternatives
kserve/kserve: Standardized Distributed Generative and Predictive AI Inference Platform for Scalable,...
omegaml/omegaml: MLOps simplified. One-stop AI delivery platform, all the features you need.
awslabs/aiops-modules: AIOps modules is a collection of reusable Infrastructure as Code (IaC) modules for Machine...
GoogleCloudDataproc/dataproc-ml-python: Library to simplify running distributed ML workloads with Apache Spark
jina-ai/serve: ☁️ Build multimodal AI applications with cloud-native stack