ptimizeroracle/ondine
The LLM Dataset Engine — batch-process millions of rows with 100+ providers. Multi-row batching (100x fewer calls), prefix caching (40-50% savings), cost control, checkpointing.
Available on PyPI.
Stars: 4
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 14, 2026
Commits (30d): 0
Dependencies: 16
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/ptimizeroracle/ondine"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
kubeflow/katib
Automated Machine Learning on Kubernetes
kubeai-project/kubeai
AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports...
sgl-project/rbg
A workload for deploying LLM inference services on Kubernetes
beam-cloud/beta9
Ultrafast serverless GPU inference, sandboxes, and background jobs
scitix/arks
Arks is a cloud-native inference framework running on Kubernetes