rikhil-amonkar/rl-llm-inference-optimizer
Reinforcement learning–driven optimizer for LLM-RAG inference that uses RAGAS evaluation and token/cost metrics to improve answer quality and efficiency.
Stars: 1
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Feb 22, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/rikhil-amonkar/rl-llm-inference-optimizer"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
LearningCircuit/local-deep-research
Local Deep Research achieves ~95% on SimpleQA benchmark (tested with GPT-4.1-mini). Supports...
NVIDIA-AI-Blueprints/rag
This NVIDIA RAG blueprint serves as a reference solution for a foundational Retrieval Augmented...
Denis2054/RAG-Driven-Generative-AI
This repository provides programs to build Retrieval Augmented Generation (RAG) code for...
0verL1nk/PaperSage
📚 AI-powered research reading workbench. Project-based paper Q&A with Hybrid RAG, multi-agent...
RapidFireAI/rapidfireai
RapidFire AI: Rapid AI Customization from RAG to Fine-Tuning