ray-project/llm-applications
A comprehensive guide to building RAG-based LLM applications for production.
Covers scaling the end-to-end RAG pipeline (document loading, chunking, embedding, and indexing) with Ray's distributed compute framework. Implements hybrid LLM routing that dynamically selects between OpenAI and Anyscale-hosted open-source models, with built-in evaluation metrics for optimizing both component-level and end-to-end quality. Includes production deployment patterns with vector database integration and multi-GPU serving configurations.
1,853 stars. No commits in the last 6 months.
Stars: 1,853
Forks: 256
Language: Jupyter Notebook
License: CC-BY-4.0
Category:
Last pushed: Aug 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/ray-project/llm-applications"
Open to everyone: 100 requests/day with no API key required. A free key raises the limit to 1,000 requests/day.
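The same endpoint can be called from Python. A minimal sketch using only the standard library; the `repo_quality` helper is illustrative, and the response schema is not documented here, so it is assumed only that the API returns JSON:

```python
import json
from urllib.request import urlopen

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def repo_quality(owner: str, name: str) -> dict:
    """Fetch the quality record for a repo (assumes a JSON response)."""
    url = f"{BASE}/{owner}/{name}"
    with urlopen(url) as resp:
        return json.load(resp)

# Usage (requires network access):
# data = repo_quality("ray-project", "llm-applications")
```

Keyless calls count against the 100 requests/day limit; how an API key is passed (header vs. query parameter) is not specified here.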