ray-project/llm-applications

A comprehensive guide to building RAG-based LLM applications for production.

Score: 48 / 100 (Emerging)

Covers end-to-end RAG pipeline scaling across document loading, chunking, embedding, and indexing using Ray's distributed compute framework. Implements hybrid LLM routing to dynamically select between OpenAI and Anyscale open-source models, with built-in evaluation metrics for both component-level and end-to-end quality optimization. Includes production deployment patterns with vector database integration and multi-GPU serving configurations.
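The chunking step in the pipeline described above can be sketched in plain Python. This is a minimal illustration only: `chunk_text` and its size/overlap defaults are assumptions for the sketch, not the repository's actual implementation (which runs these steps as a distributed Ray pipeline).

```python
def chunk_text(text: str, chunk_size: int = 300, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that overlap,
    so retrieval context is preserved across chunk boundaries.
    NOTE: illustrative sketch; the repo's real chunker and parameters differ."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Example: a 700-character document produces three overlapping chunks.
doc = "".join(str(i % 10) for i in range(700))
chunks = chunk_text(doc)
print(len(chunks))  # → 3
```

Each chunk would then be embedded and written to the vector index; in the repository those stages are scaled out across workers rather than run in a single loop like this.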

1,853 stars. No commits in the last 6 months.

Stale (6 months) · No package published · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 22 / 25
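The overall score appears to be the simple sum of the four 25-point subscores; this is an inference from the numbers shown on the card, not documented scoring behavior. A quick check:

```python
# Subscores as shown on the card (each out of 25).
subscores = {
    "Maintenance": 0,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 22,
}

total = sum(subscores.values())
print(total)  # → 48, matching the 48 / 100 overall score
```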


Stars: 1,853
Forks: 256
Language: Jupyter Notebook
License: CC-BY-4.0
Last pushed: Aug 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/ray-project/llm-applications"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
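The same endpoint can be called from Python with the standard library. A minimal sketch: the URL structure is taken from the curl example above, but the shape of the JSON response is not documented here, so `fetch_quality` makes no assumptions about its fields.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as JSON.
    Anonymous access is rate-limited to 100 requests/day."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("rag", "ray-project", "llm-applications")
print(url)
# data = fetch_quality("rag", "ray-project", "llm-applications")
```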