rag-demo and rag-demo-llama-index
These projects are ecosystem siblings: both are reference implementations of the same RAG pattern (chatting with uploaded PDFs via Couchbase), built on different orchestration frameworks (LangChain and LlamaIndex), so developers can choose their preferred abstraction layer.
About rag-demo
couchbase-examples/rag-demo
A RAG demo using LangChain that allows you to chat with your uploaded PDF documents
Implements two vector search strategies: Couchbase Hyperscale/Composite Vector Indexes queried via SQL++, and the Full Text Search service, each suited to different filtering patterns. The app is built on Streamlit with LangChain integration, caches LLM responses in Couchbase to avoid duplicate OpenAI API calls, and shows RAG-augmented and pure LLM answers side by side for the same question.
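The response-caching idea is straightforward: hash the full prompt (retrieved context plus question) and reuse the stored answer on a repeat, so the LLM is only called once per distinct prompt. Below is a minimal, framework-free sketch of that pattern under stated assumptions: the `CachedRAG` class, the in-memory dict cache, and the toy bag-of-words embedding are illustrative stand-ins, not code from either repo (the demos use Couchbase itself as the cache backend and real embeddings).

```python
import hashlib
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class CachedRAG:
    """Toy RAG pipeline: vector retrieval plus prompt-keyed response caching.

    Illustrative only -- the real demos store both documents and the
    response cache in Couchbase rather than in process memory.
    """

    def __init__(self, docs, embed, llm):
        self.docs = docs        # list of (text, vector) pairs
        self.embed = embed      # text -> vector
        self.llm = llm          # prompt -> answer
        self.cache = {}         # sha256(prompt) -> cached answer
        self.llm_calls = 0      # counts actual (non-cached) LLM calls

    def ask(self, question, k=1):
        # Retrieve the k most similar documents as context.
        qvec = self.embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(qvec, d[1]), reverse=True)
        context = " ".join(text for text, _ in ranked[:k])
        prompt = f"Context: {context}\nQuestion: {question}"
        # Cache on the full prompt, so identical questions over the same
        # retrieved context never trigger a duplicate LLM call.
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self.cache:
            self.llm_calls += 1
            self.cache[key] = self.llm(prompt)
        return self.cache[key]
```

A quick way to exercise it: build a tiny bag-of-words embedder over a fixed vocabulary, ask the same question twice, and observe that `llm_calls` stays at 1 because the second request is served from the cache.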
About rag-demo-llama-index
couchbase-examples/rag-demo-llama-index
A RAG demo using LlamaIndex that allows you to chat with your uploaded PDF documents