bRAG-langchain and RAG_local_tutorial
bRAG-langchain provides a comprehensive toolkit for building RAG applications, while RAG_local_tutorial offers simple tutorials that run locally. The two are complementary: the latter introduces core concepts that the former reinforces and expands on.
About bRAG-langchain
bragai/bRAG-langchain
Everything you need to know to build your own RAG application
Structured as progressive Jupyter notebooks using LangChain, covering foundational vector storage with ChromaDB/Pinecone, multi-query retrieval, semantic routing, and advanced techniques like RAPTOR and ColBERT token-level indexing. Demonstrates end-to-end optimization strategies including reciprocal rank fusion, Cohere re-ranking, and self-RAG approaches, with integration points for OpenAI embeddings, LangSmith tracing, and metadata-filtered vector stores.
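Of the optimization strategies listed, reciprocal rank fusion is the simplest to sketch. The following minimal pure-Python version is illustrative only (the function name and the conventional k=60 damping constant are assumptions, not code from the notebooks); it merges ranked result lists, such as those returned by multi-query retrieval, into a single ranking:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one ranking.

    rankings: list of ranked lists (best first).
    k: damping constant so top ranks don't dominate entirely.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each appearance contributes 1 / (k + rank + 1) to the doc's score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Example: doc_a places well in both lists, so it ends up first overall.
fused = reciprocal_rank_fusion([
    ["doc_b", "doc_a", "doc_c"],
    ["doc_a", "doc_c", "doc_b"],
])
```

In a LangChain pipeline, the input lists would come from running the same question through several retrievers (or several LLM-rephrased queries) and the fused ranking would feed the generation step.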
About RAG_local_tutorial
sergiopaniego/RAG_local_tutorial
Simple RAG tutorials that can be run locally or on Google Colab (Pro version only).
Covers multiple RAG data sources—PDFs, YouTube videos, audio transcription via Whisper, and GitHub repositories—through standalone Jupyter notebooks. Built on LangChain and LlamaIndex for RAG orchestration with Ollama as the local LLM runtime, enabling fully offline inference without external API dependencies. Supports both local execution and cloud deployment on Google Colab with GPU acceleration for resource-intensive operations.
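The retrieval step of such a fully offline pipeline can be sketched in pure Python. This is a toy stand-in, not code from the tutorials: the tutorials use Ollama-served embedding models, whereas here a bag-of-words vector and cosine similarity (the `embed`, `cosine`, and `retrieve` helpers are all hypothetical) keep the example runnable with no external runtime:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: lowercase bag-of-words counts, punctuation stripped.
    return Counter(t.strip(".,") for t in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, top_k=1):
    # Rank document chunks by similarity to the query, return the best ones.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Whisper transcribes audio files into text.",
    "Ollama runs large language models locally.",
    "LlamaIndex orchestrates retrieval over documents.",
]
best = retrieve("run an llm locally", chunks)
```

In the actual notebooks, `embed` would be an Ollama embedding model and the retrieved chunks would be passed to a local LLM as context for answer generation.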