bRAG-langchain and RAG_local_tutorial

bRAG-langchain provides a comprehensive toolkit for building RAG applications, while RAG_local_tutorial offers simple tutorials that run locally. The two are complementary: the latter introduces concepts that the former reinforces and expands upon.

                 bRAG-langchain               RAG_local_tutorial
Overall score    53 (Established)             34 (Emerging)
Maintenance      6/25                         0/25
Adoption         10/25                        8/25
Maturity         16/25                        8/25
Community        21/25                        18/25
Stars            4,051                        47
Forks            480                          16
Downloads        -                            -
Commits (30d)    0                            0
Language         Jupyter Notebook             Jupyter Notebook
License          -                            -
Flags            No Package, No Dependents    No License, Stale 6m, No Package, No Dependents

About bRAG-langchain

bragai/bRAG-langchain

Everything you need to know to build your own RAG application

Structured as progressive Jupyter notebooks using LangChain, covering foundational vector storage with ChromaDB/Pinecone, multi-query retrieval, semantic routing, and advanced techniques like RAPTOR and ColBERT token-level indexing. Demonstrates end-to-end optimization strategies including reciprocal rank fusion, Cohere re-ranking, and self-RAG approaches, with integration points for OpenAI embeddings, LangSmith tracing, and metadata-filtered vector stores.
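One of the optimization strategies the notebooks cover, reciprocal rank fusion (RRF), is simple enough to sketch in a few lines. The function below is illustrative, not code from the repo: it fuses several ranked lists of document IDs (e.g. results from multiple generated query variants) into one ranking.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs into one.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in; k=60 is the constant used in the original RRF
    paper and in common LangChain examples.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Two retrievers disagree on order; RRF rewards consistent high ranks:
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "c", "a"]])
# → ["b", "a", "c"]: "b" ranks first in one list and second in the other.
```

Because RRF uses only ranks, not raw scores, it can fuse results from retrievers whose similarity scales are not comparable (e.g. BM25 alongside dense embeddings).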

About RAG_local_tutorial

sergiopaniego/RAG_local_tutorial

Simple RAG tutorials that can be run locally or using Google Colab (only Pro version).

Covers multiple RAG data sources—PDFs, YouTube videos, audio transcription via Whisper, and GitHub repositories—through standalone Jupyter notebooks. Built on LangChain and LlamaIndex for RAG orchestration with Ollama as the local LLM runtime, enabling fully offline inference without external API dependencies. Supports both local execution and cloud deployment on Google Colab with GPU acceleration for resource-intensive operations.
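The retrieve-then-generate loop those notebooks build around Ollama can be sketched as below. This is a self-contained toy, not code from the tutorials: word-overlap scoring stands in for the embedding similarity the notebooks actually compute, and `generate_answer` stubs the local LLM call; all names are illustrative.

```python
def chunk(text, size=40):
    """Split text into roughly size-character chunks on word boundaries."""
    words, chunks, cur = text.split(), [], []
    for w in words:
        cur.append(w)
        if sum(len(x) + 1 for x in cur) >= size:
            chunks.append(" ".join(cur))
            cur = []
    if cur:
        chunks.append(" ".join(cur))
    return chunks

def retrieve(query, chunks, top_k=1):
    """Rank chunks by word overlap with the query (a stand-in for
    cosine similarity over embeddings)."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate_answer(query, context):
    # Placeholder for a local LLM call, e.g. via the `ollama` Python
    # client: ollama.chat(model=..., messages=[...]).
    return f"Answer to {query!r} based on: {context}"

doc = ("Whisper transcribes audio. Ollama runs LLMs locally. "
       "LangChain orchestrates RAG.")
context = retrieve("Which tool runs LLMs locally?", chunk(doc))
answer = generate_answer("Which tool runs LLMs locally?", context[0])
```

Swapping the overlap scorer for real embeddings and the stub for an Ollama call yields the fully offline pipeline the tutorials describe, with no external API dependencies.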

Scores updated daily from GitHub, PyPI, and npm data.