RAG-using-Llama3-Langchain-and-ChromaDB and Local-RAG-with-Ollama

These repositories are alternative implementations of the same local RAG stack: both use LangChain and ChromaDB for document retrieval, and both target fully local inference, but they differ in how the model is served. One loads Llama 3 directly, while the other runs a local model through the Ollama runtime (a model server, not a model itself).

RAG-using-Llama3-Langchain-and-ChromaDB
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 21/25
Stars: 131
Forks: 35
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: MIT
Flags: Stale 6m, No Package, No Dependents

Local-RAG-with-Ollama
Maintenance 2/25
Adoption 9/25
Maturity 7/25
Community 22/25
Stars: 76
Forks: 48
Downloads:
Commits (30d): 0
Language: Python
License:
Flags: No License, Stale 6m, No Package, No Dependents

About RAG-using-Llama3-Langchain-and-ChromaDB

GURPREETKAURJETHRA/RAG-using-Llama3-Langchain-and-ChromaDB

RAG using Llama3, Langchain and ChromaDB

Implements document-based question answering by embedding user documents into ChromaDB's vector store, then retrieving relevant chunks during inference to augment Llama3's context window. The system uses LangChain to orchestrate the retrieval pipeline and generation workflow, enabling accurate responses about custom documents like the EU AI Act without requiring model fine-tuning. It is validated against real regulatory text to demonstrate RAG's effectiveness in grounding LLM outputs to specific document sources.
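The retrieve-then-augment flow described above can be sketched with a toy in-memory store standing in for ChromaDB and word-overlap scoring standing in for embedding similarity. All names here are illustrative, not the repository's actual code:

```python
# Conceptual sketch of the RAG pipeline: chunk a document, retrieve the
# most relevant chunks for a query, and build an augmented prompt.
# A real system would use embeddings and ChromaDB instead of word overlap.

def chunk_document(text: str, chunk_size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score(query: str, chunk: str) -> int:
    """Toy relevance score: shared-word count (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the model's context window with retrieved chunks."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

document = ("The EU AI Act classifies AI systems by risk level. "
            "High-risk systems face strict obligations. "
            "Minimal-risk systems are largely unregulated.")
chunks = chunk_document(document)
prompt = build_prompt("What does the EU AI Act say about risk?",
                      retrieve("risk level AI", chunks))
print(prompt)
```

The grounding effect comes entirely from `build_prompt`: the model answers from the retrieved chunks rather than from its training data, which is why no fine-tuning is needed.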

About Local-RAG-with-Ollama

ThomasJanssen-tech/Local-RAG-with-Ollama

Build a 100% local Retrieval Augmented Generation (RAG) system with Python, LangChain, Ollama and ChromaDB!
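In an Ollama-based variant, generation typically goes through Ollama's local HTTP API rather than loading model weights in-process. A minimal sketch of assembling such a request, assuming a local Ollama server with a pulled `llama3` model (the helper name is hypothetical, and the payload is only built here, not sent):

```python
import json

# Sketch of the generation step against a local Ollama server.
# Assumes Ollama is running on its default port (11434) with llama3 pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(query: str, retrieved_chunks: list[str],
                         model: str = "llama3") -> dict:
    """Assemble a non-streaming /api/generate payload with retrieved context."""
    context = "\n".join(retrieved_chunks)
    prompt = f"Use only this context to answer.\n{context}\n\nQuestion: {query}"
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_ollama_request(
    "What is the EU AI Act?",
    ["The EU AI Act regulates AI systems by risk category."],
)
print(json.dumps(payload, indent=2))
# To send: urllib.request.urlopen(OLLAMA_URL, data=json.dumps(payload).encode())
```

Because everything runs against localhost, no document content or query ever leaves the machine, which is the "100% local" property the repository advertises.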

Scores updated daily from GitHub, PyPI, and npm data.