RAG-using-Llama3-Langchain-and-ChromaDB and Local-RAG-with-Ollama
These projects are alternative implementations of the same local RAG architecture: both use LangChain for orchestration and ChromaDB for document retrieval, and both run inference fully locally. They differ in how the LLM is hosted, with one loading Llama3 directly and the other serving a model through the Ollama runtime.
About RAG-using-Llama3-Langchain-and-ChromaDB
GURPREETKAURJETHRA/RAG-using-Llama3-Langchain-and-ChromaDB
RAG using Llama3, Langchain and ChromaDB
Implements document-based question answering by embedding user documents into ChromaDB's vector store, then retrieving relevant chunks during inference to augment Llama3's context window. The system uses LangChain to orchestrate the retrieval pipeline and generation workflow, enabling accurate responses about custom documents such as the EU AI Act without requiring model fine-tuning. It is validated against real regulatory text to demonstrate RAG's effectiveness in grounding LLM outputs in specific document sources.
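The retrieve-then-augment pattern described above can be illustrated with a minimal, dependency-free sketch. A toy bag-of-words cosine-similarity retriever stands in for ChromaDB's vector search, and the prompt assembly stands in for LangChain's chain orchestration; all names here are illustrative, not taken from the repo:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; the real pipeline uses a model-based
    embedder and stores the vectors in ChromaDB."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank document chunks by similarity to the query (ChromaDB's role)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Augment the LLM prompt with retrieved context before generation."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "The EU AI Act classifies AI systems by risk level.",
    "High-risk systems face strict transparency obligations.",
    "Bananas are rich in potassium.",
]
prompt = build_prompt("What does the EU AI Act classify?", chunks)
```

The resulting prompt contains only the most relevant chunks, which is what grounds the model's answer in the source documents rather than its training data.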
About Local-RAG-with-Ollama
ThomasJanssen-tech/Local-RAG-with-Ollama
Build a 100% local Retrieval Augmented Generation (RAG) system with Python, LangChain, Ollama and ChromaDB!
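In an Ollama-backed pipeline, the generation step is an HTTP call to Ollama's local server (by default at localhost:11434). A minimal sketch of building the `/api/generate` request such a pipeline sends after retrieval, shown without actually contacting the server; the helper name and prompt template are illustrative, not from the repo:

```python
import json

# Ollama's default local generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(question, context_chunks, model="llama3"):
    """Assemble the JSON payload a local RAG pipeline would POST to Ollama
    after retrieving context chunks from ChromaDB. Illustrative helper."""
    context = "\n".join(context_chunks)
    prompt = (
        "Use the context to answer.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    # stream=False asks Ollama for a single JSON response instead of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

payload = build_ollama_request(
    "What is RAG?",
    ["RAG augments an LLM prompt with retrieved documents."],
)
# To send: POST the payload to OLLAMA_URL with Ollama running locally,
# e.g. requests.post(OLLAMA_URL, data=payload).
```

Because everything (vector store, orchestration, and model server) runs on the local machine, no document text ever leaves it.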