PDF-RAG-with-Llama2-and-Gradio and RAG-Based-LLM-Chatbot
Both projects implement local RAG pipelines built on open-source LLMs and vector databases, so they are alternative approaches to the same use case rather than tools designed to work together: one pairs a Gradio UI with Llama2, while the other emphasizes containerized deployment with Llama 3.2 and Qdrant.
About PDF-RAG-with-Llama2-and-Gradio
Niez-Gharbi/PDF-RAG-with-Llama2-and-Gradio
Build your own Custom RAG Chatbot using Gradio, Langchain and Llama2
Implements document-grounded retrieval augmentation using ChromaDB for vector storage and semantic search, enabling the chatbot to cite specific PDF pages in responses. The architecture chains LangChain's conversational retrieval pipeline with Hugging Face embeddings for context-aware question answering. Supports configurable model selection via YAML, allowing swapping between different Llama2 variants and embedding providers without code changes.
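The retrieve-then-cite pattern described above can be sketched independently of the actual ChromaDB/LangChain stack. The toy bag-of-words "embedding" below is a stand-in for real Hugging Face embeddings, and the `(page, chunk)` index is a hypothetical stand-in for parsed PDF chunks, not code from the repository:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would use a
    # Hugging Face sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(index, query, k=1):
    # index: list of (page_number, chunk_text) pairs from a parsed PDF.
    # Returns the top-k chunks, each still carrying its source page,
    # which is what lets the chatbot cite specific pages.
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, embed(item[1])),
                    reverse=True)
    return ranked[:k]

# Hypothetical page-tagged chunks for illustration.
index = [
    (1, "introduction to retrieval augmented generation"),
    (4, "chroma stores embeddings for semantic search"),
    (7, "gradio provides the chat interface"),
]
page, chunk = retrieve(index, "semantic search with embeddings")[0]
print(page)  # -> 4, the page the answer would cite
```

In the real project, ChromaDB plays the role of `index` and `retrieve`, and LangChain's conversational retrieval chain feeds the returned chunks (with their page metadata) into the Llama2 prompt.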
About RAG-Based-LLM-Chatbot
GURPREETKAURJETHRA/RAG-Based-LLM-Chatbot
RAG Based LLM Chatbot Built using Open Source Stack (Llama 3.2 Model, BGE Embeddings, and Qdrant running locally within a Docker Container)
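A minimal sketch of standing up the described local vector store, assuming the official `qdrant/qdrant` image and its default REST port; the repository's own Docker setup may differ:

```shell
# Run Qdrant locally in a container, exposing the default REST API port.
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
```

The chatbot then connects to `localhost:6333`, storing BGE embeddings in a Qdrant collection and querying it at answer time.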