PDF-RAG-with-Llama2-and-Gradio and RAG-Based-LLM-Chatbot

Both implement local RAG pipelines built on open-source LLMs and vector databases. They are alternative architectural approaches rather than tools designed to work together: one prioritizes a Gradio UI with Llama 2 and ChromaDB, the other a containerized deployment with Llama 3.2 and Qdrant. As such, they are competitors for the same use case.

PDF-RAG-with-Llama2-and-Gradio
Maintenance: 0/25 | Adoption: 9/25 | Maturity: 16/25 | Community: 19/25
Stars: 80 | Forks: 22 | Downloads: | Commits (30d): 0
Language: Python | License: Apache-2.0
Stale 6m | No Package | No Dependents

RAG-Based-LLM-Chatbot
Maintenance: 0/25 | Adoption: 6/25 | Maturity: 9/25 | Community: 17/25
Stars: 17 | Forks: 10 | Downloads: | Commits (30d): 0
Language: Python | License: MIT
Stale 6m | No Package | No Dependents

About PDF-RAG-with-Llama2-and-Gradio

Niez-Gharbi/PDF-RAG-with-Llama2-and-Gradio

Build your own custom RAG chatbot using Gradio, LangChain, and Llama 2

Implements document-grounded retrieval augmentation using ChromaDB for vector storage and semantic search, enabling the chatbot to cite specific PDF pages in responses. The architecture chains LangChain's conversational retrieval pipeline with Hugging Face embeddings for context-aware question answering. Supports configurable model selection via YAML, allowing swapping between different Llama2 variants and embedding providers without code changes.
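The YAML-driven model swapping described above can be sketched as follows. This is a minimal illustration, not code from the repo: the config keys (`llm.repo_id`, `embeddings.model_name`) and the model identifiers are hypothetical placeholders, and in the real project the dict would come from `yaml.safe_load()` on a config file before being handed to LangChain.

```python
# Hypothetical sketch of YAML-driven model selection (key names and model IDs
# are illustrative, not taken from the repo). A config file such as config.yaml
# might contain:
#
#   llm:
#     repo_id: meta-llama/Llama-2-7b-chat-hf
#   embeddings:
#     model_name: sentence-transformers/all-MiniLM-L6-v2
#
# After yaml.safe_load(...), the parsed result is a plain dict like this one:
config = {
    "llm": {"repo_id": "meta-llama/Llama-2-7b-chat-hf"},
    "embeddings": {"model_name": "sentence-transformers/all-MiniLM-L6-v2"},
}

def build_components(cfg: dict) -> tuple[str, str]:
    """Resolve model identifiers from config, so swapping a Llama 2 variant
    or embedding provider is a config edit, not a code change."""
    llm_id = cfg["llm"]["repo_id"]
    emb_id = cfg["embeddings"]["model_name"]
    return llm_id, emb_id

llm_id, emb_id = build_components(config)
print(llm_id)   # which Llama 2 variant to load
print(emb_id)   # which embedding model to pass to the retriever
```

The resolved identifiers would then be passed to the LangChain/Hugging Face constructors; only the dict-lookup pattern is shown here.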

About RAG-Based-LLM-Chatbot

GURPREETKAURJETHRA/RAG-Based-LLM-Chatbot

RAG Based LLM Chatbot Built using Open Source Stack (Llama 3.2 Model, BGE Embeddings, and Qdrant running locally within a Docker Container)
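Both projects delegate the same core step to a vector database (ChromaDB in one, Qdrant in the other): embed document chunks, then return the top-k chunks most similar to the query embedding. The stdlib-only sketch below illustrates that retrieval step with toy 3-dimensional vectors standing in for real BGE / Hugging Face embeddings; the chunk texts and vectors are invented for illustration.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], corpus: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the k chunk texts whose embeddings are most similar to the query.
    corpus is a list of (chunk_text, embedding) pairs."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus: in the real pipelines these vectors come from an embedding model
# and live inside ChromaDB or Qdrant rather than a Python list.
corpus = [
    ("Llama 3.2 runs locally", [0.9, 0.1, 0.0]),
    ("Qdrant stores vectors",  [0.1, 0.9, 0.0]),
    ("Unrelated chunk",        [0.0, 0.0, 1.0]),
]
print(top_k([0.8, 0.2, 0.0], corpus, k=1))  # → ['Llama 3.2 runs locally']
```

A production setup replaces the linear scan with the database's approximate nearest-neighbor index, but the ranking contract is the same.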

Scores updated daily from GitHub, PyPI, and npm data.