local-LLM-with-RAG and RAG-MultiFile-QA
These are ecosystem siblings: both implement RAG pipelines for document QA. local-LLM-with-RAG provides a framework for running local LLMs, while RAG-MultiFile-QA provides a multi-file document-upload interface; both build on shared infrastructure such as LangChain and embedding models.
About local-LLM-with-RAG
amscotti/local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
This tool helps you privately ask complex questions about your own documents and get well-researched answers. You provide your documents (PDFs, Word files, etc.) and a question, and it uses a local AI to find and summarize the relevant information. It's ideal for analysts, researchers, or anyone needing to quickly extract information from a personal collection of files without sending them to external AI services.
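The flow such a tool follows can be sketched in plain Python: split the documents into chunks, pick the chunks most relevant to the question, and assemble a prompt for the local model. Everything below is an illustrative toy (the function names, the word-overlap scoring, and the omitted LLM call are assumptions, not this repository's actual code):

```python
# Toy sketch of a RAG question-answering flow. The chunking, scoring,
# and prompt format here are illustrative stand-ins; a real pipeline
# would use an embedding model and a local LLM (e.g. via Ollama).

def chunk_text(text: str, chunk_size: int = 200) -> list[str]:
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def score(chunk: str, question: str) -> int:
    """Toy relevance score: count question words present in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(w in chunk_words for w in question.lower().split())

def build_prompt(document: str, question: str, top_k: int = 2) -> str:
    """Retrieve the top_k most relevant chunks and format a prompt."""
    chunks = chunk_text(document)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The assembled prompt would then be sent to a local model;
# that call is omitted here.
doc = "Retrieval-Augmented Generation grounds answers in documents. " * 50
prompt = build_prompt(doc, "What grounds the answers?")
```

The key property this preserves from the real tool is that only retrieved context reaches the model, so answers stay grounded in your own files.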
About RAG-MultiFile-QA
Uni-Creator/RAG-MultiFile-QA
A RAG (Retrieval-Augmented Generation) AI chatbot that allows users to upload multiple document types (PDF, DOCX, TXT, CSV) and ask questions about the content. Built using LangChain, Hugging Face embeddings, and Streamlit, it enables efficient document search and question answering using vector-based retrieval. 🚀
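Vector-based retrieval, the core of this pipeline, reduces to embedding each chunk and the query as vectors and ranking chunks by cosine similarity. A dependency-free sketch with a toy bag-of-words embedding (a real system like this one would use Hugging Face sentence embeddings; the sample chunks below are invented):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts, a stand-in for a real
    sentence-embedding model such as those from Hugging Face."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(chunks: list[str], query: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "Invoices are stored as CSV files in the exports folder.",
    "The chatbot supports PDF, DOCX, TXT, and CSV uploads.",
    "Streamlit renders the chat interface in the browser.",
]
best = retrieve(chunks, "What uploads does the chatbot support?")
# → ["The chatbot supports PDF, DOCX, TXT, and CSV uploads."]
```

Swapping the toy `embed` for a dense embedding model and the sort for a vector index (e.g. FAISS or Chroma) turns this sketch into the kind of retrieval the repository describes.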