thinktecture-labs/rag-chat-with-pdf-local-llm
Simple demo for chatting with a PDF, with the option to point the RAG implementation at a local LLM
Implements RAG (Retrieval-Augmented Generation) using LangChain and Streamlit, with vector embeddings for semantic PDF search. Supports cloud-based LLMs as well as locally hosted models served via LM Studio, with tested compatibility for quantized models such as Mistral 7B. Configuration allows switching between remote APIs and on-device inference without code changes.
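The configuration-based switching described above could be sketched as follows. This is a hypothetical illustration, not code from the repository: the function name, settings dict, and model identifiers are assumptions. It relies only on the fact that LM Studio serves an OpenAI-compatible API on `localhost:1234/v1` by default, so swapping backends is just a base-URL and key change.

```python
def chat_endpoint(use_local: bool) -> dict:
    """Return OpenAI-compatible client settings for either backend.

    LM Studio exposes an OpenAI-compatible server on localhost:1234 by
    default, so only the base URL, key, and model name need to change.
    All values here are illustrative placeholders.
    """
    if use_local:
        return {
            "base_url": "http://localhost:1234/v1",  # LM Studio default port
            "api_key": "lm-studio",  # local server ignores the key
            "model": "mistral-7b-instruct",  # e.g. a quantized Mistral 7B
        }
    return {
        "base_url": "https://api.openai.com/v1",
        "api_key": "sk-...",  # real key required for the remote API
        "model": "gpt-3.5-turbo",
    }

local = chat_endpoint(use_local=True)
remote = chat_endpoint(use_local=False)
print(local["base_url"])   # http://localhost:1234/v1
print(remote["base_url"])  # https://api.openai.com/v1
```

The same client code can then consume either settings dict, which is what makes the local/remote switch a pure configuration change.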
No commits in the last 6 months.
Stars
28
Forks
2
Language
Python
License
MIT
Category
Last pushed
Nov 29, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/thinktecture-labs/rag-chat-with-pdf-local-llm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
watat83/document-chat-system
Open-source document chat platform with semantic search, RAG (Retrieval Augmented Generation),...
amscotti/local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
ranfysvalle02/Interactive-RAG
An interactive RAG agent built with LangChain and MongoDB Atlas. Manage your knowledge base,...
ChatFAQ/ChatFAQ
Open-source ecosystem for building AI-powered conversational solutions using RAG, agents, FSMs, and LLMs.
zilliztech/akcio
Akcio is a demonstration project for Retrieval Augmented Generation (RAG). It leverages the...