waterpare833/Novel-Assistant
A RAG assistant that searches and answers over a document folder using local or cloud LLMs
Supports flexible model switching between local Ollama instances and cloud-based OpenRouter APIs, with automatic document indexing across nested folder structures. The RAG pipeline enables selective, context-aware retrieval: users can toggle a document-based mode to ground responses in their local knowledge base, while keeping standard Q&A capability for general queries.
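The toggleable document-based mode described above can be sketched as a small prompt-building step: when the toggle is on, retrieve the most relevant documents and prepend them as context; when off, pass the query through unchanged. This is a minimal illustration only — the function names (`retrieve`, `build_prompt`) and the naive word-overlap scoring are assumptions, not the repository's actual implementation, which would use embedding-based retrieval and send the prompt to Ollama or OpenRouter.

```python
# Hypothetical sketch of a toggleable RAG flow (not Novel-Assistant's real code).

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word-overlap with the query (stand-in for
    a real embedding-based similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    # Keep only documents that share at least one word with the query.
    return [text for _, text in scored[:top_k]
            if q_words & set(text.lower().split())]

def build_prompt(query: str, docs: dict[str, str], use_docs: bool) -> str:
    """Ground the prompt in retrieved docs only when document mode is on."""
    if not use_docs:
        return query  # plain Q&A path, no retrieval
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = {
    "ch1.txt": "The hero leaves the village at dawn.",
    "notes.txt": "Villain backstory: exiled from the village council.",
}
print(build_prompt("Who leaves the village?", docs, use_docs=True))
print(build_prompt("What is RAG?", docs, use_docs=False))
```

In a full pipeline the returned prompt would be posted to the selected backend (a local Ollama server or the OpenRouter API), which is what makes the model switching transparent to the retrieval layer.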
Stars: 54
Forks: 10
Language: —
License: —
Category: —
Last pushed: Dec 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/waterpare833/Novel-Assistant"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
athrael-soju/Snappy
🐊 Snappy's unique approach unifies vision-language late interaction with structured OCR for...
aakashsharan/research-vault
AI research assistant that extracts structured patterns from papers using RAG, LangGraph, and...
roberto729a/OllamaRAG
🤖 Build a smart AI assistant that learns from any website using a Retrieval-Augmented Generation...
fredsiika/huxley-pdf
Upload personal docs and Chat with your PDF files with this GPT4-powered app. Built with...
yousefmohtady1/CorpGuideAI-HR-Policy-Assistant
CorpGuide AI Backend: An intelligent HR Policy Assistant powered by RAG, Groq, and LangChain....