ObsidianRAG and ragbase
About ObsidianRAG
Vasallo94/ObsidianRAG
RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)
Implements hybrid search (vector + BM25) with CrossEncoder reranking, plus GraphRAG-style link-following that expands context across interconnected notes. A FastAPI backend serves a native TypeScript Obsidian plugin, with streaming responses and source attribution. The system runs entirely offline, using Ollama for local LLM inference and HuggingFace embeddings, and is compatible with multilingual models such as Qwen and Gemma.
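The hybrid-search idea can be sketched in plain Python: score documents lexically with BM25, score them semantically (here with stand-in vector scores), and fuse the two rankings. This is an illustrative toy, not ObsidianRAG's actual code; the corpus, the vector scores, and the use of reciprocal rank fusion as the combination method are all assumptions for the sketch.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Toy BM25: score each document against the query terms."""
    tokenized = [d.lower().split() for d in docs]
    avg_len = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avg_len))
        scores.append(score)
    return scores

def rrf(rankings, k=60):
    """Reciprocal rank fusion of several ranked lists of doc indices."""
    fused = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]

docs = [
    "Obsidian notes link to each other with wikilinks",
    "BM25 is a lexical ranking function",
    "vector search uses embeddings for semantic similarity",
]
query = "lexical ranking with BM25"

lexical = bm25_scores(query, docs)
lexical_rank = sorted(range(len(docs)), key=lambda i: -lexical[i])
# Hypothetical cosine similarities from an embedding model:
vector_scores = [0.2, 0.9, 0.6]
vector_rank = sorted(range(len(docs)), key=lambda i: -vector_scores[i])

fused = rrf([lexical_rank, vector_rank])
print(fused[0])  # doc 1 ranks first under both signals
```

In a real pipeline the fused candidates would then be re-scored by a CrossEncoder, which reads each (query, document) pair jointly and is more accurate than either first-stage signal.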
About ragbase
curiousily/ragbase
Completely local RAG: chat with your PDF documents using an open LLM, with a UI built on LangChain, Streamlit, Ollama (Llama 3.1), and Qdrant, plus advanced methods such as reranking and semantic chunking.
The ingestor pipeline combines semantic and character-based chunking strategies for flexible document decomposition, while the retriever implements a multi-stage filtering approach using reranking and LLM-based chain filters before response generation. FastEmbed provides efficient local embedding generation, and the system supports swapping between Ollama-hosted models and Groq API inference without architectural changes. Built on LangChain abstractions, it integrates PDFium for robust PDF text extraction and Qdrant for vector storage, enabling completely offline operation or optional cloud inference.
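The two chunking strategies the ingestor combines can be sketched as follows. Both functions are illustrative stand-ins, not ragbase's implementation (which builds on LangChain splitters): a fixed-size character splitter with overlap, and a greedy sentence-boundary splitter as a crude proxy for semantic chunking. The sizes and the regex are assumptions for the sketch.

```python
import re

def char_chunks(text, size=50, overlap=10):
    """Fixed-size character chunking with overlap between adjacent chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def sentence_chunks(text, max_chars=80):
    """Greedy sentence-boundary chunking: pack whole sentences up to max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

text = "First sentence here. Second one follows. A third, longer sentence to finish."
fixed = char_chunks(text, size=50, overlap=10)
semantic = sentence_chunks(text, max_chars=45)
```

The character splitter guarantees bounded chunk sizes regardless of document structure, while the sentence-based splitter keeps semantically coherent units together; combining them lets the ingestor fall back to fixed-size splitting when no natural boundaries exist.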