ObsidianRAG and ragbase

                 ObsidianRAG          ragbase
Overall score    51 (Established)     47 (Emerging)
Maintenance      13/25                0/25
Adoption         11/25                10/25
Maturity         18/25                16/25
Community        9/25                 21/25
Stars            29                   122
Forks            3                    43
Downloads        85
Commits (30d)    0                    0
Language         Python               Python
License          MIT                  MIT
Risk flags       No risk flags        Stale 6m, No Package, No Dependents

About ObsidianRAG

Vasallo94/ObsidianRAG

RAG system to query your Obsidian notes using LangGraph and local LLMs (Ollama)

Implements hybrid search (vector + BM25) with CrossEncoder reranking and GraphRAG link-following to expand context across interconnected notes; a FastAPI backend serves a native TypeScript Obsidian plugin. Supports streaming responses with source attribution and runs entirely offline, using Ollama for local LLM inference and HuggingFace embeddings, compatible with multilingual models such as Qwen and Gemma.
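The hybrid search described above can be sketched in plain Python. This is a minimal illustration, not ObsidianRAG's actual code: it implements classic BM25 over whitespace tokens and fuses it with externally supplied vector-similarity scores via min–max normalization; the `alpha` fusion weight and the toy documents are assumptions for the example, and a CrossEncoder reranking pass would follow on the top fused results.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each document against the query with classic BM25."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / N
    df = Counter()  # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def hybrid_rank(query, docs, vector_scores, alpha=0.5):
    """Fuse min-max-normalized BM25 and vector scores; best match first."""
    bm25 = bm25_scores(query.lower().split(), docs)
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    fused = [alpha * v + (1 - alpha) * k
             for v, k in zip(norm(vector_scores), norm(bm25))]
    return sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)

# Vector scores would come from an embedding model; stubbed here.
docs = ["obsidian notes about langgraph agents",
        "how to cook pasta",
        "local llm inference with ollama"]
order = hybrid_rank("langgraph notes", docs, [0.9, 0.1, 0.3])
```

In this toy example the lexical and vector signals agree, so the note mentioning both query terms ranks first; the fusion weight matters most when the two signals disagree.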

About ragbase

curiousily/ragbase

Completely local RAG. Chat with your PDF documents (using an open LLM) via a UI that uses LangChain, Streamlit, Ollama (Llama 3.1), Qdrant, and advanced methods like reranking and semantic chunking.

The ingestor pipeline combines semantic and character-based chunking strategies for flexible document decomposition, while the retriever implements a multi-stage filtering approach using reranking and LLM-based chain filters before response generation. FastEmbed provides efficient local embedding generation, and the system supports swapping between Ollama-hosted models and Groq API inference without architectural changes. Built on LangChain abstractions, it integrates PDFium for robust PDF text extraction and Qdrant for vector storage, enabling completely offline operation or optional cloud inference.
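The semantic chunking mentioned above can be illustrated with a small sketch, not ragbase's implementation: consecutive sentences are greedily merged into a chunk while the next sentence stays similar to the chunk so far, and a new chunk starts when similarity drops. A word-overlap (Jaccard) score stands in for the embedding-based similarity a real pipeline would compute; the threshold and chunk-size cap are assumed values.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity; a cheap stand-in for embedding cosine similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def semantic_chunks(sentences, threshold=0.2, max_sentences=5):
    """Greedily merge consecutive sentences while the next sentence remains
    similar to the current chunk; otherwise start a new chunk."""
    chunks, current = [], []
    for sent in sentences:
        if current and (jaccard(" ".join(current), sent) < threshold
                        or len(current) >= max_sentences):
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks

sents = ["Qdrant stores vectors for retrieval.",
         "Qdrant vectors power retrieval queries.",
         "Pasta needs boiling water."]
chunks = semantic_chunks(sents)
```

Here the two Qdrant sentences merge into one chunk and the unrelated sentence starts another; character-based chunking would instead split purely on length, which is why combining both strategies gives more flexible document decomposition.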

Scores updated daily from GitHub, PyPI, and npm data.