OneRAG and rag-decision-support-system
OneRAG provides a framework for integrating various vector databases and LLMs, while rag-decision-support-system builds a complete RAG application with advanced features such as hybrid retrieval and evaluation. The two are complements: OneRAG's framework could supply the underlying database and LLM integrations on which the second project's RAG system is built.
About OneRAG
notadev-iamaura/OneRAG
Production-ready RAG Framework (Python/FastAPI). 1-line config swaps: 6 Vector DBs (Weaviate, Pinecone, Qdrant, ChromaDB, pgvector, MongoDB), 5 LLMs (Gemini, OpenAI, Claude, Ollama, OpenRouter). OpenAI-compatible API. 2100+ tests.
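A "1-line config swap" across backends usually comes down to a registry that maps a config value to a backend class. The sketch below is a generic illustration of that pattern, not OneRAG's actual API; all class and key names here are hypothetical.

```python
# Generic sketch of a config-driven backend registry. All names are
# illustrative; OneRAG's real classes and config keys may differ.

class InMemoryBackend:
    """Toy stand-in for a vector store client."""
    def __init__(self):
        self.vectors = {}

    def upsert(self, doc_id, vector):
        self.vectors[doc_id] = vector

class OtherBackend(InMemoryBackend):
    """A second backend sharing the same interface."""

# One registry entry per supported store; switching stores is then a
# one-line change to the "vector_db" value in the config.
BACKENDS = {"memory": InMemoryBackend, "other": OtherBackend}

def build_store(config):
    return BACKENDS[config["vector_db"]]()

store = build_store({"vector_db": "memory"})
store.upsert("doc-1", [0.1, 0.2])
```

Because every backend exposes the same interface, the calling code never changes when the config value does.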
Supports hybrid search (dense + BM25), GraphRAG for knowledge graph reasoning, and pluggable rerankers (6 options including Jina and Cohere) through a modular pipeline architecture. Includes built-in PII detection/masking, semantic/Redis caching layers, and query routing that classifies requests before retrieval. Designed for gradual complexity: start with basic vector search and layer in advanced features like agents and tool execution without refactoring the codebase.
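Hybrid search merges a dense (vector-similarity) ranking with a lexical (BM25) ranking. One common fusion method is reciprocal rank fusion (RRF); the sketch below uses toy rankings and makes no claim about how OneRAG fuses scores internally.

```python
# Minimal hybrid-retrieval sketch: fuse a dense ranking and a BM25
# ranking with reciprocal rank fusion (RRF). Data is toy data.

def rrf(rankings, k=60):
    """rankings: list of ordered doc-id lists; returns fused ordering."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Each list contributes 1/(k + rank); lower rank = bigger share.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense_ranking = ["d2", "d1", "d3"]  # e.g. from vector similarity
bm25_ranking = ["d2", "d3", "d4"]   # e.g. from lexical BM25
fused = rrf([dense_ranking, bm25_ranking])
```

Documents ranked highly by both retrievers ("d2" here) rise to the top even though RRF never compares the two retrievers' raw scores, which is why it is a popular default for dense+lexical fusion.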
About rag-decision-support-system
Yassinekraiem08/rag-decision-support-system
Production-style RAG system with hybrid retrieval, citation-grounded LLM responses, verification guardrails, confidence scoring, and evaluation dashboard.
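Confidence scoring in a RAG pipeline often derives a label from the retrieval scores themselves, for example the top score and its margin over the runner-up. The sketch below is a generic heuristic with made-up thresholds, not the repository's actual scoring logic.

```python
# Toy confidence-scoring sketch: label an answer by the strength of its
# supporting retrieval. Thresholds are illustrative, not the repo's.

def confidence(scores, min_top=0.5, min_margin=0.1):
    """scores: retrieval similarity scores for the candidate passages."""
    ordered = sorted(scores, reverse=True)
    top = ordered[0]
    # Margin over the runner-up; a lone result keeps its full score.
    margin = top - ordered[1] if len(ordered) > 1 else top
    if top >= min_top and margin >= min_margin:
        return "high"
    return "low"

label = confidence([0.82, 0.41, 0.30])  # strong top hit, wide margin
```

A "low" label can then trigger the verification guardrails, e.g. refusing to answer or asking the user to rephrase.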
Scores updated daily from GitHub, PyPI, and npm data.