RAGLight and super-rag
RAGLight provides a modular foundation for building RAG systems with pluggable components, while Super RAG offers pre-built, specialized RAG pipelines (summarization, retrieval, reranking) that could be implemented as modules within RAGLight's framework—making them complementary rather than competitive.
About RAGLight
Bessouat40/RAGLight
RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and includes MCP (Model Context Protocol) integration for connecting external tools and data sources.
Supports hybrid retrieval combining BM25 keyword search with semantic vector similarity using Reciprocal Rank Fusion, and offers agentic RAG capabilities with query reformulation for multi-turn conversations. Built on pluggable document processors and vector store backends (Chroma, Qdrant) with optional observability via Langfuse tracing. Provides both programmatic Python APIs and CLI/REST interfaces for rapid deployment, including a Docker Compose setup for production environments.
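The Reciprocal Rank Fusion step mentioned above is a simple, well-known algorithm: each document's fused score is the sum of 1/(k + rank) across the ranked lists it appears in. A minimal sketch (not RAGLight's actual implementation; the constant k=60 is the value commonly used in the RRF literature):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists: score(doc) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from BM25 keyword search and dense vector search
bm25 = ["d1", "d3", "d2"]
dense = ["d2", "d1", "d4"]
fused = reciprocal_rank_fusion([bm25, dense])
# "d1" ranks first: it places highly in both lists
```

Because RRF uses only ranks, not raw scores, it needs no score normalization between the BM25 and vector retrievers, which is why it is a popular choice for hybrid retrieval.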
About super-rag
superagent-ai/super-rag
Super performant RAG pipelines for AI apps. Summarization, Retrieve/Rerank and Code Interpreters in one simple API.
Supports pluggable vector databases (Pinecone, Qdrant, Weaviate, PGVector) and multiple embedding providers (OpenAI, Cohere, HuggingFace, FastEmbed), with customizable semantic chunking and metadata filtering via REST API. Built on FastAPI with session-based caching and optional E2B.dev sandbox integration for executing computational queries safely. Handles diverse document formats through the Unstructured library with configurable parsing strategies and table processing.
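The semantic chunking idea referenced above can be illustrated with a toy sketch: split text into sentences, embed each one, and start a new chunk whenever the similarity between adjacent sentences drops below a threshold. This is a generic illustration of the technique, not Super RAG's code; `toy_embed` is a placeholder for a real sentence-embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunks(sentences, embed, threshold=0.5):
    """Merge adjacent sentences while each stays similar to the previous one."""
    chunks, current = [], [sentences[0]]
    prev_vec = embed(sentences[0])
    for sent in sentences[1:]:
        vec = embed(sent)
        if cosine(prev_vec, vec) >= threshold:
            current.append(sent)
        else:
            chunks.append(" ".join(current))
            current = [sent]
        prev_vec = vec
    chunks.append(" ".join(current))
    return chunks

def toy_embed(text):
    # Placeholder: bag-of-words over a tiny fixed vocabulary;
    # a real pipeline would call OpenAI, Cohere, or FastEmbed here.
    vocab = ["cat", "dog", "stock", "market"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

sentences = ["the cat sat", "the dog and cat played",
             "stock market fell", "market gains rose"]
chunks = semantic_chunks(sentences, toy_embed)
# Topic shift between sentence 2 and 3 produces two chunks
```

Chunking on topic boundaries rather than fixed character counts tends to keep each retrieved chunk self-contained, which improves downstream reranking.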