ragbits and context-aware-rag
These projects are complementary: ragbits provides general-purpose RAG building blocks, while NVIDIA's context-aware-rag supplies specialized knowledge-graph ingestion and retrieval functions that could slot into a ragbits retrieval pipeline.
About ragbits
deepsense-ai/ragbits
Building blocks for rapid development of GenAI applications
Provides modular Python packages for LLM integration (100+ models via LiteLLM), RAG pipelines over 20+ document formats, and multi-agent coordination using the A2A protocol and the Model Context Protocol. Features type-safe prompt execution with Python generics, support for Qdrant, PgVector, and other vector stores, Ray-based distributed document ingestion, and OpenTelemetry observability. Installable as granular components (core, agents, document-search, evaluate, guardrails, chat, CLI) rather than as a monolithic framework.
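The "type-safe prompt execution with Python generics" idea can be illustrated with a minimal, self-contained sketch. The class and attribute names below are hypothetical stand-ins for the pattern, not ragbits' actual API: a prompt class is parameterized over its input model, so rendering it with the wrong input type is caught by a static type checker.

```python
from dataclasses import dataclass
from string import Template
from typing import Generic, TypeVar

InputT = TypeVar("InputT")


@dataclass
class QuestionInput:
    """Structured input model for a question-answering prompt."""
    question: str
    context: str


class TypedPrompt(Generic[InputT]):
    """A prompt bound to a specific input type via generics (illustrative)."""
    user_template: str = ""

    def render(self, data: InputT) -> str:
        # Substitute the input model's fields into the template.
        return Template(self.user_template).substitute(vars(data))


class AnswerPrompt(TypedPrompt[QuestionInput]):
    user_template = "Answer using only this context:\n$context\n\nQ: $question"


# mypy/pyright would flag AnswerPrompt().render("raw string") as a type error.
rendered = AnswerPrompt().render(
    QuestionInput(question="What is RAG?", context="Retrieval-augmented generation...")
)
```

The payoff of the pattern is that prompt inputs become explicit, checkable contracts rather than loose keyword arguments.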
About context-aware-rag
NVIDIA/context-aware-rag
Context-Aware RAG library for Knowledge Graph ingestion and retrieval functions.
Supports multiple data sources and storage backends (Neo4j, Milvus, ArangoDB, MinIO) with pluggable ingestion and retrieval strategies, including GraphRAG for automatic knowledge graph extraction. Built as microservices with separate ingestion and retrieval APIs, with integrated OpenTelemetry observability via Phoenix and Prometheus, and experimental Model Context Protocol (MCP) support for agentic AI workflows. Uses a component-based architecture that enables custom function composition while maintaining compatibility with existing data pipelines.
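The "pluggable retrieval strategies" and "custom function composition" ideas can be sketched in a few lines. This is an illustrative toy under assumed names, not context-aware-rag's actual API: strategies register under a name, and a composer fans a query out across several of them and merges the results.

```python
from typing import Callable

# A retrieval strategy maps a query to a list of retrieved documents.
Strategy = Callable[[str], list[str]]

STRATEGIES: dict[str, Strategy] = {}


def register(name: str):
    """Decorator that makes a retrieval function pluggable by name (illustrative)."""
    def deco(fn: Strategy) -> Strategy:
        STRATEGIES[name] = fn
        return fn
    return deco


@register("keyword")
def keyword_search(query: str) -> list[str]:
    # Stand-in for a vector-store or keyword backend.
    corpus = ["Neo4j stores graphs", "Milvus stores vectors"]
    return [d for d in corpus if any(w in d.lower() for w in query.lower().split())]


@register("graph")
def graph_search(query: str) -> list[str]:
    # Stand-in for GraphRAG traversal over an extracted knowledge graph.
    neighbors = {"milvus": ["Milvus is a vector database backend"]}
    return neighbors.get(query.lower(), [])


def retrieve(query: str, strategies: list[str]) -> list[str]:
    """Compose several registered strategies, deduplicating in order."""
    seen: set[str] = set()
    out: list[str] = []
    for name in strategies:
        for doc in STRATEGIES[name](query):
            if doc not in seen:
                seen.add(doc)
                out.append(doc)
    return out


results = retrieve("milvus", ["keyword", "graph"])
```

In the real library the strategies would be backed by services such as Neo4j or Milvus; the composition shape, not the backends, is the point of the sketch.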