VisRAG and VARAG
These projects are competitors offering alternative approaches to VLM-based RAG: VisRAG emphasizes parsing-free document processing, embedding pages with VLMs instead of extracting text, while VARAG is a vision-first retrieval engine that processes page images before text. They represent different design philosophies for the same problem space.
About VisRAG
OpenBMB/VisRAG
Parsing-free RAG supported by VLMs
Embeds document images directly using VLMs rather than parsing text first, preserving original formatting and information integrity across multi-image retrieval scenarios. EVisRAG 2.0 introduces evidence-guided reasoning with Reward-Scoped GRPO, enabling token-level optimization of visual perception and multi-step reasoning in VLMs. The framework provides plug-and-play components (VisRAG-Ret for retrieval, EVisRAG for generation) compatible with various VLM backbones and integrates with UltraRAG for end-to-end deployment.
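The parsing-free pattern described above can be sketched in miniature: page images are embedded directly and queries are scored against them in the same space, with no OCR or text-parsing stage. This is a toy illustration, not VisRAG's actual API; the `embed()` stub stands in for a VLM retriever such as VisRAG-Ret, and the hash-derived vectors are placeholders for real embeddings.

```python
import hashlib
import math

# Parsing-free retrieval sketch: page *images* are embedded directly
# (no OCR), and the query is embedded into the same vector space.
# embed() is a hypothetical stand-in for a VLM retriever like VisRAG-Ret;
# it derives deterministic toy vectors from an MD5 digest.

def embed(item: str) -> list[float]:
    digest = hashlib.md5(item.encode()).digest()
    return [b / 255.0 for b in digest[:8]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, page_images: list[str], k: int = 2) -> list[str]:
    # Rank page images by similarity to the query embedding; the top-k
    # images would then be handed to the generator (EVisRAG) as-is,
    # preserving original layout and formatting.
    q = embed(query)
    ranked = sorted(page_images, key=lambda p: cosine(embed(p), q), reverse=True)
    return ranked[:k]

pages = ["invoice_p1.png", "invoice_p2.png", "report_p1.png"]
top = retrieve("total amount due", pages)
print(top)
```

Because retrieval and generation both operate on the raw images, the multi-image case reduces to passing all top-k pages to the VLM together, which is what makes the components plug-and-play across backbones.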
About VARAG
adithya-s-k/VARAG
Vision-Augmented Retrieval and Generation (VARAG) - Vision first RAG Engine
Supports multiple vision-based retrieval techniques—Simple RAG with OCR via Docling, Vision RAG using cross-modal embeddings (JinaCLIP), ColPali RAG with page-level VLM embeddings and late interaction matching, and Hybrid ColPali combining coarse image retrieval with fine-grained re-ranking. Each RAG method is abstracted as a pluggable class with consistent `index()` and `search()` APIs, using LanceDB as the vector store backend. Integrates with Vision-Language Models (PaliGemma, JinaCLIP), supports cloud deployment via Modal with GPU acceleration, and works with LLMs/VLMs of choice for generation.
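The pluggable-class pattern above can be sketched as follows. This is an illustrative toy, not VARAG's actual code: the base class mirrors the consistent `index()`/`search()` surface the blurb describes, while `KeywordRAG` is a hypothetical stand-in for a concrete strategy (the real ones back onto LanceDB and vision embeddings).

```python
from abc import ABC, abstractmethod

# Sketch of the pluggable-retriever pattern: every RAG method exposes
# the same index()/search() API, so strategies (Simple RAG, Vision RAG,
# ColPali RAG, Hybrid ColPali) can be swapped behind one interface.

class BaseRAG(ABC):
    @abstractmethod
    def index(self, docs: list[str]) -> None:
        """Ingest documents into the strategy's store."""

    @abstractmethod
    def search(self, query: str, k: int = 3) -> list[str]:
        """Return the top-k documents for the query."""

class KeywordRAG(BaseRAG):
    """Toy strategy: ranks by word overlap instead of vector search."""

    def __init__(self) -> None:
        self.docs: list[str] = []

    def index(self, docs: list[str]) -> None:
        self.docs.extend(docs)

    def search(self, query: str, k: int = 3) -> list[str]:
        terms = set(query.lower().split())
        ranked = sorted(
            self.docs,
            key=lambda d: len(terms & set(d.lower().split())),
            reverse=True,
        )
        return ranked[:k]

rag: BaseRAG = KeywordRAG()  # any strategy fits this slot
rag.index(["late interaction matching", "cross-modal embeddings", "GPU notes"])
print(rag.search("cross-modal embeddings", k=1))  # → ['cross-modal embeddings']
```

Keeping the interface this narrow is what lets callers switch between OCR-based and image-embedding-based retrieval, or layer a coarse-then-fine hybrid, without touching indexing or query code.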