OpenBMB/VisRAG
Parsing-free RAG supported by VLMs
Embeds document images directly using VLMs rather than parsing text first, preserving original formatting and information integrity across multi-image retrieval scenarios. EVisRAG 2.0 introduces evidence-guided reasoning with Reward-Scoped GRPO, enabling token-level optimization of visual perception and multi-step reasoning in VLMs. The framework provides plug-and-play components (VisRAG-Ret for retrieval, EVisRAG for generation) compatible with various VLM backbones and integrates with UltraRAG for end-to-end deployment.
Stars: 932
Forks: 71
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 07, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/OpenBMB/VisRAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
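The curl command above can also be scripted. A minimal Python sketch follows; the `{category}/{owner}/{repo}` path layout is inferred from the single example endpoint and the response schema is not documented here, so treat both as assumptions.

```python
import json
import urllib.request

# Base path inferred from the example endpoint shown above (an assumption).
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_api_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (field names not documented here)."""
    with urllib.request.urlopen(quality_api_url(category, owner, repo)) as resp:
        return json.load(resp)


print(quality_api_url("rag", "OpenBMB", "VisRAG"))
```

Within the free tier this can be called up to 100 times per day without a key.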
Higher-rated alternatives
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
illuin-tech/colpali
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
jolibrain/colette
Multimodal RAG to search and interact locally with technical documents of any kind
nannib/nbmultirag
A framework in Italian and English that lets you chat with your own documents via RAG,...
chiang-yuan/llamp
[EMNLP '25] A web app and Python API for a multi-modal RAG framework to ground LLMs on...