Hyper-RAG and RAGGuard
Hyper-RAG and RAGGuard address hallucinations at opposite ends of a RAG pipeline. Hyper-RAG works upstream, preventing hallucinations by improving retrieval quality through hypergraph-based ranking; RAGGuard works downstream, detecting and scoring hallucinations after generation. The two are complementary and could be chained sequentially in one pipeline.
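The sequential combination could look roughly like the sketch below. None of these function names come from either project's real API; `hyper_retrieve`, `generate`, and `faithfulness_score` are stand-ins for illustration only, with trivial token-overlap logic in place of the real models.

```python
# Hypothetical pipeline chaining a Hyper-RAG-style retriever (upstream) with
# a RAGGuard-style faithfulness check (downstream). All names and logic here
# are illustrative stand-ins, not either project's actual API.

def hyper_retrieve(query, corpus, k=2):
    """Stand-in retriever: rank passages by token overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus,
                  key=lambda p: len(q & set(p.lower().split())),
                  reverse=True)[:k]

def generate(query, passages):
    """Stand-in for an LLM call: echo the top passage as the answer."""
    return passages[0] if passages else ""

def faithfulness_score(answer, passages):
    """Stand-in scorer: fraction of answer tokens grounded in the passages."""
    answer_tokens = set(answer.lower().split())
    source_tokens = set(" ".join(passages).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & source_tokens) / len(answer_tokens)

def pipeline(query, corpus, threshold=0.8):
    passages = hyper_retrieve(query, corpus)      # upstream: better retrieval
    answer = generate(query, passages)            # generation
    score = faithfulness_score(answer, passages)  # downstream: verify grounding
    return answer, score, score >= threshold
```

An answer whose faithfulness score falls below the threshold could then be regenerated, flagged, or withheld, depending on the application.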
About Hyper-RAG
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation" by Yifan Feng, Hao Hu, Xingliang Hou, Shiquan Liu, Shihui Ying, Shaoyi Du, Han Hu, and Yue Gao.
Implements hypergraph-based knowledge modeling that captures both pairwise and high-order entity correlations from domain-specific corpora, backed by a native Hypergraph-DB store for efficient retrieval of higher-order relationships. A lightweight variant, Hyper-RAG-Lite, roughly doubles retrieval speed, and a web-based visualization UI supports hypergraph exploration and QA interaction. Multiple LLM providers are supported through configurable API endpoints, and the approach has been demonstrated on medical and general-domain datasets.
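The core idea of high-order correlations can be sketched in a few lines. This is not the project's Hypergraph-DB API, just a toy illustration: a hyperedge connects any number of entities at once, so a single edge can record a multi-entity relationship that pairwise edges would split apart.

```python
# Toy hypergraph, illustrating the idea only (not Hyper-RAG's Hypergraph-DB).
# Each hyperedge is a set of entities plus the text that relates them.

class Hypergraph:
    def __init__(self):
        self.hyperedges = []  # list of (frozenset of entities, payload text)

    def add(self, entities, text):
        self.hyperedges.append((frozenset(entities), text))

    def retrieve(self, query_entities, k=3):
        """Rank hyperedges by how many query entities they contain."""
        q = frozenset(query_entities)
        scored = [(len(q & ents), text)
                  for ents, text in self.hyperedges if q & ents]
        scored.sort(key=lambda pair: -pair[0])
        return [text for _, text in scored[:k]]

hg = Hypergraph()
# One high-order edge ties three entities into a single finding; a pairwise
# graph would need three separate edges and lose the joint context.
hg.add({"drug_a", "gene_x", "disease_y"},
       "Drug A inhibits gene X, slowing disease Y.")
hg.add({"drug_a", "dosage"}, "Drug A is dosed at 10 mg daily.")
```

Querying with `{"gene_x", "disease_y"}` surfaces the three-entity finding first, because that hyperedge covers more of the query than any pairwise edge could.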
About RAGGuard
MukundaKatta/RAGGuard
RAG hallucination detection: verifies that LLM responses are grounded in the source documents and assigns a faithfulness score.
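The general idea of faithfulness checking can also be applied per sentence, flagging the specific claims that lack support. The sketch below is a toy illustration of that idea using token overlap; it is not RAGGuard's actual method, and `flag_unsupported` and `min_overlap` are hypothetical names.

```python
# Toy sentence-level grounding check (illustrative only, not RAGGuard's
# method): flag each answer sentence as supported or not by the sources.

def _tokens(text):
    return set(w.strip(".,").lower() for w in text.split())

def flag_unsupported(answer, sources, min_overlap=0.5):
    """Return (sentence, supported) pairs based on token overlap."""
    source_tokens = _tokens(" ".join(sources))
    results = []
    for sentence in answer.split(". "):
        tokens = _tokens(sentence)
        overlap = len(tokens & source_tokens) / len(tokens) if tokens else 0.0
        results.append((sentence, overlap >= min_overlap))
    return results

sources = ["Mercury is the closest planet to the Sun."]
answer = "Mercury is the closest planet to the Sun. It is made of cheese."
```

Here the first sentence is marked supported and the second unsupported; a real detector would use entailment models or similar rather than raw token overlap.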