Tomsawyerhu/LRP4RAG
RAG hallucination detection with LRP.
This project helps machine learning engineers and researchers evaluate how reliably their Retrieval-Augmented Generation (RAG) models are performing. It takes your RAG model's outputs, analyzes them with Layer-wise Relevance Propagation (LRP), an explainability technique, and flags generated answers that may be hallucinated or incorrect. It is designed for teams building and deploying RAG systems who need to ensure accuracy and trustworthiness.
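For readers unfamiliar with LRP: it redistributes a network's output score backward through the layers, assigning each input a relevance value. The sketch below shows the standard epsilon rule on a tiny two-layer ReLU network in NumPy; the network, weights, and epsilon value are illustrative assumptions, not this repository's actual code, which applies the idea to LLM outputs in a RAG pipeline.

```python
import numpy as np

def lrp_epsilon(layers, x, eps=1e-6):
    """Minimal LRP epsilon-rule sketch (illustrative, not the repo's code).

    layers: list of (W, b) pairs; forward pass is ReLU(W @ a + b) for all
            layers except the last, which is linear.
    Returns a relevance score per input feature.
    """
    # Forward pass, storing the activation of every layer.
    activations = [np.asarray(x, dtype=float)]
    for i, (W, b) in enumerate(layers):
        z = W @ activations[-1] + b
        if i < len(layers) - 1:
            z = np.maximum(z, 0.0)  # ReLU
        activations.append(z)

    # Start with the network output as its own relevance.
    R = activations[-1]
    # Backward pass, epsilon rule: R_j = a_j * sum_k w_kj * R_k / (z_k + eps).
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = W @ a + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer
        s = R / z
        R = a * (W.T @ s)
    return R

# Toy demo with zero biases, so relevance is (approximately) conserved:
# the input relevances sum to roughly the network's output score.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((1, 4))
layers = [(W1, np.zeros(4)), (W2, np.zeros(1))]
x = np.array([1.0, -2.0, 3.0])
R = lrp_epsilon(layers, x)
```

In the RAG setting, the same backward redistribution is run from the generated answer toward the retrieved context; answers whose relevance is weakly grounded in the retrieved passages are candidate hallucinations.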
No commits in the last 6 months.
Use this if you are developing or managing RAG systems and need a robust method to detect when your models are generating false or unsupported information.
Not ideal if you are looking for a general-purpose explainability tool for any deep learning model, or if your primary concern is not RAG-specific hallucination detection.
Stars: 11
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Tomsawyerhu/LRP4RAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
onestardao/WFGY
WFGY: open-source reasoning and debugging infrastructure for RAG and AI agents. Includes the...
KRLabsOrg/verbatim-rag
Hallucination-prevention RAG system with verbatim span extraction. Ensures all generated content...
iMoonLab/Hyper-RAG
"Hyper-RAG: Combating LLM Hallucinations using Hypergraph-Driven Retrieval-Augmented Generation"...
frmoretto/clarity-gate
Stop LLMs from hallucinating your guesses as facts. Clarity Gate is a verification protocol for...
project-miracl/nomiracl
NoMIRACL: A multilingual hallucination evaluation dataset to evaluate LLM robustness in RAG...