Tomsawyerhu/LRP4RAG

RAG hallucination detection via Layer-wise Relevance Propagation (LRP).

13 / 100 (Experimental)

This project helps machine learning engineers and researchers evaluate how reliably their Retrieval-Augmented Generation (RAG) models perform. It takes the outputs of your RAG model, analyzes them with an explainability technique called Layer-wise Relevance Propagation (LRP), and flags generated answers that may be hallucinations, i.e. incorrect or unsupported by the retrieved context. It is designed for those building and deploying RAG systems who need to ensure accuracy and trustworthiness.
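To make the idea concrete, here is a minimal, hypothetical sketch of the general approach: if an LRP backward pass attributes little of an answer's relevance to the retrieved context, the answer is likely unsupported. The function names, the 0.3 threshold, and the decision rule are illustrative assumptions, not the repository's actual implementation.

```python
# Hypothetical sketch: flag a RAG answer as a possible hallucination when the
# LRP relevance mass flowing from the retrieved context to the answer tokens
# is low. Names, threshold, and decision rule are assumptions for illustration,
# not the repository's actual method.
from typing import Sequence


def context_support_ratio(
    relevance_to_context: Sequence[float],  # per-answer-token relevance attributed to context tokens
    relevance_total: Sequence[float],       # per-answer-token total relevance (context + query + prior tokens)
) -> float:
    """Fraction of the answer's total relevance that comes from the retrieved context."""
    total = sum(relevance_total)
    if total == 0.0:
        return 0.0
    return sum(relevance_to_context) / total


def looks_hallucinated(
    relevance_to_context: Sequence[float],
    relevance_total: Sequence[float],
    threshold: float = 0.3,  # assumed cut-off; would need tuning on labeled data
) -> bool:
    """Heuristic decision: low context support suggests the answer is unsupported."""
    return context_support_ratio(relevance_to_context, relevance_total) < threshold


if __name__ == "__main__":
    # Toy per-token relevance values (e.g., produced by an LRP backward pass).
    grounded = looks_hallucinated([0.8, 0.7, 0.9], [1.0, 1.0, 1.0])
    ungrounded = looks_hallucinated([0.05, 0.1, 0.0], [1.0, 1.0, 1.0])
    print(f"well-grounded answer flagged: {grounded}")      # False
    print(f"poorly-grounded answer flagged: {ungrounded}")  # True
```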

No commits in the last 6 months.

Use this if you are developing or managing RAG systems and need a robust method to detect when your models are generating false or unsupported information.

Not ideal if you are looking for a general-purpose explainability tool for any deep learning model, or if your primary concern is not RAG-specific hallucination detection.

AI-Trustworthiness RAG-Evaluation LLM-Quality-Assurance Natural-Language-Generation Machine-Learning-Operations
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated? The overall score is the sum of the four 25-point categories: Maintenance (0) + Adoption (5) + Maturity (8) + Community (0) = 13 / 100.

Stars: 11
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Mar 31, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Tomsawyerhu/LRP4RAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
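For programmatic use, a minimal sketch of the same request using only the Python standard library is shown below; the endpoint URL comes from the card above, and the response structure is printed as returned rather than assumed.

```python
# Minimal sketch: fetch the quality data shown above via the public endpoint.
# Only the standard library is used; the response schema is not assumed.
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/rag/Tomsawyerhu/LRP4RAG"

with urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Pretty-print whatever fields the API returns.
print(json.dumps(data, indent=2))
```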