141forever/UncerSema4HalluDetec

This is the repository for the paper 'Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection' (AAAI 2025).

Score: 20 / 100 (Experimental)

This project helps you identify when Large Language Models (LLMs) generate non-factual or unfaithful information, often called "hallucinations." It takes text generated by an LLM as input and determines the likelihood of hallucination at the token, sentence, and passage levels. This is useful for anyone relying on LLM outputs for critical tasks, such as content creators, researchers, or data analysts, who need to ensure the accuracy of generated text.
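
As an illustration only (this page does not document the repository's actual API), a detector with the behavior described above might expose an interface like the following Python sketch; the class and method names are hypothetical, not taken from the repository.

# Hypothetical sketch of a hallucination-detection interface. The names
# HallucinationDetector, HallucinationScores, and score() are illustrative
# and are not taken from the repository.
from dataclasses import dataclass
from typing import List

@dataclass
class HallucinationScores:
    token_scores: List[float]      # per-token hallucination likelihood in [0, 1]
    sentence_scores: List[float]   # per-sentence likelihood
    passage_score: float           # single likelihood for the whole passage

class HallucinationDetector:
    def score(self, generated_text: str) -> HallucinationScores:
        """Return hallucination likelihoods at token, sentence, and passage level."""
        raise NotImplementedError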

No commits in the last 6 months.

Use this if you need to systematically assess and reduce the risk of misinformation from LLM-generated content in your applications.

Not ideal if you are looking for a tool to generate text or improve the factual accuracy of an LLM through direct training.

Tags: AI content verification, LLM output validation, fact-checking, natural language processing, content quality assurance
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: None
Last pushed: Apr 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/141forever/UncerSema4HalluDetec"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
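
For programmatic access, a minimal Python equivalent of the curl command above could look like this sketch; it assumes the endpoint returns JSON and that the requests library is installed.

import requests

# Fetch the quality data shown above from the public API endpoint.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/141forever/UncerSema4HalluDetec"

response = requests.get(URL, timeout=30)
response.raise_for_status()   # fail loudly on HTTP errors (e.g. rate limiting)
data = response.json()        # assumes a JSON response body
print(data)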