othmanelhoufi/LM-for-FactChecking

An automated fact-checking solution that fine-tunes recent state-of-the-art NLP language models (BERT, RoBERTa, XLNet, ConvBERT, ...) on available claim and fake-news datasets in order to classify unseen claims.

Quality score: 20 / 100 (Experimental)

Leverages Hugging Face Transformers to enable flexible model selection and fine-tuning across five annotated claim datasets (FEVER, MultiFC, Liar, COVID-19, ANTiVax), with hyperparameters configurable via a command-line interface and config files. Integrates Weights & Biases for real-time training-metric visualization and experiment tracking. Achieves up to 98% accuracy on domain-specific datasets through transfer learning, with an extensible architecture supporting custom datasets and additional transformer models.
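The core workflow the description outlines (pick a transformer, fine-tune it as a claim classifier) can be sketched with the Hugging Face API. This is a minimal illustration, not the repo's actual training script: it uses a tiny randomly initialized BERT config instead of pretrained weights so it runs offline, and the batch of "tokenized claims" and the two labels are invented for the example.

```python
# Minimal sketch of a transformer claim classifier with Hugging Face Transformers.
# Assumes only that `transformers` and `torch` are installed; a real run would
# load pretrained weights, e.g. BertForSequenceClassification.from_pretrained(...).
import torch
from transformers import BertConfig, BertForSequenceClassification

# Tiny config so the example is fast and needs no download.
config = BertConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=2,  # e.g. SUPPORTED vs. REFUTED (labels invented for illustration)
)
model = BertForSequenceClassification(config)

# Fake batch of tokenized claims: 4 claims, 16 token ids each.
input_ids = torch.randint(0, 1000, (4, 16))
attention_mask = torch.ones_like(input_ids)
labels = torch.tensor([0, 1, 0, 1])

# One fine-tuning step: forward pass, loss, gradient update.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
outputs.loss.backward()
optimizer.step()

print(outputs.logits.shape)  # one score per label for each claim
```

Swapping in RoBERTa, XLNet, or ConvBERT follows the same pattern via their respective `*ForSequenceClassification` classes, which is what makes the model selection in such a pipeline configurable.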

No commits in the last 6 months.

Badges: Stale (6m) · No Package · No Dependents

Maintenance 0 / 25 · Adoption 5 / 25 · Maturity 9 / 25 · Community 6 / 25


Stars: 13
Forks: 1
Language: Python
License: MIT
Last pushed: Aug 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/othmanelhoufi/LM-for-FactChecking"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.