ilias-ant/toxic-spans-detection

An attempt at SemEval 2021 Task 5: Toxic Spans Detection.

Score: 31 / 100 (Emerging)

This project helps online community managers and content moderators identify the specific phrases or segments within user posts that make them toxic. You provide it with a piece of text, and it highlights the exact parts contributing to its toxicity. It's designed for anyone responsible for maintaining respectful and safe online discussions.

No commits in the last 6 months.

Use this if you need to quickly pinpoint and remove offensive language from user-generated content, rather than just knowing if a post is toxic overall.

Not ideal if you're looking for a simple 'toxic or not toxic' classifier for entire posts, or if you need to detect subtle nuances of hate speech beyond clearly defined toxic spans.
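For context on what "spans" means here: SemEval-2021 Task 5 represents a post's toxic spans as a list of character offsets into the text, and systems are scored with a character-level F1 per post. Below is a minimal sketch of working with that representation; the helper names (`spans_to_text`, `span_f1`) are illustrative, not part of this repository's API.

```python
def spans_to_text(text, offsets):
    """Group a sorted list of character offsets into the toxic substrings they cover."""
    chunks, current = [], []
    for i in offsets:
        if current and i == current[-1] + 1:
            current.append(i)  # extend the contiguous run
        else:
            if current:
                chunks.append("".join(text[j] for j in current))
            current = [i]  # start a new run
    if current:
        chunks.append("".join(text[j] for j in current))
    return chunks


def span_f1(pred, gold):
    """Character-offset F1 in the style of the SemEval-2021 Task 5 per-post metric."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0  # both empty: perfect agreement
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

For example, on the post `"you are a stupid idiot"`, the gold offsets 10-15 and 17-21 decode back to the spans `["stupid", "idiot"]`, and a prediction covering only part of a span is rewarded proportionally by the F1.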

content-moderation community-management online-safety social-media-management text-analysis
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 3 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 4
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 26, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ilias-ant/toxic-spans-detection"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
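If you'd rather consume the endpoint from code than via curl, a minimal sketch using only the standard library is below. The URL path shape is taken from the curl example above; the JSON response schema is not documented here, so `fetch_quality` simply returns the decoded payload as-is.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category, owner, repo):
    """Build the quality-endpoint URL (path shape taken from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the JSON payload; no assumptions about its fields."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For instance, `fetch_quality("nlp", "ilias-ant", "toxic-spans-detection")` requests the same URL as the curl command shown above.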