rafelps/HLE-UPC-SemEval-2021-ToxicSpansDetection

HLE-UPC at SemEval-2021 Task 5: Toxic Spans Detection

Quality score: 12 / 100 (Experimental)

This project helps content moderators and online community managers identify specific phrases or words within text that contribute to its toxicity. You provide a text input, and the tool highlights the exact 'toxic spans' that make the content harmful. It's designed for anyone needing to pinpoint and address toxicity in written online content.

No commits in the last 6 months.

Use this if you need to precisely locate and understand which parts of a sentence or message are considered toxic.

Not ideal if you only need a general classification of whether an entire text is toxic or not.
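To illustrate what "toxic spans" means in practice: SemEval-2021 Task 5 represents each span as a list of character offsets into the text. A minimal sketch, assuming that offset-based format (the helper name and example text below are hypothetical, not from this repository):

```python
def extract_toxic_spans(text, offsets):
    """Group consecutive character offsets into (start, end, substring) spans.

    Hypothetical helper: groups sorted offsets into contiguous runs and
    returns the substring each run covers.
    """
    runs = []
    for off in sorted(offsets):
        if runs and off == runs[-1][1]:
            runs[-1][1] = off + 1  # extend the current contiguous run
        else:
            runs.append([off, off + 1])  # start a new run
    return [(start, end, text[start:end]) for start, end in runs]


text = "You are a stupid example"
offsets = list(range(10, 16))  # offsets covering the word "stupid"
print(extract_toxic_spans(text, offsets))  # → [(10, 16, 'stupid')]
```

A model for this task predicts the offset list; the grouping step above is only the final presentation pass that turns offsets into highlightable substrings.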

Tags: content-moderation, online-safety, community-management, hate-speech-detection, text-analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Python
License:
Last pushed: Nov 26, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rafelps/HLE-UPC-SemEval-2021-ToxicSpansDetection"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.