unitaryai/detoxify
Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
Provides three distinct model variants—`original`, `unbiased`, and `multilingual`—each optimized for different toxicity detection scenarios, with lightweight ALBERT-based alternatives for resource-constrained deployments. Leverages transformer-based architectures with bias-aware training on aggregated annotator judgments, supporting multi-label classification across toxicity subtypes (obscenity, threats, identity attacks, etc.) and identity mentions. Exposes predictions via a simple Python API returning per-category confidence scores and supports inference across seven languages with per-language performance metrics.
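Since predictions come back as a plain dict of per-category confidence scores, downstream filtering is straightforward. A minimal sketch of turning those scores into multi-label toxicity flags, assuming output shaped like the package's `predict` call; the scores below are illustrative values, not real model output, and the 0.5 threshold is an assumption to tune per application:

```python
# With the package installed, scores would come from:
#   from detoxify import Detoxify
#   scores = Detoxify("original").predict("some comment text")
# Illustrative per-category confidence scores in that shape:
scores = {
    "toxicity": 0.91,
    "severe_toxicity": 0.04,
    "obscene": 0.72,
    "threat": 0.01,
    "insult": 0.65,
    "identity_attack": 0.02,
}

THRESHOLD = 0.5  # assumed cutoff; tune per application

def flag_labels(scores: dict, threshold: float = THRESHOLD) -> list:
    """Return the toxicity subtypes whose confidence meets the threshold."""
    return sorted(label for label, p in scores.items() if p >= threshold)

print(flag_labels(scores))  # → ['insult', 'obscene', 'toxicity']
```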
1,202 stars and 94,691 monthly downloads. Used by 4 other packages. Actively maintained with 2 commits in the last 30 days. Available on PyPI.
Stars: 1,202
Forks: 141
Language: Python
License: Apache-2.0
Last pushed: Jan 05, 2026
Monthly downloads: 94,691
Commits (30d): 2
Dependencies: 3
Reverse dependents: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/unitaryai/detoxify"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000 requests/day.
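The same endpoint can be called from Python with the standard library. A small sketch that builds the URL from the path pattern shown in the curl example above; `quality_url` is a hypothetical helper, not part of the API, and the fetch is left commented out to respect the anonymous rate limit:

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example; everything after it is owner/repo.
BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a given package."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("unitaryai", "detoxify")
print(url)

# Anonymous access is limited to 100 requests/day; uncomment to fetch:
# data = json.loads(urlopen(url).read().decode("utf-8"))
```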
Related tools
kensk8er/chicksexer
A Python package for gender classification.
Infinitode/ValX
ValX is an open-source Python package for text cleaning tasks, including profanity detection and...
PavelOstyakov/toxic
Toxic Comment Classification Challenge
minerva-ml/open-solution-toxic-comments
Open solution to the Toxic Comment Classification Challenge
IBM/MAX-Toxic-Comment-Classifier
Detect 6 types of toxicity in user comments.