detoxify vs. MAX-Toxic-Comment-Classifier

These projects are competitors offering alternative approaches to the same task: both detect toxic comments in text. Detoxify provides pre-trained models covering multiple toxicity subtypes and has broader adoption, while MAX-Toxic-Comment-Classifier packages a containerized REST API service built on IBM's Model Asset eXchange framework, suiting different deployment preferences.

|                | detoxify       | MAX-Toxic-Comment-Classifier      |
|----------------|----------------|-----------------------------------|
| Score          | 83 (Verified)  |                                   |
| Maintenance    | 13/25          | 2/25                              |
| Adoption       | 24/25          | 8/25                              |
| Maturity       | 25/25          | 16/25                             |
| Community      | 21/25          | 20/25                             |
| Stars          | 1,202          | 56                                |
| Forks          | 141            | 31                                |
| Downloads      | 94,691         |                                   |
| Commits (30d)  | 2              | 0                                 |
| Language       | Python         | Python                            |
| License        | Apache-2.0     | Apache-2.0                        |
| Risk flags     | No risk flags  | Stale 6m, No Package, No Dependents |

About detoxify

unitaryai/detoxify

Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.

Provides three distinct model variants—`original`, `unbiased`, and `multilingual`—each optimized for different toxicity detection scenarios, with lightweight ALBERT-based alternatives for resource-constrained deployments. Leverages transformer-based architectures with bias-aware training on aggregated annotator judgments, supporting multi-label classification across toxicity subtypes (obscenity, threats, identity attacks, etc.) and identity mentions. Exposes predictions via a simple Python API returning per-category confidence scores and supports inference across seven languages with per-language performance metrics.
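The per-category output described above can be sketched as follows. The real call (`Detoxify("original").predict(...)`, per the project README) downloads model weights, so it is shown commented out; the score dict below is made-up illustrative data, not actual model output.

```python
# Illustrative sketch of Detoxify's multi-label output. The real API call
# (requires `pip install detoxify` and a model download) looks like:
#
#   from detoxify import Detoxify
#   scores = Detoxify("original").predict("some user comment")
#
# predict() returns a dict of per-category confidence scores. The values
# below are invented for illustration only:
scores = {
    "toxicity": 0.72,
    "severe_toxicity": 0.02,
    "obscene": 0.41,
    "threat": 0.01,
    "insult": 0.55,
    "identity_attack": 0.03,
}

# Multi-label classification: each subtype is thresholded independently,
# so a comment can be flagged for several categories at once.
THRESHOLD = 0.5
flagged = [label for label, score in scores.items() if score >= THRESHOLD]
print(flagged)  # -> ['toxicity', 'insult']
```

Because the categories are independent labels rather than mutually exclusive classes, downstream code typically applies a per-category threshold like the one above rather than an argmax.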

About MAX-Toxic-Comment-Classifier

IBM/MAX-Toxic-Comment-Classifier

Detect 6 types of toxicity in user comments.
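Since MAX models are consumed as a containerized REST service, a minimal client sketch is shown below. It assumes the Docker container is running locally on port 5000 and that the endpoint follows the usual Model Asset eXchange convention of `POST /model/predict` with a JSON body; adjust host, port, and path for your deployment.

```python
# Minimal client sketch for the MAX-Toxic-Comment-Classifier REST API.
# Assumes the container is running locally and exposes the conventional
# MAX endpoint POST /model/predict (verify against your deployment).
import json
import urllib.request


def classify(texts, url="http://localhost:5000/model/predict"):
    """POST a list of comment strings; return the parsed JSON response
    containing per-text predictions for the six toxicity types."""
    payload = json.dumps({"text": texts}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires the container to be running):
# result = classify(["I would like to report this comment."])
```

This stands in contrast to Detoxify's in-process Python API: here the model runs in its own container, so the client needs only an HTTP library and no ML dependencies.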

Scores updated daily from GitHub, PyPI, and npm data.