charliegerard/safe-space
GitHub Action that checks the toxicity level of comments and PR reviews to help make repos safe spaces.
Leverages TensorFlow.js's pre-trained toxicity classification model to analyze comments and PR reviews in real time, with a configurable toxicity threshold (0-1 range) to accommodate different community standards. Triggers automatically on `issue_comment` and `pull_request_review` GitHub events, posting an automated warning so authors can edit their comment before it spreads further. Integrates directly into GitHub Actions workflows with minimal setup, requiring only a `GITHUB_TOKEN` and optional custom messaging parameters.
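A minimal workflow sketch based on the description above. The action reference (`charliegerard/safe-space@master`) and the input names (`message`, `toxicity_threshold`) are assumptions inferred from the summary; check the repo's README for the exact names.

```yaml
# .github/workflows/safe-space.yml — a sketch, not the canonical setup.
name: Safe space
on: [issue_comment, pull_request_review]

jobs:
  toxicity-check:
    runs-on: ubuntu-latest
    steps:
      - uses: charliegerard/safe-space@master
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # Optional, per the description: custom warning message and a
          # toxicity threshold in the 0-1 range (input names assumed).
          message: "Please keep the conversation friendly!"
          toxicity_threshold: 0.9
```

The `GITHUB_TOKEN` secret is provided automatically by GitHub Actions, so no extra credentials need to be configured for a basic setup.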
472 stars. No commits in the last 6 months.
Stars
472
Forks
11
Language
JavaScript
License
GPL-3.0
Category
Last pushed
Jun 24, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charliegerard/safe-space"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
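For scripted access, the endpoint URL can be built from the category, owner, and repo name. The path layout here is inferred from the single example URL above, and the response schema is not documented on this page, so both are assumptions.

```python
# Build the quality-API URL for a given repo (path layout inferred from
# the example endpoint above; treat it as an assumption).
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for a repo's quality data."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("ml-frameworks", "charliegerard", "safe-space"))
```

Fetching the URL (e.g. with `curl` or `urllib.request`) is subject to the rate limits above: 100 requests/day without a key, 1,000/day with a free key.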
Higher-rated alternatives
zake7749/DeepToxic
Top 1% solution to the toxic comment classification challenge on Kaggle.
aralroca/react-text-toxicity
Detect text toxicity in a simple way, using React. Based on a Keras model, loaded with TensorFlow.js.
jaydeepjethwa/DeTox
A web app to identify toxic comments in a YouTube channel and delete them.
DenisIndenbom/AntiToxicBot
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning...
bensonruan/Toxic-Comment-Classifier
Toxic-Comment-Classifier