charliegerard/safe-space

GitHub Action that checks the toxicity level of comments and PR reviews to help make repos safe spaces.

Score: 34 / 100 (Emerging)

Leverages TensorFlow.js's pre-trained toxicity classification model to analyze comments and PR reviews in real time, with a configurable toxicity threshold (0–1 range) to accommodate different community standards. Triggers automatically on `issue_comment` and `pull_request_review` GitHub events, posting an automated warning so the author can edit the comment before it spreads further. Integrates directly into GitHub Actions workflows with minimal setup, requiring only a `GITHUB_TOKEN` and optional custom messaging parameters.
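A minimal workflow sketch of that setup, assuming typical Action input names (`message`, `toxicity_threshold`); these names are illustrative, so verify the exact parameters against the repository's `action.yml`:

```yaml
# .github/workflows/safe-space.yml (hypothetical example)
name: Safe space
on: [issue_comment, pull_request_review]

jobs:
  check-toxicity:
    runs-on: ubuntu-latest
    steps:
      - uses: charliegerard/safe-space@master
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # Assumed inputs; check the action's documentation for exact names
          message: "Please consider rewording this comment."
          toxicity_threshold: 0.7
```

With a threshold of 0.7, the action would only warn when the model's toxicity confidence for a comment exceeds 70%.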

472 stars. No commits in the last 6 months.

Status: Stale (6 months) · No Package · No Dependents

- Maintenance: 0 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 8 / 25


- Stars: 472
- Forks: 11
- Language: JavaScript
- License: GPL-3.0
- Last pushed: Jun 24, 2021
- Commits (30d): 0

Get this data via API:

```sh
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charliegerard/safe-space"
```

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.