charmlab/recourse_benchmarks
A package for computing and displaying benchmarking results of algorithmic recourse and counterfactual explanation algorithms
Stars: 8
Forks: 6
Language: Python
License: MIT
Last pushed: Feb 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/charmlab/recourse_benchmarks"
Open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
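The same request can be issued programmatically; below is a minimal Python sketch using the requests library against the URL shown above. The response schema is not documented on this page, so the script simply prints whatever JSON the endpoint returns, and no API key is assumed (the anonymous 100 requests/day tier).

import requests

# Quality-metrics endpoint for this repository (URL copied from the curl example above).
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/charmlab/recourse_benchmarks"
)

# Anonymous access: no key header is sent. The shape of the returned JSON
# (e.g. stars, forks, last-push date) is an assumption based on the stats listed above.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())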
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
understandable-machine-intelligence-lab/Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...