shapash and awesome-machine-learning-interpretability

shapash
  Maintenance 13/25
  Adoption 20/25
  Maturity 25/25
  Community 21/25
  Stars: 3,150
  Forks: 373
  Downloads: 8,219
  Commits (30d): 3
  Language: Jupyter Notebook
  License: Apache-2.0

awesome-machine-learning-interpretability
  Maintenance 20/25
  Adoption 10/25
  Maturity 16/25
  Community 23/25
  Stars: 3,996
  Forks: 623
  Downloads: n/a
  Commits (30d): 10
  Language: n/a
  License: CC0-1.0

No risk flags
No package; no dependents

About shapash

MAIF/shapash

🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

About awesome-machine-learning-interpretability

jphall663/awesome-machine-learning-interpretability

A curated list of awesome responsible machine learning resources.

Organizes resources across interpretability, fairness, governance, and safety, spanning technical tools (Python/R/JavaScript packages, benchmarks, datasets), policy frameworks, incident documentation, and critical perspectives on AI's environmental and social costs. The collection bridges academic research, regulatory guidance, and real-world AI failure case studies, enabling practitioners to implement responsible ML across the full development lifecycle. It is actively maintained, with structured categories for education, auditing checklists, red-teaming resources, and emerging topics such as generative AI explainability and agentic AI governance.

Scores updated daily from GitHub, PyPI, and npm data.