Trustworthy-ML-Lab/corrupting_neuron_explanations
[ICCV 23] Evaluating robustness of neuron explanation methods
No commits in the last 6 months.
Stars: 4
Forks: 1
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/corrupting_neuron_explanations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
namkoong-lab/dro
A package of distributionally robust optimization (DRO) methods. Implemented via cvxpy and PyTorch
THUDM/grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for...
neu-autonomy/nfl_veripy
Formal Verification of Neural Feedback Loops (NFLs)
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Daily updating...
ADA-research/VERONA
A lightweight Python package for setting up robustness experiments and computing robustness...