pytorch-grad-cam and cnn_explainer

Both projects take overlapping, gradient-based approaches to CNN interpretability, but they are not close competitors: pytorch-grad-cam is significantly more mature and feature-complete, with support for modern architectures such as Vision Transformers, while cnn_explainer appears to be an abandoned educational project.

|                | pytorch-grad-cam | cnn_explainer                       |
|----------------|------------------|-------------------------------------|
| Overall score  | 72 (Verified)    | 23 (Experimental)                   |
| Maintenance    | 2/25             | 0/25                                |
| Adoption       | 23/25            | 6/25                                |
| Maturity       | 25/25            | 9/25                                |
| Community      | 22/25            | 8/25                                |
| Stars          | 12,682           | 19                                  |
| Forks          | 1,694            | 2                                   |
| Downloads      | 58,294           |                                     |
| Commits (30d)  | 0                | 0                                   |
| Language       | Python           | Jupyter Notebook                    |
| License        | MIT              | MIT                                 |
| Flags          | Stale 6m         | Stale 6m, No Package, No Dependents |

About pytorch-grad-cam

jacobgil/pytorch-grad-cam

Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.

Implements 16+ attribution methods, ranging from gradient-based approaches (GradCAM, GradCAM++) to perturbation-based techniques (AblationCAM, ScoreCAM), with batched inference for high performance. Built on PyTorch, it supports explainability across diverse architectures including CNNs, Vision Transformers, and multimodal models like CLIP, and includes built-in metrics and smoothing algorithms to validate and refine explanation quality. It also covers medical imaging and embedding-similarity tasks, and provides deep feature factorization for interpretable representation analysis.
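The gradient-based methods in that family all build on the same core computation. As a minimal illustration (a sketch of the original Grad-CAM formula, not the library's actual implementation), the heatmap can be computed in a few lines of NumPy: global-average-pool the gradients to get per-channel weights, take a weighted sum of the feature maps, then apply ReLU.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Sketch of the Grad-CAM heatmap computation.

    activations: (K, H, W) feature maps from the target conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    # Channel weights: global-average-pool the gradients (alpha_k in the paper)
    weights = gradients.mean(axis=(1, 2))                                # (K,)
    # Weighted sum over channels, then ReLU to keep positive evidence only
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In the library itself this bookkeeping (forward/backward hooks, batching, smoothing) is handled for you; the sketch only shows why a single backward pass per class is enough to produce a coarse localization map.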

About cnn_explainer

gsurma/cnn_explainer

Making CNNs interpretable.

Scores updated daily from GitHub, PyPI, and npm data.