tensorflow/lucid
A collection of infrastructure and tools for research in neural network interpretability.
Archived

Technical Summary
Provides feature visualization and attribution techniques via differentiable optimization of image parameterizations, enabling direct visualization of learned neural network representations. Includes a model zoo of 27 pre-imported vision models with a consistent API, plus spatial and channel attribution methods for understanding neuron interactions. Built on TensorFlow 1.x, with Jupyter notebooks as the primary interface; integrates pre-trained models from standard computer vision benchmarks.
4,703 stars. No commits in the last 6 months.
Stars: 4,703
Forks: 652
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 06, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tensorflow/lucid"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
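The same endpoint can be called from Python with the standard library. This is a minimal sketch assuming the endpoint returns JSON; the response schema and any API-key mechanism are assumptions, not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repository quality endpoint URL.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    # Assumption: the endpoint returns a JSON object.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (network required):
# data = fetch_quality("ml-frameworks", "tensorflow", "lucid")
```

Keeping the URL construction separate from the fetch makes the helper easy to test offline and to reuse for other repositories in the same category.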
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
interpretml/interpret
Fit interpretable models. Explain blackbox machine learning.
understandable-machine-intelligence-lab/Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
SeldonIO/alibi
Algorithms for explaining machine learning models