jalammar/ecco
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks that explain the behavior of Transformer-based language models (such as GPT-2, BERT, RoBERTa, T5, and T0).
Ecco integrates multiple feature attribution techniques (Integrated Gradients, Saliency, DeepLift, and LRP via Captum) to identify which input tokens influence model predictions, alongside neuron activation analysis using NMF and representation similarity metrics (SVCCA, PWCCA, CKA) to compare internal layer representations. Built on PyTorch and Hugging Face Transformers, it supports locally stored custom models and tracks token probabilities layer by layer through logit lens visualizations, mapping how the model's confidence in its predictions evolves across transformer blocks.
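One of the similarity metrics named above, linear CKA (Centered Kernel Alignment), is simple enough to sketch from scratch. The snippet below is a minimal NumPy implementation of the standard linear-CKA formula for comparing two layers' activation matrices; it is an illustrative sketch, not Ecco's own code, and the function name is my own.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices,
    each of shape (n_examples, n_features). Returns a value in
    [0, 1]; 1 means the representations are identical up to an
    orthogonal transformation and isotropic scaling."""
    # Center each feature (column) so means do not inflate similarity.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 is the (biased) HSIC estimator for linear kernels.
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    # Normalize by the self-similarity of each representation.
    return hsic / (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))
```

A useful property for interpretability work: the score is invariant to orthogonal rotations and uniform rescaling of either representation, so two layers that encode the same information in different bases still score 1.0.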
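The logit lens idea mentioned above can also be shown in miniature: take each layer's hidden state and project it through the model's unembedding matrix to get a per-layer distribution over the vocabulary. The toy NumPy sketch below assumes you already have per-layer hidden-state vectors and a shared unembedding matrix; it is not Ecco's API, just the underlying computation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logit_lens(layer_hidden, unembed):
    """Project each layer's hidden state (d_model,) through the
    unembedding matrix (d_model, vocab_size), giving one token
    probability distribution per layer. Inspecting these shows
    how the model's next-token 'guess' sharpens across blocks."""
    return np.stack([softmax(h @ unembed) for h in layer_hidden])
```

In a real model the hidden states would come from the transformer's intermediate layers (e.g. via `output_hidden_states=True` in Hugging Face Transformers) and `unembed` would be the LM head's weight matrix.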
2,088 stars and 221 monthly downloads. No commits in the last 6 months. Available on PyPI.
Stars
2,088
Forks
177
Language
Jupyter Notebook
License
BSD-3-Clause
Last pushed
Aug 15, 2024
Monthly downloads
221
Commits (30d)
0
Dependencies
5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jalammar/ecco"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
Related tools
rmovva/HypotheSAEs
HypotheSAEs: hypothesizing interpretable relationships in text datasets using sparse...
interpretml/interpret-text
A library that incorporates state-of-the-art explainers for text-based machine learning models...
fdalvi/NeuroX
A Python library that encapsulates various methods for neuron interpretation and analysis in...
MultiplEYE-COST/wg1-experiment-implementation
In this repository we keep the code for the implementation of the eye-tracking experiment for...
alexdyysp/ESIM-pytorch
China Collegiate Computing Contest: Big Data Challenge