jalammar/ecco

Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (such as GPT-2, BERT, RoBERTa, T5, and T0).

Score: 52 / 100 (Established)

Ecco integrates multiple feature attribution techniques (IntegratedGradients, Saliency, DeepLift, LRP via Captum) to identify which input tokens influence model predictions, alongside neuron activation analysis using NMF and CCA-based similarity metrics (SVCCA, PWCCA, CKA) to understand internal layer representations. Built on PyTorch and Hugging Face Transformers, it supports local custom models and enables layer-by-layer token probability tracking through logit lens visualizations to map how confidence in predictions evolves across transformer blocks.

2,088 stars and 221 monthly downloads. No commits in the last 6 months. Available on PyPI.

Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 15 / 25
Maturity: 18 / 25
Community: 19 / 25


Stars: 2,088
Forks: 177
Language: Jupyter Notebook
License: BSD-3-Clause
Last pushed: Aug 15, 2024
Monthly downloads: 221
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jalammar/ecco"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.