santiag0m/traveling-words

Code repository for the paper "Traveling Words: A Geometric Interpretation of Transformers"

Quality score: 13 / 100 (Experimental)

This project helps machine learning researchers understand how large language models (LLMs) like GPT-2 process text. By inputting a phrase or set of words, you can visualize how the model's internal representation of those words changes layer by layer. This reveals insights into the underlying geometric mechanisms of transformer networks, making it useful for those studying LLM behavior and interpretability.
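
As an illustration of the layer-by-layer idea, here is a minimal logit-lens style sketch using the Hugging Face transformers library. This is not the repository's own visualization code, and the input phrase is arbitrary:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tokenizer("Traveling words move through layers", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states holds the embedding output plus one tensor per transformer block
for layer, hidden in enumerate(outputs.hidden_states):
    normed = model.transformer.ln_f(hidden)  # apply the final layer norm (logit-lens trick)
    logits = model.lm_head(normed)           # project onto the vocabulary
    nearest = logits.argmax(dim=-1)[0]       # nearest vocab token per position
    print(f"layer {layer:2d}:", tokenizer.decode(nearest))

Each printed line shows which vocabulary token every position's hidden state is closest to at that depth, which is one way to trace how words "travel" through the network.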

No commits in the last 6 months.

Use this if you are a researcher aiming to gain a deeper, geometric understanding of how transformer models process and transform word representations internally.

Not ideal if you are looking to train or fine-tune an LLM, or if you need to integrate LLM capabilities into an application.

Tags: LLM-interpretability, NLP-research, transformer-analysis, computational-linguistics, neural-network-mechanisms
No License · Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25

Stars: 9
Forks:
Language: Python
License: none
Last pushed: Dec 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/santiag0m/traveling-words"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
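
The same data can be fetched from Python. A minimal sketch, assuming the endpoint returns a JSON body (the response fields and the key-based authentication mechanism are not documented here, so only the anonymous call is shown):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/santiag0m/traveling-words"
response = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
response.raise_for_status()
print(response.json())  # assumes a JSON body; inspect it to see the available fields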