santiag0m/traveling-words
Code repository for the paper "Traveling Words: A Geometric Interpretation of Transformers"
This project helps researchers understand how large language models (LLMs) such as GPT-2 process text. Given a phrase or set of words, it visualizes how the model's internal representation of those words changes layer by layer, offering insight into the geometric mechanisms underlying transformer networks and making it useful for work on LLM behavior and interpretability.
No commits in the last 6 months.
Use this if you are a researcher aiming to gain a deeper, geometric understanding of how transformer models process and transform word representations internally.
Not ideal if you are looking to train or fine-tune an LLM, or if you need to integrate LLM capabilities into an application.
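As a rough illustration of the layer-by-layer idea described above (not this repository's code), the sketch below uses the Hugging Face transformers library to collect GPT-2's per-layer hidden states for a phrase and project each one onto the vocabulary, a "logit lens"-style readout. The model choice and the readout are assumptions made for illustration only.

# Illustrative sketch, NOT this repo's code: watch a phrase's
# representation drift toward output tokens across GPT-2's layers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states holds 13 tensors of shape (1, seq_len, 768) for
# GPT-2 small: the embedding output plus one per transformer block.
unembed = model.lm_head.weight  # (50257, 768), tied to token embeddings
for layer, h in enumerate(out.hidden_states):
    # Project the final position's hidden state onto the vocabulary and
    # report the nearest output token. A faithful "logit lens" would also
    # apply the final layer norm (model.transformer.ln_f) first; that step
    # is skipped here to keep the sketch short.
    logits = h[0, -1] @ unembed.T
    print(f"layer {layer:2d}: {tokenizer.decode(logits.argmax().item())!r}")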
Stars: 9
Forks: —
Language: Python
License: —
Category: —
Last pushed: Dec 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/santiag0m/traveling-words"
Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
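For programmatic use, here is a minimal Python sketch of the same request, assuming the endpoint returns JSON; the response's field names are not documented here, so inspect the payload.

# Minimal sketch of the curl request above in Python.
import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/santiag0m/traveling-words")

resp = requests.get(URL, timeout=10)  # no key needed up to 100 requests/day
resp.raise_for_status()
print(resp.json())  # field names are undocumented here; inspect the payload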
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action