eigencore/Tlama_124M
Tlama (124M) is a language model based on Llama 3 (127M), optimized by EigenCore. It is designed for computational efficiency and scalability, allowing it to run on resource-limited hardware without compromising performance.
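The API path below categorizes the repo under transformers, so loading the model would look roughly like the following sketch. It assumes the ID eigencore/Tlama_124M resolves on the Hugging Face Hub and that the model works with the standard causal-LM classes; neither is confirmed by this page.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eigencore/Tlama_124M"  # assumed Hugging Face Hub model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # assumes a causal-LM head

# Generate a short continuation from a prompt.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))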
No commits in the last 6 months.
Stars: 12
Forks: 1
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eigencore/Tlama_124M"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
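A minimal Python equivalent of the curl call above, using the requests library. It assumes the endpoint returns JSON; the response schema is not documented on this page, so the fields must be inspected rather than relied on.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/eigencore/Tlama_124M"
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()  # JSON schema not documented here; inspect the returned fields
print(data)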
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.
SmallDoges/small-doge
Doge Family of Small Language Models
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code