rodgersmag/tinyllm
TinyLLM is a research project focused on developing and training compact, specialized language models using publicly available datasets from platforms such as Hugging Face. The project explores diverse architectures with the goal of building expertise toward a high-performance coding model that rivals the capabilities of Claude Code.
No commits in the last 6 months.
Stars: 6
Forks: —
Language: Python
License: —
Category: —
Last pushed: Aug 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rodgersmag/tinyllm"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
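The curl command above can also be issued from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON (the URL pattern comes from the example above; the response structure is not documented here, so the code only decodes it generically):

```python
# Minimal sketch of querying the quality API shown above.
# The endpoint URL pattern comes from the curl example; the shape of the
# JSON response is an assumption and is decoded generically.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (no API key, 100 req/day)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("rodgersmag", "tinyllm"))
```

Calling `fetch_quality("rodgersmag", "tinyllm")` reproduces the curl request; unauthenticated use is subject to the 100 requests/day limit noted above.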
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.