tatsu-lab/stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

Score: 48 / 100 (Emerging)

Generates 52K instruction-following training examples using text-davinci-003 with aggressive batch decoding (20 examples per request), keeping generation costs under $500. Fine-tunes LLaMA-7B/13B with Hugging Face Transformers using standard supervised learning on structured instruction-input-output triples. Includes a complete reproducible pipeline: data-generation code, the dataset, training recipes, and weight-diff recovery for reconstructing model checkpoints from the LLaMA base weights.
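To make the triple structure concrete, here is a minimal sketch of how one {instruction, input, output} record from the released dataset can be rendered into a single supervised training string. The template wording follows the prompt format the repo publishes; the render() helper itself is illustrative, not the repo's own code.

# Render one instruction-input-output triple into a training prompt.
# Template wording follows the format published in the Alpaca repo;
# render() is an illustrative helper, not the repo's own function.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def render(example: dict) -> str:
    """Turn one {instruction, input, output} record into prompt + target."""
    if example.get("input"):
        prompt = PROMPT_WITH_INPUT.format(**example)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=example["instruction"])
    return prompt + example["output"]

print(render({
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Sleep well.",
}))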

30,267 stars. No commits in the last 6 months.

Flags: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 22 / 25

The four components, each scored out of 25, sum to the overall score: 0 + 10 + 16 + 22 = 48 / 100.

Stars: 30,267
Forks: 4,011
Language: Python
License: Apache-2.0
Last pushed: Jul 17, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tatsu-lab/stanford_alpaca"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
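
For programmatic use, here is a hedged Python equivalent of the curl call above. The URL is taken from the curl example; the response schema is not documented on this page, so the decoded JSON is printed as-is rather than assuming field names.

# Fetch the same quality data from Python. The URL comes from the curl
# example above; response field names are not documented here, so we
# just print the decoded JSON payload.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/tatsu-lab/stanford_alpaca")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())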