SinclairCoder/Instruction-Tuning-Papers
Reading list on instruction tuning, a trend that began with Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
Curated reading list spanning 20+ foundational and recent papers on instruction-tuning approaches for language models. Covers diverse methodologies including multi-task prompted training (T0, FLAN), retrieval-augmented generation, human feedback alignment, and synthetic instruction generation—enabling systematic study of how natural language instructions improve zero-shot and few-shot generalization across NLP tasks. Organized chronologically from 2021-2022, the collection documents the evolution from foundational work to scaling techniques like instruction fine-tuning on 1,000+ tasks and cross-lingual transfer learning.
766 stars. No commits in the last 6 months.
Stars: 766
Forks: 24
Language: —
License: —
Category:
Last pushed: Jul 20, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SinclairCoder/Instruction-Tuning-Papers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
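For scripted access, the same request can be made from Python with the requests library. This is a minimal sketch: the response is assumed to be JSON, its schema is not documented here, and no key is sent since the 100 requests/day tier needs none; how a free key would be supplied (header or query parameter) isn't specified here either, so that part is omitted.

import json
import requests

# Same endpoint as the curl example above.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/SinclairCoder/Instruction-Tuning-Papers")

resp = requests.get(url, timeout=10)
resp.raise_for_status()                   # surfaces HTTP errors such as rate limiting
print(json.dumps(resp.json(), indent=2))  # schema unknown, so just pretty-print it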
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...