Azure-Samples/azureai-foundry-finetuning-raft

A recipe that walks you through using either Meta Llama 3.1 405B or OpenAI GPT-4o deployed on Azure AI to generate a synthetic dataset with the RAFT method from UC Berkeley's Gorilla project.

Score: 40 / 100 (Emerging)

Implements end-to-end RAFT fine-tuning via Azure Developer CLI (AZD) automation, handling synthetic dataset generation, model fine-tuning, deployment, and comparative evaluation in a single workflow. Supports flexible model composition—using large teacher models (GPT-4o or Llama 3.1 405B) to distill knowledge into smaller student models (GPT-4o-mini, Llama 3.1 8B)—with configurable embedding and judge models across both OpenAI and Azure Marketplace deployments. Provides infrastructure-as-code provisioning and environment variable management to support bring-your-own-models scenarios alongside managed Azure AI endpoints.

No commits in the last 6 months.

Flags: Stale 6m · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 9 / 25
Community: 20 / 25


Stars: 77
Forks: 27
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Azure-Samples/azureai-foundry-finetuning-raft"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
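The curl call above can also be issued from Python. A minimal sketch, assuming the endpoint returns a JSON body; the `quality_url` and `fetch_quality` helpers are illustrative names, and the response fields are not specified here, so the example just decodes and pretty-prints whatever JSON comes back:

```python
import json
import urllib.request

# Base path of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report for a repository.

    No API key is attached, matching the keyless 100 requests/day tier.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage would look like `json.dumps(fetch_quality("Azure-Samples", "azureai-foundry-finetuning-raft"), indent=2)`; for the 1,000/day tier you would add your key however the API documentation specifies (header or query parameter, not shown here).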