Azure-Samples/azureai-foundry-finetuning-raft
A recipe that walks you through using either Meta Llama 3.1 405B or OpenAI GPT-4o deployed on Azure AI to generate a synthetic dataset with the RAFT method from UC Berkeley's Gorilla project.
Implements end-to-end RAFT fine-tuning via Azure Developer CLI (AZD) automation, handling synthetic dataset generation, model fine-tuning, deployment, and comparative evaluation in a single workflow. Supports flexible model composition—using large teacher models (GPT-4o or Llama 3.1 405B) to distill knowledge into smaller student models (GPT-4o-mini, Llama 3.1 8B)—with configurable embedding and judge models across both OpenAI and Azure Marketplace deployments. Provides infrastructure-as-code provisioning and environment variable management to support bring-your-own-models scenarios alongside managed Azure AI endpoints.
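To make the distillation workflow concrete, here is a minimal sketch of assembling one RAFT-style training record: a question paired with an "oracle" document shuffled in among distractor documents, plus a teacher-generated chain-of-thought answer. The function and field names are illustrative assumptions, not this repo's actual API.

```python
import json
import random

def build_raft_example(question, oracle_doc, distractor_docs, cot_answer, num_distractors=3):
    """Assemble one RAFT-style record (illustrative sketch): the oracle
    document is mixed in with sampled distractors so the student model
    learns to answer from the relevant context and ignore noise."""
    docs = random.sample(distractor_docs, k=min(num_distractors, len(distractor_docs)))
    docs.append(oracle_doc)
    random.shuffle(docs)
    return {
        "question": question,
        "context": docs,           # oracle + distractors, order randomized
        "oracle": oracle_doc,      # kept for evaluation / filtering
        "cot_answer": cot_answer,  # teacher-generated reasoning + final answer
    }

example = build_raft_example(
    "What license does the repository use?",
    "The repository is MIT licensed.",
    ["Unrelated doc A", "Unrelated doc B", "Unrelated doc C", "Unrelated doc D"],
    "The context states the repository is MIT licensed, so the answer is MIT.",
)
print(json.dumps(example, indent=2))
```

In the full workflow, a large teacher model (GPT-4o or Llama 3.1 405B) generates the questions and chain-of-thought answers, and records like this become the fine-tuning set for the smaller student model.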
No commits in the last 6 months.
Stars: 77
Forks: 27
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jul 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/Azure-Samples/azureai-foundry-finetuning-raft"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
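For programmatic access, the endpoint above can be called from Python as well. This sketch only builds the URL from the curl command shown; the JSON response schema is not documented here, so the fetch is left commented out and the raw body would simply be printed.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def api_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL used by the curl example."""
    return f"{BASE}/{owner}/{repo}"

url = api_url("Azure-Samples", "azureai-foundry-finetuning-raft")
print(url)

# Uncomment to fetch (100 requests/day without a key):
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```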
Higher-rated alternatives
llmware-ai/llmware: Unified framework for building enterprise RAG pipelines with small, specialized models
Sinapsis-AI/sinapsis-chatbots: Monorepo for Sinapsis templates supporting LLM-based agents
aimclub/ProtoLLM: Framework for prototyping LLM-based applications
xi029/Qwen3-VL-MoeLORA: Compares multiple LoRA fine-tuning approaches on Qwen's latest multimodal image-text model, Qwen3-VL-4B-Instruct, deployed via LangChain + RAG + multi-agent (Multi-Agent) orchestration
pkargupta/taxoadapt: Dynamically constructs and adapts an LLM-generated taxonomy to a given corpus across multiple dimensions