alinourian/Fine-tuning-Mistral-7b-QA

Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans)

13 / 100 (Experimental)

This project helps AI developers fine-tune the Mistral-7B language model for question-answering tasks. It takes a pre-trained Mistral-7B model and a dataset of multi-turn conversations, like the Puffin dataset, to produce a more specialized and accurate QA model. This tool is designed for machine learning engineers and researchers looking to adapt large language models for specific conversational applications.
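As a back-of-the-envelope illustration of why the PEFT/LoRA approach is attractive for adapting a model this size (a sketch; the rank and hidden size below are typical values, not taken from this repo's notebook): LoRA freezes a weight matrix `W` of shape `(d_out, d_in)` and trains only a low-rank update `B @ A`, so the trainable parameter count per layer drops from `d_out * d_in` to `r * (d_in + d_out)`.

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters in a LoRA update W + B @ A,
    where A has shape (r, d_in) and B has shape (d_out, r)."""
    return r * (d_in + d_out)

# Mistral-7B's hidden size is 4096; rank 8 is a common LoRA choice.
# Both values are illustrative, not read from this repo's config.
full = 4096 * 4096                          # params in one dense projection
lora = lora_trainable_params(4096, 4096, r=8)

print(full, lora, f"{lora / full:.2%}")     # 16777216 65536 0.39%
```

At rank 8 the update is roughly 0.4% of the original matrix's parameters, which is what makes fine-tuning a 7B model feasible on a single GPU.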

No commits in the last 6 months.

Use this if you are an AI developer who needs to customize Mistral-7B for better performance on question-answering in conversational contexts.

Not ideal if you are looking for a ready-to-use chatbot or a tool that doesn't require machine learning expertise.

AI-development language-model-fine-tuning conversational-AI natural-language-processing question-answering-systems
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 13
Forks:
Language: Jupyter Notebook
License:
Category: llm-fine-tuning
Last pushed: Nov 23, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/alinourian/Fine-tuning-Mistral-7b-QA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
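The same endpoint can be called from Python's standard library, with no extra dependencies (a minimal sketch; the JSON shape of the response is not documented on this page, so only the URL construction and a generic JSON fetch are shown):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    # Build the endpoint URL shown in the curl example above.
    return f"{API_BASE}/{registry}/{repo}"

def fetch_quality(registry: str, repo: str) -> dict:
    # Perform the GET request; the response is assumed to be JSON.
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)

# fetch_quality(...) performs a live network call, so only the URL
# construction is exercised here:
print(quality_url("transformers", "alinourian/Fine-tuning-Mistral-7b-QA"))
```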