di37/full-fine-tuning-nvidia-question-and-answering

The Flan-T5-base model was fine-tuned on the NVIDIA Question and Answer Pair Dataset available on Kaggle. This is a beginner-level project for anyone who wants to step into the world of Large Language Models.

Score: 14 / 100 (Experimental)

This project offers a foundational guide for developers new to Large Language Models (LLMs). It walks you through fine-tuning a pre-existing model for question-answering, taking in NVIDIA-specific question-answer pairs and producing a specialized model capable of answering questions about NVIDIA topics. This is ideal for early-career machine learning engineers or data scientists looking to build practical LLM skills.
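The core of such a project is turning raw question-answer pairs into the input/target text pairs that a text-to-text model like Flan-T5 trains on. A minimal sketch of that preparation step is below; the column names "question" and "answer" and the task prefix are assumptions for illustration, not confirmed details of the Kaggle dataset or the notebook.

```python
# Hypothetical data-preparation step for seq2seq fine-tuning.
# The keys "question"/"answer" and the prefix are illustrative assumptions.

def to_seq2seq_pairs(records, prefix="answer the question: "):
    """Turn raw QA dicts into (input_text, target_text) pairs
    suitable for a text-to-text model such as Flan-T5."""
    pairs = []
    for rec in records:
        question = rec["question"].strip()
        answer = rec["answer"].strip()
        pairs.append((prefix + question, answer))
    return pairs

sample = [{"question": "What is CUDA?",
           "answer": "CUDA is NVIDIA's parallel computing platform."}]
print(to_seq2seq_pairs(sample))
```

The resulting pairs would then be tokenized and passed to a standard sequence-to-sequence training loop (e.g. the Hugging Face Trainer API used in most T5 fine-tuning tutorials).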

No commits in the last 6 months.

Use this if you are a developer seeking hands-on experience in the practical steps of fine-tuning an LLM for a specific question-answering task.

Not ideal if you are looking for a pre-trained, production-ready model or a deep dive into advanced LLM architectures and optimization techniques.

Tags: LLM fine-tuning, question answering model, machine learning education, natural language processing, beginner LLM project
No License | Stale (6m) | No Package | No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 22
Forks:
Language: Jupyter Notebook
License: None
Category: llm-fine-tuning
Last pushed: Apr 21, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/di37/full-fine-tuning-nvidia-question-and-answering"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
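The same request can be made from Python without any third-party packages. A minimal sketch using only the standard library, assuming the endpoint shown in the curl example above (the `quality_url` helper is hypothetical, introduced here for illustration):

```python
# Sketch of calling the quality API from Python via the standard library.
# The endpoint comes from the curl example above; quality_url is a
# hypothetical helper, not part of any published client.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(platform, repo_slug):
    """Build the API URL for a repository's quality report."""
    return f"{BASE}/{platform}/{repo_slug}"

url = quality_url("transformers",
                  "di37/full-fine-tuning-nvidia-question-and-answering")
print(url)

# To actually fetch the report (network access required):
# with urllib.request.urlopen(url) as resp:
#     report = json.load(resp)
```

Note the rate limits above: 100 requests/day without a key, 1,000/day with a free key.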