gpt2-finetune and gpt2-fine-tuning

Metric          gpt2-finetune                         gpt2-fine-tuning
Overall score   32 (Emerging)                         25 (Experimental)
Maintenance     0/25                                  0/25
Adoption        5/25                                  5/25
Maturity        16/25                                 8/25
Community       11/25                                 12/25
Stars           13                                    11
Forks           2                                     2
Downloads
Commits (30d)   0                                     0
Language        Jupyter Notebook                      Jupyter Notebook
License         MIT                                   (none)
Flags           Stale 6m, No Package, No Dependents   No License, Stale 6m, No Package, No Dependents

About gpt2-finetune

arham-kk/gpt2-finetune

Fine-tuning a text-generation model using the GPT-2 architecture and a CSV dataset

This project helps developers adapt a GPT-2 text generation model for specific writing tasks by training it on their own dataset. You provide a CSV file containing the text examples you want the model to learn from, and it outputs a specialized GPT-2 model capable of generating similar text. This is designed for AI/ML engineers or data scientists looking to customize an existing language model.
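A minimal sketch of the data-preparation step such a project implies: reading text examples out of the user's CSV before fine-tuning. The column name `text` and the CSV layout are assumptions for illustration, not taken from the repository.

```python
import csv
import io

def load_training_texts(csv_file, column="text"):
    """Read a CSV of text examples and return the non-empty strings
    from the given column, ready to be assembled into a fine-tuning corpus."""
    reader = csv.DictReader(csv_file)
    texts = []
    for row in reader:
        value = (row.get(column) or "").strip()
        if value:
            texts.append(value)
    return texts

# An in-memory CSV standing in for the user's dataset file.
sample = io.StringIO("text\nFirst training example.\nSecond training example.\n")
corpus = load_training_texts(sample)
# corpus == ["First training example.", "Second training example."]
```

From here, a typical workflow would tokenize these strings with a GPT-2 tokenizer and hand them to a training loop (for example, Hugging Face's `Trainer`); the repository's notebook presumably covers those steps.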

text-generation natural-language-processing deep-learning model-training language-model-customization

About gpt2-fine-tuning

mpuig/gpt2-fine-tuning

Fine-tune GPT2 to generate fake job experiences

This project helps job seekers or recruiters craft realistic and relevant job experience descriptions. You provide a job title and a starting phrase, and it generates professional-sounding sentences suitable for resumes or LinkedIn profiles. This is intended for individuals who need to articulate specific past accomplishments or responsibilities for various roles.
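The inputs described above (a job title plus a starting phrase) suggest a simple prompt-construction step before generation. The template below and the use of Hugging Face's `pipeline` are assumptions for illustration, not the repository's actual code.

```python
def build_prompt(job_title, starting_phrase):
    """Combine a job title and a starting phrase into a single
    generation prompt; the exact template is an assumption."""
    return f"{job_title}: {starting_phrase}"

prompt = build_prompt("Data Engineer", "Designed and maintained")
# prompt == "Data Engineer: Designed and maintained"

def generate_experience(model_dir, prompt, max_new_tokens=40):
    """Generate a job-experience sentence with a fine-tuned GPT-2.
    Not run here: requires the transformers package and trained weights."""
    from transformers import pipeline  # assumed dependency
    generator = pipeline("text-generation", model=model_dir)
    return generator(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]
```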

job-seeking resume-writing recruitment career-development professional-branding

Scores are updated daily from GitHub, PyPI, and npm data.