notdanna/GPoemsT
A modular Transformer-based architecture for constrained poetic generation, leveraging Full Fine-Tuning (FFT) and figure-specific LoRA adapters for precise rhetorical control.
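The description implies one LoRA adapter per rhetorical figure stacked on a fully fine-tuned (FFT) base model. A minimal sketch of that pattern with Hugging Face peft follows; the base checkpoint, adapter paths, and adapter names ("metaphor", "anaphora") are illustrative assumptions, not taken from this repo.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hypothetical FFT base checkpoint; stands in for the repo's actual weights.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Attach one figure-specific LoRA adapter, register a second, and switch between them.
model = PeftModel.from_pretrained(base, "adapters/metaphor", adapter_name="metaphor")  # hypothetical path
model.load_adapter("adapters/anaphora", adapter_name="anaphora")                       # hypothetical path
model.set_adapter("metaphor")  # select which rhetorical figure steers generation

inputs = tokenizer("The moon", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))

Swapping adapters with set_adapter keeps a single base model in memory, which is the usual reason to prefer figure-specific LoRA adapters over one monolithic fine-tune per figure.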
Stars: 1
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jan 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/notdanna/GPoemsT"
The API is open to everyone: 100 requests/day with no key needed, or 1,000 requests/day with a free key.
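The same data can be fetched programmatically. A minimal Python sketch using requests is below; it assumes nothing about the response beyond it being JSON, so the payload is printed as-is rather than parsed into named fields.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/notdanna/GPoemsT"

# Anonymous access is limited to 100 requests/day; the JSON schema is not
# documented here, so we only decode and print the body.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())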
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!