SergiuDeveloper/yoro-finetuning
YORO (You-Only-Reason-Once) - a novel LLM architecture that runs the main reasoning block once, caches its output, and reuses it for all subsequent tokens. Lightweight auxiliary networks compensate for the missing reasoning passes, keeping generation coherent while skipping the most expensive computation at every step.
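The core idea can be sketched in a few lines: run the expensive reasoning block once on the prompt, cache its output, then let a cheap auxiliary network drive per-token generation from that cache. This is a minimal toy illustration of the mechanism as described above, not the repo's actual implementation; the shapes, weight names, and update rule are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size (illustrative assumption)

# Heavy "reasoning block": in a real model this is the deep transformer
# stack; a single dense layer stands in for it here.
W_reason = rng.standard_normal((D, D))

# Lightweight auxiliary network that compensates for the skipped
# reasoning passes (hypothetical, for illustration only).
W_aux = rng.standard_normal((D, D)) * 0.1

def reason_once(h):
    """Run the expensive reasoning block a single time."""
    return np.tanh(h @ W_reason)

def aux_step(cached, h):
    """Cheap per-token update that reuses the cached reasoning output."""
    return np.tanh(cached + h @ W_aux)

prompt_state = rng.standard_normal(D)
cached = reason_once(prompt_state)     # the one and only reasoning pass

states = [prompt_state]
for _ in range(4):                     # generate 4 tokens
    nxt = aux_step(cached, states[-1]) # no reasoning pass per token
    states.append(nxt)

print(len(states))  # 5 hidden states from a single reasoning pass
```

The win is that `reason_once` is called exactly once regardless of output length, so per-token cost is dominated by the much smaller `aux_step`.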
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SergiuDeveloper/yoro-finetuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
metriccoders/one-line-llm-tuner
This repository is the source code for fine tuning any LLM in just one line 🔥
Nano-Collective/nanotune
A simple, interactive CLI for fine-tuning small language models on Apple Silicon. No YAML...