yeyupiaoling/Whisper-Finetune

Fine-tune the Whisper speech recognition model, with support for training without timestamp data, training with timestamp data, and training without speech data. Includes accelerated inference and deployment to the Web, Windows desktop, and Android.

Quality score: 56 / 100 (Established)

Implements parameter-efficient fine-tuning using LoRA adapters while maintaining compatibility with OpenAI's base Whisper models across all variants (tiny through large-v3-turbo). Provides two inference acceleration paths, CTranslate2 and GGML quantization; original Whisper checkpoints convert directly to either format, enabling deployment across heterogeneous platforms. Integrates with the PyTorch/Transformers ecosystem and includes end-to-end tooling: training pipelines supporting mixed data conditions, evaluation harnesses, and turnkey deployment templates for web services, Windows desktop (native apps), and Android via JNI bindings.
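The parameter-efficiency claim behind LoRA can be illustrated with a back-of-the-envelope count: a frozen weight matrix W is augmented with a trainable low-rank update B @ A, so only r * (d_in + d_out) parameters train instead of d_in * d_out. This is a minimal sketch of the general technique, not code from this repository (which uses the PEFT library); the 1280 hidden size and rank 8 below are illustrative assumptions.

```python
def full_finetune_params(d_in: int, d_out: int) -> int:
    """Parameters updated when fine-tuning a d_in x d_out linear layer directly."""
    return d_in * d_out

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters added by a rank-r LoRA adapter (A: rank x d_in, B: d_out x rank)."""
    return rank * (d_in + d_out)

# Example: one 1280x1280 attention projection (Whisper's large-model hidden size)
full = full_finetune_params(1280, 1280)      # 1,638,400 parameters
lora = lora_trainable_params(1280, 1280, 8)  # 20,480 parameters, ~1.25% of full
```

At rank 8 the adapter trains roughly 1.25% of the layer's parameters, which is why LoRA checkpoints stay small and the base Whisper weights remain untouched and shareable.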


No package. No dependents.

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 24 / 25


Stars: 1,200
Forks: 213
Language: C
License: Apache-2.0
Last pushed: Dec 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/yeyupiaoling/Whisper-Finetune"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.