Aratako/Irodori-TTS
A Flow Matching-based Text-to-Speech Model with Emoji-driven Style Control
Employs a Rectified Flow Diffusion Transformer over DACVAE continuous latents for 48kHz synthesis, with joint-attention conditioning for zero-shot voice cloning and emoji-driven style control. Supports distributed multi-GPU training via torchrun with mixed precision (bf16), gradient accumulation, and parameter-efficient LoRA fine-tuning. Provides inference via CLI, Gradio UI, and direct HuggingFace Hub checkpoint loading with configurable guidance modes and DACVAE codec control.
Stars: 40
Forks: 6
Language: Python
License: MIT
Category: Diffusion
Last pushed: Feb 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Aratako/Irodori-TTS"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
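The same endpoint can also be queried from Python, as in the short sketch below. Only the URL and the rate limits above are given by this listing; the header name for supplying a key on the 1,000/day tier is an assumption.

# Query the quality endpoint shown above using the requests library.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/Aratako/Irodori-TTS"

# No key is needed for the 100 requests/day tier; the header name for the
# keyed 1,000/day tier is an assumption, not documented in this listing.
headers = {}  # e.g. {"X-API-Key": "<your-key>"} for the higher rate limit
resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())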
Higher-rated alternatives
PrunaAI/pruna
Pruna is a model optimization framework built for developers, enabling you to deliver faster,...
bytedance/LatentSync
Taming Stable Diffusion for Lip Sync!
haoheliu/AudioLDM-training-finetuning
AudioLDM training, finetuning, evaluation and inference.
Text-to-Audio/Make-An-Audio
PyTorch Implementation of Make-An-Audio (ICML'23) with a Text-to-Audio Generative Model
sayakpaul/diffusers-torchao
End-to-end recipes for optimizing diffusion models with torchao and diffusers (inference and FP8...