MB-iSTFT-VITS2 and MB-iSTFT-VITS-with-AutoVocoder

These are ecosystem siblings: FENRlR/MB-iSTFT-VITS2 integrates the core MB-iSTFT-VITS components into the standard vits2_pytorch framework, while hcy71o/MB-iSTFT-VITS-with-AutoVocoder extends the same MB-iSTFT-VITS architecture with an additional AutoVocoder component as an alternative vocoding approach.

                 MB-iSTFT-VITS2       MB-iSTFT-VITS-with-AutoVocoder
Overall score    52 (Established)     37 (Stale, 6 months)
Maintenance      6/25                 0/25
Adoption         10/25                8/25
Maturity         16/25                16/25
Community        20/25                13/25
Stars            134                  48
Forks            31                   7
Downloads        -                    -
Commits (30d)    0                    0
Language         Python               Python
License          MIT                  Apache-2.0
Package          No package           No package
Dependents       No dependents        No dependents

About MB-iSTFT-VITS2

FENRlR/MB-iSTFT-VITS2

Application of MB-iSTFT-VITS components to vits2_pytorch

Combines multi-band inverse Short-Time Fourier Transform (MB-iSTFT) vocoding with VITS2's end-to-end text-to-speech architecture, enabling subband-wise synthesis for improved audio quality. Supports multiple alignment backends including Triton-accelerated Super Monotonic Align, eliminating Cython compilation requirements. Offers variants ranging from full MB-iSTFT-VITS2 to lightweight Mini configurations, with single and multi-speaker training pipelines.
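The subband-wise synthesis mentioned above can be sketched roughly as follows. This is a simplified illustration, not the repository's actual decoder: the function name is hypothetical, and the naive zero-insertion band combination stands in for the trained decoder and PQMF-style synthesis filter bank that MB-iSTFT-VITS actually uses to merge subbands.

```python
import numpy as np
from scipy.signal import istft

def multiband_istft_synthesis(subband_specs, nperseg=64, hop=32):
    """Illustrative sketch: run iSTFT per subband, then combine bands.

    In MB-iSTFT-VITS the model predicts per-subband STFT magnitude/phase
    and merges bands with a synthesis filter bank; here we substitute
    zero-insertion upsampling plus summation just to show the data flow.
    """
    n_bands = len(subband_specs)
    bands = []
    for Z in subband_specs:
        # Invert each subband's complex spectrogram back to a waveform.
        _, x = istft(Z, nperseg=nperseg, noverlap=nperseg - hop)
        bands.append(x)
    length = min(len(b) for b in bands)
    out = np.zeros(length * n_bands)
    for b in bands:
        up = np.zeros(length * n_bands)
        up[::n_bands] = b[:length]  # zero-insertion upsampling by n_bands
        out += up                   # a real system filters each band first
    return out
```

Because each subband runs at 1/n_bands of the output sample rate, the iSTFT frames are short and cheap, which is where the architecture's speedup over full-rate neural vocoding comes from.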

About MB-iSTFT-VITS-with-AutoVocoder

hcy71o/MB-iSTFT-VITS-with-AutoVocoder

Incorporating AutoVocoder into MB-iSTFT-VITS

Scores are updated daily from GitHub, PyPI, and npm data.