whisperX and whisper-diarization

WhisperX extends Whisper with word-level timestamps and integrated speaker diarization, while whisper-diarization is a standalone pipeline that adds diarization on top of Whisper. The two are direct alternatives: they offer similar speaker-attribution features through different implementation approaches.

                 whisperX           whisper-diarization
Score            90 (Verified)      56 (Established)
Maintenance      20/25              10/25
Adoption         25/25              10/25
Maturity         25/25              16/25
Community        20/25              20/25
Stars            20,758             5,437
Forks            2,188              500
Downloads        864,629            —
Commits (30d)    15                 0
Language         Python             Jupyter Notebook
License          BSD-2-Clause       BSD-2-Clause
Risk flags       None               No package, no dependents

About whisperX

m-bain/whisperX

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Builds on OpenAI's Whisper by combining faster-whisper for batched GPU inference (70x speedup) with wav2vec2 forced phoneme alignment to achieve sub-word timing accuracy. Integrates pyannote-audio for speaker diarization and includes VAD preprocessing to reduce hallucinations while maintaining quality. Supports multiple languages with automatic language-specific alignment model selection from HuggingFace and torchaudio.
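The core idea behind combining forced alignment with diarization can be sketched in plain Python: each aligned word is attributed to the speaker turn with which it overlaps most in time. This is an illustrative sketch of the technique, not the whisperX API; all names and data shapes here are assumptions.

```python
# Sketch: assign word-level timestamps to diarization speaker turns by
# maximal temporal overlap. Illustrative only -- not whisperX's actual code.

def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_word_speakers(words, turns):
    """words: [{'word', 'start', 'end'}]; turns: [{'speaker', 'start', 'end'}].
    Returns a copy of words with a 'speaker' key set to the best-overlapping
    turn, or None when the word overlaps no turn at all."""
    labeled = []
    for w in words:
        best = max(
            turns,
            key=lambda t: overlap(w["start"], w["end"], t["start"], t["end"]),
            default=None,
        )
        has_overlap = best and overlap(w["start"], w["end"], best["start"], best["end"]) > 0
        labeled.append({**w, "speaker": best["speaker"] if has_overlap else None})
    return labeled

words = [
    {"word": "Hello", "start": 0.10, "end": 0.45},
    {"word": "there", "start": 0.50, "end": 0.80},
    {"word": "hi", "start": 1.40, "end": 1.60},
]
turns = [
    {"speaker": "SPEAKER_00", "start": 0.0, "end": 1.0},
    {"speaker": "SPEAKER_01", "start": 1.2, "end": 2.0},
]
labeled = assign_word_speakers(words, turns)
```

In the real pipeline the word timestamps come from wav2vec2 forced alignment and the turns from pyannote-audio; the sub-word timing accuracy is what makes this per-word attribution reliable.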

About whisper-diarization

MahmoudAshraf97/whisper-diarization

Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper

Combines Whisper with NVIDIA NeMo's voice activity detection and speaker embedding models (MarbleNet/TitaNet) to attribute transcribed text to individual speakers. Uses source separation (Demucs) for vocal extraction, CTC-forced alignment for precise timestamp correction, and punctuation-based realignment to compensate for temporal drift across segments. Outputs speaker-labeled transcriptions with segment-level timestamps, supporting configurable Whisper models and parallel inference modes for systems with sufficient VRAM.
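The punctuation-based realignment step can be illustrated with a minimal sketch: instead of letting a speaker label flip mid-sentence due to temporal drift, words are grouped into sentences at ending punctuation and each sentence takes its majority speaker. The function and data shapes below are assumptions for illustration, not the repository's actual API.

```python
# Sketch of punctuation-based realignment: group noisy per-word speaker
# labels into sentences and keep one (majority) label per sentence.
from collections import Counter

SENTENCE_END = (".", "?", "!")

def realign_by_punctuation(words):
    """words: [{'word', 'speaker'}] with possibly drifted per-word labels.
    Returns [(speaker, sentence_text)] with one speaker per sentence."""
    sentences, current = [], []

    def flush():
        # Majority vote over the words collected for this sentence.
        speaker = Counter(w["speaker"] for w in current).most_common(1)[0][0]
        sentences.append((speaker, " ".join(w["word"] for w in current)))

    for w in words:
        current.append(w)
        if w["word"].endswith(SENTENCE_END):
            flush()
            current = []
    if current:  # trailing words with no final punctuation
        flush()
    return sentences

words = [
    {"word": "How", "speaker": "A"},
    {"word": "are", "speaker": "A"},
    {"word": "you?", "speaker": "B"},   # label drifted mid-sentence
    {"word": "Fine,", "speaker": "B"},
    {"word": "thanks.", "speaker": "B"},
]
result = realign_by_punctuation(words)
```

The design rationale is that speakers rarely change mid-sentence, so sentence boundaries are a stronger signal than slightly misaligned timestamps.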

Scores updated daily from GitHub, PyPI, and npm data.