speech-swift and mlx-swift-asr
These two projects are direct competitors: both offer on-device speech recognition for Apple Silicon, powered by MLX.
About speech-swift
soniqo/speech-swift
AI speech toolkit for Apple Silicon — ASR, TTS, speech-to-speech, VAD, and diarization powered by MLX and CoreML
Provides a comprehensive set of on-device speech pipeline models (ASR, TTS, voice cloning, diarization, VAD, enhancement) optimized for MLX and CoreML, enabling sub-second streaming latency and Neural Engine acceleration on macOS and iOS without external APIs. Bundles curated models from Alibaba, NVIDIA, and others, ranging from a lightweight 82M-parameter TTS model to a 7B full-duplex speech-to-speech model, with quantization profiles (4-bit/8-bit INT, FP16) and pre-compiled CoreML variants sized for on-device constraints. Installable via Homebrew or Swift Package Manager, with native Swift bindings for Mac and iOS integration.
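As a sketch of the Swift Package Manager integration path mentioned above, a `Package.swift` fragment might look like the following. The repository URL, version, and product name here are assumptions inferred from the `soniqo/speech-swift` slug, not confirmed values; check the project's README for the actual dependency declaration.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.macOS(.v14), .iOS(.v17)],
    dependencies: [
        // Assumed URL and version, derived from the soniqo/speech-swift repo slug.
        .package(url: "https://github.com/soniqo/speech-swift", from: "0.1.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [
                // Product name "SpeechSwift" is a placeholder assumption.
                .product(name: "SpeechSwift", package: "speech-swift"),
            ]
        ),
    ]
)
```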
About mlx-swift-asr
ontypehq/mlx-swift-asr
On-device speech recognition for Apple Silicon, powered by MLX.