expo-speech-recognition and react-native-vosk
These are competitors: both provide speech recognition capabilities for React Native. expo-speech-recognition wraps each platform's native speech APIs (which may recognize on-device or via the platform's own cloud services, depending on OS and settings), while react-native-vosk performs fully offline recognition using bundled Vosk models. Developers typically choose between them based on connectivity, privacy, and app-size requirements.
About expo-speech-recognition
jamsch/expo-speech-recognition
Speech Recognition for React Native Expo projects
Wraps platform-specific speech APIs (iOS `SFSpeechRecognizer`, Android `SpeechRecognizer`, Web `SpeechRecognition`) with a unified interface, supporting real-time transcription with interim results, volume metering, and offline on-device recognition where available. Includes React hooks for event-driven usage, granular permission management for microphone and speech recognition separately, and can transcribe pre-recorded audio files in multiple formats. Polyfills the Web Speech API and provides language detection and platform compatibility detection across iOS, Android, and web targets.
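A minimal sketch of the hook-based, event-driven usage described above. The hook and module names (`useSpeechRecognitionEvent`, `ExpoSpeechRecognitionModule`) and options (`lang`, `interimResults`) follow the library's documented API, but exact signatures should be verified against the installed version:

```typescript
// Hedged sketch: assumes expo-speech-recognition's documented module + hook API.
import { useState } from "react";
import { Button, Text, View } from "react-native";
import {
  ExpoSpeechRecognitionModule,
  useSpeechRecognitionEvent,
} from "expo-speech-recognition";

export function TranscriptionDemo() {
  const [transcript, setTranscript] = useState("");

  // Fires for both interim and final results when interimResults is enabled.
  useSpeechRecognitionEvent("result", (event) => {
    setTranscript(event.results[0]?.transcript ?? "");
  });

  const start = async () => {
    // Requests microphone and speech-recognition permissions together.
    const { granted } =
      await ExpoSpeechRecognitionModule.requestPermissionsAsync();
    if (!granted) return;

    ExpoSpeechRecognitionModule.start({
      lang: "en-US",
      interimResults: true, // stream partial transcripts as the user speaks
    });
  };

  return (
    <View>
      <Button title="Start" onPress={start} />
      <Button title="Stop" onPress={() => ExpoSpeechRecognitionModule.stop()} />
      <Text>{transcript}</Text>
    </View>
  );
}
```

Because recognition runs through the platform's native recognizer, availability of on-device (offline) mode varies by OS version and locale.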
About react-native-vosk
riderodd/react-native-vosk
Speech recognition module for react native using Vosk library
Enables offline speech recognition by bundling prebuilt Vosk models directly into the app bundle or loading them dynamically from app storage, eliminating cloud dependencies. Supports constrained grammar matching for domain-specific voice commands and provides event-driven result callbacks for real-time transcription processing. Ships with Expo config plugin support for automatic model integration and includes cross-platform model management for both Android assets and iOS bundle resources.
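A sketch of the offline, grammar-constrained flow described above. Method names (`loadModel`, `start`, `onResult`) follow the repo's README; the model directory name is a placeholder for a Vosk model you ship in Android assets or the iOS bundle:

```typescript
// Hedged sketch: assumes react-native-vosk's documented class-based API.
import Vosk from "react-native-vosk";

const vosk = new Vosk();

async function recognizeCommands(): Promise<void> {
  // Load a prebuilt Vosk model bundled with the app (placeholder name).
  await vosk.loadModel("model-en-us");

  // Subscribe to results before starting recognition.
  const resultListener = vosk.onResult((result) => {
    console.log("Recognized:", result);
  });

  // Constrain recognition to a fixed grammar for command-style input;
  // "[unk]" lets the recognizer report out-of-grammar speech.
  await vosk.start({ grammar: ["left", "right", "stop", "[unk]"] });

  // Later, when finished:
  // vosk.stop();
  // resultListener.remove();
}
```

Constraining the grammar this way trades general dictation for much higher accuracy on a small command vocabulary, which suits offline voice-control use cases.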