vosk and vosk-asterisk

vosk is a speech recognition toolkit; vosk-asterisk is a specialized integration module that deploys vosk (via a Vosk Server instance) as a speech backend for Asterisk PBX systems, making the two projects complements rather than competitors.

| | vosk | vosk-asterisk |
|---|---|---|
| Score | 68 (Established) | 47 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 25/25 | 10/25 |
| Maturity | 25/25 | 16/25 |
| Community | 18/25 | 21/25 |
| Stars | 493 | 128 |
| Forks | 56 | 41 |
| Downloads | 335,415 | — |
| Commits (30d) | 0 | 0 |
| Language | C | C |
| License | Apache-2.0 | GPL-2.0 |
| Flags | Stale 6m | Stale 6m, No Package, No Dependents |

About vosk

alphacep/vosk

VOSK Speech Recognition Toolkit

Audio fingerprinting and LSH-based indexing enable training on massive speech datasets (100k+ hours) without neural networks, with incremental model improvement through direct sample addition. The system segments audio into chunks, stores them in a hash-indexed database for fast lookup during decoding, and integrates with Kaldi for phoneme alignment and segmentation. It supports lifelong-learning workflows, with built-in verification tools to identify and correct recognition gaps.
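The chunk-lookup idea above can be sketched with random-hyperplane LSH: similar feature vectors hash to the same bucket, so a query touches only one bucket instead of scanning the whole database. This is an illustrative sketch, not vosk's actual code; the feature dimension, hash length, and chunk IDs are assumptions.

```python
# Illustrative LSH bucketing of audio-chunk feature vectors (not vosk's code).
import random
from collections import defaultdict

random.seed(0)
DIM = 16    # hypothetical feature dimension per audio chunk
NBITS = 8   # hash length: more bits -> smaller, more selective buckets

# Random hyperplanes; the sign of each dot product gives one hash bit.
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NBITS)]

def lsh_key(vec):
    """One bit per hyperplane: which side of the plane vec lies on."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) > 0)
                 for plane in planes)

index = defaultdict(list)  # hash key -> list of (chunk_id, vector)

def add_chunk(chunk_id, vec):
    index[lsh_key(vec)].append((chunk_id, vec))

def lookup(vec):
    """Candidate chunk ids sharing the query's bucket (no full scan)."""
    return [cid for cid, _ in index[lsh_key(vec)]]

base = [random.gauss(0, 1) for _ in range(DIM)]
add_chunk("utt-001", base)
print(lookup(base))  # the stored chunk is found via its bucket
```

Nearby vectors flip a hash bit only when a dot product sits close to zero, which is why near-duplicate chunks usually land in the same bucket.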

About vosk-asterisk

alphacep/vosk-asterisk

Speech Recognition in Asterisk with Vosk Server

vosk-asterisk integrates with Asterisk's native speech recognition framework via WebSocket connections to a separate Vosk Server instance, enabling offline speech-to-text processing through Kaldi models. It implements the engine behind Asterisk dialplan applications (`SpeechCreate`, `SpeechBackground`) that interact with remote Vosk servers, supporting multiple language models deployable via Docker. It is compatible with Asterisk versions 13 through 17+, installing modularly as `res_speech_vosk.so` and requiring `res_http_websocket.so` for transport.
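From the dialplan side, usage might look like the sketch below. The context name, extension, and prompt file are illustrative assumptions; `SpeechCreate` and `SpeechBackground` are the Asterisk applications named above, and reading the result through Asterisk's `SPEECH_TEXT` function is an assumption based on the standard speech API.

```
[speech-demo]                         ; illustrative context name
exten => s,1,Answer()
 same => n,SpeechCreate(vosk)         ; create a speech object using the vosk engine
 same => n,SpeechBackground(hello)    ; play a prompt ("hello" is a sample file) while listening
 same => n,Verbose(1,Heard: ${SPEECH_TEXT(0)})  ; best recognition result
 same => n,SpeechDestroy()
 same => n,Hangup()
```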

Scores updated daily from GitHub, PyPI, and npm data.