whisper_android and whisper-to-input

These are complementary tools: one provides a reusable TensorFlow Lite inference engine for offline Whisper on Android, while the other is a keyboard application that consumes speech-to-text functionality (potentially using similar underlying models) to enable direct text input.

                     whisper_android        whisper-to-input
Overall score        64 (Established)       50 (Established)
Maintenance          16/25                  6/25
Adoption             10/25                  10/25
Maturity             16/25                  16/25
Community            22/25                  18/25
Stars                630                    117
Forks                106                    21
Downloads
Commits (30d)        2                      0
Language             C++                    Kotlin
License              MIT                    GPL-3.0
Package              No package, no dependents   No package, no dependents

About whisper_android

vilassn/whisper_android

Offline Speech Recognition with OpenAI Whisper and TensorFlow Lite for Android

Provides dual implementation paths via the TensorFlow Lite Java and Native APIs, letting developers choose between ease of integration and optimized performance. Includes a Python conversion pipeline to transform OpenAI Whisper models into TFLite format, plus support for live streaming transcription through buffer-based audio input alongside file-based batch processing. The architecture handles multilingual models with configurable vocabulary filters and preprocesses audio to 16 kHz mono for inference compatibility.
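The 16 kHz mono preprocessing step can be sketched in Kotlin. This is an illustrative helper, not code from the whisper_android repository: it normalizes signed 16-bit PCM capture samples into the float buffer a Whisper-style TFLite model consumes, and downmixes interleaved stereo to mono by averaging channel pairs.

```kotlin
// Hypothetical preprocessing sketch (names are illustrative, not the
// project's actual API). Whisper-style TFLite models expect a single
// 16 kHz channel of floats in [-1.0, 1.0).

// Normalize signed 16-bit PCM samples to floats by dividing by 2^15.
fun pcm16ToFloat(samples: ShortArray): FloatArray =
    FloatArray(samples.size) { i -> samples[i] / 32768.0f }

// Downmix interleaved stereo (L, R, L, R, ...) to mono by averaging
// each channel pair, since inference expects one channel.
fun stereoToMono(interleaved: FloatArray): FloatArray =
    FloatArray(interleaved.size / 2) { i ->
        (interleaved[2 * i] + interleaved[2 * i + 1]) / 2f
    }
```

On Android the raw samples would typically come from an AudioRecord configured for 16 kHz mono PCM, in which case only the normalization step is needed.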

About whisper-to-input

j3soon/whisper-to-input

An Android keyboard that performs speech-to-text (STT/ASR) with OpenAI Whisper and inputs the recognized text; supports English, Chinese, Japanese, and other languages, including mixed-language speech.

Supports pluggable ASR backends including OpenAI API, self-hosted Whisper ASR Webservice, and NVIDIA NIM with TensorRT-LLM optimization. Implements a full Android Input Method Editor (IME) with configurable endpoints, allowing users to choose between cloud and on-device processing for privacy and cost control. The architecture decouples the recognition service layer, enabling deployment flexibility from commercial APIs to GPU-accelerated self-hosted inference.
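The pluggable-backend idea can be sketched as a small Kotlin interface: the keyboard layer depends only on an abstract transcription service, so an OpenAI-style cloud endpoint, a self-hosted Whisper ASR Webservice, or an NVIDIA NIM deployment can be swapped in by configuration. All names here are illustrative assumptions, not the project's actual classes.

```kotlin
// Hypothetical sketch of a decoupled recognition service layer.
// The IME talks to AsrBackend; concrete backends differ only in
// where (and how) they send audio.

interface AsrBackend {
    val endpoint: String
    fun transcribe(audio: ByteArray): String
}

// A cloud backend would POST the audio with an API key; stubbed here.
class CloudBackend(private val apiKey: String) : AsrBackend {
    override val endpoint = "https://api.example.com/v1/transcriptions"
    override fun transcribe(audio: ByteArray): String =
        TODO("POST multipart audio with a bearer-token header")
}

// A self-hosted backend differs mainly in its configurable endpoint,
// which is what lets users trade cloud convenience for privacy and cost.
class SelfHostedBackend(override val endpoint: String) : AsrBackend {
    override fun transcribe(audio: ByteArray): String =
        TODO("POST audio to $endpoint")
}

// The keyboard layer depends only on the interface: record audio,
// transcribe via whichever backend is configured, commit clean text.
fun commitTranscription(backend: AsrBackend, audio: ByteArray): String =
    backend.transcribe(audio).trim()
```

Because `commitTranscription` never names a concrete backend, the same IME code path serves commercial APIs and GPU-accelerated self-hosted inference alike.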

Scores updated daily from GitHub, PyPI, and npm data.