Faster-Local-Voice-AI and Local-Voice
These are ecosystem siblings: Local-Voice is a simplified, refactored successor to Faster-Local-Voice-AI. It reduces the original's complexity by removing the JACK/PipeWire audio-routing infrastructure in favor of direct Vosk/Piper integration, while serving the same offline voice assistant use case.
About Faster-Local-Voice-AI
m15-ai/Faster-Local-Voice-AI
A real-time, fully local voice AI system optimized for low-resource devices like an 8GB Ubuntu laptop with no GPU, achieving sub-second STT-to-TTS latency using Ollama, Vosk, Piper, and JACK/PipeWire. Open-source and privacy-focused for offline conversational AI.
About Local-Voice
m15-ai/Local-Voice
A real-time, offline voice assistant for Linux and Raspberry Pi. Uses local LLMs (via Ollama), speech-to-text (Vosk), and text-to-speech (Piper) for fast, wake-free voice interaction. No cloud. No APIs. Just Python, a mic, and your voice.
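Both projects share the same core loop: microphone audio goes to Vosk for speech-to-text, the transcript goes to a local LLM via Ollama, and the reply goes to Piper for text-to-speech. The sketch below shows that wiring as plain composable stages; the stage functions are stubs standing in for the real engines (the names `run_pipeline`, `fake_stt`, etc. are illustrative, not part of either repo), so the structure is runnable without a mic or any of the three dependencies installed.

```python
# Hypothetical sketch of the STT -> LLM -> TTS loop described above.
# In the real projects, the stages would wrap Vosk (STT), Ollama (LLM),
# and Piper (TTS); here each stage is a stub so the wiring runs anywhere.
from typing import Callable

def run_pipeline(audio: bytes,
                 stt: Callable[[bytes], str],
                 llm: Callable[[str], str],
                 tts: Callable[[str], bytes]) -> bytes:
    """Direct integration: each stage feeds the next, no audio-routing layer."""
    text = stt(audio)    # Vosk would transcribe mic audio here
    reply = llm(text)    # Ollama would generate the response here
    return tts(reply)    # Piper would synthesize speech here

# Stub stages standing in for the real engines:
fake_stt = lambda audio: "what time is it"
fake_llm = lambda text: f"You asked: {text}"
fake_tts = lambda reply: reply.encode("utf-8")

out = run_pipeline(b"\x00\x01", fake_stt, fake_llm, fake_tts)
# -> b"You asked: what time is it"
```

This direct, in-process composition is the design Local-Voice adopts in place of routing audio through JACK/PipeWire: with no intermediate audio server, each stage's output is handed straight to the next stage as Python data.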