Faster-Local-Voice-AI and Local-Voice

These are ecosystem siblings: Local-Voice is a simplified, refactored successor to Faster-Local-Voice-AI that reduces complexity by dropping the JACK/PipeWire audio-routing infrastructure in favor of direct Vosk/Piper integration, while targeting the same offline voice assistant use case.

| Metric        | Faster-Local-Voice-AI                 | Local-Voice                           |
|---------------|---------------------------------------|---------------------------------------|
| Overall score | 31 (Emerging)                         | 29 (Experimental)                     |
| Maintenance   | 2/25                                  | 2/25                                  |
| Adoption      | 6/25                                  | 5/25                                  |
| Maturity      | 9/25                                  | 9/25                                  |
| Community     | 14/25                                 | 13/25                                 |
| Stars         | 23                                    | 10                                    |
| Forks         | 4                                     | 2                                     |
| Downloads     |                                       |                                       |
| Commits (30d) | 0                                     | 0                                     |
| Language      | Python                                | Python                                |
| License       | MIT                                   | MIT                                   |
| Flags         | Stale 6m, No Package, No Dependents   | Stale 6m, No Package, No Dependents   |

About Faster-Local-Voice-AI

m15-ai/Faster-Local-Voice-AI

A real-time, fully local voice AI system optimized for low-resource devices like an 8GB Ubuntu laptop with no GPU, achieving sub-second STT-to-TTS latency using Ollama, Vosk, Piper, and JACK/PipeWire. Open-source and privacy-focused for offline conversational AI.
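Sub-second latency on CPU-only hardware hinges on streaming: the recognizer is fed short fixed-size audio frames so transcription starts while the speaker is still talking. A minimal sketch of that frame-chunking step, assuming 16 kHz 16-bit mono PCM (the format Vosk's small models expect); the function names here are illustrative, not taken from the repo:

```python
# Hypothetical sketch of frame-based audio streaming for low-latency STT.
# Vosk's KaldiRecognizer consumes raw PCM in small chunks; feeding it short
# frames (here 20 ms) lets partial results arrive mid-utterance, which is
# what makes sub-second STT-to-TTS turnaround feasible without a GPU.

SAMPLE_RATE = 16000      # Hz; the rate Vosk's small English model expects
BYTES_PER_SAMPLE = 2     # 16-bit PCM
FRAME_MS = 20            # frame length in milliseconds

def frame_size_bytes(frame_ms=FRAME_MS, rate=SAMPLE_RATE):
    """Bytes per audio frame for 16-bit mono PCM."""
    return rate * frame_ms // 1000 * BYTES_PER_SAMPLE

def iter_frames(pcm: bytes, frame_ms=FRAME_MS):
    """Yield fixed-size frames; a real loop would pass each frame to
    recognizer.AcceptWaveform(frame) and poll the partial result."""
    size = frame_size_bytes(frame_ms)
    for start in range(0, len(pcm) - size + 1, size):
        yield pcm[start:start + size]

# One second of silence splits into fifty 20 ms frames of 640 bytes each.
frames = list(iter_frames(b"\x00" * SAMPLE_RATE * BYTES_PER_SAMPLE))
```

In the actual project, JACK/PipeWire handles routing the microphone stream into a loop like this; the chunking arithmetic is the same either way.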

About Local-Voice

m15-ai/Local-Voice

A real-time, offline voice assistant for Linux and Raspberry Pi. Uses local LLMs (via Ollama), speech-to-text (Vosk), and text-to-speech (Piper) for fast, wake-free voice interaction. No cloud. No APIs. Just Python, a mic, and your voice.
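The Vosk → Ollama → Piper chain has three stages: transcribe, generate, synthesize. The middle stage talks to Ollama's local HTTP API (`POST /api/generate`). A minimal sketch of building that request, assuming a running Ollama server; it constructs the JSON body without sending it, and the model name is a placeholder, not the repo's configuration:

```python
import json

# Hypothetical sketch of the assistant's middle stage: turning a Vosk
# transcript into a request for a local LLM served by Ollama. Ollama's
# HTTP API accepts POST /api/generate with a JSON body like this.

def build_ollama_request(transcript: str, model: str = "llama3.2"):
    """Return the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,        # placeholder; any locally pulled model works
        "prompt": transcript,  # text produced by the Vosk STT stage
        "stream": False,       # one complete reply is simpler to hand to TTS
    })

body = build_ollama_request("what's the weather like today")
```

In the full pipeline, the reply text from this call would then be synthesized to audio by Piper and played back, completing the round trip without any cloud API.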

Scores updated daily from GitHub, PyPI, and npm data.