JARVIS-AI-ASSISTANT and JARVIS-AI-Assistant

These two projects compete as JARVIS-inspired AI assistants with similar feature sets, including speech recognition and AI chat.

| | JARVIS-AI-ASSISTANT | JARVIS-AI-Assistant |
|---|---|---|
| Score | 46 (Emerging) | 44 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 9/25 | 8/25 |
| Maturity | 16/25 | 16/25 |
| Community | 21/25 | 20/25 |
| Stars | 97 | 42 |
| Forks | 41 | 32 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | GPL-3.0 | MIT |
| Flags | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About JARVIS-AI-ASSISTANT

JoelShine/JARVIS-AI-ASSISTANT

A true artificially intelligent assistant with ALICE as the backend, offline speech recognition via the Vosk engine, and pyttsx3 as the text-to-speech engine

Builds conversational intelligence using AIML pattern matching on ALICE brain files, enabling context-aware dialogue beyond simple command recognition. Implements a modular architecture combining Vosk for offline speech-to-text, pyttsx3 for synthesis, and a chat engine that processes natural language queries locally without cloud dependencies. Designed for Windows but also supports Linux and macOS with Python 3.7+, requiring downloaded Vosk language models for accent-specific speech recognition.
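The pipeline described above can be sketched as a simple loop: recognized speech goes to a local chat engine, and the reply is voiced back. The sketch below uses plain callables in place of the real components so it runs without Vosk or pyttsx3 installed; in the actual project those callables would wrap a `vosk.KaldiRecognizer` transcription loop, an AIML kernel's `respond()`, and `pyttsx3`'s `say()`. The wiring is illustrative, not taken from the repository.

```python
# Hedged sketch of an offline voice-assistant loop (assumed structure):
# transcription -> local chat engine -> speech synthesis, all local.
from typing import Callable, Iterable, List

def assistant_loop(
    utterances: Iterable[str],              # stands in for Vosk transcriptions
    respond: Callable[[str], str],          # stands in for the AIML chat engine
    speak: Callable[[str], None],           # stands in for pyttsx3 synthesis
) -> List[str]:
    """Route each recognized utterance through the chat engine and voice the reply."""
    replies = []
    for text in utterances:
        reply = respond(text)
        speak(reply)
        replies.append(reply)
    return replies

# Usage with stub components:
spoken = []
replies = assistant_loop(
    ["hello"],
    respond=lambda q: f"You said: {q}",
    speak=spoken.append,
)
```

Passing the components as callables mirrors the modular design the description claims: each stage (recognition, reasoning, synthesis) can be swapped or tested independently.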

About JARVIS-AI-Assistant

rajkishorbgp/JARVIS-AI-Assistant

JARVIS AI Assistant 🤖 A virtual assistant project inspired by Tony Stark's JARVIS, powered by speech recognition, AI chat, web browsing, and more. Features: 🎙️ voice-based interaction using speech recognition; 🧠 AI-powered chat with OpenAI's language model; 🌐 web browsing capabilities to open websites; 🎵 music playback; ⏰ current time display

Built in Python with the `speech_recognition` library for voice input and OpenAI API integration for conversational responses, it processes natural language commands to trigger predefined actions like website opening and music playback. The architecture uses a command-driven approach where voice queries are parsed and routed to specialized modules for web browsing, media control, and time retrieval, with responses synthesized back to the user.
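The command-driven routing described above can be sketched as keyword matching over the parsed query, dispatching to a handler per action. The keywords and responses below are illustrative assumptions, not code from the repository; unmatched queries would fall through to the OpenAI chat path.

```python
# Hedged sketch of command-driven routing (assumed keywords and handlers).
from datetime import datetime

def route_command(query: str) -> str:
    """Match a voice query against keywords and dispatch to an action."""
    q = query.lower()
    if "open" in q and "website" in q:
        return "opening website"    # real code would call webbrowser.open(...)
    if "play" in q and "music" in q:
        return "playing music"      # real code would start media playback
    if "time" in q:
        return datetime.now().strftime("The time is %H:%M")
    # Anything unrecognized goes to the conversational model.
    return "fallback: forward query to OpenAI chat model"
```

Keeping the router a pure function of the query string makes each branch easy to test before wiring in microphone input and text-to-speech output.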

Scores updated daily from GitHub, PyPI, and npm data. How scores work