OpenMind/OM1
Modular AI runtime for robots
Supports multimodal sensor inputs (camera, LIDAR, web data) and outputs (motion commands, speech, navigation) through a plugin-based architecture that connects to ROS2, Zenoh, and CycloneDDS middlewares. Agents are configured declaratively via JSON5 files with system prompts, enabling rapid customization across different robot form factors without code changes. Includes web-based debugging, pre-integrated LLM/VLM endpoints, and simulator support (Gazebo, Isaac Sim) for prototyping before hardware deployment.
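As an illustration of the declarative style, a JSON5 agent definition might look roughly like the sketch below. The field names, plugin names, and layout here are assumptions for illustration only, not OM1's actual schema; see the repository's config files for real examples.

// Hypothetical agent config (illustrative only; not OM1's real schema)
{
  name: "quadruped_explorer",
  system_prompt: "You are a curious quadruped robot. Describe what you see and decide where to move next.",
  inputs: [
    { type: "camera", plugin: "vlm_camera" },   // vision frames fed to a VLM endpoint
    { type: "lidar",  plugin: "lidar_scan" },   // obstacle data for navigation
  ],
  outputs: [
    { type: "move",   middleware: "ros2"  },    // motion commands over ROS2
    { type: "speech", middleware: "zenoh" },    // spoken responses over Zenoh
  ],
}

Because behavior lives in the config and system prompt rather than in code, retargeting the same runtime to a different robot form factor is largely a matter of pointing it at a different file.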
2,681 stars. Actively maintained with 54 commits in the last 30 days.
Stars: 2,681
Forks: 965
Language: Python
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 54
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenMind/OM1"
The endpoint is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
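For programmatic use, the same endpoint can be queried from Python with only the standard library; a minimal sketch, assuming the endpoint returns a JSON body (the response schema is not documented here):

# Minimal sketch: fetch the quality record for OpenMind/OM1 and pretty-print it.
# Anonymous access is limited to 100 requests/day; a free key raises that to 1,000/day.
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenMind/OM1"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)   # assumes a JSON response body

print(json.dumps(data, indent=2))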
Related tools
mesa/mesa-llm
Extension to the Mesa repository that provides the ability to plug LLMs directly into your...
AgentEra/Agently
[GenAI Application Development Framework] 🚀 Build GenAI applications quickly and easily 💬 Easy to...
vstorm-co/pydantic-ai-backend
File Storage & Sandbox Backends for Pydantic AI: console tools for file operations,...
zai-org/GLM-5
GLM-5: From Vibe Coding to Agentic Engineering
datapizza-labs/datapizza-ai
Build reliable Gen AI solutions without overhead 🍕