langroid and OmAgent
Both are frameworks for building LLM-powered agents, which puts them in direct competition in multi-agent programming and multimodal language-agent development.
About langroid
langroid/langroid
Harness LLMs with Multi-Agent Programming
Provides agent-to-agent asynchronous message passing inspired by the Actor model, where agents encapsulate LLM, vector store, and tool components and collaborate to solve tasks. Integrates with OpenAI APIs, local/remote LLMs via OpenAI-compatible endpoints (including Ollama), MCP servers, and vector databases—without depending on LangChain or other LLM frameworks.
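The Actor-style message passing described above can be illustrated with a minimal, self-contained sketch. This is not Langroid's actual API; the `Agent` class, `send`, and `receive` here are hypothetical stand-ins that show the core idea: each agent encapsulates its own state and communicates only through asynchronous messages.

```python
import asyncio

# Conceptual sketch (hypothetical, NOT Langroid's API): agents as actors
# that exchange messages over per-agent asyncio queues.
class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = asyncio.Queue()  # each actor owns its mailbox

    async def send(self, other, msg):
        await other.inbox.put((self.name, msg))

    async def receive(self):
        return await self.inbox.get()

async def main():
    planner = Agent("planner")
    solver = Agent("solver")
    # planner delegates a task; solver handles it and replies
    await planner.send(solver, "solve: 2 + 2")
    _, msg = await solver.receive()
    await solver.send(planner, f"answer to '{msg}': 4")
    _, reply = await planner.receive()
    return reply

result = asyncio.run(main())
```

In a real framework, the "handle and reply" step would invoke an LLM, a vector store lookup, or a tool call; the actor pattern keeps those components isolated inside each agent.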
About OmAgent
om-ai-lab/OmAgent
[EMNLP-2024] Build multimodal language agents for fast prototyping and production
Provides a graph-based workflow orchestration engine with native support for vision-language models, video processing, and mobile device integration—beyond text-only reasoning. Implements reusable agent operators (ReAct, CoT, SC-CoT) and abstracts away infrastructure concerns such as worker orchestration and task queuing. Supports both fully distributed deployment and a lightweight "Lite mode," with local model execution via Ollama or LocalAI.
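The graph-based orchestration idea can be sketched as a tiny DAG of named operator nodes executed in dependency order, with each node's output flowing into its downstream nodes. This is a hypothetical illustration, not OmAgent's actual API; the `Workflow` class and the `caption`/`reason` operators are invented for the example.

```python
# Conceptual sketch (hypothetical, NOT OmAgent's API): a workflow as a
# DAG of operator nodes, run in topological order.
class Workflow:
    def __init__(self):
        self.nodes = {}  # name -> (fn, list of upstream node names)

    def add(self, name, fn, upstream=()):
        self.nodes[name] = (fn, list(upstream))
        return self

    def run(self, **inputs):
        results = dict(inputs)
        resolved = set(inputs)
        # repeatedly run any node whose upstream results are all available
        while len(resolved) < len(self.nodes) + len(inputs):
            for name, (fn, upstream) in self.nodes.items():
                if name not in resolved and all(u in resolved for u in upstream):
                    results[name] = fn(*(results[u] for u in upstream))
                    resolved.add(name)
        return results

# A two-stage pipeline: caption a video frame, then reason over the caption.
wf = (Workflow()
      .add("caption", lambda frame: f"caption({frame})", ["frame"])
      .add("reason", lambda cap: f"react-step({cap})", ["caption"]))
out = wf.run(frame="frame_0")
```

In a production engine, each operator would wrap a model call (e.g. a vision-language model for `caption`, a ReAct loop for `reason`) and the runner would dispatch nodes to distributed workers via a task queue rather than executing them inline.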