Vision-Agents and vision-agent

Stream's Vision-Agents framework is the open-source platform that the Vision Possible Hackathon project vision-agent was built on: the two are ecosystem siblings, with vision-agent serving as a downstream application of Vision-Agents' architecture.

Metric          Vision-Agents      vision-agent
Score           88 (Verified)      20 (Experimental)
Maintenance     25/25              10/25
Adoption        20/25               1/25
Maturity        24/25               9/25
Community       19/25               0/25
Stars           7,366              1
Forks           574
Downloads       19,360
Commits (30d)   55                 0
Language        Python             HTML
License         Apache-2.0         MIT
Risk flags      None               No package, no dependents

About Vision-Agents

GetStream/Vision-Agents

Open Vision Agents by Stream. Build Vision Agents quickly with any model or video provider. Uses Stream's edge network for ultra-low latency.

Features a pluggable processor pipeline for computer vision models (YOLO, Roboflow, PyTorch/ONNX) that run before LLM calls, plus native integrations with OpenAI Realtime, Gemini Live, and Claude for streaming AI responses. Includes voice integration via Deepgram/AssemblyAI for STT and ElevenLabs/Cartesia for TTS, turn detection with VAD, tool calling via MCP, and a production-ready HTTP server with Prometheus metrics for horizontal scaling and Kubernetes deployment.
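The "processor pipeline" pattern described above can be sketched roughly as follows. This is a minimal illustration, not Vision-Agents' actual API: the `Pipeline` and `Processor` names, the `register` decorator, and the annotation dict are all assumptions made here to show how vision processors could run on a frame before their output reaches an LLM call.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Stand-in type for an image/video frame; a real pipeline would use a
# numpy array or a decoded video frame object.
Frame = Any

@dataclass
class Pipeline:
    """Hypothetical pluggable pipeline: processors run in order on each
    frame, accumulating annotations that are later injected into the
    LLM prompt/context."""
    processors: list[Callable[[Frame, dict], dict]] = field(default_factory=list)

    def register(self, proc: Callable[[Frame, dict], dict]):
        # Plug in a processor (e.g. a YOLO, Roboflow, or ONNX wrapper).
        self.processors.append(proc)
        return proc

    def run(self, frame: Frame) -> dict:
        annotations: dict = {}
        for proc in self.processors:
            # Each processor sees the frame plus annotations from
            # earlier processors, and returns new annotations to merge.
            annotations.update(proc(frame, annotations))
        return annotations

pipeline = Pipeline()

@pipeline.register
def fake_detector(frame: Frame, annotations: dict) -> dict:
    # A real processor would run object detection here.
    return {"objects": ["person", "laptop"]}

annotations = pipeline.run(frame=None)
print(annotations)  # {'objects': ['person', 'laptop']}
```

The key design point is that detection results are computed once per frame and passed to the LLM as structured context, rather than asking the model to do raw perception itself.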

About vision-agent

rupac4530-creator/vision-agent

Production-grade multi-modal AI platform — 17 real-time vision & audio tabs, 22 SDK modules, 7-tier LLM cascade, 37+ endpoints. Built for Vision Possible Hackathon by WeMakeDevs x Stream.
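The "LLM cascade" mentioned above is a common fallback pattern: try providers in priority order and drop to the next tier on failure. The sketch below is an assumption about how such a cascade might work in general; the tier names and the `call_model` helper are illustrative, and the hackathon project's actual seven tiers are not documented here.

```python
def call_model(name: str, prompt: str) -> str:
    """Stand-in for a real provider call. For demonstration, every tier
    except 'local' fails, forcing the cascade to fall through."""
    if name != "local":
        raise ConnectionError(f"{name} unavailable")
    return f"[{name}] answer to: {prompt}"

# Highest-priority tier first; a real cascade might order by quality,
# cost, or latency.
TIERS = ["gpt-4o", "gemini-pro", "local"]

def cascade(prompt: str) -> str:
    last_error: Exception | None = None
    for tier in TIERS:
        try:
            return call_model(tier, prompt)
        except Exception as err:
            last_error = err  # record and fall through to the next tier
    raise RuntimeError("all tiers failed") from last_error

print(cascade("describe the frame"))  # [local] answer to: describe the frame
```

The cascade trades answer quality for availability: a request only reaches a cheaper or local tier when every higher tier has errored out.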


Scores updated daily from GitHub, PyPI, and npm data.