NVIDIA-AI-Blueprints/video-search-and-summarization

Blueprint for ingesting massive volumes of live or archived videos and extracting insights for summarization and interactive Q&A

Score: 61 / 100 (Established)

Leverages NVIDIA NIM microservices (Vision Language Models like Cosmos Nemotron and LLMs like Llama Nemotron) orchestrated through the Model Context Protocol to provide unified tool access for VLM-based Q&A, semantic video embeddings, long-form summarization, and real-time anomaly detection. The architecture separates real-time video intelligence (extracting visual features and embeddings via microservices), downstream analytics (enriching metadata streams into actionable alerts), and agentic workflows that coordinate across video retrieval, perception, and language understanding components. Supports multiple deployment topologies from local Docker Compose to cloud instances with validated GPU configurations, targeting both video analysts needing 1-click setup and ML engineers requiring custom pipeline modifications.

No package published; no dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

Stars: 432
Forks: 160
Language: Python
License:
Last pushed: Feb 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/NVIDIA-AI-Blueprints/video-search-and-summarization"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
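The curl command above can also be sketched in Python. This is a minimal sketch using only the standard library; the URL pattern comes from the curl example, but the shape of the JSON response is not documented here, so `fetch_quality` simply returns the parsed body without assuming any field names.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality-score JSON for a repo.

    The response schema is an assumption; this just parses whatever
    JSON body the endpoint returns.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example, for this repository.
    print(quality_url("NVIDIA-AI-Blueprints", "video-search-and-summarization"))
```

Within the free tier (100 requests/day without a key) no authentication header is needed, so a plain `urlopen` call is sufficient.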