Awesome-Multimodal-Large-Language-Models and Awesome-Multimodal-LLM-Autonomous-Driving

These two repositories are ecosystem siblings: Awesome-Multimodal-LLM-Autonomous-Driving is a specialized application of the broader field surveyed by Awesome-Multimodal-Large-Language-Models, focusing specifically on multimodal large language models in the autonomous driving domain.

| Metric | Awesome-Multimodal-Large-Language-Models | Awesome-Multimodal-LLM-Autonomous-Driving |
| --- | --- | --- |
| Maintenance | 20/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 8/25 | 16/25 |
| Community | 18/25 | 10/25 |
| Stars | 17,448 | 309 |
| Forks | 1,112 | 13 |
| Downloads | n/a | n/a |
| Commits (30d) | 7 | 0 |
| Language | n/a | n/a |
| License | None | MIT |
| Flags | No License, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About Awesome-Multimodal-Large-Language-Models

BradyFU/Awesome-Multimodal-Large-Language-Models

✨✨ Latest Advances on Multimodal Large Language Models

A comprehensive curated collection of research papers, datasets, and benchmarks tracking multimodal LLM advances across instruction tuning, hallucination mitigation, and reasoning tasks. The maintainers also develop their own evaluation benchmarks (MME, Video-MME, MME-RealWorld) and the VITA series of omni-modal models, which support real-time vision-speech interaction and embodied reasoning. Aimed at the broader MLLM research ecosystem, the list documents 750+ references and curated resources for model development and evaluation.

About Awesome-Multimodal-LLM-Autonomous-Driving

IrohXu/Awesome-Multimodal-LLM-Autonomous-Driving

[WACV 2024 Survey Paper] Multimodal Large Language Models for Autonomous Driving

Scores updated daily from GitHub, PyPI, and npm data.
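
The raw rows in the comparison table (stars, forks, license, recent commits) can be re-fetched directly. Below is a minimal sketch, assuming only the public unauthenticated GitHub REST API; the repository names are the real ones from this page, but the 0-25 dimension weighting is the site's own unpublished formula, so only the raw inputs are pulled here.

```python
"""Sketch: fetch the raw GitHub metrics shown in the table above.

Assumption: unauthenticated access to the public GitHub REST API is
sufficient for a handful of requests (rate limits apply at scale).
"""
from datetime import datetime, timedelta, timezone

import requests  # third-party HTTP client

REPOS = [
    "BradyFU/Awesome-Multimodal-Large-Language-Models",
    "IrohXu/Awesome-Multimodal-LLM-Autonomous-Driving",
]


def fetch_metrics(full_name: str) -> dict:
    """Return stars, forks, license, and a 30-day commit count for a repo."""
    base = f"https://api.github.com/repos/{full_name}"
    repo = requests.get(base, timeout=10).json()

    # Commits in the last 30 days; per_page=100 caps the count,
    # which is fine for curated-list repos with light activity.
    since = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
    commits = requests.get(
        f"{base}/commits",
        params={"since": since, "per_page": 100},
        timeout=10,
    ).json()

    lic = repo.get("license") or {}  # license is null for unlicensed repos
    return {
        "stars": repo["stargazers_count"],
        "forks": repo["forks_count"],
        "license": lic.get("spdx_id") or "None",
        "commits_30d": len(commits),
    }


if __name__ == "__main__":
    for name in REPOS:
        print(name, fetch_metrics(name))
```

Downloads stay empty for both repositories because neither publishes a package to PyPI or npm, which is also why both carry the "No Package" flag.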