awesome-vla-for-ad and awesome-knowledge-driven-AD
About awesome-vla-for-ad
worldbench/awesome-vla-for-ad
🌐 Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
This project offers a comprehensive survey of Vision-Language-Action (VLA) models for autonomous driving. It explains how these models combine real-world visual observations with natural-language commands to produce driving actions, moving beyond traditional error-prone modular pipelines. Robotics engineers and autonomous-vehicle researchers can use it to understand the current state and future directions of AI-driven self-driving systems.
About awesome-knowledge-driven-AD
PJLab-ADG/awesome-knowledge-driven-AD
A curated list of awesome knowledge-driven autonomous driving (continually updated)
This is a curated collection of research papers and open-source resources on autonomous driving systems that use knowledge to guide decision-making. It gathers datasets, benchmarks, and simulators in one place, so researchers and engineers working on next-generation self-driving cars can quickly find relevant studies and tools.