Awesome-World-Models and Awesome-World-Model

These are complementary resources with overlapping scope. The first provides a broader theoretical foundation across world models, video generation, and embodied AI, while the second focuses specifically on autonomous driving, so researchers can use them together for both general context and domain-specific depth.

| Metric | Awesome-World-Models | Awesome-World-Model |
| --- | --- | --- |
| Overall score | 56 (Established) | 53 (Established) |
| Maintenance | 17/25 | 20/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 8/25 |
| Community | 13/25 | 15/25 |
| Stars | 1,334 | 1,889 |
| Forks | 41 | 75 |
| Downloads | N/A | N/A |
| Commits (30d) | 19 | 10 |
| Language | N/A | N/A |
| License | BSD-3-Clause | None |
| Package | None | None |
| Dependents | None | None |
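In both cases the overall score appears to be the sum of the four 25-point subscores (Maintenance, Adoption, Maturity, Community), giving a 0–100 scale. A minimal sketch of that apparent relationship (the function name is hypothetical, not part of the scoring service):

```python
def overall_score(maintenance: int, adoption: int, maturity: int, community: int) -> int:
    """Hypothetical reconstruction: overall score as the sum of the
    four subscores, each assumed to range from 0 to 25."""
    parts = (maintenance, adoption, maturity, community)
    if not all(0 <= p <= 25 for p in parts):
        raise ValueError("each subscore must be in the range 0..25")
    return sum(parts)

# Awesome-World-Models: 17 + 10 + 16 + 13
print(overall_score(17, 10, 16, 13))  # 56
# Awesome-World-Model: 20 + 10 + 8 + 15
print(overall_score(20, 10, 8, 15))   # 53
```

Both printed values match the overall scores shown above, which supports (but does not confirm) the simple additive interpretation.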

About Awesome-World-Models

leofan90/Awesome-World-Models

A comprehensive collection covering the definition of World Models and their use for general video generation, Embodied AI, and autonomous driving, including papers, code, and related websites.

About Awesome-World-Model

LMD0311/Awesome-World-Model

Collects papers on world models for autonomous driving (and robotics, etc.).

Curated repository tracking world model research for autonomous driving and robotics, supplemented by a comprehensive survey (arXiv 2502.10498) that systematically analyzes predictive modeling approaches for 3D scene understanding and future generation. Organizes papers by technical methodology—including occupancy forecasting, generative video models, and unified perception-generation architectures—while indexing benchmark datasets (Argoverse 2, nuScenes) and CVPR workshop challenges. Serves as a living resource with community contributions, enabling researchers to track emerging model architectures (HERMES, UniFuture, GAIA-1) that unify perception and planning for embodied AI systems.

Scores are updated daily from GitHub, PyPI, and npm data.