MeshAnything and MeshAnythingV2
MeshAnythingV2 is a direct successor to MeshAnything. It keeps the autoregressive transformer but replaces the original per-face mesh tokenization with Adjacent Mesh Tokenization (AMT), which produces shorter token sequences and makes artist-created mesh generation more efficient.
About MeshAnything
buaacyw/MeshAnything
[ICLR 2025] From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
Converts 3D inputs (meshes, or point clouds with normals) into artist-created topology using an autoregressive transformer with vector quantization, generating meshes of up to 800 faces. Built on PyTorch with model weights hosted on Hugging Face, it supports both command-line inference and interactive Gradio demos, and it targets the dense outputs of 3D reconstruction or generation pipelines rather than raw generative models themselves. Mesh inputs are preprocessed with Marching Cubes, and all inputs are normalized to a unit bounding box for consistent results.
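The unit-bounding-box normalization mentioned above is a standard preprocessing step; a minimal sketch of the idea (illustrative only, not the repository's actual code, with the function name `normalize_to_unit_bbox` chosen here for clarity) might look like this:

```python
import numpy as np

def normalize_to_unit_bbox(points: np.ndarray) -> np.ndarray:
    """Center a point cloud and scale it uniformly so its axis-aligned
    bounding box fits inside [-0.5, 0.5]^3.

    Illustrative sketch of the kind of normalization described above;
    the repo's exact convention (e.g. target range) may differ.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    center = (mins + maxs) / 2.0          # bbox center, not the centroid
    scale = (maxs - mins).max()           # longest bbox edge sets the scale
    return (points - center) / scale

# Example: a cloud spanning a 2 x 4 x 1 box ends up inside the unit cube.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 1.0]])
normalized = normalize_to_unit_bbox(pts)
```

Scaling by the longest bounding-box edge (rather than per-axis) preserves the input's aspect ratio, which is what "consistent results" typically requires.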
About MeshAnythingV2
buaacyw/MeshAnythingV2
[ICCV 2025] From anything to mesh like human artists. Official impl. of "MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization"
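To make the tokenization change concrete, here is a toy, heavily simplified sketch of the core AMT idea: when consecutive faces share an edge, only the one new vertex needs to be emitted; otherwise a restart marker and the full face are emitted. This is an assumption-laden illustration (the marker `"&"` and function name `amt_encode` are hypothetical), not the paper's actual implementation, which operates on ordered vertex coordinates.

```python
def amt_encode(faces):
    """Toy sketch of Adjacent Mesh Tokenization (simplified assumption):
    a face sharing an edge with the previous face contributes only its
    single new vertex; otherwise a restart marker plus all three vertices.
    """
    tokens = []
    prev = None
    for face in faces:
        if prev is not None and len(set(face) & set(prev)) == 2:
            # Adjacent face: exactly one vertex is new.
            (new_v,) = set(face) - set(prev)
            tokens.append(new_v)
        else:
            # Non-adjacent face: restart with the full vertex triple.
            tokens.extend(["&", *face])
        prev = face
    return tokens
```

For a strip of adjacent faces this shrinks the sequence toward one token per face instead of three, which is the efficiency gain the V2 tagline refers to.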