Infinite Mobility: Scalable High-Fidelity Synthesis of Articulated Objects via Procedural Generation
Xinyu Lian, Zichao Yu, Ruiming Liang, Yitong Wang, Li Ray Luo, Kaixu Chen, Yuanzhen Zhou, Qihong Tang, Xudong Xu, Zhaoyang Lyu, Bo Dai, Jiangmiao Pang
2025-03-19
Summary
This paper presents a new way to create realistic 3D models of objects with moving parts (like robots or furniture) for AI to use.
What's the problem?
It's hard to collect enough high-quality 3D models of articulated objects for AI training. Existing methods are either data-driven, limited by the scale and quality of available data, or simulation-based, requiring heavy manual work.
What's the solution?
The researchers developed a system called Infinite Mobility that automatically generates these 3D models through procedural generation. The resulting models match the quality of human-annotated ones, and the system can produce them at scale.
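To make "procedural generation of articulated objects" concrete, here is a toy sketch (not the paper's actual pipeline) that samples a cabinet-like object as a tree of parts connected by joints. All names, parameters, and joint limits are illustrative assumptions.

```python
# Toy illustration of procedural articulated-object generation:
# a base body plus a randomly sampled mix of doors (revolute joints)
# and drawers (prismatic joints). Values here are assumptions.
import random

JOINT_TYPES = {"door": "revolute", "drawer": "prismatic"}

def generate_cabinet(seed=0):
    """Sample a cabinet: a fixed body plus 1-4 doors or drawers."""
    rng = random.Random(seed)
    parts = [{"name": "body", "parent": None, "joint": "fixed"}]
    for i in range(rng.randint(1, 4)):
        kind = rng.choice(["door", "drawer"])
        parts.append({
            "name": f"{kind}_{i}",
            "parent": "body",
            "joint": JOINT_TYPES[kind],  # hinge for doors, sliding rail for drawers
            # Motion range: ~90 degrees for doors, 0.4 m travel for drawers.
            "limit": (0.0, 1.57) if kind == "door" else (0.0, 0.4),
        })
    return parts

cabinet = generate_cabinet(seed=42)
print([p["joint"] for p in cabinet])
```

Because each sample is drawn from a parameterized distribution rather than hand-modeled, an arbitrary number of distinct, physically plausible objects can be generated, which is the core scalability argument of the paper.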
Why does it matter?
This work is important because it provides AI with more and better training data, which can help AI systems better understand and interact with the real world.
Abstract
Large-scale articulated objects with high quality are desperately needed for multiple tasks related to embodied AI. Most existing methods for creating articulated objects are either data-driven or simulation-based, which are limited by the scale and quality of the training data or the fidelity and heavy labour of the simulation. In this paper, we propose Infinite Mobility, a novel method for synthesizing high-fidelity articulated objects through procedural generation. User studies and quantitative evaluation demonstrate that our method produces results that surpass current state-of-the-art methods and are comparable to human-annotated datasets in both physical properties and mesh quality. Furthermore, we show that our synthetic data can be used as training data for generative models, enabling the next step of scaling up. Code is available at https://github.com/Intern-Nexus/Infinite-Mobility