
VoMP: Predicting Volumetric Mechanical Property Fields

Rishit Dagli, Donglai Xiang, Vismay Modi, Charles Loop, Clement Fuji Tsang, Anka He Chen, Anita Hu, Gavriel State, David I. W. Levin, Maria Shugrina

2025-10-28


Summary

This paper introduces a new method called VoMP that automatically predicts the material properties of 3D objects, like how squishy or stiff they are, directly from rendered views of the object.

What's the problem?

Normally, when you want to simulate how something behaves physically – how a ball bounces or a cloth drapes – you need to manually tell the computer exactly what the object is made of, specifying things like stiffness and density for every part. This is a time-consuming and difficult process, especially for complex shapes.

What's the solution?

VoMP uses a neural network called a Geometry Transformer to predict these material properties throughout the entire 3D object. It renders the object from multiple viewpoints, gathers image features from those views back onto each tiny piece (voxel) of the object, and then uses that information to predict Young's modulus, Poisson's ratio, and density for every voxel. Importantly, the predictions are constrained to a manifold of physically plausible materials learned from real-world data, so the model only outputs realistic material combinations. The authors also built a new annotation pipeline to gather training data, combining existing segmented 3D datasets, material databases, and a vision-language model that interprets descriptions of materials.
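To make the data flow concrete, here is a minimal sketch of a VoMP-style pipeline. All function names are illustrative, not the paper's actual API, and the learned components (the multi-view feature extractor, the Geometry Transformer, and the material decoder) are replaced with simple numerical stand-ins that only preserve the shapes and the "physically plausible output" constraint:

```python
import numpy as np

def voxelize(resolution=8):
    """Occupancy grid for a solid sphere (stand-in for any renderable 3D object)."""
    coords = np.linspace(-1, 1, resolution)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    return (x**2 + y**2 + z**2) <= 1.0  # boolean occupancy

def aggregate_multiview_features(occupancy, n_views=4, feat_dim=16):
    """Stand-in for rendering the object from several views and pooling
    per-view image features back onto each occupied voxel."""
    rng = np.random.default_rng(0)
    n_vox = int(occupancy.sum())
    per_view = rng.normal(size=(n_views, n_vox, feat_dim))
    return per_view.mean(axis=0)  # average-pool features across views

def geometry_transformer(features, latent_dim=8):
    """Stand-in for the trained Geometry Transformer: maps per-voxel
    features to latent codes on a learned material manifold."""
    rng = np.random.default_rng(1)
    W = rng.normal(size=(features.shape[1], latent_dim))
    return np.tanh(features @ W)  # bounded latents, loosely "on-manifold"

def decode_material(latents):
    """Decode each latent into (E, nu, rho), squashed into plausible ranges
    (ranges here are illustrative, not taken from the paper)."""
    s = 1.0 / (1.0 + np.exp(-latents[:, :3]))  # sigmoid -> (0, 1)
    E = 1e4 * (1e5 ** s[:, 0])                 # Young's modulus, log scale [Pa]
    nu = 0.5 * s[:, 1]                         # Poisson's ratio, below 0.5
    rho = 100.0 + 9900.0 * s[:, 2]             # density [kg/m^3]
    return E, nu, rho

occ = voxelize()
feats = aggregate_multiview_features(occ)
latents = geometry_transformer(feats)
E, nu, rho = decode_material(latents)
print(E.shape, nu.shape, rho.shape)
```

The key design point mirrored here is the last step: rather than regressing E, ν, and ρ independently, the real method decodes them jointly from a latent that lives on a manifold of observed materials, which is what rules out physically impossible combinations.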

Why it matters?

This matters because it makes physical simulations much faster and easier to set up. Instead of painstakingly defining materials by hand, you can give VoMP an object and it will automatically estimate the properties needed for a realistic simulation. That opens the door to more complex and detailed simulations in fields like game development, engineering, and visual effects.

Abstract

Physical simulation relies on spatially-varying mechanical properties, often laboriously hand-crafted. VoMP is a feed-forward method trained to predict Young's modulus (E), Poisson's ratio (nu), and density (rho) throughout the volume of 3D objects, in any representation that can be rendered and voxelized. VoMP aggregates per-voxel multi-view features and passes them to our trained Geometry Transformer to predict per-voxel material latent codes. These latents reside on a manifold of physically plausible materials, which we learn from a real-world dataset, guaranteeing the validity of decoded per-voxel materials. To obtain object-level training data, we propose an annotation pipeline combining knowledge from segmented 3D datasets, material databases, and a vision-language model, along with a new benchmark. Experiments show that VoMP estimates accurate volumetric properties, far outperforming prior art in accuracy and speed.