Coarse Correspondences Elicit 3D Spacetime Understanding in Multimodal Language Models
Benlin Liu, Yuhao Dong, Yiqin Wang, Yongming Rao, Yansong Tang, Wei-Chiu Ma, Ranjay Krishna
2024-08-02

Summary
This paper introduces Coarse Correspondence, a simple, training-free method that helps multimodal language models (MLLMs) better understand 3D space and how scenes change over time by overlaying lightweight visual cues on images and video frames.
What's the problem?
Many advanced AI models struggle to understand the three-dimensional layout of objects and how scenes evolve over time. This limitation makes it difficult for them to perform tasks that require spatial and temporal reasoning, such as interpreting 3D scenes or following actions in videos. Existing remedies often require specialized training and detailed 3D annotations, which are costly to obtain.
What's the solution?
To address this problem, the authors developed Coarse Correspondence, a straightforward method that requires no additional training. It uses a lightweight tracking model to link object instances across frames of a video or across different viewpoints of a scene. The most frequently appearing instances are then overlaid with markers carrying unique IDs, so the MLLM can tell that objects seen in separate views are in fact the same object and reason about their relationships in 3D space. This simple visual prompt improves MLLM performance on 3D understanding tasks without any complicated setup; a minimal sketch of the pipeline follows.
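To make the pipeline concrete, here is a minimal sketch of the marking step in Python. The `track_objects` function is a hypothetical stand-in for the paper's lightweight tracking model (any video object tracker that assigns persistent instance IDs would fit this interface), and OpenCV is used only for drawing; none of these specifics are prescribed by the paper.

```python
# Sketch of Coarse Correspondence visual prompting, under the assumptions above.
from collections import Counter

import cv2  # OpenCV, used here only to draw the ID markers


def track_objects(frames):
    """Hypothetical tracker interface: for each frame, return a dict mapping
    a persistent instance ID to that instance's (x, y) pixel center."""
    raise NotImplementedError("plug in any lightweight tracking model here")


def coarse_correspondence(frames, top_k=4):
    """Overlay the top_k most frequently tracked instances with unique IDs."""
    tracks = track_objects(frames)  # list[dict[int, tuple[int, int]]]

    # Count in how many frames each instance appears; keep only the top_k.
    counts = Counter(obj_id for per_frame in tracks for obj_id in per_frame)
    keep = {obj_id for obj_id, _ in counts.most_common(top_k)}

    marked = []
    for frame, per_frame in zip(frames, tracks):
        canvas = frame.copy()
        for obj_id, (x, y) in per_frame.items():
            if obj_id not in keep:
                continue
            # The same object carries the same numbered marker in every view,
            # which is the correspondence cue the MLLM is meant to pick up.
            cv2.circle(canvas, (x, y), 18, (0, 0, 255), thickness=-1)
            cv2.putText(canvas, str(obj_id), (x - 8, y + 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
        marked.append(canvas)
    return marked
```

The key design point is that the 3D signal is injected entirely through pixels: the downstream MLLM sees only the marked frames, so no model weights are touched and the method stays training-free.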
Why it matters?
This research matters because it improves how AI systems interpret and interact with the physical world. By strengthening 3D and temporal understanding without retraining, Coarse Correspondence can support advances in fields like robotics, virtual reality, and automated video analysis, making AI applications more effective and versatile in real-world scenarios.
Abstract
Multimodal language models (MLLMs) are increasingly being implemented in real-world environments, necessitating their ability to interpret 3D spaces and comprehend temporal dynamics. Despite their potential, current top models within our community still fall short in adequately understanding spatial and temporal dimensions. We introduce Coarse Correspondence, a simple, training-free, effective, and general-purpose visual prompting method to elicit 3D and temporal understanding in multimodal LLMs. Our method uses a lightweight tracking model to find object correspondences between frames in a video or between sets of image viewpoints. It selects the most frequent object instances and visualizes them with markers with unique IDs in the image. With this simple approach, we achieve state-of-the-art results on 3D understanding benchmarks including ScanQA (+20.5%) and a subset of OpenEQA (+9.7%), and on long-form video benchmarks such as EgoSchema (+6.0%). We also curate a small diagnostic dataset to evaluate whether MLLMs can reason about space from a described viewpoint other than the camera viewpoint. Again, Coarse Correspondence improves spatial perspective-taking abilities but we highlight that MLLMs struggle with this task. Together, we demonstrate that our simple prompting method can significantly aid downstream tasks that require 3D or temporal reasoning.
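Because the method is a pure visual prompt, the marked frames can be sent to any vision-capable MLLM as-is. The sketch below shows one way to do that, using the OpenAI Python SDK with gpt-4o purely as an example endpoint; this client code is our assumption for illustration, not the authors' evaluation harness, and the paper reports results across several models.

```python
# Sketch: querying a vision-capable MLLM with the ID-marked frames.
# "gpt-4o" and the OpenAI SDK are example choices, not prescribed by the paper.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_about_scene(marked_jpegs: list[bytes], question: str) -> str:
    """Send the marked frames plus a spatial question in one user turn."""
    content = [{"type": "text", "text": question}]
    for jpeg in marked_jpegs:
        b64 = base64.b64encode(jpeg).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content
```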