Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding

Wenxuan Guo, Xiuwei Xu, Ziwei Wang, Jianjiang Feng, Jie Zhou, Jiwen Lu

2025-02-17

Summary

This paper presents Text-guided Sparse Voxel Pruning (TSP), a method that lets AI locate objects in 3D scenes from text descriptions both faster and more accurately. It combines several techniques to improve speed and precision enough for real-time applications.

What's the problem?

Current methods for 3D visual grounding, which is the process of finding objects in 3D spaces using text descriptions, are either too slow or not accurate enough. Many systems rely on complex, multi-step processes or struggle to efficiently connect the text with the 3D scene, making them impractical for real-world uses like robotics or virtual reality.

What's the solution?

The researchers created TSP, which rests on two key ideas: text-guided pruning (TGP) and completion-based addition (CBA). TGP helps the AI focus on only the relevant parts of the 3D scene by gradually removing unnecessary regions while linking the text to the scene. CBA repairs any over-pruning by adding back missing voxels with negligible extra cost. Together, these techniques make the system both faster and more precise: in the authors' experiments, it outperformed existing approaches in speed and accuracy on several benchmarks.
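The two ideas above can be pictured with a small sketch. This is not the paper's implementation (the real TGP scores come from a learned sparse-convolution network and the real CBA operates on voxel grids); it is a simplified illustration in numpy, where `text_guided_prune`, `completion_based_addition`, and all parameters are hypothetical names chosen for this example:

```python
import numpy as np

def text_guided_prune(voxel_feats, text_feats, keep_ratio=0.5):
    """TGP sketch: score each voxel by its similarity to the text
    tokens, then keep only the top-scoring fraction of voxels."""
    # scaled dot-product similarity: (num_voxels, num_tokens)
    sim = voxel_feats @ text_feats.T / np.sqrt(voxel_feats.shape[1])
    relevance = sim.max(axis=1)           # best-matching token per voxel
    k = max(1, int(len(voxel_feats) * keep_ratio))
    keep = np.argsort(relevance)[-k:]     # indices of most relevant voxels
    return np.sort(keep)

def completion_based_addition(coords, kept, target_center, radius=1.5):
    """CBA sketch: re-add pruned voxels that fall inside the predicted
    target region, repairing geometry lost to over-aggressive pruning."""
    dist = np.linalg.norm(coords - target_center, axis=1)
    restore = np.where(dist < radius)[0]
    return np.union1d(kept, restore)
```

The key design point is that pruning happens *before* the expensive text-scene interaction, so later stages only process the voxels that survive, while CBA is a cheap union operation that guards against discarding the target itself.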

Why it matters?

This matters because it could make AI systems much better at understanding and interacting with 3D environments in real time. Applications like augmented reality, robotics, and virtual reality could benefit from this technology by becoming faster and more reliable, improving how machines assist humans in complex tasks.

Abstract

In this paper, we propose an efficient multi-level convolution architecture for 3D visual grounding. Conventional methods struggle to meet the requirements of real-time inference due to their two-stage or point-based architectures. Inspired by the success of multi-level fully sparse convolutional architectures in 3D object detection, we aim to build a new 3D visual grounding framework following this technical route. However, since the 3D visual grounding task requires deep interaction between the 3D scene representation and text features, a sparse convolution-based architecture is inefficient for this interaction due to the large number of voxel features. To this end, we propose text-guided pruning (TGP) and completion-based addition (CBA) to deeply fuse the 3D scene representation and text features in an efficient way through gradual region pruning and target completion. Specifically, TGP iteratively sparsifies the 3D scene representation and thus lets the voxel features interact efficiently with text features via cross-attention. To mitigate the effect of pruning on delicate geometric information, CBA adaptively repairs over-pruned regions by voxel completion with negligible computational overhead. Compared with previous single-stage methods, our method achieves the top inference speed, surpassing the previous fastest method by 100% in FPS. Our method also achieves state-of-the-art accuracy even compared with two-stage methods, with a +1.13 lead in Acc@0.5 on ScanRefer, and +2.6 and +3.2 leads on NR3D and SR3D respectively. The code is available at https://github.com/GWxuan/TSP3D.
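The abstract's "gradual region pruning" interleaved with cross-attention can be sketched as a loop over feature levels, where each level fuses text into the voxel features and then drops the least relevant voxels so the next level's attention is cheaper. This is a hypothetical simplification in numpy (the function names, the residual fusion, and the mean-pooled query score are assumptions for illustration, not the paper's exact design):

```python
import numpy as np

def cross_attention(voxel_feats, text_feats):
    """Scaled dot-product attention where voxels attend to text tokens,
    fused back into the voxel features via a residual connection."""
    d = voxel_feats.shape[1]
    logits = voxel_feats @ text_feats.T / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)         # softmax over text tokens
    return voxel_feats + w @ text_feats       # residual text fusion

def multi_level_grounding(voxel_feats, text_feats, levels=3, keep_ratio=0.5):
    """Interleave text fusion and pruning: each level halves (by default)
    the voxel count, so attention cost shrinks level by level."""
    for _ in range(levels):
        voxel_feats = cross_attention(voxel_feats, text_feats)
        # relevance to the query, scored against the pooled text embedding
        score = voxel_feats @ text_feats.mean(axis=0)
        k = max(1, int(len(voxel_feats) * keep_ratio))
        voxel_feats = voxel_feats[np.argsort(score)[-k:]]
    return voxel_feats
```

With `keep_ratio=0.5`, the cross-attention at level `i` runs on roughly `N / 2**i` voxels, which is what makes the deep text-scene interaction affordable compared with attending over the full voxel set at every level.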