NeuGrasp: Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection
Qingyu Fan, Yinghao Cai, Chao Li, Wenzhe He, Xudong Zheng, Tao Lu, Bin Liang, Shuo Wang
2025-03-11
Summary
This paper introduces NeuGrasp, a robotic grasping system that learns to pick up objects, even see-through or shiny ones, by using background clues and a neural network to 'imagine' their 3D shapes instead of relying on unreliable depth sensors.
What's the problem?
Robots struggle to grasp transparent or reflective objects (like glass cups) because depth cameras can't accurately measure their surfaces, leading to failed grasps.
What's the solution?
NeuGrasp uses background images and a neural network to focus on the object’s shape, combines multiple camera angles with transformers, and predicts safe grab points without needing perfect depth data.
Why does it matter?
This helps robots work better in real-world settings like kitchens or factories where they need to handle all kinds of objects, not just easy-to-scan ones.
Abstract
Robotic grasping in scenes with transparent and specular objects presents great challenges for methods relying on accurate depth information. In this paper, we introduce NeuGrasp, a neural surface reconstruction method that leverages background priors for material-agnostic grasp detection. NeuGrasp integrates transformers and global prior volumes to aggregate multi-view features with spatial encoding, enabling robust surface reconstruction in narrow and sparse viewing conditions. By focusing on foreground objects through residual feature enhancement and refining spatial perception with an occupancy-prior volume, NeuGrasp excels in handling objects with transparent and specular surfaces. Extensive experiments in both simulated and real-world scenarios show that NeuGrasp outperforms state-of-the-art methods in grasping while maintaining comparable reconstruction quality. More details are available at https://neugrasp.github.io/.
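To make the two key ideas in the abstract concrete, here is a minimal NumPy sketch of (1) residual feature enhancement, which emphasizes foreground objects by subtracting background features from scene features, and (2) attention-based aggregation of per-view features, in the spirit of the transformer aggregation described above. This is an illustrative toy, not the authors' implementation; all function names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_enhance(scene_feats, bg_feats):
    # Emphasize foreground content as the residual between features
    # extracted from the scene images and from the background images.
    return scene_feats - bg_feats

def aggregate_views(query, view_feats):
    # Single-head dot-product attention fusing V per-view features
    # for one 3D sample point. query: (d,), view_feats: (V, d).
    d = query.shape[-1]
    scores = view_feats @ query / np.sqrt(d)   # (V,) similarity per view
    weights = softmax(scores)                  # (V,) attention weights
    return weights @ view_feats                # (d,) fused feature

# Toy example: 3 camera views, 4-dim features (values are arbitrary).
rng = np.random.default_rng(0)
scene = rng.normal(size=(3, 4))
bg = rng.normal(size=(3, 4))
fg = residual_enhance(scene, bg)           # background-suppressed features
fused = aggregate_views(fg.mean(axis=0), fg)  # one fused feature vector
```

In the actual method, a fused per-point feature like `fused` would feed a surface (SDF/occupancy) decoder and downstream grasp detection; here it only demonstrates the data flow.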