Error-Driven Scene Editing for 3D Grounding in Large Language Models
Yue Zhang, Zun Wang, Han Lin, Jialu Li, Jianing Yang, Yonatan Bitton, Idan Szpektor, Mohit Bansal
2025-11-19
Summary
This paper focuses on improving how well AI models, specifically 3D Large Language Models (3D-LLMs), understand and connect language with objects and spaces in 3D virtual worlds.
What's the problem?
Current 3D-LLMs are good at understanding *what* things are described in language, but struggle to accurately pinpoint *where* things are and how they relate to each other spatially in a 3D scene. They are mostly trained on data that tests language skills rather than spatial reasoning, and readily available 3D data is too scarce to fix this imbalance, leaving built-in biases in how they understand 3D space.
What's the solution?
The researchers propose subtly changing 3D scenes to create 'counterfactuals' – slightly altered versions of the scene – that specifically target and correct the model's spatial understanding errors. Their system, called DEER-3D, works in three steps: first it identifies exactly *where* the model is making a mistake (such as getting a color or position wrong), then it makes the smallest possible change to the 3D scene that fixes that specific error, and finally it retrains the model on the corrected scene. This is different from simply adding random variations to the training data.
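The diagnose-then-edit loop described above can be sketched in code. This is a minimal illustrative sketch, not the paper's actual implementation: the `Obj` data structure, the function names, and the simple heuristics (a shared color means an attribute error, anything else is treated as a spatial error; recolor or reposition accordingly) are all assumptions made for the example.

```python
# Hypothetical sketch of one DEER-3D-style iteration:
# diagnose the predicate-level error, then apply the minimal scene edit.
# All names and heuristics here are illustrative, not from the paper.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Obj:
    oid: int        # object id in the scene
    category: str   # e.g. "chair"
    color: str      # e.g. "red"
    x: float        # 1-D position is enough for this sketch

def diagnose(target: Obj, predicted: Obj) -> str:
    """Guess which predicate confused the model.

    If the wrongly predicted object shares the target's color, the
    confusing cue was the attribute; otherwise treat it as a
    spatial-relation error (a deliberate simplification).
    """
    return "attribute" if predicted.color == target.color else "spatial_relation"

def edit_scene(scene: list[Obj], target: Obj, predicted: Obj, error_type: str) -> list[Obj]:
    """Apply the minimal predicate-aligned edit to the distractor:
    recolor it for attribute errors, move it for spatial errors.
    Returns a new scene; the target object is left untouched."""
    if error_type == "attribute":
        new_color = "blue" if target.color != "blue" else "green"
        edited = replace(predicted, color=new_color)
    else:
        edited = replace(predicted, x=predicted.x + 5.0)  # push it away
    return [edited if o.oid == predicted.oid else o for o in scene]

# Demo: the query asks for the red chair, but the model grounded the
# red table instead -> attribute error -> recolor the table.
scene = [Obj(0, "chair", "red", 0.0), Obj(1, "table", "red", 2.0)]
target, predicted = scene[0], scene[1]
error_type = diagnose(target, predicted)
counterfactual = edit_scene(scene, target, predicted, error_type)
```

In the demo, the edited scene differs from the original only in the distractor's color, so retraining on it teaches the model that color alone should have disambiguated the two objects; that "smallest possible change" property is the point of the edit being predicate-aligned.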
Why it matters?
This work is important because it shows a way to improve 3D-LLMs without needing huge amounts of new 3D data or complex scene reconstruction. By focusing on fixing specific errors through targeted scene editing, the models become much better at understanding the spatial relationships within 3D environments, which is crucial for applications like robotics, virtual reality, and creating more realistic AI assistants.
Abstract
Despite recent progress in 3D-LLMs, they remain limited in accurately grounding language to visual and spatial elements in 3D environments. This limitation stems in part from training data that focuses on language reasoning rather than spatial understanding due to scarce 3D resources, leaving inherent grounding biases unresolved. To address this, we propose 3D scene editing as a key mechanism to generate precise visual counterfactuals that mitigate these biases through fine-grained spatial manipulation, without requiring costly scene reconstruction or large-scale 3D data collection. Furthermore, to make these edits targeted and directly address the specific weaknesses of the model, we introduce DEER-3D, an error-driven framework following a structured "Decompose, Diagnostic Evaluation, Edit, and Re-train" workflow, rather than broadly or randomly augmenting data as in conventional approaches. Specifically, upon identifying a grounding failure of the 3D-LLM, our framework first diagnoses the exact predicate-level error (e.g., attribute or spatial relation). It then executes minimal, predicate-aligned 3D scene edits, such as recoloring or repositioning, to produce targeted counterfactual supervision for iterative model fine-tuning, significantly enhancing grounding accuracy. We evaluate our editing pipeline across multiple benchmarks for 3D grounding and scene understanding tasks, consistently demonstrating improvements across all evaluated datasets through iterative refinement. DEER-3D underscores the effectiveness of targeted, error-driven scene editing in bridging linguistic reasoning capabilities with spatial grounding in 3D-LLMs.