World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
Siyin Wang, Zhaoye Fei, Qinyuan Cheng, Shiduo Zhang, Panpan Cai, Jinlan Fu, Xipeng Qiu
2025-03-14
Summary
This paper introduces D²PO, a training method that helps robots and virtual assistants plan tasks more reliably by teaching them to predict how their actions change the environment.
What's the problem?
Current AI models for embodied task planning (e.g., guiding a robot to clean or cook) either optimize action selection alone or consult a world model only at inference time, leading to mistakes in long, multi-step tasks with dependency constraints.
What's the solution?
D²PO trains the model to predict how actions change the environment (world modeling) while also learning which actions to choose, using a trial-and-error tree search to gather trajectories and preference data automatically, without human annotation.
Why does it matter?
This helps robots or AI assistants perform tasks more reliably in real-world settings, like homes or factories, without needing constant human guidance or expensive training.
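The trial-and-error tree search for collecting stepwise preference data could be sketched as follows. This is a minimal illustration, not the paper's implementation: the interfaces `step_fn`, `score_fn`, the greedy rollout, and the sibling-action pairing rule are all assumptions made for the example.

```python
def collect_preference_pairs(initial_state, step_fn, score_fn, actions, depth=3):
    """Explore an environment tree via trial-and-error and emit stepwise
    (state, chosen_action, rejected_action) preference pairs.

    Hypothetical interfaces (not from the paper):
      step_fn(state, action) -> next_state
      score_fn(state) -> float, higher means closer to task success
    """
    pairs = []
    frontier = [initial_state]
    for _ in range(depth):
        next_frontier = []
        for state in frontier:
            # Expand all sibling actions from this state and rank by outcome.
            scored = sorted(
                ((score_fn(step_fn(state, a)), a) for a in actions),
                reverse=True,
            )
            best, worst = scored[0][1], scored[-1][1]
            if scored[0][0] > scored[-1][0]:  # keep only informative pairs
                pairs.append((state, best, worst))
            # Continue exploring from the best next state (greedy rollout).
            next_frontier.append(step_fn(state, best))
        frontier = next_frontier
    return pairs

# Toy usage: integer states, goal state 3, actions move +1 or -1.
pairs = collect_preference_pairs(
    initial_state=0,
    step_fn=lambda s, a: s + a,
    score_fn=lambda s: -abs(s - 3),
    actions=[1, -1],
)
```

In this toy run, every expanded state prefers the +1 action, so each stepwise pair records +1 as chosen and -1 as rejected; in the paper's setting the pairs would instead be LVLM action candidates ranked by task outcome.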
Abstract
Recent advances in large vision-language models (LVLMs) have shown promise for embodied task planning, yet they struggle with fundamental challenges like dependency constraints and efficiency. Existing approaches either solely optimize action selection or leverage world models during inference, overlooking the benefits of learning to model the world as a way to enhance planning capabilities. We propose Dual Preference Optimization (D^2PO), a new learning framework that jointly optimizes state prediction and action selection through preference learning, enabling LVLMs to understand environment dynamics for better planning. To automatically collect trajectories and stepwise preference data without human annotation, we introduce a tree search mechanism for extensive exploration via trial-and-error. Extensive experiments on VoTa-Bench demonstrate that our D^2PO-based method significantly outperforms existing methods and GPT-4o when applied to Qwen2-VL (7B), LLaVA-1.6 (7B), and LLaMA-3.2 (11B), achieving superior task success rates with more efficient execution paths.
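The dual objective described in the abstract, jointly optimizing state prediction and action selection through preference learning, could be sketched as two DPO-style losses combined with a mixing weight. This is a hedged sketch under assumptions: the standard DPO sigmoid loss, the linear combination, and the `weight` and `beta` parameters are illustrative choices, not the paper's stated formulation.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Standard DPO loss: -log sigmoid(beta * (policy margin - reference margin)),
    # where each argument is a sequence log-probability.
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def d2po_loss(action_pref, state_pref, weight=0.5, beta=0.1):
    # Dual objective: combine an action-selection preference loss with a
    # state-prediction (world-modeling) preference loss.
    # Each *_pref is (logp_chosen, logp_rejected, ref_chosen, ref_rejected).
    l_action = dpo_loss(*action_pref, beta=beta)
    l_state = dpo_loss(*state_pref, beta=beta)
    return weight * l_action + (1 - weight) * l_state
```

At initialization, when the policy matches the reference model, both terms reduce to log 2; as the policy assigns higher likelihood to the preferred action or the correctly predicted next state, the corresponding term decreases.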