WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents

Siyu Zhou, Tianyi Zhou, Yijun Yang, Guodong Long, Deheng Ye, Jing Jiang, Chengqi Zhang

2024-10-13

Summary

This paper presents WALL-E, a new approach that improves how large language model (LLM) agents understand and interact with their environments by using rule learning to align the LLM's predictions with each environment's actual dynamics.

What's the problem?

While LLMs are powerful tools for understanding language, they often struggle to accurately predict what will happen in specific environments, such as games or simulations. Their training doesn't always cover the particular rules and dynamics of these environments, which leads to mistakes when the model plans and acts.

What's the solution?

To solve this problem, the authors developed WALL-E, which combines an LLM with a small set of learned rules to form a more accurate model of the environment. Through a process called 'world alignment,' WALL-E compares what it expected to happen with what it actually observes and turns the differences into rules. For example, if it tries to mine a diamond with the wrong tool and fails, it learns the correct rule (such as needing a specific pickaxe) for future attempts. These rules let WALL-E perform substantially better on tasks in open-world environments like Minecraft and ALFWorld.
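In rough pseudocode, that learning loop might look like the sketch below. This is a minimal illustration under assumed interfaces (`propose_action`, `predict_transition`, `induce_rule`, `prune_rules` are hypothetical names), not the authors' released implementation.

```python
# Minimal sketch of the rule-learning idea described above. All interfaces here
# (env, llm, propose_action, predict_transition, induce_rule, prune_rules) are
# hypothetical illustrations, not WALL-E's actual code.

def learn_rules(env, llm, rules, num_episodes=5):
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            action = llm.propose_action(obs, rules)                 # act using current rules
            predicted = llm.predict_transition(obs, action, rules)  # world-model prediction
            next_obs, done = env.step(action)                       # what actually happened
            if predicted != next_obs:
                # Prediction failed, e.g. mining a diamond with the wrong pickaxe.
                # Ask the LLM to state the missing rule in natural language.
                rules.append(llm.induce_rule(obs, action, predicted, next_obs))
            obs = next_obs
        rules = llm.prune_rules(rules)  # drop redundant or contradicted rules
    return rules
```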

Why it matters?

This research is important because it shows how LLMs can be made more effective in dynamic and complex settings. By aligning LLM predictions with the outcomes actually observed in an environment, approaches like WALL-E can help advance AI applications in areas such as gaming, robotics, and autonomous systems, making them more reliable and efficient.

Abstract

Can large language models (LLMs) directly serve as powerful world models for model-based agents? While the gaps between the prior knowledge of LLMs and the specified environment's dynamics do exist, our study reveals that the gaps can be bridged by aligning an LLM with its deployed environment and such "world alignment" can be efficiently achieved by rule learning on LLMs. Given the rich prior knowledge of LLMs, only a few additional rules suffice to align LLM predictions with the specified environment dynamics. To this end, we propose a neurosymbolic approach to learn these rules gradient-free through LLMs, by inducing, updating, and pruning rules based on comparisons of agent-explored trajectories and world model predictions. The resulting world model is composed of the LLM and the learned rules. Our embodied LLM agent "WALL-E" is built upon model-predictive control (MPC). By optimizing look-ahead actions based on the precise world model, MPC significantly improves exploration and learning efficiency. Compared to existing LLM agents, WALL-E's reasoning only requires a few principal rules rather than verbose buffered trajectories being included in the LLM input. On open-world challenges in Minecraft and ALFWorld, WALL-E achieves higher success rates than existing methods, with lower costs on replanning time and the number of tokens used for reasoning. In Minecraft, WALL-E exceeds baselines by 15-30% in success rate while costing 8-20 fewer replanning rounds and only 60-80% of tokens. In ALFWorld, its success rate surges to a new record high of 95% only after 6 iterations.
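To make the planning side of the abstract concrete, the sketch below shows how an MPC-style agent could use a rule-augmented LLM world model to score look-ahead action sequences and execute only the first step of the best one. The interfaces (`world_model.predict`, `world_model.score`) are assumptions for illustration, not the paper's implementation.

```python
# Rough illustration of model-predictive control with an LLM + learned-rules
# world model. world_model.predict and world_model.score are hypothetical.

def mpc_step(obs, goal, world_model, rules, candidate_plans, horizon=3):
    best_plan, best_score = None, float("-inf")
    for plan in candidate_plans:                 # candidate look-ahead action sequences
        state, score = obs, 0.0
        for action in plan[:horizon]:
            state = world_model.predict(state, action, rules)  # simulate with LLM + rules
            score += world_model.score(state, goal)            # progress toward the goal
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan[0]                          # execute the first action, then replan
```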