
One Life to Learn: Inferring Symbolic World Models for Stochastic Environments from Unguided Exploration

Zaid Khan, Archiki Prasad, Elias Stengel-Eskin, Jaemin Cho, Mohit Bansal

2025-10-15


Summary

This paper presents a new way for an AI to learn how the world around it works, building an internal 'model' of that world that it can use to predict what will happen next.

What's the problem?

Traditionally, teaching an AI to understand a world requires a lot of interaction data, a simple environment, and often help from humans. Real-world settings offer none of these. The researchers wanted to create an AI that could learn about a complex and unpredictable world entirely on its own, with only one chance to explore, meaning it can't just 'die' and restart to learn from its mistakes.

What's the solution?

They developed a system called OneLife. It works by learning 'rules' about how the world changes, but these rules aren't always active. Each rule only kicks in when certain conditions are met. This mirrors how many real-world rules are conditional: water freezes only when the temperature drops below zero, for example. This approach keeps the AI from getting bogged down processing *every* possible rule at once, making it efficient even in complicated situations. They tested it in a game environment called Crafter-OO, where the AI had to learn how objects interact.
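The precondition-effect structure described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: all names (`near_tree`, `chop_effect`, the dict-based state, the 0.9 success probability) are assumptions chosen to show the idea that each law activates only when its precondition holds, and that effects can be stochastic.

```python
import random

def near_tree(state):
    """Precondition (hypothetical): the law only activates when facing a tree."""
    return state.get("facing") == "tree"

def chop_effect(state):
    """Stochastic effect (hypothetical): collecting wood sometimes fails."""
    new_state = dict(state)
    if random.random() < 0.9:  # assumed success probability
        new_state["wood"] = new_state.get("wood", 0) + 1
    return new_state

# A world model as a list of (precondition, effect) laws.
LAWS = [(near_tree, chop_effect)]

def step(state):
    # Only laws whose preconditions hold are evaluated, so computation
    # is routed through relevant rules rather than all of them.
    for precondition, effect in LAWS:
        if precondition(state):
            state = effect(state)
    return state

state = step({"facing": "tree", "wood": 0})
```

Because inactive laws are skipped entirely, the cost of a step scales with the number of *relevant* laws rather than the total rule set, which is the efficiency point the paragraph makes.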

Why it matters?

This research is important because it's a step towards creating AI that can truly understand and navigate the real world without constant human intervention. If an AI can build its own world model through exploration and learning, it opens up possibilities for robots and other intelligent systems that can adapt to new and challenging environments.

Abstract

Symbolic world modeling requires inferring and representing an environment's transitional dynamics as an executable program. Prior work has focused on largely deterministic environments with abundant interaction data, simple mechanics, and human guidance. We address a more realistic and challenging setting, learning in a complex, stochastic environment where the agent has only "one life" to explore a hostile environment without human guidance. We introduce OneLife, a framework that models world dynamics through conditionally-activated programmatic laws within a probabilistic programming framework. Each law operates through a precondition-effect structure, activating in relevant world states. This creates a dynamic computation graph that routes inference and optimization only through relevant laws, avoiding scaling challenges when all laws contribute to predictions about a complex, hierarchical state, and enabling the learning of stochastic dynamics even with sparse rule activation. To evaluate our approach under these demanding constraints, we introduce a new evaluation protocol that measures (a) state ranking, the ability to distinguish plausible future states from implausible ones, and (b) state fidelity, the ability to generate future states that closely resemble reality. We develop and evaluate our framework on Crafter-OO, our reimplementation of the Crafter environment that exposes a structured, object-oriented symbolic state and a pure transition function that operates on that state alone. OneLife can successfully learn key environment dynamics from minimal, unguided interaction, outperforming a strong baseline on 16 out of 23 scenarios tested. We also test OneLife's planning ability, with simulated rollouts successfully identifying superior strategies. Our work establishes a foundation for autonomously constructing programmatic world models of unknown, complex environments.
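The state-ranking metric in the abstract can be illustrated with a minimal sketch, assuming the learned model assigns a plausibility score to each candidate successor state. The function name and the score values below are hypothetical, not from the paper.

```python
def rank_of_true_state(scores, true_idx):
    """Return the 1-based rank of the true successor among candidates,
    where a higher score means the model finds the state more plausible.
    A good world model ranks the real successor near position 1."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order.index(true_idx) + 1

# Hypothetical model scores for three candidate successors; index 0 is
# the state that actually occurred.
scores = [0.7, 0.2, 0.1]
rank = rank_of_true_state(scores, 0)  # → 1: the true state is ranked first
```

State fidelity, the abstract's second metric, would instead compare a *generated* successor against the real one directly, rather than ranking a fixed candidate set.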