LLM-in-Sandbox Elicits General Agentic Intelligence

Daixuan Cheng, Shaohan Huang, Yuxian Gu, Huatong Song, Guoxin Chen, Li Dong, Wayne Xin Zhao, Ji-Rong Wen, Furu Wei

2026-01-23

Summary

This paper introduces a new way to make large language models (LLMs) smarter by letting them use a 'sandbox' – a safe, virtual computer where they can write and run code – to help them solve problems, even ones that don't involve coding.

What's the problem?

LLMs are really good at understanding and generating text, but they sometimes struggle with tasks that require more than just language skills, like complex reasoning, using external knowledge, or handling very long pieces of information. They can also have trouble following detailed instructions consistently. Essentially, they're limited by what they've been trained on and how they process information internally.

What's the solution?

The researchers created a system called LLM-in-Sandbox, in which the LLM writes and runs code inside the sandbox. This lets the model call external tools, search the internet for new information, save intermediate results to the sandbox's file system so it can handle more context than fits in its working memory, and run scripts to format its answers precisely. They also improved the system with a form of reinforcement learning; importantly, this training does not require data from 'agent' tasks – the model learns agentic behavior simply by exploring the sandbox while solving ordinary problems.
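The paper's actual package and API are not shown here. As a rough illustration only, the core idea – a model repeatedly writing code, executing it in an isolated environment, and observing the output – can be sketched as a toy loop. Everything below (`run_in_sandbox`, `agent_loop`, the canned code snippets standing in for model outputs) is a hypothetical sketch, not the authors' implementation:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(code: str, workdir: Path, timeout: int = 10) -> str:
    """Execute model-written Python code in an isolated working directory
    and return its stdout/stderr as the observation for the next turn."""
    script = workdir / "step.py"
    script.write_text(code)
    result = subprocess.run(
        [sys.executable, str(script)],
        cwd=workdir,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout + result.stderr

def agent_loop(steps: list[str]) -> list[str]:
    """Toy multi-turn loop. Here the 'model' is a canned list of snippets;
    a real system would query an LLM for each next snippet, conditioned on
    the observations so far."""
    observations = []
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp)
        for code in steps:
            observations.append(run_in_sandbox(code, workdir))
    return observations

if __name__ == "__main__":
    # Two turns sharing one file system: first offload a note to disk
    # (the long-context trick), then read it back in a later turn.
    obs = agent_loop([
        "open('notes.txt', 'w').write('fact: water boils at 100C')",
        "print(open('notes.txt').read())",
    ])
    print(obs[1].strip())  # → fact: water boils at 100C
```

Because every turn runs in the same working directory, files written in one step persist for later steps – which is how a file system can act as external memory beyond the model's context window.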

Why it matters?

This work is important because it shows that LLMs have a hidden potential for problem-solving that can be unlocked by giving them the ability to interact with a digital environment. It means we don't necessarily need to retrain these massive models from scratch to make them better at a wider range of tasks. It opens the door to more versatile and capable AI systems that can handle complex challenges in fields like science, medicine, and everyday problem-solving, and the researchers even made their system available for others to use.

Abstract

We introduce LLM-in-Sandbox, enabling LLMs to explore within a code sandbox (i.e., a virtual computer), to elicit general intelligence in non-code domains. We first demonstrate that strong LLMs, without additional training, exhibit generalization capabilities to leverage the code sandbox for non-code tasks. For example, LLMs spontaneously access external resources to acquire new knowledge, leverage the file system to handle long contexts, and execute scripts to satisfy formatting requirements. We further show that these agentic capabilities can be enhanced through LLM-in-Sandbox Reinforcement Learning (LLM-in-Sandbox-RL), which uses only non-agentic data to train models for sandbox exploration. Experiments demonstrate that LLM-in-Sandbox, in both training-free and post-trained settings, achieves robust generalization spanning mathematics, physics, chemistry, biomedicine, long-context understanding, and instruction following. Finally, we analyze LLM-in-Sandbox's efficiency from computational and system perspectives, and open-source it as a Python package to facilitate real-world deployment.
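One of the abstract's examples – executing scripts to satisfy formatting requirements – can be sketched in a few lines. This is an illustrative guess at the pattern, not code from the paper; `format_answer` and `check_format` are hypothetical names:

```python
import json

def format_answer(fields: dict) -> str:
    """Emit an answer as strictly valid JSON with sorted keys and no
    extra whitespace -- the kind of constraint a prompt might impose."""
    return json.dumps(fields, sort_keys=True, separators=(",", ":"))

def check_format(text: str, required_keys: set[str]) -> bool:
    """Verify the answer parses as JSON and contains every required key,
    so a model can self-check inside the sandbox before responding."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return required_keys <= set(data)

answer = format_answer({"answer": "42", "unit": "kg"})
print(answer)                                   # → {"answer":"42","unit":"kg"}
print(check_format(answer, {"answer", "unit"}))  # → True
```

The point is that a formatting rule which is fragile to enforce in free-form text generation becomes a deterministic check once the model can run a script against its own draft output.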