EnvScaler: Scaling Tool-Interactive Environments for LLM Agent via Programmatic Synthesis
Xiaoshuai Song, Haofei Chang, Guanting Dong, Yutao Zhu, Zhicheng Dou, Ji-Rong Wen
2026-01-12
Summary
This paper introduces EnvScaler, a system that automatically creates realistic tool-interaction environments for training large language models (LLMs) to act as helpful agents that can use different tools.
What's the problem?
Training LLMs to use tools well in the real world is hard for three reasons: access to real systems is often restricted; environments simulated by LLMs are unreliable and can make things up; and building training environments by hand takes too much effort to scale to many different scenarios.
What's the solution?
EnvScaler tackles this by building these environments automatically. First, a component called SkelBuilder constructs the basic structure of an environment from mined topics, models the logic of how things work within it, and checks the quality of the result. Then, a second component, ScenGenerator, produces many different task scenarios for each environment, along with rule-based functions that automatically check whether the LLM completed each task correctly. Using this system, the authors created 191 environments and around 7,000 tasks to train Qwen3 models.
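To make the idea of rule-based trajectory checking concrete, here is a minimal hypothetical sketch of what such a validation function might look like. The trajectory format, tool names, and function signature are all illustrative assumptions for this summary, not the paper's actual API.

```python
# Hypothetical sketch of a rule-based trajectory validator, in the spirit of
# the scenario checks described above. The record format and tool names
# ("set_field", etc.) are assumptions made for illustration.

def validate_trajectory(trajectory, required_calls, final_state_checks):
    """Check an agent trajectory against two kinds of scenario rules.

    trajectory: list of tool-call records, each {"tool": str, "args": dict}
    required_calls: tool names that must appear in this order (a subsequence)
    final_state_checks: key -> expected value in the resulting environment state
    """
    # Rule 1: the required tool calls appear in the expected order.
    idx = 0
    for step in trajectory:
        if idx < len(required_calls) and step["tool"] == required_calls[idx]:
            idx += 1
    if idx < len(required_calls):
        return False

    # Rule 2: replaying state-mutating calls yields the goal state.
    state = {}
    for step in trajectory:
        if step["tool"] == "set_field":  # toy state-mutating tool
            state[step["args"]["key"]] = step["args"]["value"]
    return all(state.get(k) == v for k, v in final_state_checks.items())


# Example: a trajectory that searches, then updates the environment state.
traj = [
    {"tool": "search_flights", "args": {"dest": "SFO"}},
    {"tool": "set_field", "args": {"key": "booking", "value": "confirmed"}},
]
ok = validate_trajectory(
    traj,
    required_calls=["search_flights", "set_field"],
    final_state_checks={"booking": "confirmed"},
)
```

Because such checks are deterministic rules rather than LLM judgments, they can serve directly as verifiable reward signals during reinforcement learning.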
Why it matters?
This work is important because it allows for much more effective training of LLMs to be useful agents. By automatically creating diverse and realistic environments, LLMs can learn to better handle complex tasks that require using multiple tools over several steps, ultimately making them more capable and reliable in real-world applications.
Abstract
Large language models (LLMs) are expected to be trained to act as agents in various real-world environments, but this process relies on rich and varied tool-interaction sandboxes. However, access to real systems is often restricted; LLM-simulated environments are prone to hallucinations and inconsistencies; and manually built sandboxes are hard to scale. In this paper, we propose EnvScaler, an automated framework for scalable tool-interaction environments via programmatic synthesis. EnvScaler comprises two components. First, SkelBuilder constructs diverse environment skeletons through topic mining, logic modeling, and quality evaluation. Then, ScenGenerator generates multiple task scenarios and rule-based trajectory validation functions for each environment. With EnvScaler, we synthesize 191 environments and about 7K scenarios, and apply them to Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) for Qwen3 series models. Results on three benchmarks show that EnvScaler significantly improves LLMs' ability to solve tasks in complex environments involving multi-turn, multi-tool interactions. We release our code and data at https://github.com/RUC-NLPIR/EnvScaler.