Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models

Xin Xu, Clive Bai, Kai Yang, Tianhao Chen, Yangkun Chen, Weijie Liu, Hao Chen, Yang Wang, Saiyong Yang, Can Yang

2026-02-13

Summary

This paper introduces a new technique called Composition-RL to improve how AI models learn through a method called Reinforcement Learning with Verifiable Rewards. It focuses on making better use of existing training data, specifically by creating more challenging learning examples.

What's the problem?

When training AI models with rewards based on whether they can correctly solve problems, a lot of the training data becomes too easy as the model improves. The model quickly learns to solve these simple problems perfectly, and those examples don't help it learn much anymore. While focusing on the hardest problems is helpful, ignoring the easy ones means you're not using all the available data effectively. Expanding the dataset with new problems is also expensive and time-consuming.

What's the solution?

Composition-RL tackles this by automatically combining multiple simple problems into a single, more complex problem that the model must solve in one go. This turns the existing easy examples into new, challenging training data, effectively increasing the amount of useful data without collecting anything new. The authors also show that gradually increasing how many problems are combined (the compositional depth) during training boosts performance further, and that composing prompts drawn from different domains makes cross-domain RL training more effective.
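
To make the idea concrete, here is a minimal, hypothetical sketch of how several easy verifiable prompts could be merged into one harder compound prompt with an all-or-nothing verifier. The class and function names are illustrative assumptions for this summary, not the paper's actual code (see the linked repository for that).

```python
# Hedged sketch: composing multiple verifiable prompts into one harder prompt.
# All names (VerifiablePrompt, compose_prompts, verify) are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class VerifiablePrompt:
    """A question paired with a ground-truth answer used by a rule-based verifier."""
    question: str
    answer: str


def compose_prompts(prompts: list[VerifiablePrompt]) -> VerifiablePrompt:
    """Merge several easy (pass-rate-1) prompts into a single compound question.

    The composed question asks the model to solve every sub-problem, and the
    composed verifier only rewards a rollout that answers all of them correctly.
    """
    parts = [f"Problem {i + 1}: {p.question}" for i, p in enumerate(prompts)]
    question = (
        "Solve all of the following problems and report each final answer:\n"
        + "\n".join(parts)
    )
    # Join sub-answers with a separator so the verifier can split them back out.
    answer = " | ".join(p.answer for p in prompts)
    return VerifiablePrompt(question=question, answer=answer)


def verify(composed: VerifiablePrompt, model_answers: list[str]) -> float:
    """All-or-nothing verifiable reward: 1.0 only if every sub-answer matches."""
    expected = composed.answer.split(" | ")
    if len(model_answers) != len(expected):
        return 0.0
    correct = all(a.strip() == e.strip() for a, e in zip(model_answers, expected))
    return 1.0 if correct else 0.0


if __name__ == "__main__":
    easy = [
        VerifiablePrompt("What is 7 * 8?", "56"),
        VerifiablePrompt("What is 15 + 27?", "42"),
    ]
    hard = compose_prompts(easy)
    print(hard.question)
    print(verify(hard, ["56", "42"]))  # 1.0
    print(verify(hard, ["56", "41"]))  # 0.0
```

Because the reward only fires when every sub-problem is answered correctly, a composed prompt built from problems the model already solves individually can still have a pass rate well below 1, which is what makes it informative again for RL training.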

Why it matters?

This research is important because it provides a way to get more out of existing training data for AI models, reducing the need to constantly create new data. This is especially valuable for large AI models, which require huge datasets. By improving the model’s reasoning abilities and allowing it to learn across different areas, Composition-RL can lead to more capable and versatile AI systems.

Abstract

Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, thereby reducing the effective data size. To mitigate this, we propose Composition-RL, a simple yet useful approach for better utilizing limited verifiable prompts targeting pass-rate-1 prompts. More specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Codes, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
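
As a rough illustration of the curriculum variant mentioned in the abstract, the sketch below ramps up the compositional depth (how many prompts are merged into one) over the course of training. The linear schedule and parameter names are assumptions made for this summary, not the paper's exact recipe.

```python
# Hedged sketch: a simple linear curriculum over compositional depth, i.e. how
# many easy prompts are merged into one composed prompt at a given training step.
# The schedule shape and parameter names are assumptions, not the paper's recipe.
def compositional_depth(step: int, total_steps: int,
                        min_depth: int = 1, max_depth: int = 4) -> int:
    """Start with lightly composed prompts and end with deeper compositions."""
    frac = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return min_depth + round(frac * (max_depth - min_depth))


# Example: depth grows from 1 to 4 over 1000 training steps.
for step in (0, 250, 500, 750, 1000):
    print(step, compositional_depth(step, 1000))
```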