
EvoSyn: Generalizable Evolutionary Data Synthesis for Verifiable Learning

He Du, Bowen Li, Aijun Yang, Siyang He, Qipeng Guo, Dacheng Tao

2025-10-22


Summary

This paper focuses on creating high-quality, reliable data to improve how well language models learn and perform complex tasks like coding and problem-solving.

What's the problem?

It is hard to automatically generate good training data for language models. AI-generated data often contains errors (hallucinations), or comes with verification checks that are too easy to meaningfully separate strong solutions from weak ones. Existing methods for checking data quality are usually built for one type of task and don't transfer to other areas, so there is no universal way to ensure the data is actually useful for learning.

What's the solution?

The researchers developed a system that builds training data in an evolutionary, step-by-step way. Instead of just *filtering* existing data, it actively *creates* problems, diverse candidate solutions, and executable checks that verify those solutions, all at once. Starting from a small set of human-annotated examples, the system learns which verification strategies are trustworthy by checking whether the strategies' own checks agree with the human annotations, then uses the best strategies to generate increasingly reliable data without needing task-specific rules. It's like an evolutionary process where the system gets better at creating good data over time.
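The core loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the LLM-driven generation of problems, solutions, and strategies is replaced by hard-coded stand-ins (`SEED`, `make_verifier`, the `"weak"`/`"strict"` strategy names are all invented here). What the sketch does capture is the consistency-based evaluator: a verification strategy is scored by how often its executable checks agree with human-annotated labels, and the best-scoring strategies survive each round.

```python
# Toy seed set: one problem ("compute x squared") with two candidate
# solutions and human-annotated pass/fail labels. In the real framework
# these triples would be synthesized by a language model.
SEED = [
    {"problem": "square", "solution": lambda x: x * x, "label": True},
    {"problem": "square", "solution": lambda x: x + x, "label": False},
]

def make_verifier(strategy):
    """Build an executable check from a (hypothetical) strategy name.
    A 'strict' strategy tests several inputs; a 'weak' one tests only
    x = 2, where x*x == x+x, so it cannot tell the solutions apart."""
    inputs = [2, 3, 4] if strategy == "strict" else [2]
    return lambda fn: all(fn(x) == x * x for x in inputs)

def consistency(strategy):
    """Consistency-based evaluator: the fraction of seed items where the
    strategy-induced check agrees with the human annotation."""
    verify = make_verifier(strategy)
    return sum(verify(item["solution"]) == item["label"] for item in SEED) / len(SEED)

def evolve(strategies, rounds=3):
    """Evolutionary selection: each round, keep the strategy whose checks
    best agree with the human labels, and let it seed the next round."""
    best = strategies[0]
    for _ in range(rounds):
        scored = sorted(strategies, key=consistency, reverse=True)
        best = scored[0]
        strategies = scored[:1] + strategies  # survivor carries forward
    return best

best = evolve(["weak", "strict"])
print(best, consistency(best))  # → strict 1.0
```

Here the "weak" strategy scores 0.5 (it wrongly passes the incorrect solution), while the "strict" strategy scores 1.0, so selection converges on checks that actually separate strong from weak solutions, which is the property the paper's evaluator enforces.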

Why it matters?

This work is important because better training data leads to more capable language models. By creating a system that can reliably generate verifiable data across different tasks, the researchers have opened the door to more robust and broadly capable AI systems that excel at coding, math, and acting as helpful agents.

Abstract

Reliable verifiable data has become a key driver of capability gains in modern language models, enabling stable reinforcement learning with verifiable rewards and effective distillation that transfers competence across math, coding, and agentic tasks. Yet constructing generalizable synthetic verifiable data remains difficult due to hallucination-prone generation, and weak or trivial verification artifacts that fail to separate strong from weak solutions. Existing approaches often rely on task-specific heuristics or post-hoc filters that do not transfer across domains and lack a principled, universal evaluator of verifiability. In this work, we introduce an evolutionary, task-agnostic, strategy-guided, executably-checkable data synthesis framework that, from minimal seed supervision, jointly synthesizes problems, diverse candidate solutions, and verification artifacts, and iteratively discovers strategies via a consistency-based evaluator that enforces agreement between human-annotated and strategy-induced checks. This pipeline upgrades filtering into principled synthesis: it reliably assembles coherent, verifiable training instances and generalizes without domain-specific rules. Our experiments demonstrate the effectiveness of the proposed approach under both RLVR and model distillation training paradigms. The results show that training with our synthesized data yields significant improvements on both the LiveCodeBench and AgentBench-OS tasks, highlighting the robust generalization of our framework.