Evaluating Language Models as Synthetic Data Generators
Seungone Kim, Juyoung Suk, Xiang Yue, Vijay Viswanathan, Seongyun Lee, Yizhong Wang, Kiril Gashteovski, Carolin Lawrence, Sean Welleck, Graham Neubig
2024-12-06

Summary
This paper introduces AgoraBench, a new benchmark designed to evaluate how well different language models (LMs) generate synthetic data, which is increasingly used to train other models.
What's the problem?
As synthetic data becomes more popular for training language models, it's crucial to know which models generate the highest-quality data. However, previous studies have not systematically compared different LMs as data generators, making it hard to understand their strengths and weaknesses.
What's the solution?
To address this, the authors created AgoraBench, a standardized framework with fixed settings and metrics for evaluating LMs' abilities to generate synthetic data. They tested six different LMs by generating 1.26 million training examples and training 99 student models on them, and found that each model has distinct strengths. For example, GPT-4o excelled at creating new problems, while Claude-3.5-Sonnet was better at improving existing ones. They also found that a model's data generation ability doesn't always match its problem-solving skill; instead, intrinsic measures of data quality, such as response quality, perplexity, and instruction difficulty, are better indicators of how useful the generated data will be.
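As a rough illustration (not the authors' code), the evaluation protocol can be pictured as a generate-then-train loop: each candidate generator synthesizes training data, a fixed student model is fine-tuned on it, and the student's downstream score serves as the measure of the generator's data quality. The sketch below stubs out the generator call, fine-tuning, and benchmark evaluation with hypothetical placeholder functions; the model names and function signatures are illustrative assumptions, not AgoraBench's actual interface.

```python
# Minimal sketch of a generate-then-train evaluation loop, in the spirit of
# AgoraBench. All functions below are illustrative placeholders, not the
# benchmark's real implementation.
from dataclasses import dataclass


@dataclass
class GeneratedExample:
    instruction: str
    response: str


def generate_data(generator_name: str, seed_prompts: list[str]) -> list[GeneratedExample]:
    # Placeholder: a real pipeline would call the generator LM's API here to
    # synthesize instruction-response pairs from the seed prompts.
    return [GeneratedExample(p, f"[response drafted by {generator_name}]") for p in seed_prompts]


def train_student(base_model: str, data: list[GeneratedExample]) -> str:
    # Placeholder: fine-tune one fixed student model on the synthesized data.
    return f"{base_model}-finetuned-on-{len(data)}-examples"


def evaluate(student_model: str, benchmark: str) -> float:
    # Placeholder: return the student's score on a held-out downstream benchmark.
    return 0.0


seed_prompts = ["Write a math word problem.", "Pose a short coding task."]
scores = {}
for generator in ["generator-A", "generator-B"]:  # hypothetical generator names
    data = generate_data(generator, seed_prompts)
    student = train_student("student-base", data)
    scores[generator] = evaluate(student, "held-out-benchmark")

# The generator whose data yields the largest student improvement is judged
# the stronger data generator under this simplified protocol.
print(scores)
```

The key design point this sketch captures is that the generator is never scored directly; only the performance of students trained on its data is compared.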
Why it matters?
This research is important because it helps researchers and developers understand which language models are best for generating synthetic data. By providing a clear way to evaluate these models, AgoraBench can lead to better training methods for AI systems, ultimately improving their performance in real-world applications.
Abstract
Given the increasing use of synthetic data in language model (LM) post-training, an LM's ability to generate high-quality data has become nearly as crucial as its ability to solve problems directly. While prior works have focused on developing effective data generation methods, they lack systematic comparison of different LMs as data generators in a unified setting. To address this gap, we propose AgoraBench, a benchmark that provides standardized settings and metrics to evaluate LMs' data generation abilities. Through synthesizing 1.26 million training instances using 6 LMs and training 99 student models, we uncover key insights about LMs' data generation capabilities. First, we observe that LMs exhibit distinct strengths. For instance, GPT-4o excels at generating new problems, while Claude-3.5-Sonnet performs better at enhancing existing ones. Furthermore, our analysis reveals that an LM's data generation ability doesn't necessarily correlate with its problem-solving ability. Instead, multiple intrinsic features of data quality, including response quality, perplexity, and instruction difficulty, collectively serve as better indicators. Finally, we demonstrate that strategic choices in output format and cost-conscious model selection significantly impact data generation effectiveness.