CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom
Yisen Li, Lingfeng Yang, Wenxuan Shen, Pan Zhou, Yao Wan, Weiwei Lin, Dongping Chen
2025-03-06
Summary
This paper introduces CrowdSelect, a new method for choosing high-quality synthetic training data so that smaller AI models can learn from larger ones more effectively.
What's the problem?
Current ways of picking training data for AI models look at only one signal at a time, such as a reward score or how well a model performs on a task. A single signal cannot capture all the important qualities of good training data, especially for complex tasks that require following instructions.
What's the solution?
The researchers created CrowdSelect, which uses multiple AI models to evaluate training data from different angles. They propose three new metrics for measuring data quality and combine them into a single selection system. CrowdSelect also uses a clustering step to ensure the chosen data stays diverse. Tested across several base AI models, the method outperformed existing selection strategies.
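To make the idea concrete, here is a minimal sketch (not the paper's actual implementation) of the two components described above: a quality score that combines reward signals from several judge models, and a clustering step that keeps the selected subset diverse. All names, the 4-dimensional embeddings, and the score formula (mean reward minus disagreement) are illustrative assumptions.

```python
# Hypothetical sketch of multi-signal data selection with a diversity step.
# Not the paper's code: the score formula and data are placeholder assumptions.
import random
from statistics import mean, pstdev

random.seed(0)

# Each instruction-response pair carries reward scores from several judge
# models and an embedding of the instruction (all randomly generated here).
pairs = [
    {"id": i,
     "embedding": [random.random() for _ in range(4)],
     "rewards": [random.uniform(0.0, 1.0) for _ in range(3)]}
    for i in range(40)
]

def quality(pair):
    # Combine multi-model signals: prefer high average reward and
    # low disagreement between the judge models.
    return mean(pair["rewards"]) - pstdev(pair["rewards"])

def kmeans(points, k, iters=20):
    # Tiny k-means over embeddings, used only to group similar pairs.
    centers = random.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, x in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centers[c])),
            )
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:  # recompute the centroid of each non-empty cluster
                centers[c] = [mean(dim) for dim in zip(*members)]
    return assign

def select_subset(pairs, k=5, per_cluster=2):
    # Cluster for diversity, then take the top-scored pairs in each cluster.
    labels = kmeans([p["embedding"] for p in pairs], k)
    chosen = []
    for c in range(k):
        cluster = [p for p, lab in zip(pairs, labels) if lab == c]
        cluster.sort(key=quality, reverse=True)
        chosen.extend(cluster[:per_cluster])
    return chosen

subset = select_subset(pairs)
print(len(subset))  # at most k * per_cluster pairs
```

Sampling the best items per cluster, rather than globally, is what prevents the subset from collapsing onto one narrow type of instruction.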
Why it matters?
This matters because it helps make smaller AI models smarter and more capable, even when working with limited training data. By improving how we choose training examples, we can create more efficient AI systems that perform better on a wide range of tasks. This could lead to more powerful and useful AI assistants that understand and follow instructions more accurately.
Abstract
Distilling advanced Large Language Models' instruction-following capabilities into smaller models using a selected subset has become a mainstream approach in model training. While existing synthetic instruction data selection strategies rely mainly on single-dimensional signals (e.g., reward scores, model perplexity), they fail to capture the complexity of instruction-following across diverse fields. Therefore, we investigate more diverse signals to capture comprehensive instruction-response pair characteristics and propose three foundational metrics that leverage Multi-LLM wisdom, informed by (1) diverse LLM responses and (2) reward model assessment. Building upon these base metrics, we propose CrowdSelect, an integrated metric incorporating a clustering-based approach to maintain response diversity. Our comprehensive experiments demonstrate that our foundational metrics consistently improve performance across 4 base models on MT-bench and Arena-Hard. CrowdSelect, efficiently incorporating all metrics, achieves state-of-the-art performance in both Full and LoRA fine-tuning, showing improvements of 4.81% on Arena-Hard and 11.1% on MT-bench with Llama-3.2-3b-instruct. We hope our findings will bring valuable insights for future research in this direction. Code is available at https://github.com/listentm/crowdselect.