Hierarchical Dataset Selection for High-Quality Data Sharing
Xiaona Zhou, Yingyan Zeng, Ran Jin, Ismini Lourentzou
2025-12-17
Summary
This paper focuses on how to smartly choose which datasets to use when training machine learning models, especially when you have lots of different options available.
What's the problem?
Machine learning models need good data to learn, but often that data comes from many different places, like different websites or research groups. These datasets aren't all created equal: some are more helpful than others. Most current methods pick individual pieces of data one at a time and treat all sources as equally relevant, which isn't efficient or effective. It's hard to decide which entire datasets will give the biggest boost to a model's performance, especially when you have limited resources like time or computing power.
What's the solution?
The researchers developed a method called DaSH, which stands for Dataset Selection via Hierarchies. DaSH doesn't just look at individual datasets, but also groups them together (like by the institution that created them). It figures out which datasets and groups of datasets are most useful, even if it doesn't have a lot of information to start with. It does this by modeling how helpful each dataset is at different levels, allowing it to make smart choices quickly.
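To make the idea of "modeling how helpful each dataset is at different levels" concrete, here is a minimal sketch of a generic two-level UCB-style selection loop. This is not the paper's DaSH algorithm; it is an illustration of the hierarchical principle, where a dataset's score blends its own observed utility with its group's (e.g., its institution's), so a few observations of one dataset inform choices about its siblings. All names (`hierarchical_select`, the group/dataset labels) and the simulated utility signal are hypothetical.

```python
import math
import random

def hierarchical_select(groups, budget, c=1.0, seed=0):
    """Pick datasets to evaluate under a budget using a two-level UCB heuristic.

    groups: dict mapping group name -> dict of dataset name -> true utility.
            (Utilities are simulated here; in practice each "pull" would
            measure, e.g., the validation gain from training on the dataset.)
    Returns the (group, dataset) pair with the best observed mean utility.
    """
    rng = random.Random(seed)
    # Per-dataset statistics: [sum of observed utilities, pull count]
    stats = {(g, d): [0.0, 0] for g in groups for d in groups[g]}
    # Per-group statistics, shared by every dataset in the group
    gstats = {g: [0.0, 0] for g in groups}

    for t in range(1, budget + 1):
        def score(key):
            # Blend the dataset's own mean with its group's mean,
            # each with a shrinking exploration bonus (UCB-style)
            g, d = key
            s, n = stats[key]
            gs, gn = gstats[g]
            d_mean = s / n if n else 0.0
            g_mean = gs / gn if gn else 0.0
            d_bonus = c * math.sqrt(math.log(t + 1) / (n + 1))
            g_bonus = c * math.sqrt(math.log(t + 1) / (gn + 1))
            return 0.5 * (d_mean + d_bonus) + 0.5 * (g_mean + g_bonus)

        g, d = max(stats, key=score)
        # Observe a noisy utility signal for the chosen dataset
        reward = groups[g][d] + rng.gauss(0, 0.05)
        stats[(g, d)][0] += reward
        stats[(g, d)][1] += 1
        gstats[g][0] += reward
        gstats[g][1] += 1

    # Return the dataset with the highest observed mean utility
    return max((k for k in stats if stats[k][1] > 0),
               key=lambda k: stats[k][0] / stats[k][1])
```

Because group statistics are shared, a promising institution draws exploration toward its other datasets even before they are sampled, which is the intuition behind generalizing from limited observations at the group level.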
Why does it matter?
This work is important because it makes training machine learning models more efficient and accurate. By intelligently selecting datasets, DaSH can achieve better results with less effort, which is crucial when dealing with large amounts of data from various sources. It’s also robust, meaning it works well even when there isn’t much data available or when the available data isn’t perfectly suited to the task, making it practical for real-world applications.
Abstract
The success of modern machine learning hinges on access to high-quality training data. In many real-world scenarios, such as acquiring data from public repositories or sharing across institutions, data is naturally organized into discrete datasets that vary in relevance, quality, and utility. Selecting which repositories or institutions to search for useful datasets, and which datasets to incorporate into model training are therefore critical decisions, yet most existing methods select individual samples and treat all data as equally relevant, ignoring differences between datasets and their sources. In this work, we formalize the task of dataset selection: selecting entire datasets from a large, heterogeneous pool to improve downstream performance under resource constraints. We propose Dataset Selection via Hierarchies (DaSH), a dataset selection method that models utility at both dataset and group (e.g., collections, institutions) levels, enabling efficient generalization from limited observations. Across two public benchmarks (Digit-Five and DomainNet), DaSH outperforms state-of-the-art data selection baselines by up to 26.2% in accuracy, while requiring significantly fewer exploration steps. Ablations show DaSH is robust to low-resource settings and lack of relevant datasets, making it suitable for scalable and adaptive dataset selection in practical multi-source learning workflows.