
SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Classification

Benjamin Feuer, Jiawei Xu, Niv Cohen, Patrick Yubeaton, Govind Mittal, Chinmay Hegde

2024-10-08


Summary

This paper presents SELECT, a new benchmark for evaluating how different data curation methods affect image classification. To support these comparisons, it introduces ImageNet++, the largest superset of ImageNet-1K to date.

What's the problem?

Data curation, the process of collecting and organizing images into datasets for training machine learning models, has never been systematically compared at scale. Without such comparisons, it is hard to know which strategies produce the most effective datasets for image classification.

What's the solution?

The authors created SELECT as the first large-scale benchmark of data curation strategies. To populate it with baselines, they built ImageNet++, which extends ImageNet-1K with five new training-data shifts, each roughly the size of ImageNet-1K and each assembled with a distinct curation strategy. They evaluated each shift in two ways: by training identical image classification models from scratch on it, and by using the data itself to fit a pretrained self-supervised representation.
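The core evaluation loop can be sketched in miniature. Everything below is hypothetical (the shift names, noise rates, and the toy 1-D "images" stand in for the real ImageNet++ shifts and the actual models used in the paper); the point it illustrates is that the model architecture and test set stay fixed while only the curated training data varies.

```python
import random

def make_shift(seed, label_noise):
    # Hypothetical stand-in for one training-data shift: 1-D "images"
    # whose sign encodes the class. The curation strategy's quality is
    # modeled as a label-noise rate (a simplification for illustration).
    rng = random.Random(seed)
    data = []
    for _ in range(200):
        x = rng.uniform(-1.0, 1.0)
        y = 0 if x < 0 else 1
        if rng.random() < label_noise:
            y = 1 - y  # mislabeled sample from an imperfect curation pipeline
        data.append((x, y))
    return data

def train_1nn(train):
    # Identical "model" for every shift: a 1-nearest-neighbor classifier,
    # which is sensitive to label noise in the training data.
    def predict(x):
        return min(train, key=lambda p: abs(p[0] - x))[1]
    return predict

def accuracy(model, test):
    return sum(model(x) == y for x, y in test) / len(test)

# Fixed, cleanly labeled test set; only the training shift changes.
test_set = make_shift(seed=999, label_noise=0.0)

# Hypothetical curation strategies, ordered by assumed label-noise rate.
shifts = {"baseline": 0.02, "clip-lookup": 0.10, "synthetic": 0.25}
scores = {
    name: accuracy(train_1nn(make_shift(seed=0, label_noise=noise)), test_set)
    for name, noise in shifts.items()
}
```

In this toy setup the noisier "shifts" score lower on the shared test set, mirroring the benchmark's finding that some curation strategies yield weaker training data than the original ImageNet-1K pipeline.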

Why it matters?

Understanding the best ways to curate data matters because higher-quality datasets produce better-performing machine learning models. By releasing this benchmark and dataset, the authors aim to guide future research on data curation and, ultimately, improve the accuracy of image classification systems.

Abstract

Data curation is the problem of how to collect and organize samples into a dataset that supports efficient learning. Despite the centrality of the task, little work has been devoted towards a large-scale, systematic comparison of various curation methods. In this work, we take steps towards a formal evaluation of data curation strategies and introduce SELECT, the first large-scale benchmark of curation strategies for image classification. In order to generate baseline methods for the SELECT benchmark, we create a new dataset, ImageNet++, which constitutes the largest superset of ImageNet-1K to date. Our dataset extends ImageNet with 5 new training-data shifts, each approximately the size of ImageNet-1K itself, and each assembled using a distinct curation strategy. We evaluate our data curation baselines in two ways: (i) using each training-data shift to train identical image classification models from scratch (ii) using the data itself to fit a pretrained self-supervised representation. Our findings show interesting trends, particularly pertaining to recent methods for data curation such as synthetic data generation and lookup based on CLIP embeddings. We show that although these strategies are highly competitive for certain tasks, the curation strategy used to assemble the original ImageNet-1K dataset remains the gold standard. We anticipate that our benchmark can illuminate the path for new methods to further reduce the gap. We release our checkpoints, code, documentation, and a link to our dataset at https://github.com/jimmyxu123/SELECT.