DataDream: Few-shot Guided Dataset Generation

Jae Myung Kim, Jessica Bader, Stephan Alaniz, Cordelia Schmid, Zeynep Akata

2024-07-16

Summary

This paper introduces DataDream, a new framework for generating high-quality synthetic datasets for training image classifiers when only a few real examples of each class are available.

What's the problem?

While models that create images from text descriptions have become very good, they have not yet proven effective at generating training data for image classifiers. Previous methods often produce images that fall outside the real data distribution or miss the fine-grained details that distinguish classes, so classifiers trained on such synthetic data generalize poorly to real images.

What's the solution?

DataDream solves this problem by synthesizing classification datasets that closely match the real data distribution. It first fine-tunes a small set of adapter weights (LoRA) in the image generation model on the few available real images, then uses the adapted model to generate new training data. This helps ensure that the synthetic images capture the fine-grained details and features needed for accurate classification. In extensive experiments, the framework has been shown to improve classification accuracy significantly across many different datasets compared to previous methods.
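To make the first stage concrete, below is a minimal sketch of this kind of pipeline built with Hugging Face diffusers and peft. It is not the authors' implementation (their code is linked in the abstract), and it simplifies by adapting only the UNet; the base model ID, hyperparameters, and the `few_shot_loader` / `class_names` placeholders are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"  # assumed base generator

pipe = StableDiffusionPipeline.from_pretrained(model_id).to(device)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Freeze everything, then attach trainable LoRA adapters to the UNet's
# attention projections (diffusers' built-in PEFT integration).
pipe.vae.requires_grad_(False)
pipe.text_encoder.requires_grad_(False)
pipe.unet.requires_grad_(False)
pipe.unet.add_adapter(LoraConfig(
    r=16, lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"]))
trainable = [p for p in pipe.unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

def text_embeds(prompts):
    ids = pipe.tokenizer(prompts, padding="max_length", truncation=True,
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids.to(device)
    return pipe.text_encoder(ids)[0]  # last hidden states for cross-attention

# `few_shot_loader` is a hypothetical DataLoader yielding (pixel_values,
# class_name) batches: the few real images per class, scaled to [-1, 1].
for epoch in range(200):  # train until the adapters fit the few shots
    for pixel_values, class_name in few_shot_loader:
        latents = pipe.vae.encode(pixel_values.to(device)).latent_dist.sample()
        latents = latents * pipe.vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=device)
        noisy = noise_scheduler.add_noise(latents, noise, t)
        emb = text_embeds([f"a photo of a {class_name}"] * latents.shape[0])
        pred = pipe.unet(noisy, t, encoder_hidden_states=emb).sample
        loss = F.mse_loss(pred, noise)  # standard denoising objective
        loss.backward(); optimizer.step(); optimizer.zero_grad()

# Generate the synthetic training set with the adapted model.
for name in class_names:  # hypothetical list of target class names
    for i in range(500):  # e.g. 500 synthetic images per class
        pipe(f"a photo of a {name}").images[0].save(f"synth/{name}_{i}.png")
```

Because only the low-rank adapter matrices are trained, the generator keeps its broad pretrained knowledge while shifting its outputs toward the few-shot distribution, which is the key idea behind the approach.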

Why it matters?

This research is important because it allows for the creation of high-quality training data even when only a small number of real examples are available. By improving how synthetic datasets are generated, DataDream can help enhance the performance of image classifiers, making them more effective in applications like medical imaging, security systems, and automated tagging in social media.

Abstract

While text-to-image diffusion models have been shown to achieve state-of-the-art results in image synthesis, they have yet to prove their effectiveness in downstream applications. Previous work has proposed to generate data for image classifier training given limited real data access. However, these methods struggle to generate in-distribution images or depict fine-grained features, thereby hindering the generalization of classification models trained on synthetic datasets. We propose DataDream, a framework for synthesizing classification datasets that more faithfully represents the real data distribution when guided by few-shot examples of the target classes. DataDream fine-tunes LoRA weights for the image generation model on the few real images before generating the training data using the adapted model. We then fine-tune LoRA weights for CLIP using the synthetic data to improve downstream image classification over previous approaches on a large variety of datasets. We demonstrate the efficacy of DataDream through extensive experiments, surpassing state-of-the-art classification accuracy with few-shot data across 7 out of 10 datasets, while being competitive on the other 3. Additionally, we provide insights into the impact of various factors, such as the number of real-shot and generated images as well as the fine-tuning compute on model performance. The code is available at https://github.com/ExplainableML/DataDream.
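As a companion sketch for the second stage described in the abstract, the example below fine-tunes LoRA adapters on CLIP using the generated images and classifies via image-text similarity. This follows the abstract's description rather than the paper's exact recipe; the CLIP backbone, class prompts, and the `synthetic_loader` placeholder are assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor
from peft import LoraConfig, get_peft_model

clip_name = "openai/clip-vit-base-patch32"  # assumed CLIP backbone
model = CLIPModel.from_pretrained(clip_name)
processor = CLIPProcessor.from_pretrained(clip_name)

# Attach LoRA adapters to the attention projections of both CLIP encoders;
# get_peft_model freezes the pretrained weights automatically.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"]))
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)

class_names = ["Boeing 737", "Airbus A320"]  # placeholder classes
text_inputs = processor(text=[f"a photo of a {c}" for c in class_names],
                        return_tensors="pt", padding=True)

# `synthetic_loader` is a hypothetical DataLoader over the generated images,
# yielding (PIL image list, integer label tensor) batches.
for images, labels in synthetic_loader:
    image_inputs = processor(images=images, return_tensors="pt")
    out = model(input_ids=text_inputs.input_ids,
                attention_mask=text_inputs.attention_mask,
                pixel_values=image_inputs.pixel_values)
    # logits_per_image: (batch, num_classes) image-text similarity scores.
    loss = F.cross_entropy(out.logits_per_image, labels)
    loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Treating the class prompts as the text batch turns CLIP's image-text similarity matrix directly into classification logits, so the same contrastive backbone serves as the downstream classifier.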