Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language
Yicheng Chen, Xiangtai Li, Yining Li, Yanhong Zeng, Jianzong Wu, Xiangyu Zhao, Kai Chen
2024-07-02

Summary
This paper introduces Auto Cherry-Picker (ACP), a system that automatically creates high-quality training examples for AI models starting from nothing more than a list of concepts in natural language. It focuses on generating images, together with their layouts, that help AI models better understand and process visual information.
What's the problem?
While diffusion-based models can create impressive images, there has not been a fully automatic way to generate both layouts and images from language descriptions alone. Existing methods also lack effective ways to measure the quality of generated images containing multiple instances, which makes it hard to ensure the images are actually useful for training AI models. This gap limits the ability of AI systems to learn from diverse, high-quality visual examples.
What's the solution?
To solve this problem, the authors developed ACP, which starts with a simple list of concepts written in natural language. A large language model (LLM) expands these concepts into a detailed scene description and a reasonable layout, and an off-the-shelf text-to-image model then generates multiple candidate images from that description. To ensure quality, the authors introduce a new evaluation metric, the Composite Layout and Image Score (CLIS), which scores the generated images fairly and is used to filter the data so that only the best examples are kept, hence the name "cherry-picker". The results show that training on ACP-generated examples can significantly enhance the performance of existing AI models.
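To make the pipeline concrete, here is a minimal Python sketch of the generate-then-filter loop described above. All helper functions, their names, and their signatures are hypothetical stand-ins for the components the paper assumes (an LLM prompter, a layout-to-image diffusion model, and the CLIS scorer); they are not the authors' actual API.

```python
"""Minimal sketch of the ACP generate-then-filter loop.
The stubs below stand in for components the paper assumes; their
names and signatures are placeholders, not the authors' API."""

import random
from dataclasses import dataclass


@dataclass
class Candidate:
    image: object      # generated image (placeholder type)
    description: str   # LLM-written scene description
    layout: list       # (concept, bounding box) pairs


def generate_description_and_layout(concepts: list[str]) -> tuple[str, list]:
    # Placeholder: in ACP, an LLM expands the concept list into a
    # detailed caption and a reasonable per-object layout.
    description = "A scene containing " + ", ".join(concepts)
    layout = [(c, (0.1, 0.1, 0.5, 0.5)) for c in concepts]
    return description, layout


def text_to_image(description: str, layout: list) -> object:
    # Placeholder for an off-the-shelf layout-to-image diffusion model.
    return f"image({description!r})"


def clis_score(candidate: Candidate) -> float:
    # Placeholder for the paper's Composite Layout and Image Score.
    return random.random()


def cherry_pick(concepts: list[str], num_candidates: int = 8,
                keep_top: int = 2) -> list[Candidate]:
    """Generate several candidates for a concept list, keep the best by CLIS."""
    description, layout = generate_description_and_layout(concepts)
    candidates = [Candidate(text_to_image(description, layout), description, layout)
                  for _ in range(num_candidates)]
    # Rank by CLIS and keep only the highest-scoring images as training data.
    candidates.sort(key=clis_score, reverse=True)
    return candidates[:keep_top]


if __name__ == "__main__":
    for c in cherry_pick(["zebra", "umbrella"]):
        print(c.image)
```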
Why it matters?
This research is important because it improves how AI systems learn from visual data by providing high-quality examples that are generated automatically. By addressing issues related to imbalanced datasets and long-tailed distributions, Auto Cherry-Picker can help make AI models more robust and effective in understanding complex visual information. This advancement has potential applications in various fields, including computer vision, robotics, and any area where AI needs to interpret visual data accurately.
Abstract
Diffusion-based models have shown great potential in generating high-quality images with various layouts, which can benefit downstream perception tasks. However, fully automatic layout generation driven only by language, together with a suitable metric for measuring multiple generated instances, has not been well explored. In this work, we present Auto Cherry-Picker (ACP), a novel framework that generates high-quality multi-modal training examples to augment perception and multi-modal training. Starting with a simple list of natural language concepts, we prompt large language models (LLMs) to generate a detailed description and design reasonable layouts. Next, we use an off-the-shelf text-to-image model to generate multiple images. The generated data are then refined using a comprehensively designed metric to ensure quality. In particular, we present a new metric, the Composite Layout and Image Score (CLIS), to evaluate the generated images fairly. Our synthetic high-quality examples boost performance in various scenarios by customizing the initial concept list, especially in addressing challenges associated with long-tailed distributions and imbalanced datasets. Experimental results on downstream tasks demonstrate that Auto Cherry-Picker can significantly improve the performance of existing models. In addition, we thoroughly investigate the correlation between CLIS and performance gains in downstream tasks, and we find that a better CLIS score results in better performance. This finding shows the potential of evaluation metrics to serve as quality signals for various visual perception and MLLM tasks. Code will be available.
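The abstract does not spell out how CLIS combines its ingredients, but its name indicates a composite of a layout score and an image score. The weighted sum below is purely illustrative, an assumption rather than the paper's formula, of how such a composite could rank generated candidates:

```python
def clis(layout_score: float, image_score: float, alpha: float = 0.5) -> float:
    """Illustrative composite of a layout score and an image score.
    The linear weighting is an assumption, not the paper's definition."""
    return alpha * layout_score + (1.0 - alpha) * image_score

# Rank hypothetical candidates (name, layout score, image score), best first.
candidates = [("img_a", 0.9, 0.6), ("img_b", 0.7, 0.8)]
ranked = sorted(candidates, key=lambda c: clis(c[1], c[2]), reverse=True)
```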