How Many Van Goghs Does It Take to Van Gogh? Finding the Imitation Threshold
Sahil Verma, Royi Rassin, Arnav Das, Gantavya Bhatt, Preethi Seshadri, Chirag Shah, Jeff Bilmes, Hannaneh Hajishirzi, Yanai Elazar
2024-10-22

Summary
This paper explores the concept of the imitation threshold in text-to-image models: the minimum number of training examples of a specific style or concept that a model must see before it can imitate that concept effectively.
What's the problem?
Text-to-image models learn to create images from large datasets that often include copyrighted material. As a result, they can generate images that closely resemble the original works, raising concerns about copyright violations. However, there has been no clear understanding of how many examples of a particular style or concept a model must be trained on before it starts to imitate it. This minimum number is the imitation threshold.
What's the solution?
The authors formalize this question as a new problem, Finding the Imitation Threshold (FIT), and propose MIMETIC2, an efficient approach that estimates how many instances of a visual concept (such as an artist's style or a person's face) a model needs before it can imitate that concept, without the enormous cost of retraining models from scratch. Experimenting with datasets of human faces and art styles, they find that the imitation threshold typically falls between 200 and 600 images, depending on the model and the type of content.
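The core idea of estimating a threshold from the relationship between concept frequency and imitation can be illustrated with a small sketch. This is not the authors' MIMETIC2 implementation; the data, the 0.5 score cutoff, and the simple step-function fit below are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' MIMETIC2 method): given each
# concept's training-image count and a measured imitation score, estimate
# the imitation threshold as the count at which a step function best
# separates "imitated" from "not imitated" concepts.
# The 0.5 score cutoff and all data below are synthetic assumptions.

def find_imitation_threshold(counts, scores, cutoff=0.5):
    """Return the image count that best splits low-score from high-score concepts."""
    pairs = sorted(zip(counts, scores))  # order concepts by frequency
    best_thresh, best_correct = None, -1
    for i in range(1, len(pairs)):
        thresh = pairs[i][0]  # candidate threshold
        # predict "imitated" iff the concept's count >= thresh,
        # and count how many concepts that prediction gets right
        correct = sum((c >= thresh) == (s >= cutoff) for c, s in pairs)
        if correct > best_correct:
            best_correct, best_thresh = correct, thresh
    return best_thresh

# Synthetic example: imitation scores jump once counts exceed ~400 images.
counts = [50, 120, 250, 380, 420, 500, 800, 1500]
scores = [0.05, 0.10, 0.20, 0.30, 0.70, 0.75, 0.85, 0.90]
print(find_imitation_threshold(counts, scores))  # → 420
```

In this toy setting the estimated threshold (420) lands in the 200-600 range the paper reports, but that agreement is by construction of the synthetic data, not evidence about any real model.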
Why it matters?
This research matters because it provides a framework for understanding how AI models learn to imitate styles, and it can help developers avoid copyright issues. Knowing the imitation threshold gives creators of text-to-image models an empirical guideline for complying with copyright law and respecting individual privacy while still advancing AI's ability to generate creative content.
Abstract
Text-to-image models are trained using large datasets collected by scraping image-text pairs from the internet. These datasets often include private, copyrighted, and licensed material. Training models on such datasets enables them to generate images with such content, which might violate copyright laws and individual privacy. This phenomenon is termed imitation -- generation of images with content that has recognizable similarity to its training images. In this work we study the relationship between a concept's frequency in the training dataset and the ability of a model to imitate it. We seek to determine the point at which a model was trained on enough instances to imitate a concept -- the imitation threshold. We posit this question as a new problem: Finding the Imitation Threshold (FIT) and propose an efficient approach that estimates the imitation threshold without incurring the colossal cost of training multiple models from scratch. We experiment with two domains -- human faces and art styles -- for which we create four datasets, and evaluate three text-to-image models which were trained on two pretraining datasets. Our results reveal that the imitation threshold of these models is in the range of 200-600 images, depending on the domain and the model. The imitation threshold can provide an empirical basis for copyright violation claims and acts as a guiding principle for text-to-image model developers that aim to comply with copyright and privacy laws. We release the code and data at https://github.com/vsahil/MIMETIC-2.git and the project's website is hosted at https://how-many-van-goghs-does-it-take.github.io.