The Brittleness of AI-Generated Image Watermarking Techniques: Examining Their Robustness Against Visual Paraphrasing Attacks
Niyar R Barman, Krish Sharma, Ashhar Aziz, Shashwat Bajpai, Shwetangshu Biswas, Vasu Sharma, Vinija Jain, Aman Chadha, Amit Sheth, Amitava Das
2024-08-21

Summary
This paper discusses the weaknesses of current watermarking techniques used on AI-generated images, particularly how they can be easily bypassed through a method called visual paraphrasing.
What's the problem?
As AI models like Stable Diffusion and DALL-E become more popular for creating images, there is growing concern about how these images can be misused. Companies such as Meta and Google are adding watermarks to AI-generated images to curb confusion and misinformation. However, the paper argues that existing watermarking methods are fragile: the marks can be removed with relatively little effort, making them ineffective as a safeguard.
What's the solution?
The authors propose a method called visual paraphrasing that automatically creates new images that look similar to the original but no longer carry the watermark. The attack works in two steps: first, a caption is generated for the original image (using KOSMOS-2, a state-of-the-art image captioning system); second, both the original image and that caption are passed to an image-to-image diffusion model, which produces a visually similar image guided by the caption but free of the watermark. The study shows that this technique successfully removes watermarks from images, highlighting the vulnerabilities in current methods.
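The two-step attack described above can be sketched as a small pipeline. This is a minimal, hedged illustration, not the authors' actual code: `caption_fn` and `img2img_fn` are hypothetical stand-ins for a KOSMOS-2 captioner and an image-to-image diffusion model (e.g. one loaded via Hugging Face's `diffusers` library), and the `strength` parameter reflects the usual img2img knob controlling how much denoising, and thus how much paraphrasing, is applied.

```python
def visual_paraphrase(image, caption_fn, img2img_fn, strength=0.5):
    """Sketch of a visual paraphrase attack.

    Step 1: caption the (possibly watermarked) image.
    Step 2: regenerate a visually similar image, guided by the caption.

    `caption_fn` and `img2img_fn` are injected so the sketch stays
    model-agnostic; in practice they would wrap KOSMOS-2 and an
    image-to-image diffusion pipeline. Higher `strength` adds more
    noise during denoising, paraphrasing more aggressively.
    """
    caption = caption_fn(image)  # e.g. "a cat sitting on a sofa"
    return img2img_fn(image=image, prompt=caption, strength=strength)


if __name__ == "__main__":
    # Stub implementations to show the control flow without
    # downloading any models.
    def fake_caption(img):
        return "a cat sitting on a sofa"

    def fake_img2img(image, prompt, strength):
        return {"image": image, "prompt": prompt, "strength": strength}

    result = visual_paraphrase("watermarked.png", fake_caption,
                               fake_img2img, strength=0.7)
    print(result["prompt"])  # the caption that guided regeneration
```

The key design point is that the caption, not the original pixels alone, steers the regeneration, so the output preserves the image's semantics while the diffusion denoising destroys the embedded watermark signal.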
Why it matters?
This research is important because it raises awareness about the limitations of watermarking techniques in protecting AI-generated content. By demonstrating how easily these watermarks can be circumvented, it calls for the development of stronger methods to ensure that AI-generated images are properly identified and not misused.
Abstract
The rapid advancement of text-to-image generation systems, exemplified by models like Stable Diffusion, Midjourney, Imagen, and DALL-E, has heightened concerns about their potential misuse. In response, companies like Meta and Google have intensified their efforts to implement watermarking techniques on AI-generated images to curb the circulation of potentially misleading visuals. However, in this paper, we argue that current image watermarking methods are fragile and susceptible to being circumvented through visual paraphrase attacks. The proposed visual paraphraser operates in two steps. First, it generates a caption for the given image using KOSMOS-2, one of the latest state-of-the-art image captioning systems. Second, it passes both the original image and the generated caption to an image-to-image diffusion system. During the denoising step of the diffusion pipeline, the system generates a visually similar image that is guided by the text caption. The resulting image is a visual paraphrase and is free of any watermarks. Our empirical findings demonstrate that visual paraphrase attacks can effectively remove watermarks from images. This paper provides a critical assessment, empirically revealing the vulnerability of existing watermarking techniques to visual paraphrase attacks. While we do not propose solutions to this issue, this paper serves as a call to action for the scientific community to prioritize the development of more robust watermarking techniques. Our first-of-its-kind visual paraphrase dataset and accompanying code are publicly available.