SynthID-Image: Image watermarking at internet scale
Sven Gowal, Rudy Bunel, Florian Stimberg, David Stutz, Guillermo Ortiz-Jimenez, Christina Kouridi, Mel Vecerik, Jamie Hayes, Sylvestre-Alvise Rebuffi, Paul Bernard, Chris Gamble, Miklós Z. Horváth, Fabian Kaczmarczyck, Alex Kaskasoli, Aleksandar Petrov, Ilia Shumailov, Meghana Thotakuri, Olivia Wiles, Jessica Yung, Zahra Ahmed, Victor Martin, Simon Rosen
2025-10-15
Summary
This paper introduces SynthID-Image, a system that invisibly embeds a watermark into AI-generated images so they can later be identified as machine-generated.
What's the problem?
As AI gets better at creating realistic images and videos, it is becoming harder to tell what is real and what is synthetic. This fuels misinformation and raises the need to trace the origin of media. A reliable way to mark AI-generated content is needed: the mark must not noticeably change the image, must resist removal, and must work at a massive scale, across the entire internet.
What's the solution?
The researchers developed SynthID-Image, a deep learning system that embeds an invisible watermark into AI-generated images as they are created. It has already been used to watermark billions of images and video frames across Google's services. They also built a separate external variant, SynthID-O, which they benchmarked against other watermarking techniques and found to perform better in both image quality and in how well the watermark survives common image edits. The system is designed to be secure and difficult to tamper with.
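The paper does not spell out SynthID-Image's internals here, but the general idea of keyed invisible watermarking can be illustrated with a toy spread-spectrum scheme: add a faint pseudorandom pattern derived from a secret key, and later detect it by correlation. This is a minimal, hypothetical sketch; the pattern, strength, and threshold are illustrative and bear no relation to SynthID's actual deep-learning approach.

```python
import numpy as np

def keyed_pattern(key: int, shape) -> np.ndarray:
    """Derive a +/-1 pseudorandom pattern from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint keyed pattern to the image (toy additive watermark)."""
    return np.clip(image + strength * keyed_pattern(key, image.shape), 0, 255)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Correlate the image with the keyed pattern; high correlation => watermarked."""
    pattern = keyed_pattern(key, image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

# Stand-in for a generated image (the real system marks model outputs).
rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(256, 256))
marked = embed_watermark(original, key=42)

print(detect_watermark(marked, key=42))    # correct key: watermark detected
print(detect_watermark(original, key=42))  # unmarked image: no detection
```

A deep-learning watermarker like SynthID-Image replaces the fixed pattern with learned, content-adaptive perturbations, which is what lets it stay imperceptible while surviving edits that would destroy a naive scheme like this one.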
Why does it matter?
This work matters because it provides a practical way to trace the source of AI-generated content. Knowing whether an image or video was created by AI helps combat the spread of fake information and builds trust in online media. The paper also lays out the challenges and design considerations of deploying such a system at scale, which is valuable for anyone building similar technologies, and its principles extend to other kinds of AI-generated content, such as audio.
Abstract
We introduce SynthID-Image, a deep learning-based system for invisibly watermarking AI-generated imagery. This paper documents the technical desiderata, threat models, and practical challenges of deploying such a system at internet scale, addressing key requirements of effectiveness, fidelity, robustness, and security. SynthID-Image has been used to watermark over ten billion images and video frames across Google's services, and its corresponding verification service is available to trusted testers. For completeness, we present an experimental evaluation of an external model variant, SynthID-O, which is available through partnerships. We benchmark SynthID-O against other post-hoc watermarking methods from the literature, demonstrating state-of-the-art performance in both visual quality and robustness to common image perturbations. While this work centers on visual media, the conclusions on deployment, constraints, and threat modeling generalize to other modalities, including audio. This paper provides comprehensive documentation for the large-scale deployment of deep learning-based media provenance systems.
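The abstract highlights robustness to common image perturbations as a key benchmark axis. A robustness evaluation can be sketched as a loop that perturbs a watermarked image and checks whether detection survives. Everything below is a hypothetical toy (a simple keyed additive mark and stand-in perturbations), not the paper's evaluation protocol or perturbation suite.

```python
import numpy as np

def keyed_pattern(key: int, shape) -> np.ndarray:
    """Derive a +/-1 pseudorandom pattern from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def detect(image: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Detect the keyed pattern by correlation (toy detector)."""
    pattern = keyed_pattern(key, image.shape)
    return float(np.mean((image - image.mean()) * pattern)) > threshold

# Watermark a synthetic image with a toy additive pattern (illustrative only).
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(256, 256))
marked = np.clip(img + 3.0 * keyed_pattern(7, img.shape), 0, 255)

# Stand-ins for common perturbations (a real benchmark would include
# JPEG compression, resizing, cropping, color shifts, etc.).
perturbations = {
    "identity": lambda x: x,
    "gaussian_noise": lambda x: np.clip(x + rng.normal(0, 5, x.shape), 0, 255),
    "quantize": lambda x: np.round(x / 8) * 8,  # coarse 8-level-step quantization
}

for name, perturb in perturbations.items():
    survived = detect(perturb(marked), key=7)
    print(f"{name}: watermark detected = {survived}")
```

In a real benchmark, the per-perturbation detection rate over many images (against a fixed false-positive rate on unmarked images) is what the paper's robustness comparisons measure.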