In the past year, the huge popularity of generative AI models has also brought with it the proliferation of AI-generated deepfakes, nonconsensual porn, and copyright infringements. Watermarking—a technique where you hide a signal in a piece of text or an image to identify it as AI-generated—has become one of the most popular ideas proposed to curb such harms.
In July, the White House announced it had secured voluntary commitments from leading AI companies such as OpenAI, Google, and Meta to develop watermarking tools in an effort to combat misinformation and misuse of AI-generated content.
At Google’s annual I/O conference in May, CEO Sundar Pichai said the company is building its models to include watermarking and other techniques from the start. Google DeepMind is now the first Big Tech company to publicly launch such a tool, called SynthID.
Traditionally, images have been watermarked by adding a visible overlay or by embedding information in their metadata. But this method is “brittle,” and the watermark can be lost when images are cropped, resized, or edited, says Pushmeet Kohli, vice president of research at Google DeepMind.
SynthID is built from two neural networks. One takes the original image and produces another that looks almost identical, but with some pixels subtly modified, creating an embedded pattern that is invisible to the human eye. The second neural network looks for that pattern and tells users whether it detects a watermark, suspects the image has a watermark, or finds no watermark at all. Kohli says SynthID is designed so that the watermark can still be detected even if the image is screenshotted or edited, for example by rotating or resizing it.
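SynthID’s actual algorithm has not been published, but the general idea of an imperceptible, machine-readable signal can be illustrated with a much simpler (and, unlike SynthID, very fragile) classic technique: hiding a bit pattern in the least-significant bits of pixel values. The function names and the three-way verdict below are a toy sketch, not Google DeepMind’s method.

```python
# Toy sketch of an invisible watermark, NOT SynthID's algorithm:
# hide a known bit pattern in pixel least-significant bits, then
# check how well a candidate image matches that pattern.

def embed_watermark(pixels, pattern):
    """Set each pixel's least-significant bit to the pattern bit.
    Each 0-255 value changes by at most 1, imperceptible to the eye."""
    return [(p & ~1) | bit for p, bit in zip(pixels, pattern)]

def detect_watermark(pixels, pattern):
    """Report a three-way verdict based on how many LSBs match."""
    matches = sum((p & 1) == bit for p, bit in zip(pixels, pattern))
    score = matches / len(pattern)
    if score >= 0.9:
        return "watermark detected"
    if score >= 0.7:
        return "possible watermark"
    return "no watermark"

image = [120, 37, 255, 0, 88, 199, 64, 13]    # one row of grayscale pixels
pattern = [1, 0, 1, 1, 0, 0, 1, 0]            # secret signal to embed

marked = embed_watermark(image, pattern)
print(detect_watermark(marked, pattern))  # watermark detected
print(detect_watermark(image, pattern))   # no watermark
```

Unlike this sketch, where cropping or re-encoding the image scrambles the bits, SynthID trains its embedder and detector networks together so the signal survives common edits.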
Google DeepMind is not the only one working on these sorts of watermarking methods, says Ben Zhao, a professor at the University of Chicago, who has worked on systems to prevent artists’ images from being scraped by AI systems. Similar techniques already exist and are used in the open-source AI image generator Stable Diffusion. Meta has also conducted research on watermarks, although it has yet to launch any public watermarking tools.
Kohli claims Google DeepMind’s watermark is more resistant to tampering than previous attempts at image watermarks, though he concedes it is still not completely immune.
But Zhao is skeptical. “There are few or no watermarks that have proven robust over time,” he says. Early work on watermarks for text has found that they are easily broken, usually within a few months.