The news: Google DeepMind has launched SynthID, a new watermarking tool that labels images as AI-generated. The tool will let users create images with Google’s AI image generator Imagen and then choose whether to add a watermark.
Why watermarking? In the past year, the huge popularity of generative AI models has brought with it a proliferation of AI-generated deepfakes, non-consensual pornography, and copyright infringement. Watermarking—a technique in which a hidden signal is embedded in a piece of text or an image to identify it as AI-generated—has become one of the most popular policy proposals for curbing these harms.
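To make the idea concrete, here is a deliberately simplified sketch of invisible watermarking: hiding a bit pattern in the least-significant bits of pixel values, then checking for it later. Google has not published SynthID's method, which is embedded during generation and designed to survive edits like cropping and compression; this toy LSB scheme and all of its names (`embed`, `detect`, the example signature) are purely illustrative.

```python
# Hypothetical minimal example of invisible watermarking via
# least-significant-bit (LSB) embedding. NOT how SynthID works --
# SynthID's technique is unpublished and far more robust than this.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # illustrative 8-bit signature

def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixel values with the signature."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def detect(pixels, bits):
    """Return True if the signature bits are present in the pixels' LSBs."""
    return all((pixels[i] & 1) == b for i, b in enumerate(bits))

# Fake 8-bit grayscale pixel values standing in for an image
image = [200, 17, 94, 255, 3, 128, 64, 77]
marked = embed(image, WATERMARK)
print(detect(marked, WATERMARK))  # the mark is found in the watermarked copy
print(detect(image, WATERMARK))   # and absent from the original
```

The fragility of this scheme is also the point: flipping the low bits (e.g. by re-saving the image as a lossy JPEG) destroys the mark, which is why production watermarks like SynthID aim to survive such transformations.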
Why it matters: The hope is that SynthID could help people identify AI-generated content that is being passed off as real, countering misinformation and helping to protect copyright. Read the full story.
Interested in the impact of generative AI? Read more about this topic:
+ These new tools could help protect our pictures from AI. PhotoGuard and Glaze are just two new systems designed to make it harder to tinker with photos using AI tools. Read the full story.
+ AI models spit out photos of real people and copyrighted images. The finding could strengthen artists’ claims that AI companies are infringing their rights. Read the full story.
+ These new tools let you see for yourself how biased AI image models are. DALL-E 2 and two recent versions of Stable Diffusion tend to produce images of people that look white and male, especially if the prompt is a word like ‘CEO’. Read the full story.