Tech News

Google Introduces Watermarks to Curb Misinformation in AI-Generated Images

Protecting Against Deepfake Images and Ensuring Authenticity

Google has unveiled an innovative solution to address the growing concern of misinformation by introducing permanent watermarks on AI-generated images. The technology, called SynthID, embeds an invisible watermark directly into images created by Imagen, one of Google’s text-to-image generators. This watermark remains intact, even if the images are modified with filters or color adjustments.

SynthID also offers a scanning feature that can identify whether an incoming image was produced by Imagen, reporting one of three levels of certainty: detected, not detected, or possibly detected. Although the system is not flawless, Google's internal testing shows that SynthID remains accurate against many common image manipulations.

“While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” wrote Google in a blog post Tuesday.
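Google has not published SynthID's programming interface, but the three-level verdict it reports could be consumed roughly as follows. This is a minimal sketch: the function name, score scale, and thresholds are illustrative assumptions, not SynthID's actual API.

```python
# Hypothetical sketch of SynthID's three-level verdict.
# The 0-to-1 score scale and the thresholds are assumed for illustration.

def classify_watermark(score: float) -> str:
    """Map a hypothetical watermark-confidence score in [0, 1]
    to the three certainty levels SynthID reports."""
    if score >= 0.9:
        return "detected"          # strong watermark signal
    if score <= 0.1:
        return "not detected"      # no meaningful watermark signal
    return "possibly detected"     # ambiguous, e.g. after heavy editing

print(classify_watermark(0.95))  # detected
print(classify_watermark(0.50))  # possibly detected
print(classify_watermark(0.02))  # not detected
```

The middle verdict matters in practice: an image that has been cropped, filtered, or recompressed may retain only part of the embedded signal, so a binary yes/no answer would force false positives or false negatives.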

A beta version of SynthID is now accessible to some customers through Vertex AI, Google’s platform for generative AI development. Developed by Google’s DeepMind unit in collaboration with Google Cloud, SynthID is expected to evolve and potentially expand into other Google products or third-party platforms.

As the quality of deepfake and edited images and videos continues to improve, tech companies are urgently seeking reliable methods to identify and flag manipulated content. Recent instances, such as an AI-generated image of Pope Francis in a puffer jacket and AI-generated images depicting former President Donald Trump's arrest, have highlighted the need for solutions. In response, Vera Jourova, Vice President of the European Commission, called for technology that can detect and label such content, urging companies like Google, Meta, Microsoft, and TikTok to take action.

Google joins a growing number of startups and major tech firms in this endeavor, with companies like Truepic and Reality Defender working towards solutions to protect the authenticity and truthfulness of our digital experiences.

While the Coalition for Content Provenance and Authenticity (C2PA), backed by Adobe, leads digital watermarking efforts, Google has taken its own approach. In May, Google introduced "About this image," enabling users to trace the origin of images found on its platform and locate them elsewhere on the web. Additionally, every AI-generated image created by Google will carry a markup in the original file, providing contextual information if the image is discovered on other websites or platforms.

However, as AI technology advances at a rapid pace, it remains uncertain whether these technical solutions will fully address the problem at hand. OpenAI, the company behind Dall-E and ChatGPT, has acknowledged that its own tool for detecting AI-generated text is imperfect and has warned that its results should be treated with caution.
