Google plans to use its “SynthID” mechanism to help determine whether an image was generated by AI

Following earlier remarks at Google I/O 2023 about labeling images produced by generative AI, Google announced at the Google Cloud Next ’23 event a collaboration between Google Cloud and Google DeepMind: a mechanism named ‘SynthID’ that embeds imperceptible identification data in AI-generated images.

SynthID is currently in beta testing, and Google has not confirmed a timeline for its full rollout. Nevertheless, it is expected to serve as a tool for judging whether image content has been post-produced or digitally manipulated.

As described, ‘SynthID’ overlays an image with a layer that is imperceptible to the human eye but machine-readable. Importantly, this overlay does not compromise the image’s quality, resolution, or compression ratio, and the image remains usable as normal. Furthermore, the identifying information is not written into the image’s metadata but embedded directly into the image’s pixels; whereas metadata can be stripped or edited, a pixel-level watermark stays with the image content itself and remains detectable after such changes.
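Google has not published SynthID’s watermarking algorithm, but the pixel-versus-metadata distinction can be illustrated with a much simpler, classical technique: least-significant-bit (LSB) watermarking. The sketch below is purely illustrative and is not SynthID (whose learned watermark, unlike LSB, is designed to survive compression and editing); it only shows how a signal can live in the pixels themselves rather than in a metadata field.

```python
import numpy as np

def embed_bit_pattern(pixels: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Write a binary pattern into the least significant bit of each pixel.

    The change (at most +/-1 per 8-bit value) is invisible to the eye,
    yet a detector that knows the pattern can recover it from the pixels
    alone -- no metadata is involved.
    """
    watermarked = pixels.copy()
    watermarked &= 0xFE      # clear the lowest bit of every pixel
    watermarked |= pattern   # write the pattern bit into that slot
    return watermarked

def detect_bit_pattern(pixels: np.ndarray, pattern: np.ndarray) -> float:
    """Return the fraction of pixels whose lowest bit matches the pattern."""
    return float(np.mean((pixels & 1) == pattern))

# Toy usage: an 8-bit grayscale "image" and a pseudorandom bit pattern.
rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
pattern = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

marked = embed_bit_pattern(image, pattern)
print(detect_bit_pattern(marked, pattern))  # ~1.0: watermark present
print(detect_bit_pattern(image, pattern))   # ~0.5: chance level, no watermark
```

Stripping the file’s metadata would leave this pixel-level pattern untouched, which is exactly the property the article attributes to SynthID.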

The detection results classify an image as AI-generated, not AI-generated, or possibly AI-generated. This judgment relies on Google’s machine learning platform, Vertex AI, in conjunction with Google’s text-to-image model, Imagen.
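As a sketch of what such a three-way verdict could look like in code, the hypothetical helper below maps a detector confidence score to the three labels mentioned above. The threshold values are assumptions for illustration only; Google has not disclosed the decision rule SynthID’s detector actually uses.

```python
def classify_watermark(score: float,
                       present_threshold: float = 0.9,
                       absent_threshold: float = 0.1) -> str:
    """Map a watermark-detector confidence score in [0, 1] to a verdict.

    The thresholds are hypothetical placeholders, not SynthID's
    published behavior.
    """
    if score >= present_threshold:
        return "AI-generated (watermark detected)"
    if score <= absent_threshold:
        return "not AI-generated (no watermark found)"
    return "possibly AI-generated (inconclusive)"

print(classify_watermark(0.97))  # AI-generated (watermark detected)
print(classify_watermark(0.50))  # possibly AI-generated (inconclusive)
print(classify_watermark(0.02))  # not AI-generated (no watermark found)
```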

Given Google’s announcement that more AI models will be brought into Vertex AI, it is plausible that ‘SynthID’ will eventually be extended to identify images created with other AI models as well.

Previously, Adobe introduced an AI-driven technique for detecting fabricated image content and, together with Twitter and The New York Times, formed the ‘Content Authenticity Initiative’, an alliance to combat digital disinformation.