Google has announced that it will begin rolling out a new feature to help users “better understand how a piece of content was created and modified.”
This comes after the company joined the Coalition for Content Provenance and Authenticity (C2PA), a group of major brands working to combat the spread of misleading information online, and helped develop the latest Content Credentials standard. Amazon, Adobe, and Microsoft are also members of the coalition.
Google says that, with a rollout planned over the coming months, it will use the current Content Credentials guidelines (essentially an image's embedded metadata) within Search to add a label to AI-generated or AI-edited images, providing more transparency for users. This metadata records details such as the source of the image, as well as when, where, and how it was created.
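For context on what that metadata looks like, Content Credentials take the form of a signed C2PA "manifest" embedded in the file, which anyone can inspect. The sketch below is not Google's implementation; it is a minimal illustration that assumes the open-source c2patool CLI from the C2PA project is installed and that AI generation is signaled via the standard's "trainedAlgorithmicMedia" digital source type, with the exact JSON layout possibly varying between manifest versions.

```python
import json
import subprocess

# IPTC "digital source type" value that generative-AI tools are expected
# to record in a C2PA manifest's c2pa.actions assertion.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def image_flagged_as_ai(path: str) -> bool:
    """Best-effort check of an image's embedded Content Credentials."""
    try:
        # c2patool prints the file's C2PA manifest store as JSON.
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True, check=True
        )
        store = json.loads(result.stdout)
    except (subprocess.CalledProcessError, FileNotFoundError, json.JSONDecodeError):
        # No manifest, no tool installed, or unreadable output:
        # nothing can be said about provenance in that case.
        return False

    # Scan every manifest's assertions for the AI source-type marker.
    # A simple substring search over the assertion data keeps this
    # tolerant of structural differences between manifest versions.
    for manifest in store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            if AI_SOURCE_TYPE in json.dumps(assertion.get("data", {})):
                return True
    return False

if __name__ == "__main__":
    print(image_flagged_as_ai("example.jpg"))
```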
However, the C2PA standard, which gives users the ability to trace the origin of different types of media, has been rejected by many AI developers, including Black Forest Labs, the company behind the Flux model that X's (formerly Twitter's) Grok uses for image generation.
This AI labeling will be implemented through Google's existing About This Image window, meaning it will also be available through Google Lens and Android's Circle to Search feature. Once it's live, users will be able to click the three dots above an image and select "About This Image" to check whether it was AI-generated, so it won't be as prominent as some might have hoped.
Is this enough?
While Google needed to do something about AI-generated images in search results, the question is whether a hidden label is enough. If the feature works as claimed, users will need to take additional steps to verify whether an image was created with AI before Google confirms it. Those who don't already know about the About This Image feature may not even realize there's a new tool available to them.
While there have been instances of video deepfakes — such as earlier this year when a finance employee was scammed into paying $25 million to a group posing as his CFO — AI-generated images are almost as problematic. Donald Trump recently posted digitally generated images of Taylor Swift and her fans falsely endorsing his presidential campaign, and Swift became the victim of image-based sexual abuse when AI-generated nude photos of her went viral.
While it's easy to complain that Google isn't doing enough, even Meta isn't keen to draw attention to AI content. The social media giant recently updated its policy to make labels less visible, moving the relevant information into a post's menu.
While this update to the About This Image tool is a positive first step, more aggressive measures will be required to keep users informed and protected. More companies, such as camera manufacturers and AI tool developers, will also need to adopt and use C2PA watermarks for this system to be as effective as possible, since Google will be relying on that data. Only a few camera models, such as the Leica M11-P and Nikon Z9, have content credential features built in, while Adobe has rolled out a beta version in both Photoshop and Lightroom. But again, it is up to the user to turn these features on and provide accurate information.
In a study by the University of Waterloo, only 61% of people were able to distinguish AI-generated images from real ones. If those numbers hold, more than a third of people can't spot AI images on their own, and a label buried in a menu offers them little additional transparency. Still, it's a positive step by Google in the fight to reduce online misinformation, but it would be better if the tech giants made these labels far more visible.