Google will soon begin identifying AI-generated content in search results and ads, though users will need to know where to look.
In a blog post on Sept. 17, the tech giant announced that, in the coming months, metadata in Search, Google Images, and ads will indicate whether an image was photographed with a camera, edited in Photoshop, or created with AI. Google joins other tech companies, including Adobe, in labeling AI-generated images.
What are C2PA and content credentials?
The AI watermarking standards were created by the Coalition for Content Provenance and Authenticity (C2PA), a standards body that Google joined in February. C2PA was co-founded by Adobe and the nonprofit Joint Development Foundation to develop a standard for tracking the provenance of online content. C2PA’s most significant project so far has been its AI watermarking standard, Content Credentials.
Google helped develop version 2.1 of the C2PA standard, which the company says has improved protections against manipulation.
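To get a feel for what this provenance metadata contains, here is a minimal Python sketch that inspects an image’s Content Credentials using the open-source c2patool CLI maintained by the C2PA community. It assumes c2patool is installed and on the PATH and that it prints the embedded manifest store as JSON; none of this tooling is part of Google’s announcement.

```python
import json
import subprocess

def read_content_credentials(image_path: str):
    """Return an image's C2PA manifest store as a dict, or None.

    Assumes the open-source `c2patool` CLI is installed and on the
    PATH; invoked with a file path, it prints the embedded manifest
    store as JSON.
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Typically means no Content Credentials were found,
        # or the file could not be read.
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")
if manifest:
    # The active manifest records who signed the asset and what
    # actions (camera capture, edits, AI generation) were claimed.
    print(json.dumps(manifest, indent=2))
else:
    print("No Content Credentials found.")
```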
SEE: OpenAI said in February that videos made with its photorealistic Sora AI model would include C2PA metadata, but Sora is not yet available to the public.
Amazon, Meta, OpenAI, Sony, and other organizations serve on the C2PA Steering Committee.
“Content credentials can act as a digital nutrition label for all types of content and a foundation for rebuilding trust and transparency online,” Andy Parsons, senior director of Adobe’s Content Authenticity Initiative, wrote in a press release in October 2023.
'About this image' to display C2PA metadata in Circle to Search and Google Lens
Google is adopting the C2PA labeling standard ahead of most online platforms, but the labels will be easy to miss. The “About this Image” feature, which allows users to view the metadata, appears only in Google Images, Circle to Search, and Google Lens on supported Android devices, and users must manually open a menu to view it.
In Google Search ads, “Our goal is to increase this [C2PA watermarking] over time and use C2PA signals to inform how we enforce key policies,” Laurie Richardson, Google’s vice president of trust and safety, wrote in the blog post.
Google also plans to include C2PA information on YouTube videos captured with a camera, and it will reveal more details later this year.
Correct AI image attribution is important for businesses
Companies should ensure that employees are aware of the spread of AI-generated images and train them to verify the provenance of an image. This helps prevent the spread of misinformation and avoids potential legal issues if an employee uses images they are not authorized to use.
Using AI-generated images in business can complicate matters around copyright and attribution, as it can be difficult to determine how an AI model was trained. AI images can also be subtly inaccurate; if a customer is looking for a specific detail, any error could reduce trust in your organization or product.
C2PA metadata should be checked and used in accordance with your organization’s generative AI policy.
C2PA is not the only way to identify AI-generated content. Visible watermarks and perceptual hashing (or fingerprinting) are sometimes proposed as alternative options. Additionally, artists can use data poisoning filters, such as Nightshade, to confuse generative AI, preventing AI models from being trained on their work. Google launched its own AI detection tool, SynthID, which is currently in beta.
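As an illustration of the perceptual-hashing approach, the sketch below compares an incoming image against a known image using the widely used ImageHash Python library. The file names and distance threshold are illustrative assumptions; unlike C2PA metadata, this technique can only flag images that resemble ones you already have on file, and it cannot prove AI provenance on its own.

```python
from PIL import Image   # pip install pillow
import imagehash        # pip install ImageHash

def looks_like_known_image(candidate_path: str, known_path: str,
                           threshold: int = 8) -> bool:
    """Return True if two images are perceptually similar.

    A perceptual hash survives re-encoding and light edits,
    unlike an exact checksum.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    known = imagehash.phash(Image.open(known_path))
    # `-` gives the Hamming distance between the 64-bit hashes;
    # smaller means more similar.
    return candidate - known <= threshold

# Illustrative paths: compare an incoming asset against a
# previously flagged AI-generated image.
print(looks_like_known_image("incoming.jpg", "flagged_ai_image.jpg"))
```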