Meta will begin flagging AI-generated images on Facebook, Instagram and Threads in an effort to maintain transparency online.
The tech giant already labels content created by its Imagine AI engine with a visible watermark. In the future, it will do something similar with images from third-party sources such as OpenAI, Google, and Midjourney, to name a few. It's unclear exactly what these tags will look like, although, judging by the announcement post, they may simply consist of the words “AI Information” next to the generated content. Meta notes that this design is not final, hinting that it could change once the update officially rolls out.
In addition to visible labels, the company says it is also working on tools to “identify invisible markers” in images from third-party generators. Imagine AI already does something similar, embedding watermarks into the metadata of the images it produces. The goal is a unique tag that editing tools cannot simply strip out. Meta says other platforms plan to do the same, and it wants a system in place to detect that tagged metadata.
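Meta hasn't published the exact metadata scheme it will look for, though industry standards such as IPTC and C2PA already attach provenance fields inside image files. As a rough illustration of the idea, and not Meta's actual implementation, here is a minimal Python sketch that writes and reads back a hypothetical ai_provenance tag stored in a PNG's text metadata using Pillow (the tag name and format are invented for this example):

```python
# Illustrative sketch only: Meta has not published its metadata scheme.
# A hypothetical "ai_provenance" PNG text chunk shows how a provenance
# tag can travel inside an image file's metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image(src_path: str, dst_path: str, generator: str) -> None:
    """Write a hypothetical AI-provenance tag into a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_provenance", f"generated-by:{generator}")
    image.save(dst_path, pnginfo=metadata)

def read_tag(path: str) -> str | None:
    """Return the provenance tag if present, else None."""
    return Image.open(path).text.get("ai_provenance")

tag_image("photo.png", "photo_tagged.png", "imagine-ai")
print(read_tag("photo_tagged.png"))  # generated-by:imagine-ai
```

The catch is that re-encoding a file discards chunks like these, which is exactly the weakness Meta's Stable Signature research, discussed below, is meant to address.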
Audio and video tagging
So far, this has all been about still images, but what about AI-generated audio and video? Google's Lumiere is capable of creating incredibly realistic clips, and OpenAI is working to bring video creation to ChatGPT. Is there anything in place to detect these more complex forms of AI content? Sort of.
Meta admits there is currently no way to detect AI-generated audio and video at the same level as images; the technology simply isn't there yet, although the industry is working “towards this capability.” Until then, the company will rely on the honor system: it will require users to disclose whether a video clip or audio file they upload was produced or altered by artificial intelligence. Failure to do so will result in a “sanction.” What's more, if a piece of media is so realistic that it risks misleading the public, Meta will attach “a more prominent label” offering important details.
Future updates
On its own platforms, Meta is also working to improve its in-house tools.
The company's AI research lab, FAIR, is developing a new type of watermarking technology called Stable Signature. Because invisible markers stored in metadata can be stripped from AI-generated content, Stable Signature is meant to close that loophole by making the watermark an integral part of the “image generation process.” On top of all this, Meta has begun training several LLMs (large language models) on its community standards so the models can determine whether a given piece of content violates its policies.
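Meta hasn't detailed Stable Signature's inner workings here, but the problem it solves is easy to demonstrate: a metadata tag disappears the moment an image is rebuilt from its raw pixels, whereas a watermark baked into those pixels during generation would survive. A minimal sketch, reusing the hypothetical ai_provenance tag from the earlier example:

```python
# Sketch: why metadata-only markers are fragile. Rebuilding an image
# from raw pixel data discards its text chunks, erasing the hypothetical
# "ai_provenance" tag from the earlier example. Stable Signature avoids
# this by embedding the watermark in the pixels at generation time.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode the image from raw pixels, discarding all metadata."""
    image = Image.open(src_path)
    clean = Image.new(image.mode, image.size)
    clean.putdata(list(image.getdata()))
    clean.save(dst_path)  # saved without pnginfo: the tag is gone

strip_metadata("photo_tagged.png", "photo_stripped.png")
print(Image.open("photo_stripped.png").text.get("ai_provenance"))  # None
```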
Expect to see the social media labels rolling out in the coming months. The timing of the announcement should come as no surprise: 2024 is a major election year for many countries, most notably the United States, and Meta is seeking to mitigate the spread of misinformation on its platforms as much as possible.
We've reached out to the company to learn more about the penalties a user may face for failing to properly label a post, and whether Meta plans to mark images from third-party sources with a visible watermark. This story will be updated if we hear back.
Until then, check out TechRadar's list of the best AI image generators for 2024.