You've probably noticed AI-generated images spread across your social media feeds, and chances are a few more have slipped past you unnoticed.
For those of us who have been immersed in the world of generative AI, spotting AI images is a little easier as we develop a mental checklist of what to look out for.
However, as the technology improves, it will become much harder to tell the difference. To address this, OpenAI is developing new methods to track AI-generated images and distinguish what has been artificially generated from what has not.
According to a blog post, the new methods proposed by OpenAI will add a tamper-resistant “watermark,” invisibly labeling the content at the point of creation. So if an image is generated with OpenAI's DALL-E generator, the accompanying classifier should still flag it even after the image has been warped or had its saturation changed.
The blog post claims the tool will be around 98% accurate at detecting images made with DALL-E. However, it flags only 5-10% of images from other generators such as Midjourney or Adobe Firefly.
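To make those claims concrete, here is a minimal sketch of how one might measure that kind of flag rate under the edits the post mentions. It assumes a hypothetical placeholder detector, classify_image, standing in for OpenAI's classifier (which is not publicly available), and uses Pillow for the perturbations; it is an illustration, not OpenAI's actual method.

```python
from pathlib import Path
from PIL import Image, ImageEnhance


def classify_image(img: Image.Image) -> bool:
    """Hypothetical stand-in for a DALL-E detection classifier.

    Always returns False here; swap in a real detector to get
    meaningful numbers.
    """
    return False


def perturb(img: Image.Image) -> list[Image.Image]:
    """Apply the kinds of edits the detector is said to survive."""
    saturated = ImageEnhance.Color(img).enhance(2.0)                 # boost saturation
    cropped = img.crop((10, 10, img.width - 10, img.height - 10))    # light crop
    downscaled = img.resize((img.width // 2, img.height // 2))       # resize/warp
    return [saturated, cropped, downscaled]


def flag_rate(image_dir: str) -> float:
    """Fraction of originals plus perturbed copies that still get flagged."""
    flagged, total = 0, 0
    for path in Path(image_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        for candidate in [img, *perturb(img)]:
            total += 1
            flagged += int(classify_image(candidate))
    return flagged / total if total else 0.0


if __name__ == "__main__":
    # Point this at a folder of known DALL-E outputs (or known non-DALL-E
    # images) to estimate the detection rate for that set.
    print(f"Flag rate: {flag_rate('sample_images'):.1%}")
```

Run against a folder of known DALL-E outputs, a high flag rate would support the 98% figure; run against Midjourney or Firefly images, a low rate would mirror the 5-10% result the post describes.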
So it's great for images from OpenAI's own tools, but not much help for anything produced elsewhere. While that is less impressive than you might hope, it is a positive sign that OpenAI is beginning to address the avalanche of AI images that are increasingly difficult to distinguish from real ones.
This may not seem like a big deal to some, since many AI-generated images are harmless memes or high-concept art. That said, there is also a surge of hyper-realistic fake photos of politicians, celebrities, and people's acquaintances, which could spread misinformation at an incredibly rapid rate.
Hopefully, as these countermeasures mature, their accuracy will improve and we will have a far more accessible way to verify the authenticity of the images we encounter every day.