AI shook up the photography world in 2023, from tricking judges to win photography contests to the proliferation of deepfakes with real-world ramifications, like a depiction of an explosion (that didn’t happen) outside the Pentagon that briefly caused markets to fall.
Deepfakes can be fun (the Pope in a puffer jacket, anyone?), but they can also be used to spread misinformation, and as AI image generators become more powerful, it’s becoming increasingly difficult to distinguish the real photos we see online from the fake ones.
Now, according to a report from Nikkei Asia, leading camera brands Sony, Canon and Nikon are looking to address this problem by building technology into new cameras that can verify the authenticity of images.
Last year, the Leica M11-P became the first camera with integrated Content Credentials: a digital signature that authenticates the time, date and location at which an image was taken, and who took it, as well as indicating if modifications have been made after capture.
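Under the hood, the broad idea is standard public-key cryptography: the camera signs the image and its capture metadata with a private key, and anyone with the matching public key can later check that nothing has been altered. The snippet below is a minimal sketch of that general principle in Python (using the cryptography library); it is not the actual C2PA/Content Credentials format, and the key handling, metadata fields and function names are illustrative assumptions only.

```python
# Illustrative sketch only: NOT the real C2PA/Content Credentials format,
# just the general idea of signing capture data so later edits can be detected.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera the private key would live in secure hardware; here we just generate one.
camera_key = Ed25519PrivateKey.generate()


def sign_capture(image_bytes: bytes, metadata: dict) -> bytes:
    """Sign a hash of the image plus its capture metadata (time, location, author)."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(metadata, sort_keys=True).encode()
    return camera_key.sign(payload)


def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes) -> bool:
    """Return True only if the image and metadata are unchanged since signing."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(metadata, sort_keys=True).encode()
    try:
        camera_key.public_key().verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# Any change to the pixels (or the metadata) after capture invalidates the signature.
image = b"...raw image bytes..."
meta = {"time": "2024-01-04T10:30:00Z", "gps": [51.5, -0.1], "author": "Jane Doe"}
sig = sign_capture(image, meta)
print(verify_capture(image, meta, sig))            # True
print(verify_capture(image + b"edit", meta, sig))  # False
```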
However, not many people have an $8,000/£7,000 Leica in their bag, and now Sony, Canon and Nikon are set to introduce their own authentication technology.
We don’t yet know which new cameras will hit shelves with Content Credentials-type technology built in, although we’ve rounded up the 12 most interesting cameras for 2024, which might give us an idea. At the launch of the Sony A9 III, Sony announced that it will update that camera and two other professional models, the A1 and A7S III, with Content Provenance and Authenticity (C2PA) support, although we don’t know when or how it plans to do this. (C2PA is a cross-industry coalition co-founded by Adobe that introduced Content Credentials and includes companies like Nikon and Getty.)
The Nikkei Asia report says future cameras will provide the information needed to authenticate an image via the free and publicly available Content Authenticity Initiative (CAI) Verify system.
Analysis: Are in-camera digital signatures enough?
Increasing the number of cameras with authenticity technology that can mark images with the official Content Credentials seal is a big step in the right direction, but is it enough?
Even with three of the biggest camera brands implementing pro-authenticity/anti-AI features, early indications are that they will be reserved primarily for professional cameras typically in the hands of journalists.
While large media organizations will be able to implement protocols to verify data and authenticate the origin of images through the Content Credentials Verify tool for enabled cameras like the M11-P, most cameras will not be properly verified, including the ubiquitous cameras on smartphones from the likes of Apple, Samsung and Google.
The biggest challenge, which these verification measures do not address, is the websites and social media platforms that host misinformation, where fake images can potentially be viewed and shared by millions of people.
I’m encouraged that camera brands are prepared to introduce digital signatures to future cameras and potentially upgrade select existing cameras with this technology. But it looks like it will be some time before these features are rolled out to cameras and phones on a larger scale, including devices used by those seeking to spread misinformation with fake images.
It’s also unclear whether bad actors will find ways to bypass digital signatures, either for AI-generated images or for real images that have been edited. And what about in-camera multi-shot modes, such as composites? Hopefully, these questions will be answered as camera brands put these authentication measures into practice.
For now, the long fight against deepfakes is just beginning.