A consensus is emerging on how to address the challenge that deepfakes, content generated with technologies such as artificial intelligence, pose to media and business. Earlier this year, Google announced that it would join the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member; other C2PA members include OpenAI, Adobe, Microsoft, AWS and the RIAA. With concern growing over misinformation and AI deepfakes, IT professionals will want to pay close attention to the work of this body, and in particular to Content Credentials, as the industry formalizes the standards governing how image and video data are labeled and managed.
What are content credentials?
Content Credentials are a form of digital metadata that creators can attach to their content to ensure proper attribution and promote transparency. This tamper-evident metadata records information about the creator and the creative process, and is embedded directly into the content at the time of export or download. Thanks to the weight of the companies behind the concept, Content Credentials represent the strongest candidate yet for a globally agreed, standardized way of labeling content.
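For a concrete sense of how this metadata travels with a file: the C2PA specification embeds manifests in JUMBF boxes, which in JPEG files are carried in APP11 marker segments. The sketch below, which assumes a well-formed JPEG and uses only the Python standard library, merely checks whether such a segment is present; actually validating the credentials' signatures requires a full C2PA implementation, such as the Content Authenticity Initiative's open-source c2patool.

```python
import struct
import sys

# C2PA manifests are embedded in JPEGs inside JUMBF boxes carried in
# APP11 (0xFFEB) marker segments. This only detects that such a segment
# exists; it does NOT verify signatures or parse the manifest itself.
APP11_MARKER = 0xFFEB
SOS_MARKER = 0xFFDA  # start of scan: metadata segments precede it

def has_app11_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # A JPEG starts with the SOI marker 0xFFD8.
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    offset = 2
    while offset + 4 <= len(data):
        marker, length = struct.unpack(">HH", data[offset:offset + 4])
        if marker == APP11_MARKER:
            return True
        if marker == SOS_MARKER:
            break
        # Each segment is 2 marker bytes plus the declared length
        # (which includes the 2 length bytes themselves).
        offset += 2 + length
    return False

if __name__ == "__main__":
    print(has_app11_segment(sys.argv[1]))
```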
SEE: Adobe adds Firefly and content credentials to Bug Bounty program
Using Content Credentials offers several benefits. They help build credibility and trust with the public by providing more information about the creator and the creative process, and that transparency can help combat misinformation and disinformation online. By attaching identity and contact information to their work, creators make it easier for others to find and connect with them, improving their visibility and recognition. Likewise, content that is not legitimate becomes easier to identify and remove from platforms.
Deepfakes are a challenge Australia is struggling to address
Australia, like much of the rest of the world, is battling a massive acceleration in deepfake fraud. Sumsub's third annual identity fraud report found a 1,530% increase in deepfakes in Australia over the past year and noted that their sophistication was also increasing.
The situation has become so worrying that the government recently announced a strategy to counter some specific forms of deepfake abuse and then establish pathways to treat deepfakes like any other form of illegal content.
Deepfakes are particularly powerful sources of misinformation because the eye can be fooled very quickly. Research suggests it takes just 13 milliseconds to identify an image, far less time than it takes to analyze one and determine its validity. In other words, deepfakes pose a great risk because they can have their intended impact on a person before that person has any chance to analyze and dismiss them.
SEE: AI deepfakes rise as a risk for APAC organizations
For example, Australia's peak scientific body, the CSIRO, has published guidance on “how to spot a deepfake,” and following that guidance requires extensive analysis.
“If it's a video, you can check if the audio is synchronized correctly with the lip movement. Do the words match the mouth? Other things to check for include unnatural flickering or blinking around the eyes, strange lighting or shadows, and facial expressions that don't match the emotional tone of the speech,” CSIRO expert Dr Kristen Moore said in the guidance article.
So, as helpful as that advice is, equipping the ultimate targets of deepfakes with the skills to identify them won't be enough to prevent deepfakes from wreaking havoc throughout society.
Government and private sector must join together to combat deepfakes
The government outlawing deepfakes is a positive step toward protecting those who would otherwise be their victims. However, it will fall to the IT industry to develop the means of identifying and managing this content.
There are already high-profile cases of leading business figures such as Dick Smith and Gina Rinehart “demanding” that organizations like Meta be more proactive in preventing AI scams, after their images were used in deepfakes.
As the Australian eSafety Commissioner noted, “the development of innovations to help identify deepfakes is not yet keeping pace with the technology itself.” For its part, the Australian government has committed to combating deepfakes by:
- Raising awareness of deepfakes, so Australians receive a reasoned, evidence-based overview of the problem and are well informed about the options available to them.
- Supporting people who have been targeted, via a complaints scheme: any Australian whose photo or video has been digitally altered and shared online can contact eSafety for help getting it removed.
- Preventing harm by developing educational content about deepfakes, so Australians can critically evaluate online content and navigate the online world more confidently.
- Supporting the industry through its Safety by Design initiative, which helps companies and organizations build safety into their products and services.
- Supporting industry efforts to reduce or limit the redistribution of harmful deepfakes by encouraging companies to develop deepfake policies, terms of service and community standards, detection and takedown policies for managing abusive and illegal deepfakes, and methods to identify and flag deepfakes in their communities.
Ultimately, for this vision to succeed, there needs to be industry support, with organizations providing the technology and investing more deeply in AI. This is where content credentials come in.
Steps to follow to help combat deepfakes
Content Credentials are the best opportunity yet to form standards that combat deepfakes. Because the approach is industry-driven and backed by the weight of content industry heavyweights, illegitimate content can be flagged across the vast majority of the internet, much as virus-ridden websites can be flagged to the point of becoming effectively impossible to find in search engines.
For this reason, IT professionals and others working with AI for content generation will want to understand Content Credentials in the same way that web developers understand security, SEO and the other standards expected of them, because content that is not marked risks being flagged. Steps they should take include:
- Implement Content Credentials: First, IT professionals should ensure their organization actively adopts and integrates Content Credentials into its workflows, establishing content authenticity and traceability (a minimal sketch of such a workflow gate follows this list).
- Advocate for transparency: Both internally and externally, with partners and customers, advocate for organizations to be transparent about their use of AI and adopt ethical practices in content creation and distribution.
- Support regulation: Collaborate with industry bodies and government agencies to shape policies and regulations that address the challenges posed by deepfakes. This includes participating in the government's various consultations and research on AI to help shape policy.
- Collaborate: Work with other professionals and organizations to develop standardized practices and tools for identifying and mitigating the risks associated with deepfakes.
- Prepare response strategies: Have a plan for when deepfakes are detected, including measures to mitigate damage and communicate with stakeholders.
- Take advantage of community resources: Finally, use resources from cybersecurity communities and government bodies like the eSafety Commissioner to stay up to date and prepared.
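As a sketch of what the first step might look like in practice, the snippet below gates an asset pipeline on whether a file carries readable Content Credentials. It assumes the Content Authenticity Initiative's open-source c2patool is installed and that invoking it with a file path prints the manifest store, exiting non-zero when none can be read (behavior may differ between tool versions, so check your installed release); the pipeline and asset names are hypothetical.

```python
import subprocess

def has_content_credentials(path: str) -> bool:
    # Assumption: running `c2patool <file>` prints the C2PA manifest
    # store and exits non-zero if no manifest can be read. Verify this
    # against the c2patool version you have installed.
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    return result.returncode == 0

def ingest(path: str) -> None:
    # Hypothetical workflow gate: assets with readable Content
    # Credentials continue through the pipeline; everything else is
    # routed to manual review rather than published automatically.
    if has_content_credentials(path):
        print(f"{path}: Content Credentials found, continuing pipeline")
    else:
        print(f"{path}: no Content Credentials, routing to manual review")

if __name__ == "__main__":
    ingest("hero-image.jpg")  # hypothetical asset name
```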
Without a doubt, deepfakes will be one of the most significant challenges to which the technology industry and IT professionals will have to develop a response. Content Credentials offer an excellent starting point around which the industry can coalesce.