Google is dramatically stepping up its efforts to combat the appearance of AI-generated explicit images and videos in search results. The company wants to make it clear that non-consensual AI-produced deepfakes are not welcome on its search engine.
The fake images may be lewd or otherwise offensive, but regardless of the specifics, Google has a new approach to removing this type of material and, when deletion isn’t possible, burying it far from the first page of results. Notably, Google has experimented with using its own artificial intelligence to generate images for search results, but those images don’t include real people, and certainly nothing risqué. Google partnered with experts on the subject, as well as victims of non-consensual deepfakes, to make its response system more robust.
Google has allowed people to request the removal of explicit deepfakes for a while, but the proliferation and improvement of generative AI image tools mean more needs to be done. The removal request system has been streamlined to make submissions easier and responses faster. When a request is received and confirmed to be valid, Google’s algorithms will also work to filter out similar explicit results related to the same person.
Victims also won’t have to manually go through every variation of a search query that might surface the content; Google’s systems will automatically find and remove duplicates of the image. Nor is the effort limited to a specific image file: Google will proactively limit related content as well. That matters given the nature of the internet, where material can be duplicated and spread across multiple platforms and websites. Google already does this for real but non-consensual images, and the system will now cover deepfakes too.
The method also shares some similarities with Google’s recent efforts to combat unauthorized deepfakes, explicit or not, on YouTube. Previously, YouTube would simply label such content as AI-created or potentially misleading. Now, the depicted person or their attorney can file a privacy complaint, and YouTube gives the video’s owner a couple of days to remove it before reviewing the complaint to determine whether it’s valid.
Deepfakes buried deep
Content removal isn't 100% perfect, as Google knows all too well. That's why the fight against explicit deepfakes also includes an updated ranking system. The new ranking targets search queries with a higher risk of surfacing explicit deepfakes: Google Search will now try to reduce the visibility of explicit fake content, and of websites associated with spreading it, especially when a query includes someone's name.
For example, say you’re searching for a news article about how deepfakes of a specific celebrity went viral and the celebrity testified before lawmakers about the need for regulation. Google Search will try to make sure you see those news stories and related articles on the topic, not the deepfakes being discussed.
Google is not alone
Given the complex and evolving nature of generative AI technology and its potential for abuse, addressing the spread of harmful content requires a multifaceted approach, and Google isn’t alone in tackling the problem or working on solutions. Explicit deepfakes have appeared on Facebook, Instagram, and Meta’s other platforms, and the company has updated its policies as a result, with its Oversight Board recently recommending that Meta change its guidelines to directly cover explicit AI-generated content and improve its appeals process.
Lawmakers are also responding to the issue, with the New York state legislature passing a bill targeting non-consensual AI-generated pornography as part of its “revenge porn” laws. At the federal level, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (NO FAKES Act) was introduced in the U.S. Senate this week to address both explicit content and the non-consensual use of deepfake images and voices. Similarly, Australia’s legislature is working on a bill to criminalize the creation and distribution of non-consensual explicit deepfakes.
Still, Google can already point to some success in the fight against explicit deepfakes. The company says its early tests of these changes have reduced the appearance of explicit deepfake images by more than 70%. However, Google has not yet declared victory over explicit deepfakes.
“These changes are important updates to our protections in Search, but there is more work to do to address this issue and we will continue to develop new solutions to help people affected by this content,” Google product manager Emma Higham explained in a blog post. “And because this challenge goes beyond search engines, we will continue to invest in industry-wide partnerships and engaging experts to address it as a society.”