Social media platform X, formerly known as Twitter, has taken steps to limit the spread of explicit deepfake images of Taylor Swift after they went viral last week.
Amid protests from Swift's fans on social media, lawmakers and the actors union SAG-AFTRA, X made the Grammy winner's name unsearchable on its platform over the weekend. As of Monday afternoon, searching for Swift's name without quotes produced an error page that said: “Something went wrong. Try reloading.”
“This is a temporary action and is being done out of an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement shared with the Associated Press.
Last week, several explicit AI-generated images of the “Bejeweled” and “Cruel Summer” singer circulated on X. The doctored images were pornographic and referenced the 34-year-old's high-profile romance with Kansas City Chiefs tight end Travis Kelce.
Hours after the images emerged on Thursday, the X security team reminded users of its “zero tolerance policy” on sharing “non-consensual nude (NCN) images.” The statement, which did not explicitly name Swift, also said that users who posted the images would be held responsible.
“We are closely monitoring the situation to ensure that any additional violations are addressed immediately and the content is removed,” the X security account added. “We are committed to maintaining a safe and respectful environment for all users.”
X's new search restriction is not without loopholes, however. As of Monday afternoon, searching for Swift's full name in quotes, or adding additional words after the phrase “Taylor Swift,” surfaced posts, replies and images as usual, including graphic deepfakes of Swift.
Tesla CEO Elon Musk, who officially took over Twitter in October 2022, made cuts to the platform's moderation team, which was tasked with enforcing rules against harmful content.
A representative for X did not immediately respond Monday to the Times' request for comment.
Swift has yet to publicly address the explicit images, but the controversy has reignited conversations about artificial intelligence and the need for greater oversight, especially as AI imaging continues to overwhelmingly affect women and children.
“The spread of explicit AI-generated images of Taylor Swift is appalling and sadly, it's happening to women everywhere, every day,” New York Rep. Joe Morelle said in a tweet Thursday.
“It's sexual exploitation,” he added, before promoting his proposed Preventing Deepfakes of Intimate Images Act, a bill that would make it illegal to share deepfake pornography without the consent of the people portrayed.
News of the Swift AI images raised alarms at the highest levels, as Microsoft CEO Satya Nadella and White House Press Secretary Karine Jean-Pierre separately addressed the controversy on Friday.
“This is very alarming. Therefore, we will do everything we can to address this problem,” Jean-Pierre said during a press conference, according to Reuters. “So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and intimate, non-consensual images of real people.”
SAG-AFTRA, which secured terms governing artificial intelligence in its 2023 contract, called the AI-generated images of Swift “disturbing, harmful and deeply concerning.”
“The development and dissemination of false images, especially those of a lewd nature, without someone's consent should be illegal,” the union said in a statement Friday. “As a society, we have it in our power to control these technologies, but we must act now before it is too late.”