Fake and explicit images of Taylor Swift sweep social networks


Fake, sexually explicit images of Taylor Swift, likely generated by artificial intelligence, spread rapidly across social media platforms this week, disturbing fans who saw them and reigniting calls from lawmakers to protect women and crack down on the platforms and technology that disseminate such images.

An image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted fake images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies' efforts to remove them.

While X said it was working to remove the images, fans of the pop superstar flooded the platform in protest. They posted related keywords, along with the phrase “Protect Taylor Swift,” in an effort to drown out the explicit images and make them harder to find.

Reality Defender, a cybersecurity company focused on AI detection, determined that the images were likely created using a diffusion model, an AI-powered technology accessible through more than 100,000 publicly available applications and models, said Ben Colman, the company's co-founder and chief executive.

As the AI industry has grown, companies have been quick to release tools that allow users to create images, videos, text, and audio recordings with simple prompts. Artificial intelligence tools are wildly popular, but they have made it easier and cheaper than ever to create so-called deepfakes, which portray people doing or saying things they have never done.

Researchers now fear that deepfakes are becoming a powerful force of misinformation, allowing everyday Internet users to create non-consensual nude images or embarrassing depictions of political candidates. Artificial intelligence was used to create fake robocalls from President Biden during the New Hampshire primary, and Ms. Swift appeared this month in fake ads selling kitchen utensils.

“There has always been a dark undercurrent on the Internet, non-consensual pornography of various kinds,” said Oren Etzioni, a computer science professor at the University of Washington who works on deepfake detection. “Now it's a new strain that is particularly harmful.”

“We're going to see a tsunami of these explicit AI-generated images. The people who generated this see it as a success,” said Etzioni.

X said it had a zero-tolerance policy toward such content. “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them,” a representative said in a statement. “We are closely monitoring the situation to ensure any additional violations are addressed immediately and the content is removed.”

Although many of the companies that produce generative AI tools prohibit their users from creating explicit images, people find ways to break the rules. “It's an arms race, and it seems like every time someone comes up with a guardrail, someone else figures out how to escape,” Etzioni said.

The images originated from a channel on the messaging app Telegram that is dedicated to producing these types of images, according to 404 Media, a technology news site. But the deepfakes attracted widespread attention after being posted on X and other social media services, where they spread rapidly.

Some states have restricted pornographic and political deepfakes. But the restrictions have not had a strong impact, and there are no federal regulations for this type of fake imagery, Colman said. Platforms have tried to address deepfakes by asking users to report them, but that method has not worked, he added. By the time the images are flagged, they have already been viewed by millions of users.

“The toothpaste is already out of the tube,” he said.

Ms. Swift's publicist, Tree Paine, did not immediately respond to requests for comment Thursday evening.

The deepfakes of Ms. Swift sparked fresh calls for action from lawmakers. Rep. Joe Morelle, a New York Democrat who introduced a bill last year that would make such image sharing a federal crime, said on X that the spread of the images was “appalling,” adding: “It's happening to women everywhere, every day.”

“I have repeatedly warned that AI could be used to generate non-consensual intimate images,” Senator Mark Warner, Democrat of Virginia and chairman of the Senate Intelligence Committee, said of the images on X. “This is a deplorable situation.”

Representative Yvette D. Clarke, Democrat of New York, said advances in artificial intelligence had made the creation of deepfakes easier and cheaper.

“What happened to Taylor Swift is nothing new,” she said.