X blocks searches for Taylor Swift: what to know about viral AI deepfakes


Social media platform X has blocked searches for one of the world's most popular personalities, Taylor Swift, after explicit AI images of the singer-songwriter went viral.

Deepfakes flooded various social media sites, from Reddit to Facebook. This has renewed calls to strengthen legislation around AI, particularly when it is misused for sexual harassment.

Here's what you need to know about the Swift episode and the legality of deepfakes.

What happened to Taylor Swift?

On Wednesday, AI-generated sexually explicit images began circulating on social media sites, particularly gaining traction on X. One image of the megastar was viewed 47 million times during the roughly 17 hours it was live on X before being removed on Thursday.

Deepfake detection group Reality Defender told The Associated Press news agency that it tracked dozens of unique images that spread to millions of people online before being removed.

X has banned Swift searches and photo-related queries, and instead displays an error message.

Instagram and Threads, while allowing Swift searches, display a warning message when specifically searching for images.

[Image: Search warning shown for "Taylor Swift AI" on Instagram]

What do platforms like X and AI sites say?

On Friday, X said it was removing the images and taking action against accounts that violated its policies.

Meta also issued a statement condemning the content, adding that it will “take appropriate action as necessary.”

“We are closely monitoring the situation to ensure that any further violations are addressed immediately and the content is removed,” the company said.

OpenAI said it has safeguards in place to limit the generation of harmful content on platforms such as ChatGPT and that it declines requests that ask for a public figure, including Taylor Swift, by name.

Microsoft, which offers an image generator based in part on OpenAI's DALL-E model, said on Friday that it was investigating whether its tool had been misused.

Kate Vredenburgh, assistant professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics, noted that social media business models are often built around sharing content, and that platforms' focus is usually on cleaning up after such an event.

Swift has not issued any statement regarding the images. The pop star was seen at an NFL game in the United States on Sunday, cheering on her boyfriend Travis Kelce as his Kansas City Chiefs advanced to the Super Bowl.

What is a deepfake and how can it be misused?

Swift is not the first public figure to be targeted with deepfakes, a form of "synthetic media": images, video or audio manipulated or generated using artificial intelligence.

Images and videos can be generated from scratch by prompting AI tools with descriptions of what to display. Alternatively, someone's face in an existing video or image can be swapped with another person's, such as that of a public figure.

Hoax videos are commonly made this way to show politicians endorsing statements they never made, or people engaging in sexual activity they never engaged in.

Recently in India, sexually explicit deepfakes of actress Rashmika Mandanna went viral on social media, causing an uproar. A gaming app used a deepfake of cricket icon Sachin Tendulkar to promote its product. And in the United States, the campaign of former Republican presidential candidate Ron DeSantis produced deepfakes showing former President Donald Trump, who is leading the race for the party's presidential nomination, kissing public health specialist Anthony Fauci, a figure hated by many conservatives due to his advocacy for masks and vaccines during the COVID-19 pandemic.

While some deepfakes are easy to identify because of their poor quality, others can be much harder to distinguish from real footage. Several generative AI tools, such as Midjourney, Deepfakes web and DALL-E, are available to users for free or at low cost.

More than 96 percent of deepfake images currently online are pornographic in nature and almost all of them target women, according to a report from Sensity AI, an intelligence company focused on detecting deepfakes.

Is there any legislation that can protect online users?

Legislation specifically addressing deepfakes varies by country and typically ranges from requiring disclosure of deepfakes to banning harmful or malicious content.

Ten US states, including Texas, California and Illinois, have criminal laws against deepfakes. Lawmakers are pushing for a federal bill or further restrictions on the technology. US Representative Yvette Clarke, a Democrat from New York, has introduced legislation that would require creators to place a digital watermark on deepfake content.

Non-consensual sexually explicit deepfakes could also constitute a violation of broader existing laws. The United States has no federal law specifically criminalizing this type of deepfake, but it does have state and federal laws addressing privacy violations, fraud and harassment.

In 2019, China implemented laws requiring disclosure of deepfake use in videos and media. In 2023, the United Kingdom banned the sharing of deepfake pornography as part of its Online Safety Act.

South Korea enacted a law in 2020 that criminalizes the distribution of deepfakes that cause harm to the public interest, imposing penalties of up to five years in prison or fines reaching around 50 million won ($43,000) to deter misuse.

In India, the federal government in December issued an advisory to social media and internet platforms to protect against deepfakes that contravene India's IT regulations. Deepfakes themselves are not illegal but, depending on the content, they may violate some of India's IT regulations.

Hesitation about stricter regulation often stems from concerns that it could slow technological progress.

Vredenburgh disputed the idea that regulatory or design changes could not be made to at least significantly reduce such incidents. Sometimes the social attitude can also be that such incidents are a price worth paying for these tools, which, according to Vredenburgh, marginalizes the victims' perspective.

“It portrays them as a small minority of society that could be affected for the good of all of us,” she said. “And that is a very socially awkward position for all of us.”

How is the world reacting?

The White House said it is “alarmed” by the images, while Swift's fan base, called Swifties, mobilized to take action against them.

“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and intimate, non-consensual images of real people,” White House Press Secretary Karine Jean-Pierre said at a press conference.

US lawmakers also expressed the need to introduce safeguards.

Since Wednesday, the singer's fans have been reporting the accounts sharing the images and have launched a counteroffensive on X under the hashtag #ProtectTaylorSwift, flooding the platform with positive images of Swift.

“We often rely on users and those affected, or people who sympathize with them, to do the hard work of putting pressure on companies,” Vredenburgh said, adding that not everyone can mobilize the same kind of pressure, and whether such outrage will translate into lasting change remains a concern.
