Beverly Hills school rocked by AI images of naked students

The new face of bullying is real. It's the body beneath that face that's fake.

Last week, officials and parents at Beverly Vista Middle School in Beverly Hills were shocked by reports that fake images showing real faces of students on artificially generated naked bodies were circulating online. According to the Beverly Hills Unified School District, the images were created and shared by other students at Beverly Vista, the district's only school for grades six through eight. At last count, about 750 students are enrolled there.

The district, which is investigating, has joined a growing number of educational institutions around the world grappling with fake images, videos and audio. In Westfield, New Jersey; Seattle; Winnipeg; Almendralejo, Spain; and Rio de Janeiro, people using “deepfake” technology have seamlessly spliced legitimate images of students with artificially generated or fraudulent images of naked bodies. And in Texas, someone allegedly did the same thing to a teacher, grafting a woman's head onto a pornographic video.

Beverly Hills Unified officials said they were prepared to impose the harshest disciplinary actions allowed under state law. “Any student who is creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary action, including, but not limited to, a recommendation for expulsion,” they said in a statement mailed to parents last week.

However, deterrence may be the only tool at their disposal.

There are dozens of apps available online that “undress” someone in a photo, simulating what that person would look like if they had been naked when the picture was taken. The apps use AI-powered inpainting technology to remove the pixels representing clothing and replace them with an image approximating that person's naked body, said Rijul Gupta, founder and CEO of Deep Media in San Francisco.

Other tools allow a target person to be “face swapped” with another person's naked body, said Gupta, whose company specializes in detecting AI-generated content.

Versions of these programs have existed for years, but the earlier ones were expensive, harder to use and less realistic. Today, AI tools can clone realistic images and churn out deepfakes in a matter of seconds, even on a smartphone.

“The ability to manipulate [images] has been democratized,” said Jason Crawforth, founder and CEO of Swear, whose technology authenticates video and audio recordings.

“You used to need 100 people to create something fake. Today it takes just one, and soon that person will be able to create 100” in the same amount of time, he said. “We have moved from the information age to the misinformation age.”

Artificial intelligence tools “have escaped Pandora's box,” said Seth Ruden of BioCatch, a company that specializes in detecting fraud using behavioral biometrics. “We're starting to see the magnitude of the potential damage that could be created here.”

If children can access these tools, “it won't just be a problem with deepfake images,” Ruden said. The potential risks extend to creating images of victims “doing something very illicit and using it as a way to extort money from them or blackmail them into taking a specific action,” he said.

Reflecting the wide availability of cheap and easy-to-use deepfake tools, the amount of non-consensual deepfake pornography has skyrocketed. According to Wired, a study by an independent researcher found that 113,000 deepfake porn videos were uploaded to the 35 most popular sites for this type of content in the first nine months of 2023. At that rate, the researcher found, more would be produced by the end of 2023 than in all previous years combined.

At Beverly Vista, Principal Kelly Skon met with nearly all students in all three grades Monday as part of her regularly scheduled “administrative talks” to discuss a number of issues raised by the incident, she said in a note to parents.

Among other things, Skon said she asked students to “reflect on how they use social media and not be afraid to leave any situation that doesn't align with their values,” and to “make sure their social media accounts are private and don't let people they don't know follow their accounts.”

Another point she made to the students, Skon said in her note, was that “there are Bulldog students who are suffering from this event, and that is to be expected given what happened. We are also seeing courage and resilience from these students as they try to regain normalcy in their lives after this scandalous act.”

What can be done to protect against deepfake nudes?

Federal and state officials have taken some steps to combat fraudulent use of AI. According to the Associated Press, six states have banned non-consensual deepfake porn. In California and some other states that do not have criminal laws specifically against deepfake pornography, victims of this form of abuse can sue for damages.

The tech industry is also trying to find ways to combat malicious and fraudulent use of AI. DeepMedia has joined several of the world's largest media and artificial intelligence companies in the Coalition for Content Provenance and Authenticity, which has developed standards for flagging images and sounds to identify when they have been digitally manipulated.

Swear is taking a different approach to the same problem, using blockchains to maintain immutable records of files in their original state. Comparing the current version of a file with its record on the blockchain will show whether, and exactly how, the file has been altered, Crawforth said.
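The comparison itself is conceptually simple. Below is a minimal Python sketch of that kind of integrity check, assuming the original file's SHA-256 fingerprint was already anchored somewhere tamper-evident; the ledger lookup and file names here are hypothetical placeholders, not Swear's actual system.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical stand-in for the immutable record: a real system would query
# a blockchain entry written when the file was first captured or published.
ORIGINAL_RECORDS = {
    "classroom_video.mp4": "3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855e",
}

def is_unaltered(path: str) -> bool:
    """True only if the file's current fingerprint matches its original record."""
    recorded = ORIGINAL_RECORDS.get(path)
    return recorded is not None and fingerprint(path) == recorded
```

Because any change to the file's bytes changes the digest, a mismatch signals that the file was altered after its record was made; it does not, by itself, say what was changed.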

Those standards could help identify and potentially block deepfake media files online. With the right combination of approaches, Gupta said, the vast majority of deepfakes could be blocked from spreading into or out of a school's or company's network.

However, one of the challenges is that several AI companies have released open-source versions of their applications, allowing developers to create customized versions of generative AI programs. That's how the AI “undressing” apps emerged, for example, Gupta said. And those developers may ignore the standards the industry develops, just as they may try to remove or bypass the flags that would identify their content as artificially generated.

Meanwhile, security experts warn that the images and videos people upload to social media every day provide a rich source of material that stalkers, scammers and other bad actors can exploit. And they don't need much to create a persuasive fake, Crawforth said; he has seen a demonstration of Microsoft technology that can create a convincing clone of someone's voice from just three seconds of their audio online.

“There is no content that cannot be copied and manipulated,” he said.

The risk of being victimized probably won't deter many, if any, teens from sharing photos and videos digitally. So the best form of protection for those who want to document their lives online may be “poison pill” technology that changes the metadata of the files they upload to social networks, hiding them from online searches for photos or recordings.

“Poison pills are a great idea. That's something we're looking into as well,” Gupta said. But to be effective, social media platforms, smartphone photo apps and other common content-sharing tools would have to add the poison pills automatically, he said, because people can't be counted on to do it consistently themselves.
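The poison-pill tools Gupta describes work at the metadata level. As a much simpler cousin of that idea, the sketch below (Python with the Pillow library; file names are hypothetical) shows how an image can be re-saved with only its pixel data, stripping the EXIF and other metadata that scrapers and search tools can latch onto; real poison-pill systems would instead rewrite that metadata automatically and at platform scale.

```python
from PIL import Image

def scrub_metadata(src: str, dst: str) -> None:
    """Re-save an image with only its pixel data, dropping EXIF and other metadata."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)   # new image carries no metadata
        clean.putdata(list(img.getdata()))      # copy pixels only
        clean.save(dst)

# Example (hypothetical file names): scrub a photo before sharing it.
# scrub_metadata("vacation.jpg", "vacation_clean.jpg")
```

As Gupta notes, steps like this only help if they happen automatically in the apps people already use, rather than relying on each user to remember them.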
