An FAQ from the future: How we fought and beat deepfakes

Cast your mind forward. It is November 8, 2028, the day after another presidential election. This one went smoothly (there were no allegations of rampant manipulation, nor any significant taint of chicanery), due in large part to the defeat of deepfakes, the new enemy of democracy.

Is such a future possible? So far, neither the government nor the tech industry has agreed on effective safeguards against deepfakes. But this FAQ (written from five years in the future) shows that the events of 2024 may well force the issue, and that a solution is possible.

Why did it take so long to find an effective way to fight deepfakes?

At the end of 2022, sophisticated, low-cost artificial intelligence software appeared that made it easy to create realistic fake audio, video, and photographs: so-called deepfakes. As these generative AI programs rapidly improved, it became clear that deepfake content would be a danger to democracy.

Political deepfakes soon emerged, in both audio and video: President Biden announcing that Americans would be drafted to fight in Ukraine. A photo of Donald Trump hugging and kissing Dr. Anthony Fauci. Sen. Elizabeth Warren (D-Mass.) telling MSNBC that Republicans should not be allowed to vote in 2024. Eric Adams, the monolingual mayor of New York, speaking Spanish, Yiddish, and Mandarin in AI-produced robocalls.

Very quickly, the White House, the European Union, and major technology companies launched wide-ranging AI regulation proposals that included “watermarking” AI content: inserting an identification tag, a permanent piece of computer code, into the digital file of any AI-generated content to identify its artificial origin.

But establishing AI rules proved complex, and labeling exemplified the dilemmas: Would AI watermarking be legally required? How would it be enforced? As early as 2023, some mobile phone cameras used AI in their image processing. How much AI input into a piece of content would trigger an identifier? Would an Instagram beauty influencer need to watermark her retouched selfies?

The complications were such that no system was widely adopted.

What changed?

The largest coordinated deepfake attack in history took place the day after the November 2024 election. All US social media channels were flooded with fake audio, video, and still images purporting to show election fraud in a dozen contested states, highly realistic content that was seen by millions within hours. Debunking efforts by the media and the government were hampered by a steady stream of new deepfakes, mostly manufactured in Russia, North Korea, China, and Iran. The attack generated legal and civil chaos that lasted well into the spring of 2025.

So none of the early authentication efforts was adopted?

Correct. The breakthrough actually came in early 2026, from a working group of digital journalists at American and international news organizations. Their goal was to find a way to keep deepfakes out of the news and protect what credibility the mainstream media still had. It was a logical task: Journalists are historically ruthless in punishing their peers for misconduct, breaking out the tar and feathers for even minor deviations from factual rigor.

Journalism organizations formed the FAC Alliance (“Factually Authenticated Content”) based on a simple insight: there were already far too many AI deepfakes in the world for any watermarking regime aimed at misinformation and disinformation to work, and even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark the content that was not deepfaked.

And thus the voluntary FACStamp was born on May 1, 2026.

What does a FACStamp look like?

For consumers, FAC-stamped content displays a small “FAC” icon in the corner of the screen or includes an audio FAC notice. Users can turn the signal off or set it to appear only for the first five or 10 seconds of a media stream.

FACStamps are entirely voluntary. But every FAC Alliance member committed that its print, broadcast, and internet reports would carry only FAC-stamped media in their news sections.

How does content qualify for a FACStamp?

Newer phones, tablets, cameras, recorders, and desktop computers include software that automatically inserts the FACStamp code into each piece of visual or audio content as it is captured, before any AI modification can be applied. This proves that the image, sound, or video was not generated by AI. A downloadable FAC app does the same thing for older devices. The FACStamp is what technologists call “fragile”: the first time an AI program alters an image, video, or audio file, the stamp disappears.
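For the technically curious, the “fragile” property can be illustrated with a minimal sketch. The FACStamp is fictional and its design is never specified here, so the code below only shows one standard way such a stamp could work, assuming a device-held secret key: the device signs the raw bytes at capture time, and any later change to those bytes, however small, breaks verification.

```python
import hashlib
import hmac
import os

# Hypothetical sketch only: the FACStamp is fictional, so this simply
# illustrates the general idea of a "fragile" stamp. A device-held secret
# key signs the raw bytes at capture time; any later change to the bytes,
# AI-made or otherwise, breaks verification.

DEVICE_KEY = os.urandom(32)  # in practice, a key kept in secure hardware

def stamp_at_capture(media_bytes: bytes) -> bytes:
    """Compute a fragile stamp over the untouched capture."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).digest()

def verify_stamp(media_bytes: bytes, stamp: bytes) -> bool:
    """Return True only if the bytes are exactly as captured."""
    expected = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, stamp)

photo = b"...raw sensor output..."
stamp = stamp_at_capture(photo)
assert verify_stamp(photo, stamp)             # pristine capture passes
assert not verify_stamp(photo + b"x", stamp)  # any modification fails
```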

But AI is often used legitimately, for tasks like reducing the background noise in an audio recording. Can FAC-stamped content not be edited at all?

It certainly can. But to keep the FACStamp, your computer must be connected to the nonprofit FAC Verification Center. The center’s computers detect whether an edit is minor, such as a crop or even a cosmetic touch-up of a face, in which case the stamp remains. With any major manipulation, from a swapped face to a faked background, the FACStamp disappears.
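The article never says how the center draws the minor-versus-major line, but one plausible mechanism is a perceptual hash, which barely changes under crops or cosmetic touch-ups yet shifts dramatically under face swaps or replaced backgrounds. A toy sketch, with an illustrative hash and threshold that are assumptions, not the FAC Center’s actual method:

```python
# Hypothetical sketch of a minor-vs-major edit check using a tiny
# "average hash": 1 bit per pixel, set when the pixel is above the mean.
# Small visual tweaks flip few bits; wholesale manipulation flips many.

def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash of a grayscale image."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def stamp_survives(original, edited, threshold: int = 4) -> bool:
    """Keep the stamp only if the edit is perceptually minor."""
    return hamming(average_hash(original), average_hash(edited)) <= threshold

original  = [[10, 200, 10, 200]] * 4
touch_up  = [[12, 198, 12, 202]] * 4   # cosmetic adjustment
face_swap = [[200, 10, 200, 10]] * 4   # wholesale manipulation

print(stamp_survives(original, touch_up))   # True  -> FACStamp remains
print(stamp_survives(original, face_swap))  # False -> FACStamp disappears
```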

How did FACStamps spread beyond journalism?

It turned out that plenty of people could use the FACStamp. Internet retailers adopted FACStamps for the videos and images of their products, and individuals soon followed, using them to sell goods online; when potential buyers are judging a used truck or a secondhand sofa, it’s comforting to know that the image wasn’t generated or touched up by AI.

In 2027 the stamp began to appear on social media. Any parent can generate a perfectly realistic artificial image of their happy family standing in front of the Eiffel Tower and post it or email it to envious friends. A FACStamp proves that the family was really there.

Dating app profiles without FACStamps are finally becoming rare. Video conferencing apps have FAC options to ensure that everyone on the call is real. And for influencers, it’s increasingly difficult to claim “authenticity” without at least the occasional FACStamp.

Even the AI industry supports the use of FACStamps. When a program is trained on internet data, if it absorbs excessive amounts of AI-generated content instead of authentic data, it can suffer “model collapse” and become wildly inaccurate. FACStamps thus help AI companies train their models only on reality.
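In practice that would amount to screening a scraped corpus before training. A minimal sketch, where the stamp check is a stand-in for the fictional FAC verification:

```python
# Hypothetical sketch: FACStamps as a training-data filter. check_stamp
# stands in for the fictional FAC verification; the point is only that a
# scraped corpus can be screened so a model trains on authentic media.

def build_training_set(scraped_items, check_stamp):
    """Keep only items whose fragile stamp still verifies."""
    authentic, rejected = [], []
    for item in scraped_items:
        (authentic if check_stamp(item) else rejected).append(item)
    print(f"kept {len(authentic)}, dropped {len(rejected)} unstamped items")
    return authentic

# Toy usage: items are (media bytes, has_valid_stamp) pairs.
corpus = [(b"real photo", True), (b"deepfake", False), (b"real audio", True)]
train = build_training_set(corpus, check_stamp=lambda item: item[1])
```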

What’s next?

A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It would mandate the use of FACStamps in multiple sectors, including local government, shopping sites, and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polls indicate broad public support for the law, and the FAC Alliance has already begun a branding campaign.

The slogan: “Is that a FAC?”

Michael Rogers is an author and futurist whose most recent book is “Email From the Future: Notes From 2084.”
