AI can mimic a human voice well enough that deepfaked audio fools many people into believing they are hearing a real person speak. Inevitably, AI voices have been exploited to make automated phone calls. The U.S. Federal Communications Commission (FCC) is trying to combat the most malicious of these schemes with a proposal aimed at strengthening consumer protections against unwanted and illegal AI-generated robocalls.
The FCC plan would help define AI-generated calls and texts, allowing the commission to set limits and rules, such as requiring AI voices to disclose that they are fake when they call.
AI’s usefulness in both legitimate communications and less-than-savory activities makes it unsurprising that the FCC is seeking to regulate the technology. The proposal is also part of the FCC’s broader effort to combat robocalls, which are both a nuisance and a vehicle for fraud. Because AI makes these schemes harder to detect and prevent, the proposal would require disclosure of AI-generated voices and words: a call would have to begin with the AI explaining the artificial origins of both what it is saying and the voice used to say it. Any group that failed to comply would face heavy fines.
The new plan follows the FCC’s declaratory ruling earlier this year that voice cloning technology in robocalls is unlawful without the consent of the person receiving the call. That ruling followed an incident in which a deepfaked voice clone of President Joe Biden, combined with caller ID spoofing, was used to spread misleading information to New Hampshire voters ahead of the January 2024 primary election.
Calling on AI for help
In addition to going after the sources of AI calls, the FCC said it also wants to implement tools that alert people when they receive robocalls and AI-generated texts, particularly those that are unwanted or illegal. That could include better call filters that block such calls before they reach consumers, AI-based detection algorithms, or improved caller ID that identifies and flags AI-generated calls. For consumers, the proposed regulations offer a welcome layer of protection against the increasingly sophisticated tactics used by scammers. By requiring transparency and improving detection tools, the FCC aims to reduce the risk of consumers falling victim to AI-generated scams.
AI-generated synthetic voices have also been used for many positive initiatives. For example, they can give people who have lost their voices the ability to speak again and open up new communication options for the visually impaired. The FCC acknowledged this in its proposal, even as it cracks down on the harm the same tools can cause.
“In the face of a rising tide of misinformation, roughly three-quarters of Americans say they are concerned about misleading content generated by artificial intelligence. That’s why the Federal Communications Commission has focused its work on AI, grounding it in a key tenet of democracy: transparency,” FCC Chairwoman Jessica Rosenworcel said in a statement. “Concern about these technological advances is real. And rightly so. But by focusing on transparency and taking swift action when we find fraud, I believe we can look beyond the risks of these technologies and reap the benefits.”