The Federal Trade Commission (FTC) has moved to ban the use of artificial intelligence tools to impersonate people, and has announced expanded powers to recover money stolen by scammers.
The agency said it is “taking this action in light of growing complaints about impersonation fraud, as well as public outcry over harm caused to consumers and impersonated individuals.”
The rise of public generative AI tools like ChatGPT means that cybercriminals can impersonate brands and organizations with greater precision and ease. Synthetic images, voices and videos featuring high-profile figures can be generated in moments; these fabrications, known as deepfakes, have proliferated at a worrying rate.
New powers
The FTC also said it is “seeking comment on whether the revised rule should make it illegal for a business, such as an artificial intelligence platform that creates images, videos, or text, to provide goods or services that it knows or has reason to know is being used to harm consumers through impersonation.”
FTC Chair Lina M. Khan added that the agency wants to expand the proposals in its impersonation rule to cover individuals, not just governments and companies, to “[strengthen] the FTC’s toolkit to address AI-based scams impersonating people.”
The commission said it is making these expansions in response to public comments on its earlier proposals, which “pointed out the additional threats and harms posed by impersonation.”
The FTC says the expansion “will help the agency deter fraud and ensure redress for harmed consumers.”
The FTC also finalized the Government and Business Impersonation Rule, which gives the agency stronger tools to fight fraudsters who abuse AI to impersonate real entities.
The agency will now be able to file cases directly in federal court to force cybercriminals to return the profits of their impersonation scams. The FTC considers this a significant step, noting that a previous Supreme Court ruling (AMG Capital Management LLC v. FTC) “significantly limited the agency's ability to require defendants to return money to harmed consumers.”
Threat actors who misuse logos, email addresses or web addresses, or who falsely imply affiliation with companies and governments, can now be taken to court by the FTC to “directly seek monetary compensation.”
The commission approved the final rule by a 3-0 vote. It will be published in the Federal Register.