Powerful new technology may never have presented such a sweeping set of regulatory challenges for the US government. Ahead of the state's January primary, New Hampshire Democrats received robocalls playing AI-generated fake audio recordings of President Biden urging them not to vote. Now imagine political deepfakes that, for example, incite Americans to violence. That scenario is not hard to conjure given new research from New York University that describes the spread of false, hateful or violent content on social networks as the biggest digital risk to the 2024 elections.
The two of us have helped craft and enforce some of the most consequential social media decisions in modern history, including banning revenge porn on Reddit and banning Donald Trump from Twitter. So we have seen firsthand how well it has worked to rely entirely on social media companies to regulate themselves and moderate their own content.
The verdict: not good.
Toxic content abounds on our largely unregulated social media platforms, which have already helped fuel the attempted insurrection at the United States Capitol on January 6, 2021, and the attempted coup in Brazil on January 8, 2023. The dangers will only worsen as layoffs hit the industry, the Supreme Court and Congress decline to address these problems head-on, and inscrutable CEOs launch dramatic changes at their companies. Wide access to new and increasingly sophisticated technology for creating realistic deepfakes, such as the AI-generated fake pornographic images of Taylor Swift, will make deceptions easier to spread.
The status quo for social media companies in the United States resembles an unregulated airline industry. Imagine if we didn't track flight times or delays, or record accidents and investigate why they occurred. Imagine if we never heard about rogue pilots or dangerous passengers, and those people were never barred from future flights. Airlines would have less idea of what needed fixing and where the problems were. They would also face less accountability. The social media industry's lack of standards and of metrics for tracking safety and harm has produced a race to the bottom.
Just as the National Transportation Safety Board and the Federal Aviation Administration oversee aviation, an agency should regulate American technology companies. Congress can create an independent authority responsible for establishing and enforcing baseline safety and privacy rules for social media companies. To ensure compliance, the agency should have access to relevant company information and documents, along with the authority to hold noncompliant companies accountable. If or when things go wrong, the agency should have the power to investigate what happened, just as the transportation board can investigate Boeing after its recent mishaps.
Containing the damage done by social media is a difficult task. But we have to start somewhere, and attempts to ban platforms only after they have become hugely influential, as some US lawmakers are trying to do with TikTok, simply create an endless game of whack-a-mole.
Platforms can track the number of accounts removed, the number of posts taken down, and the reasons those actions were taken. It should also be feasible to create an industry-wide database of hashed but traceable device IDs for phones and IP addresses that have been used to commit privacy, safety, and other rule violations, including links to the posts and activity that were the basis for the decision to catalog the person and the device.
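To make that database idea concrete, here is a minimal sketch in Python, assuming a salted-hash scheme so identifiers stay traceable across platforms without exposing raw device IDs or IP addresses. The names here (ViolationRecord, SHARED_SALT, hash_identifier) are illustrative assumptions, not any platform's actual system.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical: in practice this would be a securely managed secret
# shared across participating platforms, so the same raw identifier
# always hashes to the same key industry-wide.
SHARED_SALT = b"industry-shared-salt"

def hash_identifier(raw: str) -> str:
    """Hash a device ID or IP address so it is traceable across
    reports without storing or revealing the raw value."""
    return hashlib.sha256(SHARED_SALT + raw.encode("utf-8")).hexdigest()

@dataclass
class ViolationRecord:
    hashed_device_id: str
    hashed_ip: str
    rule_violated: str  # e.g., "privacy", "safety"
    # Links to the posts and activity that were the basis for the decision.
    evidence_links: list[str] = field(default_factory=list)
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Usage: two platforms hashing the same device ID get the same key,
# so a repeat offender can be recognized without sharing raw identifiers.
record = ViolationRecord(
    hashed_device_id=hash_identifier("device-1234"),
    hashed_ip=hash_identifier("203.0.113.7"),
    rule_violated="privacy",
    evidence_links=["https://example.com/post/1"],
)
```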
Companies should also disclose how algorithms are used to moderate content, along with details of their safeguards against bias (research indicates, for example, that automated hate speech detection can exhibit racial bias and amplify race-based harm). At a minimum, companies should be prohibited from accepting payments from terrorist groups seeking to verify social media accounts, as the Tech Transparency Project found X (formerly Twitter) was doing.
People often forget how much content is already removed from social media, including through child pornography bans, spam filters, and suspensions of individual accounts like the one that tracked Elon Musk's private jet. Regulating these private companies to prevent harassment, harmful data sharing, and misinformation is a necessary and natural extension of that work on safety, privacy, and the user experience.
Protecting user privacy and security requires research into and knowledge of how social media companies operate, how their current policies were written, and how their content moderation decisions have historically been made and enforced. Trust and safety teams, whose members perform the essential work of content moderation and hold vital institutional knowledge, have recently been cut at companies such as Amazon, Twitter, and Google. Those layoffs, combined with the growing number of people who pursued tech careers only to meet uncertainty in the private sector, leave the labor market full of people with the skills and knowledge to address these issues. A new agency could hire them to build practical and effective solutions.
Technology regulation is a rare issue with bipartisan support. And in 2018, Congress created an agency, the Cybersecurity and Infrastructure Security Agency, to protect government cybersecurity. It can and should create another regulatory agency to address the threats posed by legacy and emerging technologies from domestic and foreign companies alike. Otherwise, we will keep lurching from one social media disaster to the next.
Anika Collier Navaroli is a journalist, attorney, and senior fellow at the Tow Center for Digital Journalism at Columbia Journalism School. She is also a former senior policy official at Twitter and Twitch. Ellen K. Pao is a technology investor and advocate, former CEO of Reddit, and co-founder of the award-winning diversity and inclusion nonprofit Project Include.