As I write this, I feel a bit like a broken record, reporting once again that the United States Supreme Court could, with a ruling today (Monday or soon after), decide the fate of social media as we know it. The justices could choose to leave content moderation decisions in the hands of these companies, or effectively recast the platforms as bastions of free and unrestricted speech.
The two cases, one from Florida and the other from Texas, center on claims that Facebook, X (formerly Twitter) and other social media platforms unfairly ban content, primarily from right-wing users and sources. The claims mostly arise from a flashpoint in American history: the January 6 riots and the storming of the US Capitol. In the aftermath, Facebook, Twitter, YouTube and other social media companies removed then-President Donald Trump over fears that new posts from him would spark more violence. Around the same time, the companies also moved to remove what they considered dangerous misinformation about the COVID-19 pandemic and vaccines. If the states prevail in the two cases, the rulings would radically curtail social media companies' ability to moderate and remove content.
It's one of the most important and closely watched sets of cases before the US Supreme Court this year, and for the justices the question perhaps comes down to what these platforms really are: publishers or public utilities. Except I don't think it's that simple.
Can a platform be both a curator of content and a utility? Probably not, though even a utility company must ensure that whatever it delivers (electricity, gas, water, communications or television signals) is of high quality and safe for human consumption.
Thanks to Section 230 of the Communications Decency Act of 1996, social media platforms have enjoyed the protections and privileges of public utilities without suffering the liabilities faced by curators and editors of content such as The New York Times.
Moderation in a changing world
Naturally, those rules were written long before the age of social media and decades before we understood how social media consumption affects our surprisingly malleable minds. Sure, we had AOL and bulletin boards in the '80s and '90s, but nothing really prepared us for Twitter (now X), Facebook, LinkedIn, TikTok, Instagram, YouTube, and the other platforms that consume our screens and much of our days. (Have you checked your screen time lately?)
Young minds are especially vulnerable to algorithmically curated feeds that hit us like fire hoses of highly targeted content and ideas. Social media has the power to shape our perception of the world or, at the very least, reinforce it. In some ways, social media has functioned in opposition to publishing and the media, which once sought to illuminate and rationalize the world through the discovery and sharing of facts.
When politicians, businesses, global powers, and others realized that they could not merely participate in social media but could also shape the messages conveyed on it, they took advantage of the platforms' agnostic stance. Twitter and YouTube didn't care what you posted. Content moderation rules back then were basic and obvious, aimed primarily at adult content, as if that were our biggest problem. No one foresaw the future, and certainly no one foresaw January 6th.
This won't end well
If the Supreme Court rules in favor of Texas and Florida, social media will change, but not in a good way. You can already see what it could look like on X, which is awash in misinformation (a lot of it coming directly from Musk himself).
SCOTUS could strip these platforms of their most basic moderation capabilities. It would be like telling the water plant that it cannot purify our water without first explaining to each customer why the lead and other contaminants are being removed. Social networks are in a constant battle with content and information polluters. If platforms can't automate at least some of that moderation, they will be overrun and quickly become unusable.
Do I think these platforms should be able to explain their decisions when asked? Yes, but I guarantee you that YouTube and the others make millions of automated decisions every day. Most of them go unappreciated because we can't miss what we never see, and we should be grateful for most of what we never see.
Over the past few years, I've heard from people on both sides of the political spectrum who believe social media companies have unfairly banned them. I take that as a sign of some balance. Social networks aren't looking to ban you specifically; they block content that is objectionable, questionable, or even blatantly false or dangerous. They make mistakes (plenty of them), but they get far more decisions right than wrong.
Censorship is bad when it happens, but social media without content moderation would be downright dangerous. The US Supreme Court needs to know this and understand that there is no such thing as perfect content moderation and that a world without it would likely be perfectly horrible.