Last month, the Senate Judiciary Subcommittee on Crime and Counterterrorism held a hearing on what many consider a mental health crisis among adolescents. Two of the witnesses were parents of children who had died by suicide in the past year, and both believed that AI chatbots were complicit in their children's deaths. In one lawsuit, a couple now alleges that ChatGPT told their son about specific methods to end his life and even offered to help him write a suicide note.
In the run-up to the September Senate hearing, OpenAI co-founder Sam Altman took to the company blog, offering his views on how corporate principles are shaping its response to the crisis. The challenge, he wrote, is to balance OpenAI's dual commitments to safety and freedom.
ChatGPT obviously shouldn't act as a de facto therapist for teens showing signs of suicidal ideation, Altman argues in the post. But because the company values user freedom, the solution is not to insert heavy-handed programming commands that would keep the bot from talking about self-harm at all. Why? “If an adult user asks for help writing a fictional story describing a suicide, the model should help with that request.” In the same post, Altman promises that age restrictions are coming, but similar efforts to keep young users off social media have proved woefully inadequate.
I'm sure it's quite difficult to create a massive, open-access software platform that is safe for my three children and useful for me. Still, I find Altman's reasoning deeply troubling, not least because if your first impulse when writing a book about suicide is to ask ChatGPT about it, you probably shouldn't be writing a book about suicide. More importantly, Altman's lofty talk of “freedom” reads as empty moralizing designed to obscure an unfettered drive for faster development and greater profits.
Of course, that's not what Altman would say. In a recent interview with Tucker Carlson, Altman suggested that he had thought this all through very carefully and that the company's deliberations about which questions its AI should and shouldn't answer are based on conversations with “like hundreds of moral philosophers.” I contacted OpenAI to see if it could provide a list of those thinkers. It didn't respond. So, as someone who teaches moral philosophy at Boston University, I decided to take a look at Altman's own words to see if I could get a sense of what he means when he talks about freedom.
The political philosopher Montesquieu once wrote that there is no word with as many definitions as freedom. With the stakes this high, then, it is imperative that we look to Altman's own definition. The businessman's writings give us some important, if perhaps disturbing, clues. Last summer, in a much-discussed post titled “The Gentle Singularity,” Altman had this to say about the concept:
“Society is resilient, creative and adapts quickly. If we can harness the collective will and wisdom of people, then although we will make many mistakes and some things will go very wrong, we will learn and adapt quickly and be able to use this technology for maximum benefit and minimum harm. Giving users a lot of freedom, within the broad limits that society has to decide on, seems very important. The sooner the world can start a conversation about what these broad boundaries are and how we define collective alignment, the better.”
The CEO of OpenAI is painting with frighteningly broad brushstrokes here, and such sweeping generalizations about “society” tend to fall apart quickly. More to the point: here is Altman, who supposedly cares so much about freedom, outsourcing the task of defining its limits to “collective wisdom.” And please, society, start that conversation quickly, he says.
Clues from other parts of the public record give us a better idea of Altman's true intentions. During the Carlson interview, for example, Altman links freedom with “personalization.” (He does the same in a recent chat with the German businessman Mathias Döpfner.) For OpenAI, this means the ability to create an experience specific to each user, complete with “the traits you want it to have, how you want it to talk to you, and the rules you want it to follow.” It's no coincidence that these features are primarily available on newer GPT models.
And yet, Altman is frustrated that users in countries with stricter AI restrictions can't access these newer models quickly enough. In Senate testimony this summer, Altman referenced a “joke” among his team about how OpenAI has “this big new thing that's not available in the EU and a handful of other countries because they have this long process before a model can come out.”
The “long process” Altman is talking about is simply regulation: rules aimed, at least in part, experts say, at “protecting fundamental rights, ensuring justice and not undermining democracy.” But something that became increasingly clear as Altman's testimony progressed is that he wants only minimal regulation of AI in the US:
“We need to give adult users a lot of freedom to use AI the way they want and trust that they will be responsible with the tool,” Altman said. “I know there is increasing pressure in other parts of the world, and in some parts of the United States, not to do that, but I think this is a tool and we need to make it a powerful and capable tool. Of course, we will put up some guardrails within very broad limits, but I think we need to give a lot of freedom.”
There's that word again. When it comes down to it, Altman's definition of freedom is not some rarefied philosophical notion. It's just deregulation. That's the ideal Altman is balancing against the mental health and physical safety of our children. That's why he resists putting limits on what his chatbots can and cannot say. And that's why regulators should step in and stop him. Because Altman's freedom is not worth risking our children's lives.
Joshua Pederson is a professor of humanities at Boston University and the author of “Sin Sick: Moral Injury in War and Literature.”