- Sam Altman defended OpenAI's safety efforts after Elon Musk blamed ChatGPT for multiple deaths
- Altman called AI safety “really difficult” and highlighted the balance between protection and usability
- OpenAI faces multiple wrongful death lawsuits over claims that ChatGPT worsened users' mental health
OpenAI CEO Sam Altman isn't known for sharing much about the inner workings of ChatGPT, but he has admitted he is struggling to keep the AI chatbot both safe and useful. The admission was prompted by Elon Musk's scathing posts on X (formerly Twitter), in which Musk warned people not to use ChatGPT and shared an article alleging a connection between the AI assistant and nine deaths.
The heated social media exchange between two of the most powerful figures in artificial intelligence produced more than bruised egos and fresh legal scars. Musk's post did not address the broader context of the deaths or the lawsuits OpenAI faces related to them, but Altman clearly felt compelled to respond.
His response was far more candid than the usual corporate boilerplate. He offered a look at the thinking behind OpenAI's tightrope walk of keeping ChatGPT and its other AI tools safe for millions of people, and defended ChatGPT's architecture and safety guardrails: “We need to protect vulnerable users while ensuring that our guardrails continue to allow all of our users to benefit from our tools.”
“Sometimes you complain that ChatGPT is too restrictive, and then in cases like this you claim that it is too relaxed. Almost a billion people use it and some of them may be in very fragile mental states. We will continue to do our best to get it right and we feel huge…” https://t.co/U6r03nsHzg (January 20, 2026)
After defending OpenAI's safety protocols and the complexity of balancing harm reduction with product utility, Altman suggested that Musk lacked the standing to make such accusations, given the dangers of Tesla's Autopilot system.
He said his own experience with Autopilot left him “not at all certain that Tesla would launch it.” In a jab aimed squarely at Musk, he added: “I won't even get started on some of Grok's decisions.”
As the exchange bounced around the platforms, what stood out most was not the usual billionaire posturing but Altman's unusually candid framing of what AI safety really entails. For OpenAI, a company that deploys ChatGPT for schoolchildren, therapists, programmers, and CEOs alike, defining “safe” means threading the needle between being useful and avoiding harm, goals that often conflict.
Altman has not commented publicly on the individual wrongful death lawsuits filed against OpenAI. He has, however, insisted that recognizing real-world harm does not require oversimplifying the problem. AI reflects its inputs, and its evolving responses mean that moderation and safety require more than the usual terms of service.
The fight for ChatGPT safety
OpenAI says it has worked hard to make ChatGPT safer with each new version. The chatbot has a comprehensive set of safety features designed to detect signs of distress, including suicidal ideation: it issues disclaimers, halts certain interactions, and directs users to mental health resources when it detects warning signs. OpenAI also says its models will refuse to engage with violent content whenever possible.
The public might assume this is simple, but Altman's post reveals an underlying tension. ChatGPT operates in billions of unpredictable conversations across languages, cultures, and emotional states. Moderation that is too rigid would make the AI useless in many of those circumstances, while relaxing the rules too much would multiply the risk of dangerous and unhealthy interactions.
Comparing AI chatbots to self-driving cars isn't a perfect analogy, despite Altman's jab. That said, one could argue that while the roads are regulated regardless of whether a human or a robot is behind the wheel, AI conversation travels a far bumpier path. There is no central traffic authority dictating how a chatbot should respond to a teenager in crisis or to someone with paranoid delusions. In that void, companies like OpenAI must write their own rules and refine them as they go.
The personal element adds another layer to the argument. Altman's and Musk's companies are locked in a protracted legal battle. Musk is suing OpenAI and Altman over the company's transition from a nonprofit research lab to a capped-profit model, alleging that he was deceived when he donated $38 million to help found the organization and that the company now prioritizes corporate profits over public benefit. Altman says the change was necessary to build competitive models and keep AI development on a responsible path. The safety conversation is one philosophical and engineering facet of a war being fought in boardrooms and courts over what OpenAI should be.
Whether or not Musk and Altman ever agree on the risks, or even speak politely online, all AI developers would do well to follow Altman's lead and be more transparent about what AI safety looks like and how to achieve it.