To the editor: I had a visceral reaction to Anita Chabria's recent column ("The Pentagon demands that Claude AI be used however it pleases. Claude told me that's 'dangerous,'" February 26). It seems the United States has taken another page from Joseph Stalin's playbook on the path to dictatorship, but now with far more sophistication, thanks to artificial intelligence.
The key factor in the fall of the Romanov dynasty to the Bolsheviks was that angry minority's recognition of the quintessential value of opportunity and chaos. Especially in 1917, their carefully orchestrated communication via railroads and telegraph systems was rapid and coordinated.
Today's AI systems (like Claude) seem to be the ultimate tool for controlling today's chaos and communication. Stalin didn't have AI, but he did have his own versions of low-tech surveillance: spies, the KGB, gulags, intimidation techniques, etc. And the United States has them too: masked ICE agents, detention centers, tear gas.
The American people, if not our legislatures, must ensure that we have rules and regulations that control the unchecked use of this powerful tool by powerful individuals.
Darlene Pienta, San Marcos
..
To the editor: If Dario Amodei, CEO of Anthropic, wants the Trump administration to understand his discomfort with President Trump's demand that the Department of Defense be allowed to use Anthropic's AI for "any lawful purpose" ("Anthropic refuses to bow to Pentagon on AI safeguards," March 3), then I suggest that Amodei cite the quote often attributed to Ralph Waldo Emerson: "What you do speaks so loudly that I can't hear what you say."
Amodei is smart to limit the DoD's license to specific purposes rather than broadly to "any lawful purpose." The reason is simple: Trump's record, in his personal life, his business career and his role as president, clearly indicates that he cannot be trusted to act lawfully.
Trump recently confessed his belief that his presidential powers are limited only by his own morality. Basically, Trump believes that his presidential actions cannot be restricted by the Constitution or any law, treaty, contractual commitment or any other framework.
And it is precisely this weak moral compass that has led him, in his personal life, to become an adjudicated sexual abuser; in his business life, to be sued thousands of times, declared a fraudster and convicted as a felon; and in his political life, to be impeached twice (so far).
Combine Trump's low regard for working within any legal limits with the broad immunity the Supreme Court granted him last year, and Amodei is right to worry that limiting the Defense Department's use of his company's AI to "any lawful purpose" is too weak a compliance standard under a Trump-led federal government.
Amodei should be commended for his bravery in walking away from a lucrative, high-profile deal because it was inconsistent with his company's mission.
Todd Piccus, Venice
..
To the editor: What should Anthropic do now? Go to Europe (or Canada), where it could operate more successfully, free from the burdensome and childish impositions of President Trump and Defense Secretary Pete Hegseth.
Here is a company that prides itself on the ethical use of its products and is being forced by our government to betray that pride ("Trump orders federal agencies to stop using Anthropic AI after Pentagon standoff," February 27). What does this say about the ethical character of that same government?
Ken Johnson, Santa Barbara
..
To the editor: If all the tech companies join Anthropic and say their products cannot be used for mass surveillance of Americans or in fully autonomous weapons operations, then Trump will have no choice but to rescind his order for U.S. government agencies to stop using Anthropic's technology. There is strength in unity, even against a strong leader.
Richie Locasso, Hemet