OpenAI and Anthropic sign agreements with the US AI Safety Institute


OpenAI and Anthropic have signed agreements with the US government, offering their cutting-edge AI models for safety testing and research. A NIST announcement on Thursday revealed that the US AI Safety Institute will have access to the technologies “both before and after their public release.”

Under the respective Memoranda of Understanding (formal but non-legally binding agreements) signed by the two AI giants, the AISI can evaluate the capabilities of their models and help identify and mitigate any safety risks.

The AISI, formally established by NIST in February 2024, focuses on the priority actions outlined in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023. These actions include developing standards for the safety and security of AI systems. The institute is supported by the AI Safety Institute Consortium, whose members include Meta, OpenAI, NVIDIA, Google, Amazon, and Microsoft.

Elizabeth Kelly, Director of AISI, said in the press release: “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

“These agreements are just the beginning, but they are an important milestone as we work to help responsibly manage the future of AI.”

SEE: Generative AI Definition: How it Works, Benefits and Dangers

Jack Clark, co-founder and chief policy officer at Anthropic, told TechRepublic via email: “Safe and trustworthy AI is crucial to the positive impact of technology. Our collaboration with the US AI Safety Institute leverages their extensive expertise to rigorously test our models before widespread deployment.

“This strengthens our ability to identify and mitigate risks, which promotes responsible AI development. We are proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI.”

Jason Kwon, OpenAI's chief strategy officer, told TechRepublic via email: “We strongly support the mission of the US AI Safety Institute and look forward to working together to inform best practices and safety standards for AI models.

“We believe the Institute has a critical role to play in defining U.S. leadership in the responsible development of artificial intelligence, and we hope that our work together will provide a framework upon which the rest of the world can build.”

AISI to work with UK AI Safety Institute

The AISI also plans to work closely with the UK’s AI Safety Institute when providing feedback to OpenAI and Anthropic on potential safety improvements to their models. In April, the two countries formally agreed to collaborate on developing safety tests for AI models.

This agreement was adopted to fulfill the commitments made at the first Global AI Safety Summit last November, where governments around the world accepted their role in safety testing of the next generation of AI models.

Following Thursday’s announcement, Anthropic’s Jack Clark posted on X: “Third-party testing is a really important part of the AI ecosystem and it’s been amazing to see governments creating safety institutes to facilitate this.

“This work with the US AISI will build on previous work we did this year, where we worked with the UK AISI to conduct a pre-deployment test of Sonnet 3.5.”

Claude 3.5 Sonnet is Anthropic's latest AI model, released in June.

Since ChatGPT’s launch, AI companies and regulators have been at odds over the need for strict AI regulation, with regulators pushing for safeguards against risks like misinformation and companies arguing that overly strict rules could stifle innovation. Major Silicon Valley players have lobbied for a voluntary framework that would allow government oversight of their AI technologies rather than strict regulatory mandates.

At the national level, the US approach has been more industry-friendly, focusing on voluntary guidelines and collaboration with tech companies, as seen in non-binding initiatives such as the AI Bill of Rights and the Executive Order on AI. In contrast, the EU has taken a stricter regulatory path with the AI Act, setting out legal requirements on transparency and risk management.

On Wednesday, the California State Assembly passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, also known as SB-1047 or the California AI Act, a move that stands in contrast to the national outlook on AI regulation. The next day, the bill was approved by the state Senate; it now needs only Governor Gavin Newsom’s signature to become law.

OpenAI, Meta, and Google have all written letters to California lawmakers expressing concerns about SB-1047, emphasizing the need for a more cautious approach to avoid hindering the growth of AI technologies.

SEE: OpenAI, Microsoft, and Adobe back California bill on watermarks for artificial intelligence

Following Thursday’s announcement of his company’s agreement with the US AISI, OpenAI CEO Sam Altman posted on X that he considered it “important that this be done at a national level,” in an apparent dig at California’s SB-1047. Unlike a voluntary MOU, violating the state legislation would carry penalties.

Meanwhile, the UK's AI Safety Institute faces financial challenges

Since the transition from Conservative to Labour leadership in early July, the UK government has made a number of notable changes to its approach to AI.

According to Reuters sources, the government has cancelled the office it had planned to open in San Francisco this summer, which was intended to cement relations between the UK and the Bay Area’s AI titans. Technology Secretary Peter Kyle has also reportedly dismissed Nitarshan Rajkumar, a senior policy adviser and co-founder of the UK’s AISI.

WATCH: UK government announces £32m for AI projects after cutting funding for supercomputers

Reuters sources added that Kyle plans to cut direct government investment in the industry. In fact, earlier this month the government shelved £1.3 billion of funding that had been earmarked for artificial intelligence and technological innovation.

In July, Chancellor of the Exchequer Rachel Reeves said public spending was on track to exceed the budget by £22 billion and immediately announced cuts of £5.5 billion, including to the Investment Opportunities Fund, which supports projects in the digital and technology sectors.

Days before the Chancellor’s speech, Labour appointed tech entrepreneur Matt Clifford to develop the “AI Opportunities Action Plan”, which will identify how AI can be best harnessed nationally to drive efficiency and reduce costs. His recommendations will be published in September.

According to Reuters sources, Clifford met with ten representatives from established venture capital firms last week to discuss this plan, including how the government can adopt AI to improve public services, support university spin-offs and make it easier for startups to hire internationally.

Behind the scenes, however, things are reportedly far from calm, with one aide telling Reuters they were “stressing they only had a month to resolve the review.”
