Will OpenAI's early sharing of future AI models with the government improve AI safety or simply allow AI to write the rules?

OpenAI CEO Sam Altman said in a post on X this week that the company is partnering with the US AI Safety Institute and will give the government agency early access to its next major AI model for safety testing.

Altman described the plan as part of a broader new push for AI safety measures, which could significantly impact ChatGPT and other OpenAI products in the coming years. It could also be part of a public relations and policy push against critics who say OpenAI is no longer prioritizing AI safety.
