OpenAI secrets were stolen in 2023 after an internal forum was hacked


The online forum that OpenAI employees use for confidential internal communications was breached last year, anonymous sources told The New York Times. The hackers obtained details about the design of the company’s AI technologies from forum posts, but they did not infiltrate the systems where OpenAI hosts and builds its AI.

OpenAI executives disclosed the incident to the entire company at an all-hands meeting in April 2023 and also informed the board of directors. However, the breach was not made public because no customer or partner information had been stolen.

The executives did not inform the authorities, according to the sources, because they did not believe the hacker was linked to a foreign government and therefore concluded the incident did not pose a threat to national security.

An OpenAI spokesperson told TechRepublic in an email: “As we shared with our Board and employees last year, we identified and remediated the underlying issue and continue to invest in security.”

How did some OpenAI employees react to this attack?

News of the forum breach was a cause for concern for other OpenAI employees, the NYT reported: they thought it indicated a vulnerability at the company that state-sponsored hackers could exploit in the future. If OpenAI’s cutting-edge technology fell into the wrong hands, it could be used for nefarious purposes that endanger national security.

SEE: Study reveals OpenAI's GPT-4 can autonomously exploit 87% of one-day vulnerabilities

Moreover, the way executives handled the incident led some employees to question whether OpenAI was doing enough to protect its proprietary technology from foreign adversaries. Leopold Aschenbrenner, a former technical program manager at the company, said in a podcast with Dwarkesh Patel that he was fired after raising these concerns to the board.

OpenAI denied this in a statement to The New York Times, also saying it disagreed with Aschenbrenner’s “characterizations of our security.”

More security news from OpenAI, including the ChatGPT app for macOS

The forum breach isn’t the only recent indication that security isn’t OpenAI’s top priority. Last week, data engineer Pedro José Pereira Vieito revealed that the new ChatGPT app for macOS stored chat data in plain text, meaning malicious actors could easily read those conversations if they gained access to the Mac. After The Verge flagged the vulnerability, OpenAI released an update that encrypts the stored chats, the company said.

An OpenAI spokesperson told TechRepublic in an email: “We are aware of this issue and have submitted a new version of the app that encrypts these conversations. We are committed to providing a useful user experience while maintaining our high security standards as our technology evolves.”
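For readers unfamiliar with what “encrypting stored chats” involves, the sketch below shows one common approach on macOS: using Apple’s CryptoKit to encrypt conversation data at rest with AES-GCM. This is a hypothetical illustration, not OpenAI’s actual implementation; the function names and the in-memory key are assumptions made for the example, and in practice the key would be kept in the macOS Keychain so the ciphertext on disk is useless without it.

import Foundation
import CryptoKit

// Hypothetical sketch only, not OpenAI's code: encrypt chat data at rest
// with AES-GCM. In a real macOS app the symmetric key would be stored in
// the Keychain rather than generated and held in memory like this.

func encryptChat(_ plaintext: String, using key: SymmetricKey) throws -> Data {
    let sealedBox = try AES.GCM.seal(Data(plaintext.utf8), using: key)
    // .combined packs nonce + ciphertext + authentication tag into one blob
    return sealedBox.combined!
}

func decryptChat(_ stored: Data, using key: SymmetricKey) throws -> String {
    let sealedBox = try AES.GCM.SealedBox(combined: stored)
    let plaintext = try AES.GCM.open(sealedBox, using: key)
    return String(decoding: plaintext, as: UTF8.self)
}

// Usage sketch
let key = SymmetricKey(size: .bits256)
do {
    let blob = try encryptChat("Example conversation text", using: key)
    print(try decryptChat(blob, using: key)) // prints the original text
} catch {
    print("Encryption error: \(error)")
}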

SEE: Millions of Apple apps were vulnerable to CocoaPods supply chain attack

In May 2024, OpenAI published a statement saying it had disrupted five covert influence operations originating in Russia, China, Iran, and Israel that sought to use its models for “deceptive activities.” The activities that were detected and blocked included generating comments and articles, fabricating names and bios for social media accounts, and translating text.

That same month, the company announced that it had formed a Safety and Security Committee to develop the processes and safeguards it will use while developing its frontier models.

Is the OpenAI forum hack indicative of more AI-related security incidents?

Dr. Ilia Kolochenko, partner and head of the cybersecurity practice at Platt Law LLP, said he believes this security incident on the OpenAI forum is likely one of many. He told TechRepublic in an email: “The global AI race has become a matter of national security for many countries, therefore state-backed cybercriminal groups and mercenaries are aggressively targeting AI vendors, from talented startups to tech giants like Google or OpenAI.”

Hackers target valuable AI intellectual property, such as large language models, training data sources, technical research and business information, Dr. Kolochenko added. They may also deploy backdoors to control or disrupt operations, similar to recent attacks on critical national infrastructure in Western countries.

He added: “All corporate users of GenAI vendors need to be particularly careful and cautious when sharing or giving access to their proprietary data for LLM training or tuning, as their data – which ranges from attorney-client privileged information and trade secrets of major industrial or pharmaceutical companies to classified military information – is also in the crosshairs of AI-hungry cybercriminals who are poised to step up their attacks.”

Is it possible to mitigate security breach risks when developing AI?

There is no simple way to mitigate every risk of a security breach by foreign adversaries when developing new AI technologies. OpenAI cannot discriminate against workers based on their nationality, nor does it want to limit its talent pool by hiring only in certain regions.

It’s also difficult to prevent AI systems from being used for nefarious purposes before that misuse comes to light. A study by Anthropic found that LLMs were only slightly more useful than standard internet access to malicious actors trying to acquire or design biological weapons. Another study, by OpenAI, came to a similar conclusion.

On the other hand, some experts argue that, while AI algorithms do not pose a major threat today, they could become dangerous as they grow more advanced. In November 2023, representatives from 28 countries signed the Bletchley Declaration, which called for global cooperation to address the challenges posed by AI. “There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models,” it reads.
