OpenAI, the technology company behind ChatGPT, has announced the formation of a “Safety and Security Committee” aimed at making the company's approach to AI development more responsible and consistent on matters of safety and security.
It's no secret that OpenAI and CEO Sam Altman, who will sit on the committee, want to be the first to achieve AGI (Artificial General Intelligence), widely understood as artificial intelligence that resembles human intelligence and can learn on its own. Having recently introduced GPT-4o to the public, OpenAI is already training its next-generation GPT model, which it hopes will bring it one step closer to AGI.
GPT-4o was introduced to the public on May 13 as a next-level multimodal (capable of processing multiple 'modes' of input) generative AI model, able to handle and respond with audio, text, and images. Its reception was generally positive, but debate has since emerged about the model's actual capabilities, its implications, and the ethics surrounding technologies like it.
Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed notable departures of key figures at the company, including OpenAI co-founder and chief scientist Ilya Sutskever and Jan Leike, co-lead of the “superalignment” AI safety team. Leike's departure was reportedly tied to his concerns that OpenAI, and Altman in particular, were not doing enough to develop the company's technologies responsibly and were forgoing due diligence.
This has apparently given OpenAI a lot to think about, and the oversight committee was formed in response. In announcing the committee's formation, OpenAI also states that it welcomes a “robust debate at this important time.” The committee's first job will be to “evaluate and further develop OpenAI's processes and safeguards” over the next 90 days, and then share recommendations with the company's board of directors.
What happens after 90 days?
Recommendations that OpenAI subsequently agrees to adopt will be shared publicly “in a manner that is consistent with safety and security.”
The committee will consist of chairman Bret Taylor, Quora CEO Adam D'Angelo, and Nicole Seligman, a former Sony Entertainment executive, along with six OpenAI employees, including the aforementioned Sam Altman and John Schulman, an OpenAI researcher and co-founder. According to Bloomberg, OpenAI said it will also consult external experts as part of the process.
I'll reserve my judgment until the recommendations OpenAI adopts are published and I can see how they're implemented, but intuitively, I don't have much confidence that OpenAI (or any major tech company) is prioritizing safety and ethics as highly as winning the AI race.
It's unfortunate that, generally speaking, those who strive to be the best no matter what are often slow to consider the costs and effects of their actions, and how those actions might impact others in very real ways, even when a large number of people could potentially be affected.
I'd be happy to be proven wrong, and I hope I am. In an ideal world, all technology companies, whether they're in the AI race or not, would prioritize the ethics and safety of what they do at the same level at which they strive to innovate. So far in the AI space, that doesn't seem to be the case from my point of view, and unless there are real consequences, I don't see companies like OpenAI being pushed to meaningfully change their overall ethos or behavior.