OpenAI has developed an internal scale to chart the progress of its large language models toward artificial general intelligence (AGI), Bloomberg reports.
Typically, AGI refers to AI with human-like intelligence and is considered the overarching goal of AI developers. OpenAI has previously defined AGI as “a highly autonomous system that outperforms humans at most economically valuable tasks,” a bar that sits far beyond current AI capabilities. The new scale aims to provide a structured framework for tracking progress and setting benchmarks in that pursuit.
The scale introduced by OpenAI breaks progress down into five levels, or milestones, on the path to AGI. ChatGPT and its rival chatbots sit at Level 1. OpenAI claims to be on the cusp of reaching Level 2, an AI system capable of matching a human with a PhD at solving basic problems. That could be a reference to GPT-5, which OpenAI CEO Sam Altman has said will be a “significant leap forward.” After Level 2, the levels grow increasingly ambitious: Level 3 would be an AI agent capable of handling tasks on your behalf without your involvement, while a Level 4 AI would invent new ideas and concepts of its own. At Level 5, AI could take over the work not just of an individual but of an entire organization.
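For illustration, the five milestones can be thought of as an ordered classification. The minimal Python sketch below encodes them as an enumeration; the tier names are paraphrased from the descriptions above, since OpenAI has not published a formal schema for the scale.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Illustrative encoding of OpenAI's reported five-level scale.

    Names are paraphrases of the milestone descriptions, not an
    official OpenAI artifact.
    """
    CHATBOTS = 1       # conversational AI such as ChatGPT
    REASONERS = 2      # human PhD-level problem solving on basic tasks
    AGENTS = 3         # systems that handle tasks on a user's behalf
    INNOVATORS = 4     # AI that invents new ideas and concepts
    ORGANIZATIONS = 5  # AI that can do the work of an entire organization

# Where today's chatbots sit on the scale
current = AGILevel.CHATBOTS
print(f"Level {current.value}: {current.name}")  # Level 1: CHATBOTS
```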
Level up
The idea of tiers makes sense for OpenAI, or any developer really. In fact, a comprehensive framework not only helps OpenAI internally, but can also set a universal standard that could be applied to evaluate other AI models.
Achieving AGI won’t happen overnight, though. Previous comments from Altman and others at OpenAI suggest it could arrive in as little as five years, but timelines vary significantly among experts. The computing power required, along with the financial and technological challenges involved, is substantial.
Add to this the ethical and safety questions that AGI raises. There are very real concerns about what AI at that level could mean for society, and OpenAI’s recent moves may not reassure anyone. In May, the company disbanded its safety team following the departure of its leader, OpenAI co-founder Ilya Sutskever. Senior researcher Jan Leike also resigned, citing concerns that OpenAI’s safety culture was being sidelined. Still, by offering a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors, and perhaps help us all prepare for what’s next.