- Sam Altman says OpenAI will soon pass 1 million GPUs and is aiming for 100 million more
- Running 100 million GPUs could cost around $3 trillion and push against the limits of global energy infrastructure
- OpenAI's expansion into Oracle and TPUs signals growing impatience with current cloud limits
OpenAI says it is on track to operate more than one million GPUs by the end of 2025, a figure that already puts it far ahead of its rivals in terms of compute resources.
However, for the company's CEO, Sam Altman, that milestone is only a beginning: "we will cross well over 1 million GPUs brought online by the end of this year," he said.
The comment, delivered offhandedly, has sparked serious discussion about the feasibility of deploying 100 million GPUs in the foreseeable future.
A vision far beyond the current scale
To put this figure in perspective, Elon Musk's xAI runs Grok 4 on approximately 200,000 GPUs, meaning OpenAI's planned one-million-GPU scale is already five times that number.
Expanding to 100 million, however, would imply astronomical costs, estimated at around $3 trillion, and would raise major challenges in manufacturing, energy consumption, and physical deployment.
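As a rough sense-check of those figures, the sketch below runs the back-of-envelope arithmetic; the roughly $30,000 average price per data-center GPU is an assumption for illustration, not a number from the source.

```python
# Back-of-envelope sanity check of the scale and cost figures above.
# The ~$30,000 average price per data-center GPU is an assumed figure,
# not one stated in the article.

XAI_GPUS = 200_000              # approximate GPUs reported for Grok 4
OPENAI_2025_GPUS = 1_000_000    # OpenAI's stated end-of-2025 target
TARGET_GPUS = 100_000_000       # Altman's "100x" aspiration
ASSUMED_PRICE_PER_GPU = 30_000  # USD, rough assumption

print(f"OpenAI 2025 target vs. xAI: {OPENAI_2025_GPUS / XAI_GPUS:.0f}x")
print(f"Scale-up from 1M GPUs to the target: {TARGET_GPUS / OPENAI_2025_GPUS:.0f}x")
print(f"Hardware cost at the assumed price: ${TARGET_GPUS * ASSUMED_PRICE_PER_GPU / 1e12:.1f} trillion")
# -> 5x, 100x, and roughly $3.0 trillion, in line with the estimate above
```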
"Very proud of the team, but now they better get to work figuring out how to 100x that lol," Altman wrote.
While Microsoft Azure remains OpenAI's main cloud platform, the company has also partnered with Oracle and is reportedly exploring Google's TPU accelerators.
This diversification reflects an industry-wide trend, with Meta, Amazon, and Google also moving toward in-house chips and greater reliance on high-bandwidth memory (HBM).
SK Hynix is one of the companies likely to benefit from this expansion: as GPU demand increases, so does demand for HBM, a key component in AI training.
According to insiders in the data center industry, "in some cases, the specifications of GPUs and HBMs … are determined by customers (such as OpenAI) … configured according to customer applications."
SK Hynix's earnings have already seen strong growth, with forecasts pointing to a record operating profit in the second quarter of 2025.
OpenAI's collaboration with SK Group also appears to be deepening. Chairman Chey Tae-won and CEO Kwak Noh-jung recently met with Altman, reportedly to strengthen the group's position in the AI infrastructure supply chain.
The relationship builds on earlier ties, such as SK Telecom's ChatGPT-related competition and participation in the MIT GenAI Impact Consortium.
That said, OpenAI's rapid expansion has raised concerns about financial sustainability, with reports that SoftBank may be reconsidering its investment.
If OpenAI's 100-million-GPU goal materializes, it will require not only capital but also major advances in compute efficiency, manufacturing capacity, and global energy infrastructure.
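To illustrate why energy infrastructure makes that list, here is a minimal sketch assuming roughly 1 kW of total facility power per GPU (accelerator plus cooling and networking overhead); the per-GPU figure is an assumption for illustration, not a number from the source.

```python
# Rough illustration of the power question raised above.
# The ~1 kW of facility power per GPU (chip + cooling + networking overhead)
# is an assumed figure for illustration only.

TARGET_GPUS = 100_000_000
ASSUMED_KW_PER_GPU = 1.0  # kW of total facility power per GPU (assumption)

total_gw = TARGET_GPUS * ASSUMED_KW_PER_GPU / 1e6  # kW -> GW
print(f"Continuous draw at that scale: ~{total_gw:.0f} GW")
# ~100 GW, on the order of the output of dozens of large power plants,
# which is why the goal collides with global energy infrastructure limits.
```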
For now, the goal looks aspirational, a bold statement of intent rather than a practical roadmap.
Via Tom's Hardware