The most formidable supercomputer of all time is preparing for ChatGPT-5: thousands of "old" AMD GPU accelerators trained a model with 1 trillion parameters

The world's most powerful supercomputer has used just over 8% of its GPUs to train a large language model (LLM) with one trillion parameters, a size reportedly comparable to OpenAI's GPT-4.

Frontier, based at Oak Ridge National Laboratory, used 3,072 of its AMD Instinct GPUs to train the trillion-parameter model, and used 1,024 of those GPUs (about 2.5%) to train a model with 175 billion parameters, essentially the same size as the model behind ChatGPT.