AMD has unveiled new CPU, NPU, and GPU architectures aimed at “driving end-to-end AI infrastructure from the data center to PCs,” along with an expanded AMD Instinct accelerator roadmap and a new accelerator, the Instinct MI325X, which it says will be available in Q4 2024.
The new Instinct MI325X offers 288 GB of HBM3E memory and 6 TB/s of memory bandwidth. AMD says this gives it twice the memory capacity and 1.3 times the bandwidth of “the competition” — i.e. the Nvidia H200 — as well as 1.3 times better compute performance.
The memory upgrade is the main change here: the MI325X uses the same CDNA 3 architecture as the MI300X, and clock speeds appear unchanged at 2.1 GHz.
Looking to the future
Following the Instinct MI325X will be the Instinct MI350 series. Expected to be available in 2025, it will be powered by the new CDNA 4 architecture, which AMD says will deliver up to a 35x increase in AI inference performance compared to the Instinct MI300 series.
This will be followed in 2026 by the AMD Instinct MI400 series, which will be based on AMD's CDNA Next-Gen architecture. Understandably, the company hasn't gone into too much detail here.
“AMD Instinct MI300X accelerators continue their strong adoption by numerous partners and customers, including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo and others, a direct result of the exceptional performance and value proposition of the AMD Instinct MI300X accelerator,” said Brad McCredie, corporate vice president, Accelerated Data Center Computing, AMD.
“With our updated annual product cadence, we are relentless in our pace of innovation, delivering the leading capabilities and performance the AI industry and our customers expect to drive the next evolution of AI training and inference in data centers.”