Intel, Google, Microsoft, Meta, Cisco and other tech giants have announced the formation of the Ultra Accelerator Link Promoter Group (UALink), a strategic move aimed at curbing Nvidia's dominance in the AI accelerator market.
The group, which also includes AMD, Hewlett Packard Enterprise and Broadcom, seeks to develop a new industry standard for high-speed, low-latency communications for expanded AI systems in data centers, competing directly with Nvidia's NVLink.
The group's proposed specification, UALink 1.0, will allow up to 1,024 AI accelerators to be connected within a single computing pod, enabling direct memory loads and stores between accelerators such as GPUs. The UALink Consortium, expected to be established in the third quarter of 2024, will oversee development of the standard. UALink 1.0 is expected to become available around the same time, with a higher-bandwidth update, UALink 1.1, scheduled for Q4 2024.
Nvidia under attack
Sachin Katti, senior vice president and general manager of the Network and Edge Group at Intel Corporation, said: “UALink is an important milestone for the advancement of AI computing. Intel is proud to co-lead this new technology and contribute our expertise in creating an open and dynamic AI ecosystem. As a founding member of this new consortium, we look forward to a new wave of industry innovation and customer value delivered through the UALink standard.”
Gartner estimates that the value of AI accelerators used in servers will total $21 billion this year, rising to $33 billion by 2028, while AI chip revenue from compute electronics is projected to reach $33.4 billion in 2025. Microsoft, Meta and Google have already invested billions in Nvidia hardware for their AI clouds and models, and they are understandably looking to reduce their dependence on a company that controls approximately 70% to 95% of the AI accelerator market.
Nvidia is notably absent from the initiative, for understandable reasons: the company has little incentive to support a rival standard that could challenge its proprietary NVLink technology and dilute its considerable market influence.
Forrest Norrod, general manager of data center solutions at AMD, said: “The work the companies are doing at UALink to create an open, scalable, high-performance accelerator fabric is critical to the future of AI. Together, we bring extensive experience building large-scale AI and high-performance computing solutions based on open standards, efficiency, and strong ecosystem support. AMD is committed to contributing our expertise, technologies and capabilities to the group, as well as to other open industry efforts to advance all aspects of AI technology and solidify an open AI ecosystem.”