It's no secret that the AI accelerator business is hot today, with semiconductor manufacturers developing neural processing units and the AI PC initiative driving more powerful processors in laptops, desktops and workstations.
Gartner studied the AI chip industry and projects that global AI chip revenue will grow by 33% in 2024. Specifically, Gartner's report “Forecast Analysis: AI Semiconductors Worldwide” details competition among hyperscalers (some of which are developing their own chips while also turning to semiconductor suppliers), AI chip use cases and the demand for on-chip AI accelerators.
“In the long term, AI-based applications will move from data centers to PCs, smartphones, and edge and endpoint devices,” Gartner analyst Alan Priestley wrote in the report.
Where are all these AI chips going?
Gartner predicted that total AI chip revenue in 2024 will be $71.3 billion (up from $53.7 billion in 2023) and will rise to $92 billion in 2025. Of that total, compute electronics will likely account for $33.4 billion in 2024, or 47% of all AI chip revenue. Other sources of AI chip revenue will be automotive electronics ($7.1 billion) and consumer electronics ($1.8 billion).
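Those figures hang together: $71.3 billion against $53.7 billion works out to growth of roughly 33% in 2024, $92 billion against $71.3 billion implies growth of about 29% in 2025, and $33.4 billion is about 47% of $71.3 billion.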
Of the $71.3 billion in AI semiconductor revenue in 2024, the majority will come from discrete and integrated application processors, discrete GPUs and compute microprocessors, as opposed to embedded microprocessors.
In terms of AI semiconductor revenue by application in 2024, the majority will come from compute electronics, wired communications electronics and automotive electronics.
Gartner noted a shift in computing needs from initial AI model training to inference, the process of running a trained AI model on new data to produce predictions or content. Gartner predicted that more than 80% of workload accelerators deployed in data centers will be used to run AI inference workloads by 2028, an increase of 40% from 2023.
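To make the training/inference distinction concrete, here is a minimal sketch in PyTorch using a hypothetical toy model (not any specific production LLM): training repeatedly updates the model's weights, which is the compute-heavy phase, while inference simply runs the finished model on new inputs, the workload Gartner expects to dominate data center accelerators.

```python
import torch
import torch.nn as nn

# Toy stand-in for a large model; real LLMs have billions of parameters.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training: compute a loss, backpropagate gradients and update weights.
# Historically, this phase consumed most accelerator capacity.
inputs, targets = torch.randn(8, 4), torch.randn(8, 2)
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()

# Inference: run the trained model on new data with no gradient tracking.
# Per Gartner, this is the workload shifting onto data center accelerators.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4))
```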
SEE: Microsoft's new PC category, Copilot+, will use Qualcomm processors to run AI on the device.
AI and workload accelerators go hand in hand
AI accelerators in servers will be a $21 billion industry in 2024, Gartner predicted.
“Today, generative AI (GenAI) is driving demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, rising to $33 billion by 2028,” Priestley said in a press release.
AI workloads will also require beefing up standard microprocessing units, Gartner predicted.
“Many of these AI-enabled applications can run on standard microprocessing units (MPUs), and MPU vendors are expanding their processor architectures with dedicated on-chip AI accelerators to better handle these processing tasks,” Priestley wrote in the May 4 “Forecast Analysis: AI Semiconductors Worldwide” report.
Additionally, the rise of artificial intelligence techniques in data center applications will drive demand for workload accelerators: 25% of new servers are expected to have workload accelerators in 2028, compared to 10% in 2023.
The dawn of the AI PC?
Gartner is bullish on AI PCs, the push to run large language models locally in the background on laptops, workstations and desktops. Gartner defines AI PCs as those that have a neural processing unit that allows people to use AI for “everyday activities.”
The analyst firm predicted that by 2026, every business PC purchase will be an AI-enabled PC. Whether this prediction holds remains to be seen, but hyperscalers are certainly incorporating AI into their next-generation devices.
AI among hyperscalers encourages both competition and collaboration
AWS, Google, Meta, and Microsoft are all pursuing in-house AI chips today, while also sourcing hardware from NVIDIA, AMD, Qualcomm, IBM, Intel, and more. For example, Dell announced a selection of new laptops that use Qualcomm's Snapdragon X Series processors to run AI, while both Microsoft and Apple are looking to add OpenAI products to their hardware. Gartner expects the trend of developing custom-designed AI chips to continue.
Hyperscalers are designing their own chips to gain better control of their product roadmaps, control costs, reduce their dependence on off-the-shelf chips, leverage IP synergies, and optimize performance for their specific workloads, said Gaurav Gupta, an analyst at Gartner.
“Semiconductor chip foundries, such as TSMC and Samsung, have given technology companies access to cutting-edge manufacturing processes,” said Gupta.
At the same time, “Arm and other companies, such as Synopsys, have provided access to advanced intellectual property that makes custom chip design relatively easy,” he said. Easy access to the cloud and a changing culture of semiconductor assembly and test services (SATS) providers have also made it easier for hyperscalers to get into chip design.
“While chip development is expensive, the use of custom-designed chips can improve operational efficiency, reduce the costs of providing AI-based services to users, and lower the costs for users to access new AI-based applications,” Gartner wrote in a press release.