Meta Platforms, Facebook's parent company, plans to deploy its own custom-designed artificial intelligence chips, codenamed Artemis, in its data centers this year, according to an internal document seen by Reuters. This move could potentially reduce Meta's dependence on Nvidia's market-dominant H100 chips and rein in the rising costs of running AI workloads.
Meta has been investing billions of dollars to increase its computing capacity for the power-hungry generative AI products it is integrating into services such as Facebook, Instagram and WhatsApp. This involves purchasing specialized chips and reconfiguring data centers to accommodate them.
According to Dylan Patel, founder of silicon research group SemiAnalysis, successful deployment of Meta's own chip could save the company hundreds of millions of dollars in annual energy costs and billions in chip purchasing costs.
Still dependent on Nvidia, for now
Despite this move toward self-sufficiency, Meta will continue to use Nvidia's H100 GPUs in its data centers for the foreseeable future. CEO Mark Zuckerberg has stated that by the end of this year, the company plans to have approximately 350,000 H100 processors in service.
The deployment of its own chip marks a positive turn for Meta's internal AI silicon project, following the decision in 2022 to discontinue the first iteration of the chip in favor of Nvidia GPUs.
The new chip, Artemis, like its predecessor, is designed for AI inference, which involves using algorithms to make classification judgments and generate responses to user prompts.
“We consider our internally developed accelerators to be highly complementary to commercially available GPUs by offering the optimal combination of performance and efficiency in Meta-specific workloads,” a Meta spokesperson said.
While Meta's decision to reduce its reliance on Nvidia processors could signal the first crack in Nvidia's AI dominance, it's clear that, for now, Nvidia GPUs will continue to play an important role in Meta's AI infrastructure.