Meta sheds more light on how Llama 3 training is evolving: for now it relies on nearly 50,000 Nvidia H100 GPUs, but how long will it be until Meta switches to its own AI chips?

Meta has revealed details about its AI training infrastructure, disclosing that it currently relies on nearly 50,000 Nvidia H100 GPUs to train its open-source Llama 3 LLM.

The company says it will have more than 350,000 Nvidia H100 GPUs in service by the end of 2024, and computing power equivalent to nearly 600,000 H100s once hardware from other sources is included.