- Nvidia commits capital and hardware to accelerate CoreWeave AI factory expansion
- CoreWeave gains early access to Vera Rubin platforms in multiple data centers
- Financial backing ties Nvidia's balance sheet directly to AI infrastructure growth
Nvidia and CoreWeave have expanded their long-standing relationship with an agreement that links infrastructure deployment, capital investment and early access to future computing platforms.
The deal places CoreWeave among the first cloud providers expected to deploy Nvidia's Vera Rubin generation, reinforcing its role as a preferred partner for large-scale AI infrastructure.
Nvidia has also committed $2 billion to CoreWeave through a direct share purchase, underscoring the financial depth of the collaboration.
Scaling AI factories through aligned infrastructure
The deal focuses on accelerating the construction of AI factories, with CoreWeave planning to have more than five gigawatts of capacity by 2030.
Nvidia's involvement extends beyond the supply of accelerators, supporting the acquisition of land, energy and physical infrastructure.
This approach ties capital availability directly to hardware deployment schedules, reflecting how AI expansion increasingly depends on coordinating financing with compute delivery.
“AI is entering its next frontier and driving the largest infrastructure buildout in human history,” said Jensen Huang, founder and CEO of Nvidia.
“CoreWeave's deep AI factory expertise, platform software and unmatched execution speed are recognized throughout the industry. Together, we are racing to meet the extraordinary demand for Nvidia's AI factories – the foundation of the industrial AI revolution.”
Nvidia and CoreWeave are also deepening alignment between the infrastructure and software layers.
CoreWeave's cloud stack and operational tools will be tested and validated alongside Nvidia reference architectures.
“From the beginning, our collaboration has been guided by a simple belief: AI succeeds when software, infrastructure and operations are designed together,” said Michael Intrator, co-founder, president and CEO of CoreWeave.
CoreWeave is expected to deploy multiple generations of the Nvidia platform in its data centers, including early adoption of the Rubin platform, Vera CPUs, and BlueField storage systems.
This multi-generational strategy suggests that Nvidia is using CoreWeave as a testing ground for full-stack implementations rather than isolated components.
Vera CPUs are expected to be offered as a standalone option, signaling Nvidia's intention to address CPU limitations that are becoming more visible as agentic AI workloads grow.
These CPUs use a custom Arm architecture with a high core count, large coherent memory capacity, and high-bandwidth interconnects.
“For the first time, we're going to offer CPU Vera. Vera is an incredible CPU. We're going to offer CPU Vera as a standalone part of the infrastructure. And so not only can you run your compute stack on Nvidia GPUs, now you can also run your compute stack, no matter what your CPU workload is, run it on Nvidia CPUs… Vera is completely revolutionary,” said Jensen Huang, as reported by Ed Ludlow on X.
In practical terms, the collaboration reflects two narratives that shape the current AI market.
Server CPUs are emerging as another pressure point in the supply chain, particularly for agent-driven applications.
At the same time, offering high-end CPUs separately gives customers an alternative to full rack-scale systems, which can lower entry costs for certain deployments.