Analyst(s): Nick Patience
Publication Date: January 27, 2026
NVIDIA has deepened its commitment to CoreWeave with an additional $2 billion investment, aiming to fast-track the development of more than 5 gigawatts of specialized AI factories by 2030. This partnership signals a strategic shift for NVIDIA, moving beyond its traditional role as a chip supplier to become a co-developer, while reinforcing a capital-intensive financing approach. By potentially incorporating CoreWeave's specialized software into its standard reference architectures, NVIDIA is seeking to standardize the design for industrial-scale AI deployment while simultaneously securing its most dedicated distribution partner.
What is Covered in this Article:
- NVIDIA’s $2 billion strategic investment in CoreWeave at $87.20 per share.
- The ambitious roadmap to build more than 5 gigawatts (GW) of AI factory capacity by 2030.
- Early adoption of NVIDIA's Rubin platform, Vera CPUs, and BlueField-4 DPUs.
- Potential inclusion of CoreWeave’s SUNK and Mission Control software in NVIDIA reference architectures.
- The strategic pivot of NVIDIA using its balance sheet to solve land and power bottlenecks for its partners.
The News: NVIDIA and CoreWeave announced an expansion of their long-standing relationship. NVIDIA has invested $2 billion in CoreWeave's Class A common stock at $87.20 per share, increasing its stake in the company and signaling confidence in the cloud platform built on NVIDIA infrastructure; the news pushed CoreWeave shares up by 12% in early trading. The partnership centers on accelerating the buildout of more than 5 gigawatts (GW) of so-called AI factories (data centers) by 2030 to advance AI adoption globally. NVIDIA will leverage its financial strength to help CoreWeave procure land, power, and shell facilities. Additionally, CoreWeave will be an early adopter of multiple generations of NVIDIA hardware, including the Rubin platform, Vera CPUs, and BlueField-4 DPUs.
NVIDIA and CoreWeave Team to Break Through Data Center Real Estate Bottlenecks
Analyst Take: The latest collaboration between NVIDIA and CoreWeave on AI factories represents a win for CoreWeave. And while the industry has spent years focusing on chip availability, NVIDIA is signaling that the primary battle has moved to the physical and software substrate of the AI economy. By putting $2 billion on the table and committing to a 5GW roadmap, NVIDIA is trying to bypass traditional data center bottlenecks.
A critical element of this news is NVIDIA’s commitment to leverage its financial strength to help CoreWeave procure land and power. It suggests a lack of confidence in the traditional hyperscale market to build out facilities fast enough. However, the goal of 5GW by 2030 faces hard physical and regulatory limits. In some regions, wait times for grid connections already range from two to ten years. Even with NVIDIA’s balance sheet, brute-forcing the electrical grid remains a gamble against aging infrastructure.
Software Endorsement: Beyond the GPU Landlord Model
The formal testing and validation of SUNK (Slurm on Kubernetes) and CoreWeave Mission Control for inclusion in NVIDIA's reference architectures is a significant moment for CoreWeave, assuming NVIDIA ultimately validates the software. The endorsement has the potential to move CoreWeave from a pure GPU neocloud provider to a technology partner. It provides a critical value-add that guards against a race to the bottom on GPU pricing and positions CoreWeave as a would-be rival to the hyperscalers, though it still has a long way to go. It is worth noting that NVIDIA acquired SchedMD, the developer of Slurm, an open-source, vendor-neutral workload management system, in December; NVIDIA has committed to keeping Slurm open source, the same model it prefers for its own Nemotron family of AI models.
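For readers less familiar with the stack, the sketch below is a minimal, hypothetical illustration of the workflow SUNK is designed to preserve: users keep submitting familiar Slurm batch jobs while the scheduler itself runs on Kubernetes. The partition name, resource figures, and script contents are illustrative assumptions, not CoreWeave's actual configuration.

```python
import subprocess
import tempfile

# Hypothetical Slurm batch script; partition, node count, and GPU count are
# illustrative assumptions, not CoreWeave's actual SUNK configuration.
JOB_SCRIPT = """#!/bin/bash
#SBATCH --job-name=moe-pretrain
#SBATCH --partition=gpu
#SBATCH --nodes=4
#SBATCH --gres=gpu:8
#SBATCH --time=24:00:00

srun python train.py --config moe_pretrain.yaml
"""

def submit_job(script: str) -> str:
    """Write the batch script to a temp file and submit it with sbatch.

    On a Slurm-on-Kubernetes cluster the user experience is intended to match
    bare-metal Slurm: sbatch returns a job ID, and the orchestration layer
    handles the underlying pods transparently.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(
        ["sbatch", path], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit_job(JOB_SCRIPT))
```

The point of the sketch is the interface, not the internals: if NVIDIA folds SUNK into its reference architectures, this Slurm-style submission path becomes part of the standardized design rather than a CoreWeave-specific feature.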
Early Adoption of the Rubin Platform and Vera CPUs
CoreWeave’s role as an early adopter of the NVIDIA Rubin platform in the second half of 2026 is central to NVIDIA’s aggressive hardware roadmap. The Rubin architecture is designed to deliver a 10x reduction in inference token costs and a 4x reduction in the GPUs needed to train large Mixture-of-Experts (MoE) models compared to Blackwell. Crucially, CoreWeave is not just getting the GPUs; it is adopting the NVIDIA Vera CPU and BlueField-4 data processing units (DPUs), which are designed to manage the complex reasoning and agentic AI workloads that the next frontier of AI demands. As such, CoreWeave is positioning itself as the primary industrial testing ground for NVIDIA’s most advanced rack-scale systems.
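As a back-of-envelope illustration of what those ratios would mean in practice, the snippet below applies the claimed 10x and 4x factors to hypothetical baseline figures; the baselines are assumptions for illustration only, not disclosed Blackwell or Rubin numbers.

```python
# Back-of-envelope illustration of NVIDIA's claimed Rubin-vs-Blackwell ratios.
# Baseline figures are hypothetical assumptions, not disclosed numbers.

BLACKWELL_TRAIN_GPUS = 32_000        # assumed GPUs to train a large MoE model
BLACKWELL_COST_PER_M_TOKENS = 2.00   # assumed $ per million inference tokens

TRAIN_GPU_REDUCTION = 4              # claimed 4x fewer GPUs for MoE training
TOKEN_COST_REDUCTION = 10            # claimed 10x lower inference token cost

rubin_train_gpus = BLACKWELL_TRAIN_GPUS / TRAIN_GPU_REDUCTION
rubin_cost_per_m_tokens = BLACKWELL_COST_PER_M_TOKENS / TOKEN_COST_REDUCTION

print(f"Training GPUs: {BLACKWELL_TRAIN_GPUS:,} -> {rubin_train_gpus:,.0f}")
print(f"$ per M tokens: {BLACKWELL_COST_PER_M_TOKENS:.2f} -> {rubin_cost_per_m_tokens:.2f}")
```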
What to Watch:
- Building 5GW requires unprecedented power access; watch for project delays in hubs like Dublin or Amsterdam, where grid moratoriums are already in place – we suspect NVIDIA and CoreWeave will look at other markets as a result.
- CoreWeave has been diversifying its revenue sources, recently signing a long-term commitment with Meta, but it remains reliant on Microsoft and OpenAI; Wall Street remains wary of such massive customer concentration.
- It will be interesting to see how AWS and Google, with their own silicon rivals to GPUs, respond as NVIDIA effectively picks a software and infrastructure winner in CoreWeave.
- As the debut partner for the Rubin platform, CoreWeave will be closely watched as it rolls out the new systems.
See the complete press release on the collaboration between CoreWeave and NVIDIA on the CoreWeave website.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Other insights from Futurum:
NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy
AI Platforms Market $292B by 2030, Mapping Risks & Bull Market Scenarios
CoreWeave Files for IPO: Specialized AI Cloud Provider Eyes Next Phase of Growth
Author Information
Nick Patience is VP and Practice Lead for AI Platforms at The Futurum Group. Nick is a thought leader on AI development, deployment, and adoption - an area he has researched for 25 years. Before Futurum, Nick was a Managing Analyst with S&P Global Market Intelligence, responsible for 451 Research’s coverage of Data, AI, Analytics, Information Security, and Risk. Nick became part of S&P Global through its 2019 acquisition of 451 Research, a pioneering analyst firm that Nick co-founded in 1999. He is a sought-after speaker and advisor, known for his expertise in the drivers of AI adoption, industry use cases, and the infrastructure behind its development and deployment. Nick also spent three years as a product marketing lead at Recommind (now part of OpenText), a machine learning-driven eDiscovery software company. Nick is based in London.