Analyst(s): Brendan Burke
Publication Date: February 20, 2026
NVIDIA announced a multiyear, multigenerational strategic partnership with Meta spanning on-premises, cloud, and AI infrastructure. The deal includes the first large-scale deployment of standalone NVIDIA Grace CPUs, a roadmap for Vera CPU-only servers in 2027, millions of Blackwell and Rubin GPUs, NVIDIA Spectrum-X Ethernet networking across Meta’s infrastructure footprint, and NVIDIA Confidential Computing for WhatsApp private processing.
What is Covered in This Article:
- NVIDIA and Meta’s expanded infrastructure deal, including standalone Grace and Vera CPU deployments
- The architectural rationale for discrete CPUs in agentic AI workloads
- The adoption of NVIDIA Spectrum-X Ethernet and Confidential Computing across Meta’s infrastructure
- Meta’s Manus acquisition and its demand for virtualized agent sandbox environments
- Market implications for CPU supply chains and NVIDIA’s competitive positioning against Intel and AMD
The News: On February 17, 2026, NVIDIA and Meta Platforms announced a significant expansion of their AI infrastructure partnership. The deal encompasses the deployment of millions of NVIDIA Blackwell and Rubin GPUs, but the strategically notable element is Meta’s commitment to deploy standalone NVIDIA Grace CPUs in production immediately, with Vera CPU-only servers to follow. NVIDIA’s press release states the partnership “will enable the large-scale deployment of NVIDIA CPUs and millions of NVIDIA Blackwell and Rubin GPUs, as well as the Vera Rubin NVL72 rack-scale systems.” Engineering teams across NVIDIA and Meta are engaged in deep codesign to optimize and accelerate state-of-the-art AI models across Meta’s core workloads, combining NVIDIA’s full-stack platform with Meta’s hyperscale systems.
“No one deploys AI at Meta’s scale—integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of NVIDIA. Meta CEO Mark Zuckerberg stated the companies would “build leading-edge clusters using [NVIDIA’s] Vera Rubin platform to deliver personal superintelligence to everyone in the world.”
Will NVIDIA’s Meta Deal Ignite a CPU Supercycle?
Analyst Take: This partnership is the strongest market signal to date that NVIDIA intends to be a CPU company, not merely a GPU company that bundles a host processor. The unbundling of Vera as a standalone product and Meta’s willingness to deploy Grace CPU-only servers in production confirm Futurum’s thesis that discrete CPUs are emerging as first-class compute resources for a new class of workloads that GPU-centric systems cannot efficiently serve.
Scaling Manus Agents
Meta’s acquisition of Manus amplifies demand for standalone CPUs. Manus operates containerized virtual machines, each running a parallel agent experiment that writes code, debugs it, browses the web, and retries autonomously. We believe this workload requires discrete, high-core-count CPUs capable of low-latency context switching, memory coherence across distributed simulations, and real-time aggregation of reward signals. These are workloads that GPUs are architecturally ill-suited to perform.
By pairing 88-core Vera CPUs with Rubin GPUs at production scale, NVIDIA and Meta are institutionalizing a CPU-per-GPU architecture that treats general-purpose compute as essential infrastructure rather than a vestigial housekeeping layer. Futurum’s research suggests that as simulation fidelity increases and agents perform longer-horizon reasoning chains, CPU demand per GPU will continue to rise, potentially requiring supplemental CPU-only racks adjacent to GPU clusters.
NVIDIA’s CPU Expansion
For NVIDIA, the strategic implications extend beyond Meta. Futurum’s Data Center Semiconductor Forecast projects that CPU revenue growth will exceed that of GPUs and XPUs by 2028, reaching 33.7%, driven by the structural dynamics this partnership now validates at hyperscale. NVIDIA’s entry into the standalone CPU market with Vera positions it to capture share from Intel and AMD at a moment when both vendors have confirmed they are effectively sold out of high-core-count server processors.
NVIDIA’s ability to offer a vertically integrated CPU-GPU interconnect stack, where Vera CPUs communicate natively over the NVLink fabric with Rubin GPUs, creates a coherence advantage that disaggregated x86 solutions cannot match without custom integration. The broader market question is whether NVIDIA can turn the Vera unbundling from a single-hyperscaler deal into a platform-level shift. If Vera CPU-only instances appear on major cloud providers alongside GPU instances, NVIDIA would effectively compete with AWS Graviton, Google Axion, and Microsoft Cobalt in the Arm server CPU market, while simultaneously selling GPUs to those same customers. That dual revenue stream, earning on both sides of the CPU-GPU ratio, is the strategic prize this partnership signals NVIDIA is pursuing.
With TSMC’s advanced packaging lines (CoWoS) and N2 nodes already seeing unprecedented utilization through 2027, supply-side constraints will likely sustain premium pricing for standalone CPUs. For hyperscalers, the cost of a slow agent rollout outweighs the silicon premium, creating a period of demand inelasticity that could define the next three years of infrastructure spend.
Solidifying the NVIDIA Platform with Networking and Encryption
The Spectrum-X Ethernet adoption across Meta’s infrastructure is equally consequential. By standardizing on NVIDIA’s networking platform rather than competing Ethernet solutions, Meta deepens its architectural lock-in across compute, networking, and software, making the full NVIDIA stack increasingly difficult to displace. This partnership was already announced at OCP in October, and we do not believe it comes at the expense of other networking vendors. Rather, it validates NVIDIA’s bundling of open Ethernet networking with its rack-scale architecture.
The Confidential Computing integration with WhatsApp extends NVIDIA’s relevance beyond raw compute into privacy-preserving AI. As autonomous agents process sensitive user data within WhatsApp and other Meta applications, hardware-level confidentiality becomes a prerequisite for deployment at scale. NVIDIA’s ability to provide confidential computing across both GPU and CPU platforms gives it a privacy and compliance moat that custom silicon alternatives from hyperscalers cannot easily replicate.
Outlook
The bottom line is that NVIDIA is no longer selling only training clusters. It is selling an integrated AI operating system spanning CPUs, GPUs, networking, confidential computing, custom models, and software orchestration. Meta’s willingness to adopt every layer of this stack at a scale measured in millions of GPUs suggests that the total addressable market for NVIDIA’s platform is substantially larger than GPU revenue alone has indicated. Investors and competitors should pay close attention to Grace and Vera CPU attach rates in future earnings disclosures. The ratio of standalone CPU deployments to GPU deployments will be the leading indicator of how quickly the agentic computing thesis is materializing into silicon demand.
What to Watch:
- Whether other hyperscalers follow Meta in deploying standalone NVIDIA CPUs, particularly Microsoft Azure and Oracle, which have deep NVIDIA GPU relationships but currently rely on Intel and AMD for host processors
- The production timeline and volume ramp for Vera CPU-only servers in 2027, which will determine whether NVIDIA can translate design wins into meaningful CPU market share
- How Intel and AMD respond to NVIDIA’s CPU encroachment, particularly whether Intel accelerates its 18A foundry strategy to offer competitive Arm-based alternatives or doubles down on Xeon differentiation
- The expansion of NVIDIA Confidential Computing beyond WhatsApp across Meta’s portfolio, which could establish a new category of privacy-preserving agentic AI infrastructure
- Meta’s Manus integration roadmap and the resulting scale of virtualized agent environments, which will serve as a real-world benchmark for CPU-to-GPU ratio requirements in production agentic workloads
Read the full press release in NVIDIA’s Newsroom.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of Futurum as a whole.
Other Insights from Futurum:
AI Is the Largest Infrastructure Buildout Ever—Are Investments Keeping Up?
At CES, NVIDIA Rubin and AMD “Helios” Made Memory the Future of AI
NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy
Author Information
Brendan is Research Director, Semiconductors, Supply Chain, and Emerging Tech. He advises clients on strategic initiatives and leads the Futurum Semiconductors Practice. He is an experienced tech industry analyst who has guided tech leaders in identifying market opportunities spanning edge processors, generative AI applications, and hyperscale data centers.
Before joining Futurum, Brendan consulted with global AI leaders and served as a Senior Analyst in Emerging Technology Research at PitchBook. At PitchBook, he developed market intelligence tools for AI, highlighted by one of the industry’s most comprehensive AI semiconductor market landscapes encompassing both public and private companies. He has advised Fortune 100 tech giants, growth-stage innovators, global investors, and leading market research firms. Before PitchBook, he led research teams in tech investment banking and market research.
Brendan is based in Seattle, Washington. He holds a Bachelor of Arts degree from Amherst College.
