At the OCP EMEA Summit, the Arm AGI CPU took center stage as Supermicro expanded its Data Center Building Block Solutions (DCBBS) with new Arm AGI CPU-based servers and OCP ORv3-compliant racks, while Verda announced adoption of Arm AGI CPUs paired with NVIDIA GB300 GPUs for agentic AI workloads. These moves highlight how OCP standards and open validation are now critical for integrating heterogeneous architectures, specifically Arm CPUs and NVIDIA GPUs, for agentic inference at scale. With data center CPU revenue projected to reach $100B by 2030, the market is shifting from integrated host nodes to orchestrated systems optimized for agentic workloads.
What is Covered in this Article
- Supermicro’s launch of Arm AGI CPU-based servers and OCP ORv3-compliant racks
- Verda’s adoption of Arm AGI CPU and NVIDIA GB300 for agentic AI workloads
- Why OCP standards and open validation matter for agentic AI
- How integrated, heterogeneous architectures are reshaping the AI data center
The News: At the OCP EMEA Summit, Supermicro announced new Arm AGI CPU-based servers and OCP ORv3-compliant racks, expanding its portfolio to more than 20 OCP-compliant systems. The launch includes 2U and 5U Arm-based systems built on Arm Neoverse V3 cores with up to 136 cores, support for up to 6TB of DDR5 memory, 8 front NVMe bays, and an ORv3-compatible 2U GPU system using NVIDIA HGX B300 with 5th Gen NVLink. The company also highlighted high-density, liquid-cooled options for modular AI and HPC deployments. Supermicro CEO Charles Liang and Arm Executive Vice President Mohamed Awad both endorsed the flexibility and performance of the new platforms.
Verda announced the adoption of Arm AGI CPUs tightly integrated with NVIDIA GB300 GPUs, aiming to deliver a fully Arm-native stack for agentic AI workloads. In this architecture, GPUs handle model execution, while Arm CPUs orchestrate workflows, manage data movement, and coordinate system behavior. Meta and OCP leaders emphasized that open collaboration and standardized reference designs are essential for scaling AI infrastructure. Verda’s AI cloud reflects a broader trend toward integrated, heterogeneous systems where CPUs play a central, strategic role in agentic AI.
Arm AGI CPU Goes to Market via Supermicro and Verda at 2026 OCP EMEA Summit
Analyst Take: The Supermicro and Verda announcements at the OCP EMEA Summit underscore that OCP standards and validation are now the linchpin for integrating Arm CPUs and NVIDIA GPUs in agentic AI infrastructure. This vote of confidence, coming so soon after the announcement of the Arm AGI CPU, shows how OCP standards are enabling real-world, production-ready deployments that meet the demands of agentic inference at scale.
OCP Standards: The Glue for Arm-NVIDIA Agentic AI Integration
OCP standards are foundational for building interoperable, scalable AI systems. By validating Arm AGI CPUs and NVIDIA GB300 GPUs within OCP reference designs, vendors such as Supermicro and Verda can deliver integrated systems ready for multi-agent workloads. This reduces integration risk and ensures that performance gains are not lost to system bottlenecks. Meta and OCP leaders stress that standardization across the stack is increasingly important for enabling interoperability and efficiency as AI infrastructure scales.
Verda Adoption: AI Infrastructure in Practice
Verda’s adoption of the Arm AGI CPU shows how next-generation AI systems are being architected. By combining Arm-based CPU infrastructure with NVIDIA GB300 GPU platforms, Verda is enabling a tightly coupled architecture designed for agentic AI workloads at scale. In this model, accelerators deliver performance for model execution, while the CPU orchestrates workflows, manages data movement, and coordinates system behavior. This balance is essential for agent-based systems, where efficiency depends on smooth coordination across the stack, not just raw compute. Verda’s Arm-native stack, powered by renewable energy, is designed for the density and efficiency required to run agents cost-effectively.
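The division of labor described above can be illustrated with a minimal Python sketch: a CPU-side control plane fans work out to accelerators and gathers ordered results for the next agent step. This is purely illustrative; `gpu_infer` is a hypothetical placeholder for a call to a GPU-backed inference endpoint, not Verda's actual stack or any real API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholder: in a real deployment this would invoke a
# GPU-backed model endpoint (e.g., a node with NVIDIA accelerators).
def gpu_infer(task: str) -> str:
    return f"result for {task}"

def orchestrate(tasks: list[str]) -> list[str]:
    # The CPU acts as the control plane: it dispatches tasks to
    # accelerators concurrently, then collects the results in order
    # so the agent workflow can proceed to its next step.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(gpu_infer, tasks))

results = orchestrate(["plan", "retrieve", "summarize"])
```

The point of the sketch is the shape of the pattern, not the implementation: model execution lives behind the accelerator boundary, while scheduling, data movement, and coordination remain CPU-side work, which is why high-core-count host CPUs matter for agentic systems.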
Supermicro Expands OCP-Validated Arm Server Portfolio
Supermicro’s launch of new Arm AGI CPU-based servers and OCP ORv3-compliant racks brings high-density, modular options to the market. With up to 136 Neoverse V3 cores, 6TB DDR5, and ORv3-compatible GPU systems using NVIDIA HGX B300, Supermicro is targeting both AI and HPC workloads. The company’s emphasis on liquid cooling and OCP-inspired systems reflects growing demand for scalable, energy-efficient infrastructure that supports heterogeneous, agentic AI deployments.
Open, Heterogeneous Architectures Are the New Normal
The industry is moving toward integrated, heterogeneous systems validated for agentic workloads. OCP’s work on chiplets, system readiness, and reference designs is enabling broader adoption of open AI infrastructure. Enterprises now expect CPUs to serve as the real-time control plane, coordinating increasingly complex, multi-agent workflows. This shift is reflected in CPU-to-GPU ratios trending back toward 1:1 for some agentic workloads, and in the growing demand for high-core-count CPUs with advanced memory bandwidth.
Ecosystem, Supply, and Adoption Risks
While OCP validation accelerates integration, supply constraints remain a risk. Both x86 and Arm vendors face tight availability for high-core-count CPUs, and memory supply is under pressure as DRAM production shifts to HBM for GPUs. Arm’s AGI CPU will only gain traction if it is available at scale and supported by strong firmware, OS, and workload integration. Enterprises will watch closely to see if OCP-validated Arm platforms can deliver on performance and ecosystem maturity, or if x86 incumbents and hyperscaler custom silicon will maintain their lead.
What to Watch
- How quickly will Supermicro’s Arm AGI CPU-based servers and OCP ORv3 racks gain traction in enterprise and hyperscale deployments?
- How fast will Arm’s firmware, OS, and software ecosystem close the gap for real-world agentic AI deployments?
- Can Arm and its partners deliver AGI CPUs at scale amid ongoing DRAM and manufacturing constraints?
- Will large enterprises shift agentic AI workloads to Arm-based, OCP-validated platforms, or stick with x86 and custom silicon?
- How will OCP’s work on chiplets, system readiness, and open reference designs accelerate the adoption of heterogeneous AI infrastructure?
Declaration of generative AI and AI-assisted technologies in the writing process: This content has been generated with the support of artificial intelligence technologies. Due to the fast pace of content creation and the continuous evolution of data and information, The Futurum Group and its analysts strive to ensure the accuracy and factual integrity of the information presented. However, the opinions and interpretations expressed in this content reflect those of the individual author/analyst. The Futurum Group makes no guarantees regarding the completeness, accuracy, or reliability of any information contained herein. Readers are encouraged to verify facts independently and consult relevant sources for further clarification.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Read the full Futurum Group Disclosure.
Other Insights from Futurum:
Arm At The Center Of The AI & Data Center Revolution
Arm Q3 FY 2026 Earnings Highlight AI-Driven Royalty Momentum
Can Agentic ITops Transform IT Incident Management or Will Complexity Stall Progress?
Author Information
Brendan is Research Director, Semiconductors, Supply Chain, and Emerging Tech. He advises clients on strategic initiatives and leads the Futurum Semiconductors Practice. He is an experienced tech industry analyst who has guided tech leaders in identifying market opportunities spanning edge processors, generative AI applications, and hyperscale data centers.
Before joining Futurum, Brendan consulted with global AI leaders and served as a Senior Analyst in Emerging Technology Research at PitchBook. At PitchBook, he developed market intelligence tools for AI, highlighted by one of the industry’s most comprehensive AI semiconductor market landscapes encompassing both public and private companies. He has advised Fortune 100 tech giants, growth-stage innovators, global investors, and leading market research firms. Before PitchBook, he led research teams in tech investment banking and market research.
Brendan is based in Seattle, Washington. He has a Bachelor of Arts Degree from Amherst College.
