
2025 OCP Summit—AI Infrastructure Buildout Consists of Three Pillars: AI Server Racks, Power & Cooling, and Networking


Analyst(s): Ray Wang
Publication Date: November 3, 2025

What is Covered in this Article

  • AI Rack-Level Design Evolution: NVIDIA, AMD, and Meta showcased diverse AI rack architectures as the industry accelerates toward higher FLOP density and optimized data center efficiency.
  • Next-Gen Power & Cooling Solutions: Vertiv, Delta Electronics, and Flex unveiled advanced power supply systems, liquid cooling technologies, and integrated infrastructure solutions for AI data centers.
  • Networking Innovation & Alliances: Marvell, Astera Labs, Credo, and Ciena introduced high-speed interconnect and networking solutions, while Broadcom led the formation of the new ESUN alliance alongside Marvell, AMD, NVIDIA, and other key ecosystem players.


The Event – Major Themes & Vendor Moves: The 2025 OCP Summit highlighted the shift from discrete GPU servers to fully integrated, rack-scale AI infrastructure, where power, cooling, and interconnects define the next performance frontier. From leading AI compute providers such as NVIDIA and AMD to critical supply chain partners such as Delta Electronics, Flex, Vertiv, Astera Labs, and Credo, every layer of the stack—from silicon to system and facility—showcased innovation aimed at scaling AI efficiently and sustainably.

Analyst Take: Three dominant themes defined this year’s Open Compute Project (OCP) Summit: power, cooling, and AI rack servers. NVIDIA’s opening keynote set the tone for the entire event, unveiling next-generation AI racks that demand up to 1 MW per rack, a dramatic leap from today’s standard power envelopes. This escalation in power and thermal density is expected to trigger a wave of innovation across the supply chain, as ecosystem partners in power delivery, cooling systems, and rack architecture race to align their roadmaps and specifications with NVIDIA’s next-generation design standards.

AI Rack-Level Design Evolution: At the 2025 OCP Summit, a key spotlight in AI servers was the latest generation of rack-scale AI systems. NVIDIA, AMD, and Meta each presented distinct strategies converging on the same goal: maximizing FLOPs per rack while balancing power, cooling, and modularity. As AI models scale to trillions of parameters, the industry’s design philosophy is clearly evolving from “GPU boxes” to full-stack, rack-level infrastructure optimized for energy efficiency and serviceability.

NVIDIA introduced its Vera Rubin NVL144 platform—an evolution of its MGX reference design—featuring liquid-cooled, high-power compute trays designed for gigawatt-scale “AI factory” deployments. Its modular approach integrates compute, networking, and power delivery within an open rack footprint, signaling NVIDIA’s shift toward system-level openness even as it retains its vertically integrated model. More importantly, NVIDIA unveiled an AI rack roadmap in which its Rubin Kyber rack requires up to 1 MW per rack, signaling the step-change in power delivery and facility design needed to support next-generation AI servers.
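To put that jump in perspective, the back-of-the-envelope sketch below compares rack-level power (and therefore heat) loads. The ~15 kW and ~130 kW figures for today’s air-cooled and liquid-cooled AI racks are assumptions used only for illustration, not vendor-published numbers; only the roughly 1 MW Kyber-class target comes from the roadmap discussed above.

```python
# Back-of-the-envelope comparison of rack power densities.
# The first two figures are illustrative assumptions; the ~1 MW figure is the
# Rubin Kyber-class target cited above. Nearly all electrical input becomes
# heat that the facility's cooling plant must remove.

racks = {
    "Typical air-cooled rack (assumed)": 15_000,           # watts
    "Current liquid-cooled AI rack (assumed)": 130_000,    # watts
    "Rubin Kyber-class AI rack (per roadmap)": 1_000_000,  # watts
}

baseline = racks["Typical air-cooled rack (assumed)"]
for name, watts in racks.items():
    print(f"{name}: {watts / 1000:.0f} kW "
          f"({watts / baseline:.0f}x the assumed air-cooled baseline)")
```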

In contrast, AMD and Meta introduced the Helios AI rack, built on Meta’s new Open Rack Wide (ORW) standard and powered by AMD’s Instinct MI450 GPUs. Supporting up to 72 accelerators per rack and over 30 TB of HBM4 memory, Helios emphasizes open interoperability—enabling hyperscalers to customize configurations without vendor lock-in.
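Purely as illustrative arithmetic on the figures cited above (72 accelerators and over 30 TB of HBM4 per Helios rack), the sketch below derives the implied average memory per accelerator; actual MI450 configurations may differ.

```python
# Implied HBM4 capacity per accelerator from the rack-level Helios figures
# above; this is a rough average, not a published per-GPU specification.
accelerators_per_rack = 72
hbm4_per_rack_tb = 30  # "over 30 TB" per rack, per the description above

hbm4_per_gpu_gb = hbm4_per_rack_tb * 1024 / accelerators_per_rack
print(f"Implied HBM4 per accelerator: ~{hbm4_per_gpu_gb:.0f} GB")  # ~427 GB at the 30 TB floor
```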

Meanwhile, Arm underscored its growing influence beyond CPUs by contributing the Foundation Chiplet System Architecture (FCSA) to OCP, a blueprint for standardized chiplet integration that supports multi-vendor rack designs. This complements the energy-efficiency advantage of Arm-based CPUs, increasingly used as AI orchestration layers in hyperscaler data centers.

Power, Cooling & Facility-Level Design Moving Center Stage

Beyond AI rack servers themselves, the broader AI infrastructure conversation shifted decisively from compute performance to power and thermal efficiency. As rack-level power densities continue to increase, infrastructure vendors such as Flex, Delta Electronics, and Vertiv are stepping into the spotlight, transforming data center design for hyperscalers and AI operators. The new generation of AI racks showcased by NVIDIA, AMD, and Meta demands not only higher electrical throughput but also advanced liquid-cooling systems, intelligent power management, and facility-wide thermal orchestration.

Flex presented a modular power and cooling architecture engineered to simplify deployment of AI “pods” at scale. Its design integrates rack-mounted cold-plate liquid loops, CDU (Coolant Distribution Unit) enclosures, and hot-swappable power shelves—allowing hyperscalers to add AI capacity without full-site retrofits.

Delta Electronics took center stage at OCP 2025 with the debut of its next-generation 800 VDC “AI Power Cube” ecosystem, developed in collaboration with NVIDIA to power 1.1 MW-scale AI racks. Representing a full re-architecture of data-center power delivery—from grid to chip—the system integrates Solid-State Transformers (SSTs) for direct MV-to-DC conversion, HVDC/DC distribution boards, and liquid-cooled busbars, achieving up to 98.5% efficiency. Complemented by 90 kW DC/DC shelves, super-capacitor backup modules, and telemetry-enabled monitoring, Delta’s 800 VDC design minimizes copper use, reduces thermal loss, and simplifies scalability.

Together with its 480 VDC and immersion-cooling extensions, the platform sets a new benchmark for energy-efficient, high-density AI factories, aligning closely with NVIDIA’s roadmap for next-generation megawatt-class AI infrastructure.
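Much of the case for 800 VDC distribution is Ohm’s-law arithmetic: for a fixed power draw, conductor current falls as voltage rises, and resistive loss in a given busbar falls with the square of that current. The sketch below uses the 1.1 MW rack scale and ~98.5% efficiency figures cited above; the 54 V comparison point is an assumed legacy in-rack busbar voltage, included only for illustration.

```python
# Why 800 VDC distribution shrinks copper and thermal loss: I = P / V, and
# resistive (I^2 * R) loss in a given busbar scales with the square of current.

rack_power_w = 1_100_000   # 1.1 MW-scale rack, per the Delta/NVIDIA figure above
v_legacy = 54              # assumed legacy busbar voltage (illustrative only)
v_hvdc = 800               # 800 VDC distribution

i_legacy = rack_power_w / v_legacy
i_hvdc = rack_power_w / v_hvdc
print(f"Busbar current at {v_legacy} V:  {i_legacy / 1000:.1f} kA")
print(f"Busbar current at {v_hvdc} V: {i_hvdc / 1000:.2f} kA")
print(f"I^2*R loss ratio for the same busbar: {(i_legacy / i_hvdc) ** 2:.0f}x higher at {v_legacy} V")

# Conversion loss at the cited ~98.5% grid-to-rack efficiency:
print(f"Heat from conversion loss at 98.5% efficiency: ~{rack_power_w * 0.015 / 1000:.1f} kW")
```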

Vertiv, on the other hand, highlighted its leadership in facility-level engineering with the Vertiv DynaFlex platform, a fully liquid-cooled infrastructure solution supporting both direct-to-chip and rear-door heat-exchange systems. Its hybrid design allows operators to scale from 200 kW to beyond 1 MW per rack row while maintaining thermal stability through AI-driven coolant flow control.
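A rough sense of what thermal stability means at these densities comes from the basic heat-transport relation Q = ṁ · c_p · ΔT. The sketch below assumes a water-like coolant and a 10 °C temperature rise across the loop; these are illustrative assumptions, not Vertiv specifications.

```python
# Required coolant flow from Q = m_dot * c_p * delta_T, solved for m_dot.
# Assumes water-like coolant (c_p ~ 4186 J/(kg*K), ~1 kg per litre) and a
# 10 degC rise across the loop; illustrative assumptions, not vendor specs.

C_P = 4186.0     # J/(kg*K)
DELTA_T = 10.0   # K
KG_PER_LITRE = 1.0

for load_kw in (200, 1000):  # the 200 kW and 1 MW points mentioned above
    heat_w = load_kw * 1000
    mass_flow_kg_s = heat_w / (C_P * DELTA_T)
    litres_per_min = mass_flow_kg_s / KG_PER_LITRE * 60
    print(f"{load_kw:>5} kW heat load -> ~{litres_per_min:.0f} L/min of coolant")
```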

Together, these companies are redefining the physical limits of the data center. At OCP 2025, it became clear that the next wave of AI performance will not be dictated by transistor counts alone, but by the power, cooling, and facility ecosystems that enable those chips to operate at scale and sustainably. These areas of development will be key to watch.

Networking and Interconnects for Scale-Up AI Workloads

As hyperscalers scale their AI infrastructure, networking has become a central focus. At OCP, discussions centered on advancing Ethernet and optical interconnects to meet the bandwidth and power demands of next-generation AI clusters.

Industry networking leader Marvell showcased its expanding role as a core enabler of AI data-center infrastructure, emphasizing CXL memory expansion, co-packaged optics (CPO), and high-speed networking. The company introduced its Structera™ CXL platform—supporting memory pooling and near-memory acceleration—to address growing bottlenecks in large AI clusters. The development of CXL could prove vital going forward, as industry participants increasingly look to memory offload and pooling to ease those capacity constraints.

Marvell also showcased its latest 800G/1.6T Ethernet and active electrical cable (AEC) solutions, optimized for low-latency, high-efficiency rack-scale connectivity. As the industry shifts toward 800G and even 1.6T networking, Marvell aims to capture a growing networking market that carries rising dollar content and margins.
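For context on what those port speeds imply at the physical layer, the sketch below works through common lane arithmetic (eight electrical lanes per port, at roughly 100 Gbps for 800G and 200 Gbps for 1.6T, before encoding overhead). These lane counts and rates are general industry assumptions, not figures drawn from Marvell’s announcements.

```python
# Illustrative lane arithmetic for 800G and 1.6T Ethernet ports. Real
# implementations vary in lane count, SerDes rate, and encoding overhead.

ports = {
    "800G": {"lanes": 8, "gbps_per_lane": 100},
    "1.6T": {"lanes": 8, "gbps_per_lane": 200},
}

for name, cfg in ports.items():
    total_gbps = cfg["lanes"] * cfg["gbps_per_lane"]
    print(f"{name}: {cfg['lanes']} lanes x {cfg['gbps_per_lane']} Gbps = {total_gbps} Gbps")
```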

Its CPO-based switch silicon and optical integration demos underscored Marvell’s ambition to lead the industry’s transition from pluggable optics to fully integrated optical I/O for next-generation AI fabrics. Marvell’s CPO solutions are strategically significant, as competitors are also accelerating their own CPO roadmaps—signaling that the next major battleground in AI networking, centered on optical integration and bandwidth efficiency, may be only a few years away.

On the other hand, Astera Labs showcased its PCIe 6/CXL 3.0 connectivity through its Aries retimers, Leo memory controllers, and Taurus smart cables, while also highlighting progress in UALink and PCIe 6 security to enable deterministic, low-latency rack-scale communication.

In the short-reach domain, Credo emphasized low-power, serviceable connectivity using active electrical cables (AEC) and LPO optics, offering up to 400 Gbps with better energy efficiency and cost profiles than AOCs. The company’s work on high-speed PCIe links and optical monitoring reflected OCP’s broader focus on operational visibility and maintainability as AI networks become increasingly dense.

For longer-reach, intra–data-center links, Ciena presented next-gen DCI solutions supporting ~2 km optical spans and high-density pluggables with advanced thermal designs.

What to Watch

  • 1 MW Power Challenge: NVIDIA’s Rubin Ultra marks the beginning of a 1 MW-per-rack era, demanding new standards in power delivery, liquid cooling, and facility architecture.
  • Liquid-Cooling Supply Chain Inflection: As rack densities approach 1 MW, expect a structural shift in the cooling value chain—from cold plates and CDUs to immersion and hybrid systems. Track ecosystem partnerships between GPU vendors, integrators, and facility specialists (e.g., Delta Electronics, Vertiv, Flex) as well as associated innovations and adoptions (e.g., 800 VDC power distribution) as they move from pilot to mass deployment.
  • Networking Bottlenecks and Breakthroughs: Watch Marvell alongside Astera Labs (CXL 3.0/PCIe 6 retimers), Credo (AEC/LPO optics), and Ciena (DCI fabrics) as Ethernet-based AI networks scale toward 1.6T speeds. It is also important to pay close attention to Marvell’s CPO, 800G/1.6T Ethernet platforms, and Structera™ CXL, given their significance in the networking world.

Read more about the OCP Global Summit 2025 here.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

AMD OpenAI Partnership: Scale Win or Execution Risk at 6 GW?

Hybrid Bonding at Scale: Powering the Next Era of Semiconductor Packaging

Micron Q4 FY 2025 Earnings Top Estimates on DRAM and HBM Strength

Image Credit: Open Compute Project

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.

