Marvell Unveils CPO Innovations Prepared to Drive XPU Architecture Breakthroughs

Analyst(s): Ron Westfall
Publication Date: January 14, 2025

What is Covered in this Article:

  • Marvell’s any processing unit (XPU) architecture empowers cloud hyperscalers to expand AI server connectivity by increasing bandwidth and extending interconnection distances across multiple server racks.
  • XPUs with integrated Co-Packaged Optics (CPO) augment AI server performance by boosting XPU density from tens within a rack to hundreds across multiple racks.
  • Marvell CPO uses multiple generations of silicon photonics capabilities backed by a track record of more than eight years of shipments and more than 10 billion device hours of field operation.

The News: Marvell Technology, Inc. announced the advancement of its custom XPU architecture with co-packaged optics (CPO) technology.

Analyst Take: Marvell Technology, Inc. kicks off 2025 by unveiling new CPO capabilities focused on advancing its custom XPU architecture. The move builds on the custom high-bandwidth memory (HBM) compute architecture initiative announced at the Marvell Industry Analyst event in December 2024, which aimed to improve the total cost of ownership (TCO), efficiency, and performance of its custom silicon customers’ XPU offerings. Marvell is now enabling customers to integrate CPO into their next-generation custom XPUs and scale up their AI servers from tens of XPUs within a rack, currently linked by copper interconnects, to hundreds across multiple racks using CPO, enhancing AI server performance.

Marvell’s AI accelerator design integrates XPU compute silicon, high-bandwidth memory (HBM), and additional chiplets alongside Marvell’s 3D Silicon Photonics (SiPho) Engines on a single substrate, using advanced packaging techniques, high-speed serializer/deserializer (SerDes) technology, and sophisticated die-to-die interfaces. This approach removes the need for electrical signals to leave the XPU package into copper cables or across a printed circuit board.

Why CPO Technology Is Driving XPU Architecture to the Next Level

Integrated optics technology can transform XPU interconnectivity by offering key advantages over traditional electrical cabling: significantly higher data transfer speeds and the ability to maintain connections across distances up to 100 times greater. These improvements enable AI servers to achieve scale-up connectivity across multiple racks, yielding a system that optimizes both latency and power consumption for more efficient and powerful AI computing infrastructure.

CPO technology combines optical components and electronic elements in a single package, significantly shortening the electrical signal path. This integration offers several key benefits:

  • Reduced signal loss
  • Improved high-speed signal integrity
  • Minimized latency

Also, CPO boosts data throughput by utilizing high-bandwidth silicon photonics optical engines. These engines offer two main advantages over traditional copper connections:

  • Higher data transfer rates
  • Lower susceptibility to electromagnetic interference

This innovative approach to packaging optical and electronic components together enables more efficient and faster data transmission in advanced computing systems.

This integration also improves power efficiency by reducing the need for energy-intensive electrical components such as drivers, repeaters, and retimers. Furthermore, CPO enables XPU-to-XPU connections to span greater distances and achieve higher density, a capability that is particularly advantageous for advanced AI servers requiring both scale-up architecture and high capacity. As a result, CPO technology plays a crucial role in optimizing two key aspects of next-generation accelerated infrastructure:

  • Compute performance
  • Power consumption

By addressing these critical factors, CPO contributes to the creation of more efficient and powerful AI computing systems.

Marvell’s CPO Portfolio Advantages

The Marvell 3D SiPho Engine is an optical technology that supports 200Gbps electrical and optical interfaces, serving as a crucial component for integrating CPO into XPUs. Specifically, the Marvell 6.4T 3D SiPho Engine features 32 channels of 200G electrical and optical interfaces. It consolidates hundreds of components, including modulators, photodetectors, modulator drivers, trans-impedance amplifiers, microcontrollers, and various passive components, into a single, unified device.

In my view, compared to devices with 100G electrical and optical interfaces, the Marvell 6.4T 3D SiPho Engine offers significant improvements in the following areas:

  • Doubles the bandwidth
  • Doubles the input/output bandwidth density
  • Reduces power consumption by 30% per bit
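These figures can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is illustrative only: the channel count and per-lane rates come from the article’s description, and the relative-power normalization is my own assumption, not a datasheet value.

```python
# Illustrative arithmetic for the 6.4T 3D SiPho Engine figures cited above.
CHANNELS = 32        # lanes in the engine, per the article
RATE_200G = 200      # Gbps per channel, current generation
RATE_100G = 100      # Gbps per channel, prior generation

# 32 channels x 200 Gbps = 6,400 Gbps, i.e., the "6.4T" in the product name.
aggregate_gbps = CHANNELS * RATE_200G
print(aggregate_gbps)  # 6400

# Doubling the per-channel rate doubles bandwidth at the same channel count,
# which also doubles I/O bandwidth density for the same package footprint.
bandwidth_ratio = aggregate_gbps / (CHANNELS * RATE_100G)
print(bandwidth_ratio)  # 2.0

# A 30% per-bit power reduction means each bit costs 0.7x the energy.
# At 2x the throughput, total engine power would be roughly
# 2 x 0.7 = 1.4x that of the prior generation for double the bandwidth
# (a hypothetical normalization, assuming power scales linearly with bits).
relative_power = bandwidth_ratio * 0.7
print(relative_power)  # 1.4
```

The takeaway is that the per-bit efficiency gain compounds with the bandwidth doubling: twice the data moves for well under twice the power.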

Marvell’s advanced technology is currently being evaluated by multiple customers for incorporation into their next-generation solutions, highlighting its potential impact on future optical interconnect designs.

I find that Marvell’s disciplined portfolio development has transformed interconnect technology to augment accelerated infrastructure’s performance, scalability, and cost advantages. The portfolio includes:

  • High-performance SerDes and die-to-die technology IP for efficient communication within custom XPUs.
  • PCIe retimers enabling short-reach connections between CPUs and XPUs on the same board.
  • CXL devices for meeting expanding memory requirements.
  • Active Electrical Cable (AEC) and Active Optical Cable (AOC) digital signal processors for short-reach, in-rack connections.
  • PAM optical DSPs for rack-to-rack connections within data centers.
  • Coherent DSPs and data center interconnect modules for long-distance connections between data centers.

As such, this diverse array of solutions fulfills various data infrastructure needs, from chip-level communication to inter-data center connectivity, positioning Marvell as a pacesetter in the field of advanced interconnect technologies.

Looking Ahead

As hyperscalers and organizations scale up the XPU architectures underpinning data center AI server fabrics, Marvell’s CPO portfolio capabilities provide the foundation for greater flexibility and breadth in XPU architectural design. Advances in scale will usher in more creativity in how XPUs can be mixed and matched to attain breakthrough capabilities across key metrics such as performance, HBM/memory distribution, energy efficiency, and AI workload optimization.

Marvell’s CPO proposition can help expand the acceptance of customization in XPU development and AI server implementations. In my view, the industry is working toward standardizing the form factor and design of remote light sources, catalyzing multi-vendor sourcing. This progress in standardization can reduce concerns related to vendor lock-in and increase flexibility in silicon customization.

Moreover, Marvell CPO technology enables a more modular approach to system design, allowing for easier customization and upgrades. For instance, replaceable laser sources can be swapped at the chassis level, improving system maintenance and reliability. CPO can also be integrated with various advanced packaging technologies, such as chip-on-wafer-on-substrate (CoWoS) or system-on-integrated-chips (SoIC), providing more options for customization and optimization.

Overall, I believe Marvell’s CPO portfolio can play an integral role in further demystifying using customization in the XPU architecture design process and outcomes. As a result, the Marvell custom AI accelerator with CPO architecture can further incentivize cloud hyperscalers to develop custom XPUs that will significantly increase the density and performance of their AI servers. The integration of optics directly into XPUs represents a significant advancement in custom accelerated infrastructure, enabling hyperscalers to meet the escalating demands of AI applications through enhanced scalability and optimization.

See the complete Marvell press release on the Marvell site.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Marvell Right Sizes AEC Connections to Meet New AI Acceleration Demands

Marvell Unveils Structera CXL Solutions to Meet Hyperscaler Memory Needs

Marvell HBM Compute Architecture Ready to Transform Cloud AI Acceleration

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
