The News: Marvell, a provider of data infrastructure semiconductor solutions, expanded its connectivity portfolio with the launch of the new Alaska P PCIe retimer product line built to scale data center compute fabrics inside accelerated servers, general-purpose servers, Compute Express Link (CXL) systems, and disaggregated infrastructure. Read the full press release on the Marvell website.
Marvell Sees Time Has Come for Alaska-sized Retimer Innovation
Analyst Take: Marvell is expanding its connectivity portfolio with a new peripheral component interconnect express (PCIe) retimer product line designed to scale the compute fabrics of accelerated infrastructure. The debut offerings, 8- and 16-lane PCIe Gen 6 retimers in the new Alaska P line, are built on Marvell 5nm PAM4 technology and aimed at scaling connections between AI accelerators, GPUs, CPUs, and other components inside servers.
PCIe is the industry standard for inside-server connections between AI accelerators, GPUs, CPUs, and other server components. The move aligns well with AI and machine learning (ML) applications driving data flows inside server systems at substantially higher bandwidth, rapidly fueling demand for PCIe retimers that can deliver the required connection distances at faster speeds. Key to Marvell's decision to enter the PCIe retimer segment is that AI models are doubling their computation requirements roughly every six months and are currently the primary driver of the PCIe roadmap, necessitating PCIe Gen 6 capabilities.
The Retiming is Right for PAM4 PCIe Gen 6 Innovation
I see AI driving bandwidth growth across the full array of connectivity tiers, including data center-to-data center, cluster-to-cluster, server-to-server, and inside-server environments. PCIe Gen 6 is the first PCIe standard to use PAM4 signaling, replacing the non-return-to-zero (NRZ) modulation used over the last two decades. Accordingly, AI server compute fabrics are accelerating their migration to PCIe Gen 6 to attain the faster connections needed between AI accelerators, GPUs, CPUs, and other server components.
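To make the modulation shift concrete, here is a minimal back-of-envelope sketch (my own illustration in Python, not drawn from Marvell or the PCIe specification) showing that PAM4's two bits per symbol lets PCIe Gen 6 double the per-lane data rate while keeping the on-wire symbol rate roughly at Gen 5 levels:

```python
# Back-of-envelope comparison of NRZ vs PAM4 per-lane signaling.
# Transfer rates are the published PCIe per-lane rates; the rest is arithmetic.
GENERATIONS = {
    "PCIe Gen 4 (NRZ)":  (16, 1),   # 16 GT/s, 1 bit per symbol
    "PCIe Gen 5 (NRZ)":  (32, 1),   # 32 GT/s, 1 bit per symbol
    "PCIe Gen 6 (PAM4)": (64, 2),   # 64 GT/s, PAM4 encodes 2 bits per symbol
}

for name, (gt_per_s, bits_per_symbol) in GENERATIONS.items():
    symbol_rate = gt_per_s / bits_per_symbol  # approximate on-wire rate in GBaud
    print(f"{name}: {gt_per_s} GT/s per lane at ~{symbol_rate:.0f} GBaud")

# Gen 6 doubles the per-lane data rate while keeping the symbol rate near
# Gen 5 levels, which is why PCIe is following Ethernet from NRZ to PAM4.
```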
A new category of PCIe retimers is emerging to enable the requisite compute fabric connections, as higher-speed copper connections inside server systems need retiming. The growing number of XPUs (i.e., CPUs, GPUs) per single server computing domain requires new system architectures that can leverage disaggregation to address a new set of performance and power demands.
This presents a challenge for inside-server component connections. PCIe Gen 6 delivers 64 Gbps/lane, in contrast to PCIe Gen 5's 32 Gbps/lane and PCIe Gen 4's 16 Gbps/lane. At Gen 6 speeds, however, connection distances between AI accelerators, GPUs, and CPUs are potentially limited to less than four inches, whereas existing solutions support around four inches at Gen 5 and up to around eight inches at Gen 4.
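For context on what those per-lane rates mean in aggregate, the following sketch (again a simple illustration of my own, ignoring encoding and protocol overhead) converts them to raw per-direction bandwidth for the 8- and 16-lane configurations the Alaska P line targets:

```python
# Raw per-direction bandwidth from the per-lane rates quoted above,
# ignoring encoding and protocol overhead.
LANE_RATES_GBPS = {"PCIe Gen 4": 16, "PCIe Gen 5": 32, "PCIe Gen 6": 64}

for gen, gbps_per_lane in LANE_RATES_GBPS.items():
    for lanes in (8, 16):  # the two Alaska P configurations
        gbytes_per_s = gbps_per_lane * lanes / 8  # Gbps -> GB/s
        print(f"{gen} x{lanes}: ~{gbytes_per_s:.0f} GB/s per direction")
```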
The Marvell Alaska P PCIe retimer directly addresses this distance gap by extending PCIe Gen 6 connection distances more than fivefold. Specifically, a single Alaska P retimer enables reaches of more than 16 inches for PCIe Gen 6 64 Gbps/lane implementations, and more than 18 inches for PCIe Gen 5 32 Gbps/lane and PCIe Gen 4 16 Gbps/lane.
The new solution is based on a 5nm PAM4 PCIe Gen 6 design that supports both 16-lane and 8-lane products and requires only 10W for the 16-lane device, delivering breakthrough power efficiency. In my view, the company's in-house PAM4 IP acumen is well established and field proven, as it is shipping in multiple Marvell 5nm high-volume products and delivers key capabilities such as SerDes supporting >40dB of insertion loss compensation. Moreover, Marvell is capitalizing on its experience with the Ethernet transition to PAM4 as PCIe moves away from the NRZ modulation used in 16G Gen 4 and 32G Gen 5 products to the PAM4 modulation needed for 64G Gen 6 and, further out, 128G Gen 7 products.
As a result, PCIe retimers are becoming essential for AI servers, with multiple retimers per AI server and more than one retimer per XPU. This fully aligns with the expanding use of CXL-enabled disaggregated memory, with PCIe retimers enabling server CXL memory expansion. Today, a single-rack approach supports 1N of XPUs through direct attach copper cable (DAC) technology at ~3m interconnects. On the horizon, two racks can support 2N of XPUs through active electrical cable (AEC) technology at ~7m interconnects. Further out, multiple racks can support 4N of XPUs through active optical cable (AOC) technology at ~30m interconnects. PCIe retimers are essential for enabling these interconnects across emerging inside-the-rack and multi-rack PCIe use cases.
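As a rough way to summarize those interconnect tiers, the following sketch (my own illustration; the cable types, reaches, and XPU scaling factors are simply the figures quoted above) lays them out as a simple lookup:

```python
# The interconnect tiers described above as a simple lookup:
# cable type, approximate reach (meters), rack span, and relative XPU scale.
TIERS = [
    ("DAC (direct attach copper)",    3,  "single rack",    "1N XPUs"),
    ("AEC (active electrical cable)", 7,  "two racks",      "2N XPUs"),
    ("AOC (active optical cable)",    30, "multiple racks", "4N XPUs"),
]

def tier_for_reach(meters: float):
    """Return the first tier whose approximate reach covers the requested distance."""
    for name, reach, span, scale in TIERS:
        if meters <= reach:
            return name, span, scale
    return None  # beyond the reaches described above

print(tier_for_reach(5))   # AEC: two racks, 2N XPUs
print(tier_for_reach(20))  # AOC: multiple racks, 4N XPUs
```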
The solution is purpose-built to scale inside-server compute fabric connections, including between AI accelerators, GPUs, CPUs, and other server components, as well as on motherboards, accelerator baseboards, and across copper and optical cables. It also supports CXL 3.x for rapidly evolving disaggregated systems, alongside advanced diagnostics and telemetry, which I expect can strengthen Marvell sales cycles.
Solidifying the Alaska P PCIe retimer launch is the robust ecosystem support that Marvell has already assembled for the new offering, encompassing top-tier data center and AI infrastructure suppliers Intel, Arm, AMD, Innolight, and TE. This bodes well for Marvell's ability to use its channels to help catalyze adoption of its PCIe retimer solutions, which are expected to be deployed commercially by mid-2025.
Key Takeaway: Marvell Ready to Turbocharge PCIe and CXL Compute Fabric Environments
I believe the Alaska P PCIe launch positions Marvell to drive PAM4 PCIe innovation throughout AI server compute fabrics. Marvell is building on more than a decade of PAM4 technology expertise, exemplified by its 5nm PAM4 IP portfolio, and is capitalizing on a major industry inflection point as compute fabrics for PCIe and CXL transition from NRZ to PAM4.
In my view, Marvell now offers a comprehensive data center connectivity portfolio that is key to meeting top-priority demands across the connectivity continuum: between data centers, with data center interconnect (DCI) networks (e.g., Marvell coherent DSPs); inside data centers, with frontend and backend networks (e.g., Marvell PAM4 DSPs); and inside servers, with compute fabrics (the Alaska P PCIe retimer).
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Marvell Q4 and Fiscal Year 2024: AI Takes Center Stage
OFC 2024: Marvell Displays Accelerated Infrastructure Portfolio Gems
Marvell TSMC: Stimulating 2nm Accelerated Infrastructure Innovation
Author Information
Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William and Mary.