OFC 2025: Marvell Interconnecting the AI Era

Analyst(s): Ron Westfall
Publication Date: April 10, 2025

At OFC 2025, Marvell unveils its comprehensive portfolio and robust ecosystem support designed to deliver higher performance, customization, and flexibility for rack-, row-, and cloud-scale AI networks.

What is Covered in this Article:

  • At OFC 2025, Marvell showcased the strides it is making in advancing its interconnect portfolio to support scale-up and scale-out fabrics, particularly for AI-driven and cloud-based infrastructure.
  • Marvell’s 400G/lane technology with a complete electrical-to-optical link operating at 224 Gbaud is a breakthrough since it fundamentally transforms the landscape of high-speed data connectivity, particularly for AI and cloud infrastructure.
  • Marvell’s new 1.6T silicon photonics light engine, integrated into a 1.6T LPO reference design module operating at 200G per lane, represents a major innovation by directly addressing the evolving demands of AI-driven data centers and high-performance computing environments.

The News: Marvell, a supplier of data infrastructure semiconductor solutions, showcased its interconnect portfolio for scale-up and scale-out AI deployments at OFC 2025 in San Francisco, CA.

Analyst Take: At OFC 2025, Marvell demonstrated why interconnect technology, the semiconductors, software, and systems that serve as the nervous system for reliably and rapidly transmitting data across accelerated infrastructure, is transforming scale-up and scale-out AI deployments. Interconnect technology is undergoing a fundamental transformation, advancing to support the higher speeds, extended distances, reduced latencies, and increased data traffic volumes essential for training, inference, and other AI and cloud services in the coming years.

For scale-up fabrics in racks and rows, co-packaged optics (CPO), linear pluggable optics (LPO), co-packaged copper (CPC), active electrical cable (AEC), and PCIe retimer interconnect technologies will pave the way for row-scale computing systems containing hundreds of XPUs and CPUs, spread over multiple racks several meters apart, that can outperform today’s systems in performance-per-watt and time-to-completion. New scale-out networking technologies such as 400G/lane networking and coherent-lite DSPs can greatly increase the carrying capacity of networks while extending the reach of links from meters to kilometers for more powerful, versatile cloud-scale infrastructure.

Marvell: Portfolio Innovations Fuel Ecosystem-wide Interconnect Innovations

Products, technology, and ecosystem initiatives Marvell showcased at OFC 2025 included:

  • 400G PAM4 Technology. A live demonstration of the company’s breakthrough 400G/lane technology with a complete electrical-to-optical link operating at 224 Gbaud. I see this 400G capability as a critical step towards 3.2T optical interconnects and 204.8T switches.
  • Co-Packaged Platforms for AI Scale Up and Scale Out. Marvell demonstrated its CPO and co-packaged copper (CPC) technologies for achieving higher interconnect densities and longer reach. Marvell also showcased system-level XPU and switch implementations supporting rapid servicing and ease of manufacture.
  • 1.6T Silicon Photonics Light Engine for Scale-up Networks. Marvell showed its new 1.6T silicon photonics light engine integrated into a 1.6T LPO reference design module operating at 200G/lane.
  • 200G/Lambda 1.6T PAM4 Optical Interconnect for AI Scale Out. Marvell demonstrated Ara, its innovative 3nm 1.6T PAM4 interconnect platform featuring 200 Gbps electrical and optical interfaces.
  • 800G ZR/ZR+ Pluggable Optics for Multi-Site AI Training. A live demonstration of the COLORZ 800 pluggable DCI module operating in several modes, including 800G ZR and 800G ZR+ with probabilistic constellation shaping, allowing for 800 Gbps transmission over distances of up to 1,000 km.
  • 200G/Lane 1.6T AEC for AI Scale Up and Scale Out. A live demonstration showing production-ready Alaska A AEC DSPs operating at 8 x 200G delivering 1.6T total bandwidth for multi-rack scale-up systems.
  • PCIe Gen 6 and PCIe Gen 7 SerDes End-to-End Over Optics. A hands-on demonstration of an Alaska P PCIe Gen 6 retimer driving PCIe signals between the root complex and endpoint over optical fiber, using a 100G per lane LPO module supplied by TeraHop. A second demonstration highlighted the performance of a 128G SerDes circuit designed for integration into future PCIe Gen 7 retimer devices, using a TeraHop 200G per lane LPO module.
  • 51.2T Scale-out Fabric. Showcasing the switching and interconnect capabilities of its accelerated infrastructure portfolio for AI clusters, this demonstration used CPU-powered servers as compute elements to show data traffic traversing a model AI cluster. The demonstration included RDMA-enabled network interface cards; copper and optical interconnects, including 7m active electrical cables; and SONiC-enabled switches representing frontend top-of-rack, middle-of-row, and end-of-row leaf and spine applications as well as backend switch fabrics.

Drill Down: Marvell Unleashing 400G PAM4 Innovation

In my view, Marvell’s 400G/lane technology with a complete electrical-to-optical link operating at 224 Gbaud is a breakthrough for several key reasons. It fundamentally transforms the landscape of high-speed data connectivity, particularly for AI and cloud infrastructure.

First, it achieves an unprecedented data rate per lane, 400 Gbps, by leveraging a symbol rate of 224 Gbaud with PAM4 modulation (pulse amplitude modulation with four levels), in which each symbol carries 2 bits of data. This doubles the throughput of today’s deployed 200G/lane systems and quadruples that of the 100G/lane infrastructure still widely used. Such a leap in bandwidth density enables networks to handle significantly more data within the same physical footprint, a critical factor as demands from AI, machine learning, and cloud computing skyrocket.
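
To make the lane arithmetic concrete, here is a minimal sketch of the PAM4 math; the split between the raw symbol rate and the usable payload rate is my illustrative assumption about FEC and coding overhead, not a figure Marvell disclosed.

```python
# Back-of-the-envelope PAM4 lane math (illustrative sketch).
symbol_rate_gbaud = 224      # symbols per second per lane, in gigabaud
bits_per_symbol = 2          # PAM4 encodes 2 bits in each 4-level symbol

raw_rate_gbps = symbol_rate_gbaud * bits_per_symbol   # 448 Gbps on the wire
payload_rate_gbps = 400                               # usable lane rate

# The ~11% gap between raw and payload rates is assumed here to cover
# FEC and line-coding overhead (an assumption, not a disclosed figure).
overhead_pct = (raw_rate_gbps - payload_rate_gbps) / raw_rate_gbps * 100
print(f"raw: {raw_rate_gbps} Gbps, payload: {payload_rate_gbps} Gbps, "
      f"overhead budget: {overhead_pct:.1f}%")
```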

Also, this technology was once thought impractical, if not impossible, due to the immense technical challenges involved. Operating at 224 Gbaud requires overcoming signal integrity issues, such as noise, jitter, and attenuation, across both electrical and optical domains. Marvell’s success in proving this in real silicon, not just in theory, demonstrates a mastery of advanced DSP, trans-impedance amplifiers, modulator drivers, and photonics. This complete link, from electrical input to optical output, ensures seamless integration across the network stack, a feat that pushes the boundaries of what was previously achievable.

Moreover, it paves the way for next-generation network architectures, such as 3.2 Tbps optical interconnects and 204.8 Tbps switches. These are essential for scaling data centers to meet the exponential growth in data-intensive workloads, like AI training and inference, where massive datasets need to be processed quickly and efficiently. Doubling the per-lane speed reduces the number of lanes or fibers needed, lowering power consumption, cost, and complexity while maximizing performance, a crucial advantage for hyperscale data centers.
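
As a rough sketch of how those per-lane speeds roll up, the lane counts below are my assumptions, chosen only to be consistent with the module and switch capacities cited above:

```python
# How 400G lanes aggregate into module and switch capacities (illustrative).
lane_gbps = 400
module_lanes = 8       # assumed 8-lane optical module
switch_lanes = 512     # assumed 512-SerDes switch ASIC

print(f"module: {module_lanes} x {lane_gbps}G = "
      f"{module_lanes * lane_gbps / 1000:.1f}T")   # 3.2T
print(f"switch: {switch_lanes} x {lane_gbps}G = "
      f"{switch_lanes * lane_gbps / 1000:.1f}T")   # 204.8T
```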

Finally, Marvell’s collaboration with ecosystem partners such as Lumentum and TeraHop to demonstrate this technology at scale highlights its real-world viability. This affirms the offering is a practical solution poised to redefine network connectivity, enabling faster, more efficient, and more scalable infrastructure for the future. In essence, this breakthrough marks a turning point in PAM4-based connectivity, setting a new standard for what high-speed networks can achieve.

Drill Down: Marvell Pushes Boundaries with Silicon Photonics Light Engine Innovation

From my perspective, Marvell’s new 1.6T silicon photonics light engine, integrated into a 1.6T LPO reference design module operating at 200G per lane, represents a major innovation for key reasons that address the evolving demands of AI-driven data centers and high-performance computing environments.

First, this technology achieves a significant leap in bandwidth capacity. By operating at 200G per lane across eight lanes, it delivers a total throughput of 1.6 Tbps in a single module. This doubles the bandwidth of many existing 800G solutions, enabling the faster data transfer rates critical for scaling AI clusters and cloud infrastructure, where massive datasets and real-time processing are the norm.
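
A quick sketch of the lane math shows why doubling the per-lane rate doubles module bandwidth in the same footprint; the 8 x 100G breakdown of an 800G module is a common configuration I am assuming for comparison:

```python
# Same eight-lane module footprint at two per-lane rates (illustrative).
lanes = 8
for lane_gbps in (100, 200):
    total_tbps = lanes * lane_gbps / 1000
    print(f"{lanes} x {lane_gbps}G = {total_tbps:.1f}T")
# 8 x 100G = 0.8T  (a common 800G module configuration, assumed here)
# 8 x 200G = 1.6T  (the light engine's configuration per the article)
```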

Second, its highly integrated design sets it apart. The light engine consolidates hundreds of components, such as modulators, photodetectors, linear drivers, trans-impedance amplifiers (TIAs), and microcontrollers, into a single silicon photonics package. This integration reduces complexity, shrinks the physical footprint, and simplifies system design compared to traditional discrete solutions. For module vendors and hyperscalers, this means faster time-to-market and easier deployment in rack-scale AI server setups.

Another critical innovation is its power efficiency. The 1.6T light engine achieves a low power consumption of less than 5 picojoules per bit (pJ/bit), including laser power, under typical conditions. This is a substantial improvement over earlier technologies, addressing a major challenge in data centers where power usage directly impacts operational costs and sustainability goals. By reducing power per bit, it supports the shift from copper to optical interconnects without the energy overhead of traditional optics.
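
Translating that efficiency figure into watts helps show why it matters at scale; the fleet size below is a hypothetical chosen only for illustration:

```python
# Converting pJ/bit efficiency into module and fleet power (illustrative).
throughput_tbps = 1.6
energy_pj_per_bit = 5.0        # "< 5 pJ/bit including laser" per the article

# watts = (bits per second) x (joules per bit)
module_watts = (throughput_tbps * 1e12) * (energy_pj_per_bit * 1e-12)
print(f"per module: ~{module_watts:.0f} W")    # ~8 W upper bound

fleet_modules = 10_000         # hypothetical deployment size
fleet_kw = fleet_modules * module_watts / 1000
print(f"per {fleet_modules:,} modules: ~{fleet_kw:.0f} kW")
```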

The use of LPO further enhances its value. Unlike fully retimed optics, LPO eliminates the need for power-hungry digital signal processing (DSP) retiming on the optical side, relying instead on the host device’s signal processing. This results in lower latency and power consumption while maintaining performance, making it well-suited for short-reach, rack-scale interconnects where copper cables fall short in reach and bandwidth.

Additionally, the 200G-per-lane capability aligns with the industry’s transition to higher-speed interfaces and future-proofing infrastructure for next-generation AI and cloud applications. It bridges the gap between current rack-scale solutions and emerging row-scale architectures, offering longer reach than passive copper while remaining cost-effective compared to CPO.

Finally, this innovation accelerates the shift from copper to optical interconnects in data centers. As AI workloads push compute density and scale, traditional copper solutions struggle with signal degradation over distance. Marvell’s light engine, with its low latency, extended reach, and high bandwidth, provides a practical alternative, enabling more flexible and efficient network designs.

In essence, this 1.6T silicon photonics light engine combines unprecedented bandwidth, integration, efficiency, and adaptability, positioning it as a cornerstone for scaling data center infrastructure to meet the demands of modern AI and hyperscale computing.

Looking Ahead

Overall, I believe Marvell is making significant strides in advancing its interconnect portfolio to support scale-up and scale-out fabrics, particularly for AI-driven and cloud-based infrastructure. Marvell has introduced groundbreaking technologies, such as the industry’s first PCIe Gen 6 over optics, demonstrated in collaboration with TeraHop. This advancement supports faster data transfer over longer distances (e.g., 10m optical cables), crucial for scaling AI infrastructure across multiple racks or rows.

As such, the company is enhancing its interconnect solutions to meet the demands of rack-, row-, and cloud-scale AI networks, including technologies designed for higher performance, greater customization, and increased flexibility, addressing the rapidly expanding scaling needs of AI clusters and data centers.

What to Watch:

  • As AI scale-up domains expand from rack-scale to row-scale, Marvell’s ecosystem role in driving the shift from passive copper interconnects to optimized optical solutions will strengthen.
  • Marvell’s portfolio spans multiple technologies, including CPO, LPO, CPC, AEC, and PCIe retimers, which together will prove integral in accelerating row-scale computing systems with hundreds of processors (XPUs and CPUs), improving performance-per-watt and reducing time-to-completion for AI workloads.
  • Marvell’s 51.2T scale-out fabric demonstration highlights its ability to handle massive data traffic in AI clusters, providing ecosystem impetus to broaden advanced switching and interconnect capabilities, such as RDMA-enabled network interface cards, optical and copper interconnects, and SONiC-enabled switches for both frontend and backend applications.

See the complete summary of Marvell’s OFC announcements on the Marvell website.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Marvell HBM Compute Architecture Ready to Transform Cloud AI Acceleration

Marvell Q4 FY 2025 Reports 27% Revenue Growth, AI Custom Chips Drive Momentum

Marvell Unveils CPO Innovations Prepared to Drive XPU Architecture Breakthroughs

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority on tracking the evolution of, and identifying the key disruptive trends within, the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas, and a Bachelor of Arts in political science/government from William and Mary.
