
Marvell Industry Analyst Day 2023: Accelerated Computing Takes Off

The News: Marvell conducted its annual Industry Analyst Day 2023 on December 5 at its Santa Clara, California, headquarters. The event provided an in-depth look into Marvell’s strategic vision to develop infrastructure silicon technologies for accelerated computing. Read the full press release on the Marvell website.

Analyst Take: Marvell adroitly sharpened its organization-wide messaging and vision at its annual industry analyst event. Key to Marvell’s portfolio development strategy and vision is that accelerated infrastructure is integral to scaling AI. Sandeep Bharathi, Marvell’s Chief Development Officer for Central Engineering, offered Marvell’s perspective on what it takes to deliver a silicon platform for the AI world that can provide the foundation for accelerated infrastructure.

That AI is accelerating the computing cadence is undisputed. I see computing capacity increasingly driving connectivity innovation: the compute used to train AI systems is now doubling every 3-4 months, a far more frenetic pace than the roughly 2-year doubling interval that characterized the classical Moore’s Law era (pre-2012).

Image Source: ISSCC adaptation from OpenAI, The Economist
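To put that difference in cadence in perspective, here is a minimal back-of-the-envelope sketch in Python. The 3.5-month doubling period is an assumption drawn from the 3-4 month range cited above, and the 24-month period stands in for the classical Moore’s Law pace; the numbers are illustrative only.

```python
# Illustrative arithmetic only: compare growth implied by a ~3.5-month
# doubling period (AI training compute) with a ~24-month doubling period
# (classical Moore's Law pace). Both periods are assumptions for the sketch.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months_elapsed` given a doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)

horizon = 24  # look two years ahead

ai_growth = growth_factor(horizon, 3.5)      # assumed 3.5-month doubling
moore_growth = growth_factor(horizon, 24.0)  # classical 2-year doubling

print(f"AI training compute over {horizon} months: ~{ai_growth:,.0f}x")
print(f"Moore's Law pace over {horizon} months:    ~{moore_growth:,.0f}x")
# Roughly 2^(24/3.5) ≈ 116x versus 2x over the same two-year window.
```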

In the post-Moore’s Law epoch, new approaches are essential to optimizing normalized cost per transistor, using scaling and integration, materials innovation, and advanced packaging to ensure competitiveness as the semiconductor industry crosses the sub-2 nanometer (nm) threshold.

As such, I expect that advanced high-performance complementary metal oxide semiconductor (CMOS) technologies, including silicon-germanium (SiGe) bipolar CMOS (BiCMOS), silicon photonics (SiPho), and power management integrated circuits (PMICs), are integral to providing the foundation for interconnect innovation across digital environments. Moreover, demonstrating leadership in signaling technologies such as non-return-to-zero (NRZ), pulse-amplitude modulation 4 (PAM4), and coherent digital signal processing (DSP) is critical to broadening ecosystem influence and adoption of accelerated computing.
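As a simple illustration of why PAM4 signaling matters here, the Python sketch below shows how encoding 2 bits per symbol doubles per-lane throughput relative to NRZ at the same symbol rate. The 56.25 GBaud figure is a representative assumption for 112G-class lanes, not a Marvell specification.

```python
# Minimal sketch: NRZ encodes 1 bit per symbol (2 levels), PAM4 encodes
# 2 bits per symbol (4 amplitude levels), so the same symbol (baud) rate
# carries twice the raw bit rate.
import math

def line_rate_gbps(baud_rate_gbaud: float, levels: int) -> float:
    """Raw line rate = symbol rate x bits per symbol (log2 of levels)."""
    return baud_rate_gbaud * math.log2(levels)

symbol_rate = 56.25  # GBaud, a representative rate assumed for 112G-class lanes

print(f"NRZ  @ {symbol_rate} GBaud: {line_rate_gbps(symbol_rate, 2):.2f} Gbps/lane")
print(f"PAM4 @ {symbol_rate} GBaud: {line_rate_gbps(symbol_rate, 4):.2f} Gbps/lane")
# PAM4 reaches ~112 Gbps/lane at the same baud rate where NRZ yields ~56 Gbps.
```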

Advanced packaging initiatives are also key to differentiating the advanced substrates, 2.5D/3D integration, connectors and sockets, advanced thermal solutions, and co-packaged optics (CPO) components of accelerated infrastructure solutions. This includes enabling large 2.5D interposers and ABF substrates with chiplets for complex ASICs, 3D/substrate products and technologies, and CPO technology.

The TSMC Connection: Foundation for 3nm/5nm Process Innovations

Marvell has also developed and demonstrated high-speed, ultra-high-bandwidth silicon interconnects produced on Taiwan Semiconductor Manufacturing Company’s (TSMC) 3nm process. Marvell’s silicon building blocks in this node include 112G XSR SerDes (serializer/de-serializer), Long Reach SerDes, PCIe Gen 6/CXL 3.0 SerDes, and a 240 Tbps parallel die-to-die interconnect.

The building blocks are part of Marvell’s continued execution of its strategy to develop a comprehensive silicon IP portfolio for designing chips that can increase the bandwidth, performance, and energy efficiency of rapidly evolving data infrastructure. These technologies also support all semiconductor packaging options from standard and low-cost Redistribution Layers (RDL) to silicon-based high-density interconnect.

From my view, Marvell achieved a semiconductor industry breakthrough by sampling and commercially releasing 112G SerDes, following its work advancing the market presence of its data infrastructure portfolio based on TSMC’s 5nm process. To review, SerDes and parallel interconnects serve as high-speed pathways for exchanging data between chips or between silicon components in chiplet-based designs. Together with 2.5D and 3D packaging, these technologies are built to eliminate system-level bottlenecks in the most intricate semiconductor designs. SerDes also reduces pins, traces, and circuit board space, which cuts expenses. A rack in a hyperscale data center, for instance, can contain tens of thousands of SerDes links.
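A rough, hypothetical calculation illustrates the pin and trace savings described above. The per-pin and per-lane rates below are illustrative assumptions for the sketch, not Marvell specifications.

```python
# Back-of-the-envelope sketch (hypothetical numbers): moving a fixed aggregate
# bandwidth over a wide, slower parallel bus versus a handful of serial lanes.
import math

aggregate_gbps = 800     # target chip-to-chip bandwidth (assumed)
parallel_pin_gbps = 2    # assumed per-pin rate for a wide parallel bus
serdes_lane_gbps = 112   # per-lane rate for a 112G SerDes

parallel_pins = math.ceil(aggregate_gbps / parallel_pin_gbps)
serdes_lanes = math.ceil(aggregate_gbps / serdes_lane_gbps)

print(f"Parallel bus @ {parallel_pin_gbps} Gbps/pin: {parallel_pins} data pins")
print(f"112G SerDes lanes: {serdes_lanes} lanes")
# The serial approach needs far fewer signal pins, traces, and board area,
# which is the cost argument made in the paragraph above.
```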

For SerDes process node innovations, Marvell’s 5nm SerDes delivered 224G long-reach (LR) capability in 2023, and four instances of 224G SerDes can be combined to enable optical DSPs with integrated optical drivers. For TSMC 3nm, Marvell’s SerDes portfolio provides the critical building blocks for cloud-optimized silicon.

Marvell incorporates its SerDes and interconnect technologies into its flagship silicon solutions including Teralynx switches, PAM4 and coherent DSPs, Alaska Ethernet physical layer (PHY) devices, OCTEON processors, Bravera storage controllers, Brightlane automotive Ethernet chipsets, and custom ASICs. Moving to a 3nm process enables engineers to lower the cost and power consumption of chips and computing systems while maintaining signal integrity and performance.

In conjunction with the event, Marvell launched two cloud-optimized PAM4 optical DSPs—Perseus and Spica Gen2. I see Perseus providing a breakthrough 400/800 Gbps 5nm PAM4 optical DSP that integrates both a transimpedance amplifier (TIA) and a vertical-cavity surface-emitting laser (VCSEL) driver into a single die to reduce power, cost, and space. By monolithically integrating components, module manufacturers can reduce manufacturing complexity and accelerate scaling of their offerings. Perseus is also available with an integrated silicon photonics driver.

Perseus is optimized for both active optical cables (AOCs), which replace passive copper cables for connecting equipment within racks, and short-reach single mode and multi-mode optical interconnects for distances of 5 to 500 meters.

In relation to Spica Gen2, Marvell expanded its long-standing relationship with NVIDIA by playing an integral role in NVIDIA’s Israel-1 hyperscale generative AI supercomputer, which pairs the NVIDIA HGX H100 eight-GPU platform, BlueField-3 DPUs, and Spectrum-X networking with Marvell-enabled optical interconnects. Israel-1 is purpose-designed to reduce run times of massive transformer-based generative AI models and is expected to be partly operational by the end of 2023.

Specifically, NVIDIA pluggable optical modules use Marvell Spica optical DSPs to move data at up to 800 Gbps on the server side, then plug into NVIDIA BlueField-3 SuperNICs. At the other end of the fiber, the same modules connect to NVIDIA Spectrum-4 switches. As a result, Israel-1 is expected to deliver up to eight exaflops of AI computing performance, making it one of the planet’s fastest AI supercomputers and vital to meeting the staggering scaling challenges of AI.

The Foundation for Accelerated Infrastructure

Both Perseus and Spica Gen2 are based on the Marvell PAM4 optical DSP architecture, currently the most widely deployed optical DSP in cloud data centers and AI clusters. Linking back to building a silicon platform in the AI world that provides the foundation for accelerated infrastructure, Perseus supports quad/octal 100 Gbps/channel optical PAM4 DSP and an integrated linear drive (VCSEL and SiPho PIC) that further differentiate the offering.
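A quick sketch ties the quad/octal channel counts to the 400/800 Gbps module rates cited earlier for Perseus; the arithmetic is simply channel count times the 100 Gbps per-channel rate.

```python
# Simple sketch relating per-channel rate to module throughput for a
# PAM4 optical DSP: quad (4) or octal (8) channels at 100 Gbps per channel.

def module_rate_gbps(channels: int, per_channel_gbps: float = 100) -> float:
    """Aggregate module throughput = channel count x per-channel rate."""
    return channels * per_channel_gbps

for channels in (4, 8):
    print(f"{channels} x 100 Gbps/channel -> {module_rate_gbps(channels):.0f} Gbps module")
# Matches the 400/800 Gbps module rates cited for Perseus above.
```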

Likewise, Spica Gen2 benefits from the qualified and proven 800G Spica DSP architecture for data center connectivity, a low-power DSP with integrated drivers supporting EML and SiPho differentiation, a bare-die design for cost-sensitive applications, and small BGA package designs for faster time to market.

Key Takeaway: Marvell Is Ready to Fuel Infrastructure Silicon Innovation for Accelerated Computing

I believe that Marvell is solidly positioned to drive infrastructure silicon innovation for accelerated computing throughout 2024 and beyond, especially as the advanced computing opportunity expands swiftly, driven primarily by AI’s unparalleled ascent. Marvell’s portfolio development and marketing focus on process, connectivity IP, and packaging underpins its ability to execute and gain first-to-market advantages in fulfilling rapidly evolving AI and other complex networking and workload demands.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Marvell Q3 Fiscal 2024: Datacenter Business Shines

DSPs Go the Distance: Marvell’s Strategy for Connecting Carriers, Clouds – Futurum Tech Webcast

Marvell Q2 Fiscal 2024: AI and Cloud Are Top Growth Drivers

Author Information

Ron is a customer-focused research expert and analyst with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William & Mary.
