Marvell Pumps up the Teralynx 10 Ethernet Switch Volume

The News: Marvell, a provider of data infrastructure semiconductor solutions, announced the Marvell Teralynx 10 Ethernet switch device is in volume production with customer deployment underway. Read the full press release on the Marvell website.

Analyst Take: Marvell is spotlighting that its Teralynx 10 51.2 Tbps Ethernet switch is entering volume production for global AI cloud deployments. The Teralynx 10 switch is a low-power, programmable 51.2 Tbps Ethernet device with breakthrough low latency and performance for training, inference, general-purpose compute, and other workloads to scale accelerated infrastructure in cloud data centers. The Teralynx 10 Ethernet switch offers new performance and high scale for the most advanced cloud and AI workloads:

  • Lowest latency: Demonstrates 51.2 terabit-per-second Ethernet throughput with latency as low as 500 ns, and sub-600 ns latency across all packet sizes. Low latency is essential for meeting the demands of AI, ML, and distributed workloads and directly impacts job completion time (JCT) and algorithmic efficiency.
  • Top-tier industry radix: 512 switching radix enables operators to reduce the number of switch tiers in large clusters, yielding dramatically lower power and total cost of ownership (TCO).
  • Low power consumption: The switch consumes 1 watt per 100 gigabits-per-second of bandwidth.
  • Programmable: A switch architecture that is fully programmable with no impact on packet processing capacity or latency. The Teralynx 10 device can serve multiple use cases at 51.2 Tbps, giving data center operators investment protection as networking technologies evolve to handle new protocols.
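The stated power efficiency implies a total device budget that is easy to sanity-check: at 1 watt per 100 Gbps, a 51.2 Tbps device draws roughly 512 W. A minimal arithmetic sketch, using only the figures from the bullets above:

```python
# Sanity-check the headline figures from the bullets above.
THROUGHPUT_GBPS = 51_200      # 51.2 Tbps expressed in Gbps
WATTS_PER_100_GBPS = 1.0      # stated power efficiency

total_power_w = THROUGHPUT_GBPS / 100 * WATTS_PER_100_GBPS
print(f"Implied device power: {total_power_w:.0f} W")  # prints 512 W
```

That 512 W figure is consistent with the sub-520W power scenarios cited in the independent validation discussed below.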

From my view, the Marvell Teralynx 10 offering directly addresses the accelerating demand for switches across large AI cluster environments. Today, for instance, AI clusters are scaled with up to 640 switches supporting up to 25K xPUs. On the horizon, networks are expected to expand exponentially as cluster sizes increase dramatically, with deployments of 2.5K switches supporting up to 100K xPUs and 40K switches supporting up to 1 million xPUs already being mapped out.
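The deployment figures above also show how quickly switch count outpaces xPU count as clusters grow. A quick sketch, using only the numbers cited in the paragraph:

```python
# (switches, xPUs) deployment tiers as cited in the paragraph above
tiers = [(640, 25_000), (2_500, 100_000), (40_000, 1_000_000)]

for switches, xpus in tiers:
    print(f"{xpus:>9,} xPUs -> {switches:>6,} switches "
          f"({switches / xpus:.4f} switches per xPU)")
```

The switch-to-xPU ratio climbs from roughly 0.026 at the 25K-xPU scale to 0.04 at the 1M-xPU scale, i.e., network growth outpaces compute growth as clusters expand.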

Marvell’s Teralynx 10 Ethernet switch enables the clean-sheet architecture vital to ensuring cloud data centers can fulfill the unique demands of optimizing AI clusters. I find the offering delivers on this by striking a robust balance across low-latency, programmability, high-bandwidth, and low-power demands, yielding an AI cloud switch that minimizes compromises.

AI calls for deterministic low latency to ensure higher-performance compute across accelerated infrastructure fabrics that use many connected processors to meet specific workload demands. This means low latency under any condition is key to predictable fabric performance. The low-latency Teralynx switch can give cloud operators the ability to reduce operating expenses and increase their capacity for revenue-generating activity.

As a result, I find Marvell’s 512 switching radix can have a net positive network-level impact on latency, cost, and power metrics. For example, compared with a 256 radix in a 64K cluster, the 512 radix delivers:

  • Up to 40% lower latency, by replacing 5-hop paths with 3-hop paths
  • Up to 44% fewer connections, using only 80K connections versus 144K
  • Up to 40% fewer switches, requiring only 768 switches versus 1,280
  • 33% fewer networking layers, using two layers instead of three
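The percentage figures in that comparison follow directly from the raw counts; a minimal check, with values taken from the comparison above:

```python
# radix-256 (3-tier) vs. radix-512 (2-tier) in a 64K cluster,
# raw counts as cited in the text above
r256 = {"hops": 5, "connections": 144_000, "switches": 1_280, "layers": 3}
r512 = {"hops": 3, "connections":  80_000, "switches":   768, "layers": 2}

for metric in r256:
    reduction = 1 - r512[metric] / r256[metric]
    print(f"{metric:>11}: {r256[metric]:>7,} -> {r512[metric]:>7,} "
          f"({reduction:.0%} fewer)")
```

The loop reproduces the cited reductions: 40% fewer hops, 44% fewer connections, 40% fewer switches, and 33% fewer layers.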

For all-critical power consumption advances, the 51.2 Tbps Teralynx 10 delivers up to 50% lower power consumption than the 12.8 Tbps Teralynx 7 on a watts per 100GbE basis. Marvell’s power-efficient architecture credentials are bolstered by its portfolio-wide 5nm process capabilities and by Delta Networks’ independent validation of Marvell’s power consumption outcomes across typical <520W power scenarios.

Marvell Teralynx 10 is reconfigurable for cross-cloud applications enabling one device to serve multiple data center use cases such as AI cluster, data center edge, data center leaf/spine, and data center top of rack (ToR) applications. The solution combines silicon, system (i.e., high-speed characterized reference designs), and software (e.g., open-source SONiC/SAI, ODM/OEM design tools) to provide a complete deployment-ready solution vital to meeting fast-expanding workload demands.

Marvell Teralynx 10 Capitalizes on Industry Shift to Open Operating System Software

As a testament to the company’s growing influence in the market shift to open networking platforms, Marvell has been an active member of Software for Open Networking in the Cloud (SONiC), holding a governing board position and seats on multiple technical committees including chairing the platform working group. In addition to the Teralynx switch, other contributions from Marvell include SONiC running on Arm-based systems, aimed at lowering customer TCO by eliminating expensive hardware components and reducing power requirements.

From my view, this deployment readiness fully aligns with the industry shift to open operating system software, as seen in the growing presence of SONiC across the network OS realm and Linux across the server OS realm. Open software is key to enabling deployment flexibility for hyperscale network deployments, delivering faster development cycles, a normalized ecosystem-wide feature set, multi-vendor interoperability, rapid supply chain scaling, and, of course, freedom from proprietary lock-in.

As such, data center network infrastructure becomes increasingly democratized, enabling the multi-vendor hardware environment that underpins swift network scaling, including throughout demanding AI cluster environments.

Key Takeaway: Marvell Teralynx 10 Prepares Ecosystem to Scale Accelerated Infrastructure

Overall, I believe the Marvell Teralynx 10 Ethernet switch delivers a low-latency, low-power, high-bandwidth, programmable platform with an architecture optimized for AI and cloud network demands, giving customers a comprehensive hardware/software solution that fully aligns with the cloud AI shift to open networking.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Marvell Q1 Fiscal 2025: Custom AI Silicon Plays the Starring Role

Marvell Right Sizes AEC Connections to Meet New AI Acceleration Demands

Marvell Sees Time Has Come for Alaska-sized Retimer Innovation

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
