Broadcom Unleashes New Trident 5-X12 Chip Fueled by NetGNT Engine

The News: Broadcom announced an on-chip, neural-network inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer) in its new, software-programmable Trident 5-X12 chip. Read the full press release on the Broadcom website.

Analyst Take: Broadcom sets its sights on curtailing network congestion by building a neural-network engine into the new Trident 5-X12 chip, one that can be trained dynamically and at line speed without diminishing throughput or latency. NetGNT works in parallel with, and bolsters, the standard packet-processing pipeline typically used in switch ASICs, which takes a one-packet/one-path approach: it examines each packet as it takes a specific path through the chip’s ports and buffers. NetGNT, by contrast, is a machine learning (ML) inference engine that can be trained to look for different types of traffic patterns as they traverse the entire chip.

Chip Excels at AI Workload Identification

Broadcom developed the new chip to excel at identifying traffic patterns associated with AI workloads, such as incast, where many flows converge on a single port and buffer, and at heading off the resulting congestion before it disrupts the workload. As such, the chip can improve telemetry and network security alongside traffic management. Of key importance, the Trident 5-X12 is purpose-designed to optimize AI workloads in hardware at full line rate to avoid any detrimental impact on latency or throughput.
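
To make the incast scenario concrete, below is a minimal Python sketch of the kind of chip-wide pattern an engine like NetGNT could be trained to recognize. It is a simple heuristic stand-in for a trained model, and the port snapshot fields, thresholds, and function names are illustrative assumptions, not Broadcom’s NetGNT implementation.

```python
# Hypothetical sketch: flagging an incast-like pattern from chip-wide port snapshots.
# Heuristic stand-in for a trained classifier; not Broadcom's NetGNT implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class PortSnapshot:
    port_id: int
    ingress_gbps: float      # observed ingress rate on the port (illustrative)
    buffer_fill_pct: float   # how full the port's egress buffer is (illustrative)

def looks_like_incast(snapshots: List[PortSnapshot],
                      fan_in_threshold: int = 8,
                      buffer_threshold_pct: float = 80.0) -> List[int]:
    """Flag ports whose buffers are filling while many senders are active --
    the many-to-one convergence signature typical of AI training traffic."""
    active_senders = sum(1 for s in snapshots if s.ingress_gbps > 1.0)
    hot_ports = []
    for s in snapshots:
        if (s.buffer_fill_pct >= buffer_threshold_pct
                and active_senders >= fan_in_threshold):
            hot_ports.append(s.port_id)
    return hot_ports

# Example: many active senders converging on port 7, whose buffer is nearly full.
ports = [PortSnapshot(i, 180.0, 20.0) for i in range(12)]
ports[7] = PortSnapshot(7, 195.0, 92.0)
print(looks_like_incast(ports))  # -> [7]
```

In a real switch the classification would be performed in hardware across the whole chip, alongside the per-packet pipeline, so congestion mitigation can be triggered without adding latency to forwarding.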

Moreover, Trident 5-X12 is software-programmable and field-upgradable and provides 16.0 Terabits/second (Tb/s) of bandwidth, double that of the Trident 4-X9. Strikingly, the bandwidth is distributed across 100G PAM4 serializers/deserializers (SerDes), enabling a wide range of port configurations that meet growing customer demand for deployment flexibility.

It also adds support for 800G ports, allowing direct connection to Broadcom’s Tomahawk 5, which is used as the spine/fabric in fast-expanding AI/ML data center and compute environments. As such, the chip is positioned to enable a 1RU data center top-of-rack (ToR) switch supporting 48x200G QSFP-DD downlink ports and 8x800G uplink ports for aggregation back to a spine switch.
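
As a quick sanity check on those figures, the short, illustrative Python calculation below (not taken from Broadcom documentation) shows that such a ToR port mix adds up to the chip’s stated 16.0 Tb/s, and what carrying that bandwidth over 100G PAM4 SerDes would imply for the lane count.

```python
# Illustrative arithmetic for the ToR configuration described above.
downlink_tbps = 48 * 200 / 1000   # 48 x 200G QSFP-DD downlinks -> 9.6 Tb/s
uplink_tbps = 8 * 800 / 1000      # 8 x 800G uplinks to the spine -> 6.4 Tb/s
total_tbps = downlink_tbps + uplink_tbps

# 16 Tb/s carried over 100G PAM4 SerDes implies 160 lanes (derived, not a quoted spec).
implied_serdes_lanes = 16_000 // 100

print(f"Total port capacity: {total_tbps:.1f} Tb/s")              # 16.0 Tb/s
print(f"Implied 100G SerDes lane count: {implied_serdes_lanes}")  # 160
```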

Trident Chip Winning Mindshare

From my view, the new Trident chip can help Broadcom win more mindshare across the rapidly evolving AI-optimized networking landscape, where key players such as Cisco and NVIDIA are already delivering switches, SuperNICs, and DPUs to directly address the congestion and latency issues that can result in elongated training times.

For example, Broadcom is countering Cisco’s Silicon One G200 and G202 offerings, which can be used both to train large language models (LLMs) such as ChatGPT and to run inference on those models when customers interact with them. Overall, Cisco Silicon One devices can provide improved connectivity between GPUs to support ChatGPT and other advanced AI/ML models.

Key Takeaway: Broadcom Moves the ToR Needle with Trident 5-X12 Debut

I believe that Broadcom’s NetGNT innovation can meet the increasingly intense AI/ML workload requirements across the most demanding data center environments, including ToR. Of key importance, Trident 5-X12 is designed to fulfill an expanding array of customer demands through a flexible, software-programmable design that can be customized according to application needs. Now Broadcom is better positioned to broaden its presence across the ToR market segment as AI workloads continue to push new bandwidth, performance, and latency boundaries.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Broadcom Revenue in Q3 2023 Hits $8.88 Billion, Beating Estimates

VMware Acquisition Close: Q&A with Hock Tan, President and CEO, Broadcom – Six Five Insider

Dell and Broadcom Deliver Scale-Out AI Platform for Industry

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority on tracking the evolution of, and identifying the key disruptive trends within, the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
