Broadcom Unleashes New Trident 5-X12 Chip Fueled by NetGNT Engine

The News: Broadcom announced an on-chip, neural-network inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer) in its new, software-programmable Trident 5-X12 chip. Read the full press release on the Broadcom website.

Analyst Take: Broadcom sets its sights on curtailing network congestion by building a neural-network engine into the new Trident 5-X12 chip, one that can be trained dynamically and at line rate without diminishing throughput or increasing latency. NetGNT works in parallel with the standard packet-processing pipeline typically used in switch ASICs, which takes a one-packet/one-path approach: it examines a single packet as it takes a specific path through the chip’s ports and buffers. NetGNT, by contrast, is a machine learning (ML) inference engine that can be trained to look for traffic patterns that traverse the entire chip.

Chip Excels at AI Workload Identification

Broadcom developed the new chip to excel at identifying traffic patterns associated with AI workloads, such as incast, where many flows converge simultaneously on a single port and buffer, and at preventing the resulting congestion before it disrupts workloads. As such, the chip can improve telemetry and network security alongside traffic management. Of key importance, the Trident 5-X12 is purpose-designed to optimize AI workloads in hardware at full line rate to avoid any detrimental impact on latency or throughput.
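To make the incast pattern concrete, the toy heuristic below flags an egress port when many distinct sources converge on it at once. This is purely illustrative: NetGNT's actual model, features, and thresholds are not public, and the function, threshold, and flow format here are assumptions for the sketch.

```python
from collections import Counter

def detect_incast(flows, threshold=4):
    """Toy incast heuristic (not Broadcom's NetGNT): flag egress ports
    that are receiving traffic from at least `threshold` distinct sources
    at the same time -- the many-to-one pattern typical of AI workloads."""
    sources_per_egress = Counter()
    seen = set()
    for src, egress_port in flows:
        if (src, egress_port) not in seen:   # count each source once per port
            seen.add((src, egress_port))
            sources_per_egress[egress_port] += 1
    return [port for port, n in sources_per_egress.items() if n >= threshold]

# Eight senders converging on egress port 7 -- a classic incast signature
flows = [(f"gpu{i}", 7) for i in range(8)] + [("gpu0", 3)]
print(detect_incast(flows))  # [7]
```

A real inference engine would of course learn such signatures from training data rather than apply a fixed rule, which is the point of NetGNT being trainable in the field.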

Moreover, Trident 5-X12 is software-programmable and field-upgradable and provides 16.0 terabits per second (Tb/s) of bandwidth, double that of the Trident 4-X9. Strikingly, the bandwidth is distributed across 100G PAM4 serializers/deserializers, enabling a wide range of port configurations that meet growing customer demand for deployment flexibility.
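The arithmetic behind that flexibility can be sketched as follows: 16 Tb/s over 100G lanes yields 160 SerDes lanes, which can be bonded into ports of different speeds. The lane counts per port speed below are standard Ethernet groupings, not a Broadcom configuration tool.

```python
# Illustrative arithmetic only: 16 Tb/s distributed over 100G PAM4 SerDes
# lanes, grouped into ports using standard Ethernet lane counts.
TOTAL_BW_GBPS = 16_000
LANE_GBPS = 100
lanes = TOTAL_BW_GBPS // LANE_GBPS   # 160 lanes of 100G PAM4

port_lane_counts = {"100G": 1, "200G": 2, "400G": 4, "800G": 8}
for speed, n in port_lane_counts.items():
    print(f"{lanes // n} x {speed} ports")  # 160 x 100G ... 20 x 800G
```

In practice a deployment mixes speeds (e.g., lower-speed downlinks plus 800G uplinks) rather than running all ports at one rate, which is exactly the flexibility the SerDes-based design enables.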

It also adds support for 800G ports, allowing direct connection to Broadcom’s Tomahawk 5, which is used as the spine/fabric in fast-expanding AI/ML data center and compute environments. As such, this chip is positioned to enable a 1RU data center top-of-rack (ToR) switch supporting 48x200G QSFP-DD downlink ports and 8x800G uplink ports for aggregation back to a spine switch.
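The cited ToR configuration fully subscribes the chip, which is worth checking: 48 downlinks at 200G plus 8 uplinks at 800G sum exactly to 16 Tb/s. The quick arithmetic below is illustrative only.

```python
# Sanity-check the 1RU ToR example from the text: 48 x 200G downlinks
# plus 8 x 800G uplinks should consume the chip's full 16 Tb/s.
downlink_gbps = 48 * 200   # 9,600 Gb/s toward servers/GPUs
uplink_gbps = 8 * 800      # 6,400 Gb/s toward the Tomahawk 5 spine
total = downlink_gbps + uplink_gbps
print(total)               # 16000 -- fully subscribes the 16 Tb/s chip

oversubscription = downlink_gbps / uplink_gbps
print(oversubscription)    # 1.5 -- a 1.5:1 downlink-to-uplink ratio
```

A 1.5:1 oversubscription ratio is a common ToR design point, trading a modest uplink bottleneck for higher server-facing port density.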

Trident Chip Winning Mindshare

In my view, the new Trident chip can help Broadcom win more mindshare in the rapidly evolving AI-optimized networking landscape, where key players such as Cisco and NVIDIA are already delivering switches, SuperNICs, and DPUs that directly address the congestion and latency issues that can elongate training times.

For example, Broadcom is countering Cisco’s G200 and G202 offerings, which can be used both to train large language models (LLMs) such as ChatGPT and to serve inference for ChatGPT and other LLMs when customers interact with them. Overall, Cisco Silicon One devices can provide improved connectivity between GPUs to support ChatGPT and other advanced AI/ML models.

Key Takeaway: Broadcom Moves the ToR Needle with Trident 5-X12 Debut

I believe that Broadcom’s NetGNT innovation can meet increasingly intense AI/ML workload requirements across the most demanding data center environments, including ToR. Of key importance, Trident 5-X12 is designed to fulfill an expanding array of customer demands by delivering a flexible chip that can be customized to application needs. Broadcom is now better positioned to broaden its presence in the ToR market segment as AI workloads continue to push new bandwidth, performance, and latency boundaries.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Broadcom Revenue in Q3 2023 Hits $8.88 Billion, Beating Estimates

VMware Acquisition Close: Q&A with Hock Tan, President and CEO, Broadcom – Six Five Insider

Dell and Broadcom Deliver Scale-Out AI Platform for Industry

Author Information

Ron is a customer-focused research expert and analyst with more than 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas, and a Bachelor of Arts in political science/government from William & Mary.
