
Broadcom Unleashes New Trident 5-X12 Chip Fueled by NetGNT Engine

The News: Broadcom announced an on-chip, neural-network inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer) in its new, software-programmable Trident 5-X12 chip. Read the full press release on the Broadcom website.

Analyst Take: Broadcom sets its sights on curtailing network congestion by building a neural network engine into the new Trident 5-X12 chip, one that can be trained dynamically at line speed without diminishing throughput or latency. NetGNT works in parallel with the standard packet-processing pipeline typically used in switch ASICs, which takes a one-packet/one-path approach, examining each packet as it follows a specific path through the chip’s ports and buffers. NetGNT, by contrast, is a machine learning (ML) inference engine that can be trained to recognize traffic patterns that span the entire chip.

Chip Excels at AI Workload Identification

Broadcom developed the new chip to excel at identifying traffic patterns associated with AI workloads, such as incast, where many flows converge on a single port and buffer, and at heading off the resulting congestion before it disrupts the workload. As such, the chip can improve telemetry and network security alongside traffic management. Of key importance, the Trident 5-X12 is purpose-designed to optimize AI workloads in hardware at full line rate to avoid any detrimental impact on latency or throughput.
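Broadcom has not published NetGNT’s internals, but the kind of chip-wide pattern the engine is trained to spot can be illustrated with a toy heuristic: many ingress flows converging on one egress port while that port’s buffer fills. The sketch below is purely conceptual; the telemetry format, thresholds, and function names are invented for illustration, and a real engine would run a trained model in hardware at line rate rather than software over sampled counters.

```python
"""Illustrative only: Broadcom has not disclosed how NetGNT works internally.
This toy heuristic shows what an incast pattern looks like in chip-wide
telemetry: many distinct ingress ports converging on one egress port while
that port's buffer occupancy is high."""

from collections import defaultdict


def detect_incast(samples, fanin_threshold=8, occupancy_threshold=0.7):
    """samples: telemetry records from a short time window, e.g.
    {"egress_port": 17, "ingress_port": 3, "buffer_occupancy": 0.82}.
    Returns the set of egress ports showing an incast-like pattern."""
    fanin = defaultdict(set)        # egress port -> distinct ingress ports seen
    occupancy = defaultdict(float)  # egress port -> latest buffer occupancy

    for s in samples:
        fanin[s["egress_port"]].add(s["ingress_port"])
        occupancy[s["egress_port"]] = s["buffer_occupancy"]

    return {
        port
        for port, sources in fanin.items()
        if len(sources) >= fanin_threshold
        and occupancy[port] >= occupancy_threshold
    }


# Example: 10 ingress ports all sending to egress port 5 with a nearly full buffer.
window = [
    {"egress_port": 5, "ingress_port": i, "buffer_occupancy": 0.9}
    for i in range(10)
]
print(detect_incast(window))  # {5}
```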

Moreover, Trident 5-X12 is software-programmable and field-upgradable and provides 16.0 Terabits/second of bandwidth, double that of the Trident 4-X9. Strikingly, the bandwidth is distributed across 100G PAM4 serializers/deserializers (SerDes), enabling a wide range of port configurations that meet growing customer demand for deployment flexibility.

It also adds support for 800G ports, allowing direct connection to Broadcom’s Tomahawk 5, which is used as the spine/fabric in fast-expanding AI/ML data center and compute environments. As such, the chip is positioned to enable a 1RU data center top-of-rack (ToR) switch supporting 48x200G QSFP-DD downlink ports and 8x800G uplink ports for aggregation back to a spine switch.
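As a quick sanity check on the figures above, the arithmetic below derives the implied 100G SerDes lane count and confirms that the cited ToR port mix adds up to the chip’s stated 16.0 Tb/s; the oversubscription ratio is simply derived from those same numbers and is not a figure Broadcom has stated.

```python
"""Back-of-the-envelope check of the stated Trident 5-X12 numbers.
The inputs come from the announcement; the ratios are simple arithmetic."""

CHIP_BANDWIDTH_GBPS = 16_000   # 16.0 Tb/s stated for Trident 5-X12
SERDES_RATE_GBPS = 100         # 100G PAM4 SerDes

serdes_lanes = CHIP_BANDWIDTH_GBPS // SERDES_RATE_GBPS
print(f"Implied 100G PAM4 SerDes lanes: {serdes_lanes}")  # 160

# The 1RU ToR configuration cited in the announcement:
downlink_gbps = 48 * 200       # 48 x 200G QSFP-DD downlinks
uplink_gbps = 8 * 800          # 8 x 800G uplinks to the spine

print(f"Downlink: {downlink_gbps} Gb/s, uplink: {uplink_gbps} Gb/s, "
      f"total: {downlink_gbps + uplink_gbps} Gb/s")       # 9600 + 6400 = 16000
print(f"Downlink:uplink oversubscription: {downlink_gbps / uplink_gbps}:1")  # 1.5:1
```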

Trident Chip Winning Mindshare

From my view, the new Trident chip can help Broadcom win more mindshare across the rapidly evolving AI-optimized networking landscape, where key players such as Cisco and NVIDIA are already delivering switches, SuperNICs, and DPUs that directly address the congestion and latency issues that can prolong training times.

For example, Broadcom is countering Cisco’s G200 and G202 offerings, which can be used both to train large language models (LLMs) and to run inference for ChatGPT and other LLMs when customers interact with them. Overall, Cisco Silicon One devices can improve connectivity between GPUs to support ChatGPT and other advanced AI/ML models.

Key Takeaway: Broadcom Moves the ToR Needle with Trident 5-X12 Debut

I believe that Broadcom’s NetGNT innovation can meet the increasingly intense AI/ML workload requirements of the most demanding data center environments, including ToR deployments. Of key importance, Trident 5-X12 is designed to fulfill an expanding array of customer demands with a flexible, programmable chip that can be tailored to application needs. Broadcom is now better positioned to broaden its presence across the ToR market segment as AI workloads continue to push new bandwidth, performance, and latency boundaries.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Broadcom Revenue in Q3 2023 Hits $8.88 Billion, Beating Estimates

VMware Acquisition Close: Q&A with Hock Tan, President and CEO, Broadcom – Six Five Insider

Dell and Broadcom Deliver Scale-Out AI Platform for Industry

Author Information

Ron is an experienced, customer-focused research expert and analyst with over 20 years in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
