Broadcom Unleashes New Trident 5-X12 Chip Fueled by NetGNT Engine

The News: Broadcom announced an on-chip, neural-network inference engine called NetGNT (Networking General-purpose Neural-network Traffic-analyzer) in its new, software-programmable Trident 5-X12 chip. Read the full press release on the Broadcom website.

Analyst Take: Broadcom sets its sights on curtailing network congestion by building a neural-network engine into the new Trident 5-X12 chip, one that can be trained dynamically at line speed without diminishing throughput or latency. NetGNT works in parallel with the standard packet-processing pipeline typically used in switch ASICs, which takes a one-packet/one-path approach: it examines each packet as it takes a specific path through the chip’s ports and buffers. NetGNT, by contrast, is a machine learning (ML) inference engine that can be trained to recognize traffic patterns spanning the entire chip.

Chip Excels at AI Workload Identification

Broadcom developed the new chip to excel at identifying traffic patterns associated with AI workloads, such as incast, where many flows converge on a single port and its buffer at once. By spotting the pattern early, the chip can head off congestion before it disrupts the workload. As such, the chip can improve telemetry and network security alongside traffic management. Of key importance, the Trident 5-X12 performs this AI-workload optimization in hardware at full line rate, avoiding any detrimental impact on latency or throughput.
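To make the incast pattern concrete, here is a hedged, purely illustrative sketch of one way to flag it: count how many distinct sources converge on the same egress port within a sampling window. The flow representation and threshold are assumptions for illustration only, not Broadcom’s actual NetGNT method, which is a trained ML model rather than a fixed rule.

```python
def detect_incast(flows, threshold=4):
    """Flag egress ports where many sources converge at once.

    `flows` is a list of (src_port, dst_port) pairs observed in a
    sampling window; a destination with at least `threshold` distinct
    sources is a candidate incast hotspot. (Illustrative heuristic,
    not Broadcom's implementation.)
    """
    sources_per_dst = {}
    for src, dst in flows:
        sources_per_dst.setdefault(dst, set()).add(src)
    return {dst for dst, srcs in sources_per_dst.items()
            if len(srcs) >= threshold}

# Four senders converging on port 7 -- the many-to-one shape typical
# of AI collective operations such as all-reduce.
flows = [(1, 7), (2, 7), (3, 7), (4, 7), (5, 9)]
print(detect_incast(flows))  # {7}
```

The point of the sketch is the shape of the signal: incast is visible only by correlating traffic across ports, which is why a chip-wide inference engine can catch it where a per-packet pipeline cannot.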

Moreover, Trident 5-X12 is software-programmable and field-upgradable and provides 16.0 Terabits/second of bandwidth, double that of the Trident 4-X9. Strikingly, the bandwidth is distributed across 100G PAM4 serializers/deserializers, enabling a wide range of port configurations that meet growing customer demand for deployment flexibility.
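As a rough sketch of why those SerDes translate into flexible port configurations: 16.0 Tbps over 100G PAM4 lanes implies 160 lanes, and each supported port speed is a multiple of the 100G lane rate, so the same lane budget can be carved up several ways. The per-speed maximums below are simple arithmetic, not a statement of the chip’s actual supported configurations.

```python
# Aggregate bandwidth and lane rate from the announcement
TOTAL_GBPS = 16_000   # 16.0 Tbps
LANE_GBPS = 100       # 100G PAM4 SerDes

lanes = TOTAL_GBPS // LANE_GBPS
print(f"SerDes lanes: {lanes}")  # 160

# Upper bound on port count at each speed if every lane serves
# ports of that one speed (illustrative arithmetic only).
for port_gbps in (100, 200, 400, 800):
    print(f"{port_gbps}G ports: up to {TOTAL_GBPS // port_gbps}")
```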

It also adds support for 800G ports, allowing direct connection to Broadcom’s Tomahawk 5, which is used as the spine/fabric in fast-expanding AI/ML data and compute center environments. As such, this chip is positioned to enable a 1RU data center top-of-rack (ToR) switch supporting 48x200G QSFP-DD downlink ports and 8x800G uplink ports for aggregation back to a spine switch.
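A quick check confirms that this cited ToR configuration fully subscribes the chip’s bandwidth: the downlinks and uplinks together account for exactly 16.0 Tbps.

```python
# ToR configuration cited in the announcement
downlink = 48 * 200   # 48x200G downlinks to servers -> 9,600 Gbps
uplink = 8 * 800      # 8x800G uplinks to the spine  -> 6,400 Gbps

total = downlink + uplink
print(f"{downlink} + {uplink} = {total} Gbps")  # 9600 + 6400 = 16000 Gbps
assert total == 16_000  # matches Trident 5-X12's 16.0 Tbps
```

Note that this works out to a 1.5:1 oversubscription ratio (9.6 Tbps of server-facing capacity against 6.4 Tbps of uplink), a common design point for ToR switches.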

Trident Chip Winning Mindshare

From my view, the new Trident chip can help Broadcom win more mindshare across the rapidly evolving AI-optimized networking landscape, where key players such as Cisco and NVIDIA are already delivering switches, SuperNICs, and DPUs to directly address the congestion and latency issues that can elongate training times.

For example, Broadcom is countering Cisco’s G200 and G202 offerings, which can be used both to train large language models (LLMs) such as ChatGPT and to serve inference for those LLMs when customers interact with them. Overall, Cisco Silicon One devices can provide improved connectivity between GPUs to enable ChatGPT and other advanced AI/ML models.

Key Takeaway: Broadcom Moves the ToR Needle with Trident 5-X12 Debut

I believe that Broadcom’s NetGNT innovation can meet increasingly intense AI/ML workload requirements across the most demanding data center environments, including the ToR. Importantly, Trident 5-X12 is designed to fulfill an expanding array of customer demands through a flexible, field-programmable design that can be customized to application needs. Broadcom is now better positioned to broaden its presence in the ToR market segment as AI workloads continue to push new bandwidth, performance, and latency boundaries.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Broadcom Revenue in Q3 2023 Hits $8.88 Billion, Beating Estimates

VMware Acquisition Close: Q&A with Hock Tan, President and CEO, Broadcom – Six Five Insider

Dell and Broadcom Deliver Scale-Out AI Platform for Industry

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.
