Will Jericho4 Help Broadcom Lead the Next Era of AI Networking?

Analyst(s): Ray Wang
Publication Date: August 8, 2025

Broadcom’s Jericho4 chip enables AI workloads to scale across data centers using Ethernet, with a strong focus on performance, security, and energy efficiency.

What is Covered in this Article:

  • Broadcom has launched Jericho4, a new Ethernet fabric router built for distributed AI infrastructure.
  • Jericho4 supports interconnectivity of over one million XPUs across multiple data centers with 51.2 Tbps switching capacity.
  • The chip integrates HyperPort technology, RoCE transport, deep buffering, and line-rate MACsec encryption.
  • Jericho4 aims to address the power, space, and performance limitations of single-site data centers.
  • The product is shipping now to cloud providers and system builders, with a broader rollout expected over the next nine months.

The News: Broadcom has begun shipping its Jericho4 Ethernet fabric router, built to power the next generation of distributed AI systems. The chip delivers secure, high-bandwidth, and lossless connections between more than a million XPUs, helping AI workloads scale beyond the limits of individual data center locations.

Unveiled at the OCP APAC Summit in Taiwan, Jericho4 supports up to 36,000 HyperPorts per system, each running at 3.2 Tbps. The chip brings together RoCE transport, MACsec encryption, and advanced congestion control in a custom design built on a 3nm process. It’s currently sampling with customers, with full rollout expected in about nine months.

Analyst Take: Broadcom’s release of the Jericho4 Ethernet fabric router marks a significant shift in how cloud providers and AI infrastructure players can scale compute across locations. As AI workloads continue to demand higher bandwidth and lower latency, Jericho4 offers a secure, high-bandwidth answer with deep buffering, tackling the major challenges of running AI across multiple sites. With support for over a million XPUs and distances beyond 100 km, the chip lays a strong foundation for Ethernet-based AI networks without trading off performance, efficiency, or security.

Built to Scale AI Beyond One Data Center

Jericho4 is designed to move past the physical and power limitations of centralized GPU clusters. A single system can scale up to 36,000 HyperPorts, each combining four 800GE links into a 3.2 Tbps logical port. This lets operators connect multiple facilities while keeping performance steady through deep buffering and RoCE over long distances. Every port runs MACsec encryption at line rate, keeping data secure even when it travels across third-party infrastructure. These features make Jericho4 a key enabler for spreading AI workloads across large, power-hungry, multi-site setups.
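As a back-of-envelope illustration of the scale these figures imply (the port and system counts come from Broadcom's announcement; the arithmetic itself is ours, not a Broadcom specification):

```python
# Rough HyperPort scale math using figures from Broadcom's Jericho4 announcement.

LINKS_PER_HYPERPORT = 4         # four 800GE links bonded into one logical port
LINK_SPEED_GBPS = 800           # 800GE per physical link
HYPERPORTS_PER_SYSTEM = 36_000  # maximum HyperPorts per Jericho4 system

# One HyperPort aggregates its member links into a single logical channel
hyperport_tbps = LINKS_PER_HYPERPORT * LINK_SPEED_GBPS / 1_000

# Theoretical aggregate across a fully built-out system
system_tbps = HYPERPORTS_PER_SYSTEM * hyperport_tbps

print(f"One HyperPort: {hyperport_tbps:.1f} Tbps")  # 3.2 Tbps
print(f"Full system:   {system_tbps:,.0f} Tbps")    # 115,200 Tbps
```

The headline takeaway is that the 3.2 Tbps figure is simple link bonding, while the system-level number is what lets a single fabric span more than a million XPUs.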

Lossless Data Across Distances Over 100km

A big part of Jericho4’s value comes from Broadcom’s focus on lossless transport and hardware-level congestion management. Even under heavy AI traffic, the chip avoids packet loss by buffering data during congestion and supporting RDMA over Converged Ethernet (RoCE). Broadcom says congestion is handled locally, which keeps Priority Flow Control (PFC) pauses from propagating across sites and rippling into other clusters. The result is steady, uninterrupted performance for hyperscale AI running across city-wide or regional data centers.
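Deep buffering matters at these distances because of the bandwidth-delay product: a lossless link must be able to absorb all the data in flight over the round trip. A rough sketch of the numbers involved (the fiber-propagation figure is a generic physics assumption, and the resulting buffer estimate is ours, not a Broadcom-published figure):

```python
# Bandwidth-delay-product estimate for a long-haul lossless AI fabric link.
# Generic assumptions; not Broadcom-published buffer specifications.

distance_km = 100               # inter-site distance cited for Jericho4
port_tbps = 3.2                 # one HyperPort
light_in_fiber_km_per_ms = 200  # light covers roughly 200 km per ms in fiber (~2/3 c)

one_way_ms = distance_km / light_in_fiber_km_per_ms  # ~0.5 ms
rtt_ms = 2 * one_way_ms                              # ~1.0 ms round trip

# Bytes in flight that buffers must absorb to keep the pipe full without drops
bdp_bytes = port_tbps * 1e12 / 8 * (rtt_ms / 1e3)

print(f"RTT over {distance_km} km: {rtt_ms:.1f} ms")
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB per HyperPort")
```

At these speeds, even a one-millisecond round trip implies hundreds of megabytes in flight per port, which is why shallow-buffered switches struggle with lossless transport between sites and why deep buffering is central to Jericho4's pitch.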

HyperPort and SerDes Drive Better Network Use

With Broadcom’s HyperPort tech, four 800GE links work together as one 3.2 Tbps channel, which simplifies how links are managed and cuts down the inefficiencies seen with traditional load balancing. This setup improves network use by up to 70% and can shorten job completion times by as much as 40%. The chip also uses 200G PAM4 SerDes, removing the need for retimers, which helps cut power use and lowers the part count. As models get bigger and energy use climbs, this design helps keep things cool and efficient while pushing data at high speeds. That mix of performance and efficiency makes Jericho4 a smart choice for scaling up cost-effectively.
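The power and part-count argument comes down to lane counts: faster SerDes means fewer electrical lanes per port. A simple comparison against the prior 100G generation (plain arithmetic on public speeds, not a Broadcom design document):

```python
# Lane-count comparison: 200G PAM4 SerDes vs. a 100G-per-lane generation.
# Simple arithmetic on publicly stated speeds; not a Broadcom design document.

port_gbps = 3_200  # one 3.2 Tbps HyperPort

# Number of electrical lanes needed to fill the port at each SerDes speed
lanes = {serdes_gbps: port_gbps // serdes_gbps for serdes_gbps in (100, 200)}

for serdes_gbps, n in lanes.items():
    print(f"{serdes_gbps}G SerDes: {n} lanes per 3.2 Tbps port")
```

Halving the lane count roughly halves the number of electrical interfaces to drive, which is where the claimed retimer elimination and power savings come from.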

Standards Support and Ecosystem Fit for Long-Term Use

Jericho4 meets the specifications of the Ultra Ethernet Consortium (UEC), so it works with standard Ethernet NICs, switches, and software stacks. It handles 51.2 Tbps of switching capacity and supports port speeds from 100GE to 1.6TbE, which means it fits a wide range of deployments – from edge locations to cloud data centers. It also supports over 200,000 MACsec security policies and features Elastic Pipe packet processing, positioning it as a long-term infrastructure building block. By sticking to standards and ensuring broad compatibility, Jericho4 helps avoid vendor lock-in and encourages adoption across the AI networking space.

What to Watch:

  • Integration of Jericho4 into OEM platforms such as Arista’s R-Series or Nokia’s 7250 IXR.
  • Competition from NVIDIA InfiniBand, Cisco Nexus, and other Ethernet vendors scaling their AI networking portfolios.
  • Customer reception of Jericho4’s performance across diverse geographies and heterogeneous data center setups.
  • Operator feedback on congestion management, packet loss, and real-time performance under AI training loads.

See the complete press release on the launch of Broadcom Jericho4, enabling distributed AI computing, on the Broadcom website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of Futurum as a whole.

Other insights from Futurum:

Synopsys Demonstrates PCIe 6.x Interoperability With Broadcom at PCI-SIG DevCon 2025

Broadcom Q2 FY 2025 Sees Record Revenue, Solid AI and Software Growth

Solidigm and Broadcom Extend SSD Partnership to Power AI’s Next Growth Phase

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
