Will Jericho4 Help Broadcom Lead the Next Era of AI Networking?

Analyst(s): Ray Wang
Publication Date: August 8, 2025

Broadcom’s Jericho4 chip enables AI workloads to scale across data centers using Ethernet, with a strong focus on performance, security, and energy efficiency.

What is Covered in this Article:

  • Broadcom has launched Jericho4, a new Ethernet fabric router built for distributed AI infrastructure.
  • Jericho4 supports interconnectivity of over one million XPUs across multiple data centers with 51.2 Tbps switching capacity.
  • The chip integrates HyperPort technology, RoCE transport, deep buffering, and line-rate MACsec encryption.
  • Jericho4 aims to address the power, space, and performance limitations of single-site data centers.
  • The product is shipping now to cloud providers and system builders, with a broader rollout expected over the next nine months.

The News: Broadcom has begun shipping its Jericho4 Ethernet fabric router, built to power the next generation of distributed AI systems. The chip delivers secure, high-bandwidth, and lossless connections between more than a million XPUs, helping AI workloads scale beyond the limits of individual data center locations.

Unveiled at the OCP APAC Summit in Taiwan, Jericho4 supports up to 36,000 HyperPorts per system, each running at 3.2 Tbps. The chip brings together RoCE transport, MACsec encryption, and advanced congestion control in a custom design built on a 3nm process. It is shipping to customers now, with a full rollout expected in about nine months.

Analyst Take: Broadcom’s release of the Jericho4 Ethernet fabric router marks a significant shift in how cloud providers and AI infrastructure players can scale compute across locations. As AI workloads keep demanding higher bandwidth and lower latency, Jericho4 offers a secure, high-bandwidth fabric with deep buffers, tackling the major challenges of running AI across multiple sites. With support for over a million XPUs and distances beyond 100km, the chip lays a strong foundation for Ethernet-based AI networks without trading off performance, efficiency, or security.

Built to Scale AI Beyond One Data Center

Jericho4 is designed to move past the physical and power limitations of centralized GPU clusters. A single system can scale up to 36,000 HyperPorts, each combining four 800GE links into a 3.2 Tbps logical port. This lets operators connect multiple facilities while keeping performance steady through deep buffering and RoCE over long distances. Every port runs MACsec encryption at line rate, keeping data secure even when it travels across third-party infrastructure. These features make Jericho4 a key enabler for spreading AI workloads across large, power-hungry, multi-site setups.
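The port arithmetic above is easy to sanity-check. The sketch below uses only the figures cited in this article (four 800GE links per HyperPort, 51.2 Tbps per chip, 36,000 HyperPorts per system); the derived totals are back-of-envelope estimates, not Broadcom-published numbers:

```python
# Back-of-envelope scale figures for a Jericho4-based fabric,
# using only the numbers cited in this article.
GE800 = 0.8                        # one 800GE link, in Tbps
links_per_hyperport = 4            # HyperPort bonds four 800GE links
hyperport_tbps = links_per_hyperport * GE800       # 3.2 Tbps logical port

chip_capacity_tbps = 51.2          # switching capacity of one Jericho4
hyperports_per_chip = chip_capacity_tbps / hyperport_tbps  # 16 per chip

system_hyperports = 36_000         # stated maximum per system
fabric_tbps = system_hyperports * hyperport_tbps   # aggregate bandwidth

print(f"{hyperport_tbps:.1f} Tbps per HyperPort, "
      f"{hyperports_per_chip:.0f} HyperPorts per chip, "
      f"{fabric_tbps / 1000:.1f} Pbps aggregate fabric bandwidth")
```

The headline numbers hold together: 16 HyperPorts saturate one chip, and a maximally built-out system tops 100 Pbps of aggregate fabric bandwidth.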

Lossless Data Across Distances Over 100km

A big part of Jericho4’s value comes from Broadcom’s focus on lossless transport and hardware-level congestion management. Even under heavy AI traffic, the chip avoids packet loss by buffering data during congestion and supporting RDMA over Converged Ethernet (RoCE). Broadcom says congestion is absorbed locally, so Priority Flow Control (PFC) pauses do not propagate across locations or ripple into other clusters. The result is steady, uninterrupted performance for hyperscale AI running across city-wide or regional data centers.
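The link between distance and buffer depth can be made concrete with the standard bandwidth-delay-product estimate. This is a generic networking calculation under an assumed fiber propagation speed, not a Broadcom figure:

```python
# Why long-haul lossless Ethernet needs deep buffers: a standard
# bandwidth-delay-product estimate (illustrative, not a Broadcom spec).
SPEED_IN_FIBER = 2e8               # m/s, roughly 2/3 the speed of light
distance_m = 100_000               # the 100km reach cited for Jericho4
rtt_s = 2 * distance_m / SPEED_IN_FIBER        # ~1 ms round trip

port_bps = 3.2e12                  # one 3.2 Tbps HyperPort
in_flight_bits = port_bps * rtt_s  # data already "on the wire"
buffer_bytes = in_flight_bits / 8

print(f"RTT over 100 km: {rtt_s * 1e3:.1f} ms; "
      f"in-flight data per port: {buffer_bytes / 1e6:.0f} MB")
# A pause signal sent now takes a full RTT to stop the sender, so the
# receiver must buffer roughly this much to stay lossless in the interim.
```

At these speeds and distances, hundreds of megabytes per port can be in flight at once, which is why shallow-buffer switches cannot run lossless over metro distances and deep buffering becomes a headline feature.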

HyperPort and SerDes Drive Better Network Use

With Broadcom’s HyperPort tech, four 800GE links work together as one 3.2 Tbps channel, which simplifies how links are managed and cuts down the inefficiencies seen with traditional load balancing. This setup improves network use by up to 70% and can shorten job completion times by as much as 40%. The chip also uses 200G PAM4 SerDes, removing the need for retimers, which helps cut power use and lowers the part count. As models get bigger and energy use climbs, this design helps keep things cool and efficient while pushing data at high speeds. That mix of performance and efficiency makes Jericho4 a smart choice for scaling up cost-effectively.
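The load-balancing inefficiency that HyperPort sidesteps can be sketched with a toy simulation: hash-based ECMP pins each flow to one member link, so the hottest link becomes the bottleneck, while an idealized bonded port spreads traffic evenly. The flow counts and rates below are made up for illustration and do not represent Broadcom's algorithm:

```python
# Toy comparison: hash-based ECMP over 4 member links vs. one bonded
# logical port (illustrative assumptions, not Broadcom's implementation).
import random

random.seed(7)
LINKS, FLOWS = 4, 12
flows = [random.uniform(0.2, 1.0) for _ in range(FLOWS)]  # relative rates

# ECMP-style hashing: each flow is pinned to one member link for its life.
per_link = [0.0] * LINKS
for f in flows:
    per_link[random.randrange(LINKS)] += f

total = sum(flows)
# Throughput is capped by the hottest link; cold links waste capacity.
ecmp_util = total / (LINKS * max(per_link))
bonded_util = 1.0  # idealized bonded port: traffic sprayed evenly

print(f"ECMP utilization at the bottleneck: {ecmp_util:.0%}; "
      f"idealized bonded port: {bonded_util:.0%}")
```

The hashing case always leaves the hottest member link as the limiter, which is the mechanism behind the utilization gains Broadcom claims for treating four links as one channel.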

Standards Support and Ecosystem Fit for Long-Term Use

Jericho4 meets the specs of the Ultra Ethernet Consortium (UEC), so it works with standard Ethernet NICs, switches, and software stacks. It delivers 51.2 Tbps of switching capacity and supports port speeds from 100GE up to 1.6T Ethernet, which means it fits into a wide range of setups – from edge locations to cloud data centers. It also supports over 200,000 MACsec security policies and features Elastic Pipe packet processing, positioning it as a long-term infrastructure piece. By sticking to standards and ensuring broad compatibility, Jericho4 helps avoid vendor lock-in and encourages adoption across the AI networking space.

What to Watch:

  • Integration of Jericho4 into OEM platforms such as Arista’s R-Series or Nokia’s 7250 IXR.
  • Competition from NVIDIA InfiniBand, Cisco Nexus, and other Ethernet vendors scaling their AI networking portfolios.
  • Customer reception of Jericho4’s performance across diverse geographies and heterogeneous data center setups.
  • Operator feedback on congestion management, packet loss, and real-time performance under AI training loads.

See the complete press release on the launch of Broadcom Jericho4, enabling distributed AI computing on the Broadcom website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Synopsys Demonstrates PCIe 6.x Interoperability With Broadcom at PCI-SIG DevCon 2025

Broadcom Q2 FY 2025 Sees Record Revenue, Solid AI and Software Growth

Solidigm and Broadcom Extend SSD Partnership to Power AI’s Next Growth Phase

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
