Will Jericho4 Help Broadcom Lead the Next Era of AI Networking?

Analyst(s): Ray Wang
Publication Date: August 8, 2025

Broadcom’s Jericho4 chip enables AI workloads to scale across data centers using Ethernet, with a strong focus on performance, security, and energy efficiency.

What is Covered in this Article:

  • Broadcom has launched Jericho4, a new Ethernet fabric router built for distributed AI infrastructure.
  • Jericho4 supports interconnectivity of over one million XPUs across multiple data centers with 51.2 Tbps switching capacity.
  • The chip integrates HyperPort technology, RoCE transport, deep buffering, and line-rate MACsec encryption.
  • Jericho4 aims to address the power, space, and performance limitations of single-site data centers.
  • The product is shipping now to cloud providers and system builders, with a broader rollout expected over the next nine months.

The News: Broadcom has begun shipping its Jericho4 Ethernet fabric router, built to power the next generation of distributed AI systems. The chip delivers secure, high-bandwidth, and lossless connections between more than a million XPUs, helping AI workloads scale beyond the limits of individual data center locations.

Unveiled at the OCP APAC Summit in Taiwan, Jericho4 supports up to 36,000 HyperPorts per system, each running at 3.2 Tbps. The chip brings together RoCE transport, MACsec encryption, and advanced congestion control in a custom design built on a 3nm process. It’s currently sampling with customers, with full rollout expected in about nine months.


Analyst Take: Broadcom’s release of the Jericho4 Ethernet fabric router marks a significant shift in how cloud providers and AI infrastructure players can scale compute across locations. As AI workloads demand ever-higher bandwidth and lower latency, Jericho4 offers a secure, high-bandwidth fabric with deep buffers, tackling major challenges in running AI across multiple sites. With support for over a million XPUs and distances beyond 100km, the chip lays a strong foundation for Ethernet-based AI networks without trading off performance, efficiency, or security.

Built to Scale AI Beyond One Data Center

Jericho4 is designed to move past the physical and power limitations of centralized GPU clusters. A single system can scale up to 36,000 HyperPorts, each combining four 800GE links into a 3.2 Tbps logical port. This lets operators connect multiple facilities while keeping performance steady through deep buffering and RoCE over long distances. Every port runs MACsec encryption at line rate, keeping data secure even when it travels across third-party infrastructure. These features make Jericho4 a key enabler for spreading AI workloads across large, power-hungry, multi-site setups.
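The scale figures above are simple multiplication; as a quick back-of-envelope check (all numbers come from the article, nothing Broadcom-specific is assumed):

```python
# Back-of-envelope check on the HyperPort scale figures cited above.
# All numbers are from the article; this is illustrative arithmetic only.

LINKS_PER_HYPERPORT = 4          # four 800GE links bonded together
LINK_SPEED_GBPS = 800            # 800GE per member link
HYPERPORTS_PER_SYSTEM = 36_000   # maximum HyperPorts in one system

hyperport_gbps = LINKS_PER_HYPERPORT * LINK_SPEED_GBPS
system_tbps = HYPERPORTS_PER_SYSTEM * hyperport_gbps / 1_000

print(f"One HyperPort: {hyperport_gbps} Gbps ({hyperport_gbps / 1_000} Tbps)")
print(f"Full system fabric: {system_tbps:,.0f} Tbps aggregate")
```

The aggregate figure is the sum across all HyperPorts in a maximally built-out system, not the capacity of a single chip.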

Lossless Data Across Distances Over 100km

A big part of Jericho4’s value comes from Broadcom’s focus on lossless transport and hardware-level congestion management. Even under heavy AI traffic, the chip avoids packet loss by buffering data during congestion and supporting RDMA over Converged Ethernet (RoCE). This design keeps Priority Flow Control (PFC) back-pressure from propagating between locations: Broadcom says congestion is handled locally, preventing ripple effects in other clusters. The result is steady, uninterrupted performance for hyperscale AI running across city-wide or regional data centers.
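The need for deep buffers at these distances follows from a standard bandwidth-delay-product estimate: a lossless sender must be able to absorb roughly a round trip’s worth of in-flight data. A minimal sketch, assuming a typical fiber propagation speed of about 2×10⁸ m/s (a textbook assumption, not a Broadcom figure):

```python
# Bandwidth-delay product for a long-haul lossless link: the buffer must
# hold roughly one round trip's worth of in-flight data.
# Standard networking estimate; none of this is Broadcom-specific.

DISTANCE_KM = 100
FIBER_SPEED_M_PER_S = 2e8        # ~2/3 the speed of light in fiber (assumption)
PORT_RATE_GBPS = 3200            # one 3.2 Tbps HyperPort

rtt_s = 2 * DISTANCE_KM * 1_000 / FIBER_SPEED_M_PER_S
bdp_bytes = PORT_RATE_GBPS * 1e9 / 8 * rtt_s

print(f"Round-trip time over {DISTANCE_KM} km: {rtt_s * 1e3:.1f} ms")
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB per HyperPort")
```

Hundreds of megabytes of in-flight data per port is far beyond what shallow-buffer switch silicon can absorb, which is why deep buffering is central to the long-haul story.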

HyperPort and SerDes Drive Better Network Use

With Broadcom’s HyperPort technology, four 800GE links operate as a single 3.2 Tbps channel, which simplifies link management and cuts the inefficiencies of traditional load balancing. This approach improves network utilization by up to 70% and can shorten job completion times by as much as 40%. The chip also uses 200G PAM4 SerDes, removing the need for retimers, which cuts power use and lowers the part count. As models grow and energy use climbs, this design keeps the system cool and efficient while pushing data at high speed. That mix of performance and efficiency makes Jericho4 a cost-effective choice for scaling up.
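The power and part-count benefit of faster SerDes comes down to lane count per port. A rough sketch of the comparison, where the 200G rate is from the article and the 100G baseline is assumed purely for illustration:

```python
# Lane counts needed to feed one 3.2 Tbps HyperPort at different SerDes rates.
# The 200G figure is from the article; the 100G baseline is an illustrative
# comparison, not a claim about any specific prior Broadcom part.

PORT_RATE_GBPS = 3200

lanes_needed = {serdes: PORT_RATE_GBPS // serdes for serdes in (100, 200)}
for serdes_gbps, lanes in lanes_needed.items():
    print(f"{serdes_gbps}G SerDes: {lanes} lanes per 3.2 Tbps HyperPort")
```

Halving the lane count means fewer electrical paths to drive and fewer components in the signal path, which is where the power and cooling savings come from.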

Standards Support and Ecosystem Fit for Long-Term Use

Jericho4 meets the specs of the Ultra Ethernet Consortium (UEC), so it works with standard Ethernet NICs, switches, and software stacks. It can handle 51.2 Tbps of switching capacity and supports port speeds from 100GE to 1.6TbE, which means it fits into a wide range of setups – from edge locations to cloud data centers. It also supports over 200,000 MACsec security policies and features Elastic Pipe packet processing, positioning it as a long-term infrastructure piece. By sticking to standards and ensuring broad compatibility, Jericho4 helps avoid vendor lock-in and encourages adoption across the AI networking space.
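The 51.2 Tbps capacity figure maps onto the supported port speeds by simple division; the counts below are illustrative only, not a published Broadcom port configuration:

```python
# How 51.2 Tbps of switching capacity maps onto different port speeds.
# Simple division for illustration; not a published port configuration.

CAPACITY_GBPS = 51_200

port_counts = {
    label: CAPACITY_GBPS // gbps
    for gbps, label in ((100, "100GE"), (800, "800GE"), (1600, "1.6TbE"))
}
for label, count in port_counts.items():
    print(f"{label}: up to {count} ports at line rate")
```

The same silicon budget can therefore serve a high-radix edge role with many slower ports or a low-radix spine role with a handful of very fast ones.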

What to Watch:

  • Integration of Jericho4 into OEM platforms such as Arista’s R-Series or Nokia’s 7250 IXR.
  • Competition from NVIDIA InfiniBand, Cisco Nexus, and other Ethernet vendors scaling their AI networking portfolios.
  • Customer reception of Jericho4’s performance across diverse geographies and heterogeneous data center setups.
  • Operator feedback on congestion management, packet loss, and real-time performance under AI training loads.

See the complete press release on the launch of Broadcom Jericho4, enabling distributed AI computing, on the Broadcom website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Synopsys Demonstrates PCIe 6.x Interoperability With Broadcom at PCI-SIG DevCon 2025

Broadcom Q2 FY 2025 Sees Record Revenue, Solid AI and Software Growth

Solidigm and Broadcom Extend SSD Partnership to Power AI’s Next Growth Phase

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.

