Can NeuReality AI-SuperNIC Deliver the Speed AI Systems Need?

Analyst(s): Ray Wang
Publication Date: September 18, 2025

NeuReality unveiled its NR2 AI-SuperNIC with UEC compliance, designed to eliminate scale-out network bottlenecks and enhance GPU efficiency for AI factories.

What is Covered in this Article:

  • NeuReality’s launch of the NR2 AI-SuperNIC with 1.6 Tbps throughput and UEC compliance.
  • NR1 software upgrade to support UEC 1.0 specification.
  • In-network computing capabilities to improve large-scale GPU performance.
  • NR2 modular approach with separate networking and compute dies.
  • Availability timeline for NR2 AI-SuperNIC and mass production schedule.

The News: NeuReality just rolled out its new NR2 AI-SuperNIC, a 1.6 Tbps network card designed specifically for large-scale AI systems. The new card supports the Ultra Ethernet Consortium (UEC) standard and brings in-network computing to the data path, addressing common challenges in today’s AI workloads: scalability, latency, and efficiency. Alongside the launch, the company also released a software update that brings its earlier NR1 model up to the UEC 1.0 specification.

NeuReality says the NR2 can be deployed in several ways: co-packaged with GPUs, mounted on micro-server boards, or used as a standalone NIC. It’s built to remove network bottlenecks that often slow down high-performance systems. Select customers will get access in H2 FY 2026, with full-scale production planned for FY 2027.

Analyst Take: NeuReality’s launch of the NR2 AI-SuperNIC feels like a big step forward for AI networking. As AI models grow to handle more complex tasks, including multimodal inputs and reasoning chains, the strain on infrastructure keeps growing – and older networks are starting to hold GPUs back. With UEC support and in-network computing, the NR2 could change how AI data centers run, helping accelerators get the data they need faster and more efficiently.

Tackling Scale-Out Limits

The NR2 builds on the NR1’s embedded AI-NIC foundation, boosting the network speed to 1.6 Tbps and bringing computing directly into the data path. With upgraded AI-Hypervisor and DSP processors, the NR2 can handle workload coordination at scale, supporting math-heavy and control-heavy tasks. This reduces communication delays that often leave GPUs idle, leading to better use of resources. The mix of speed, low latency, and workload offloading makes it a strong fit for AI clusters of all sizes. In short, it clears the way for the next wave of AI systems.

UEC Support and Compatibility

By adopting UEC-compliant Ethernet, NeuReality delivers low latency and straightforward interoperability across different AI clusters. This builds on the NR1’s original support for TCP and RoCEv2, making it easier to slot the card into existing setups. This commitment to standardization helps ensure that systems from multiple vendors work together smoothly, which is key for AI factories built on mixed infrastructure. The move to UEC makes NeuReality’s offering especially appealing for distributed AI environments.

Modular AI Infrastructure

The NR2 is part of a shift toward modular infrastructure that separates networking from compute. The NR2 AI-SuperNIC (NR2n) will launch as a standalone product first, with the NR2 AI-CPU following. The CPU will feature up to 128 cores and be paired with the SuperNIC. Built on Arm’s Neoverse V3, the NR2 AI-CPU is tuned for tasks such as real-time coordination, token streaming, KV-cache management, and overall orchestration. This modular design helps balance power and performance across networking and computing tasks. Moving from NR1 to NR2 shows how NeuReality is staying in step with the evolving needs of GPU-based systems.

Big Picture for AI Infrastructure

NeuReality aims to rework the foundation of AI infrastructure, breaking away from the old CPU-GPU-NIC setup. The NR2 lineup helps GPUs and other accelerators make the most of every compute cycle by clearing out the usual bottlenecks that add cost and energy use. The system can scale easily—from small racks to massive AI factories—making it suitable for both training and inference at any level. With a focus on full-system efficiency, NeuReality is pushing the boundaries of what distributed AI networks can do.

What to Watch:

  • Customer adoption timelines as NR2 AI-SuperNIC becomes available in H2 FY 2026 and ramps in FY 2027.
  • Impact of in-network computing on large-scale GPU utilization across AI factories.
  • Progress of UEC specification adoption across the AI ecosystem and NeuReality’s role in shaping it.
  • Integration of the NR2 AI-CPU with up to 128 cores and its orchestration capabilities.
  • Demonstrations at the AI Infra Summit showcasing the efficiency and ROI of GenAI with NeuReality solutions, and the customer uptake that follows.

See the complete press release on NeuReality’s NR2 AI-SuperNIC launch on the NeuReality website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, informed by data and other information that might have been provided for validation, and are not those of Futurum as a whole.

Other insights from Futurum:

Can Lip-Bu Tan’s Engineering-First Vision Get Intel Back on Track?

Is NVIDIA’s Jetson Thor the New Brain for General Robotics?

Qualcomm’s Arm-Based Data Center CPUs To Smoothly Integrate With NVIDIA

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.

