Analyst(s): Futurum Research
Publication Date: February 16, 2026
Arista Networks’ Q4 update underscores expanding Ethernet adoption in AI back-end and front-end networks, deeper work with model builders and cloud titans, and growing demand for scale-out/scale-across fabrics. Management commentary points to increasing AMD accelerator traction in open, multi-vendor stacks, evolving DCI/optical architectures, and disciplined pricing amid memory cost inflation.
What is Covered in This Article:
- Arista Networks’ Q4 FY 2025 financial results
- AI Ethernet buildouts across model builders
- Scale-across DCI and 7800 spine momentum
- AMD-driven open networking attach gains
- Guidance and Final Thoughts
The News: Arista Networks (NYSE: ANET) reported Q4 FY 2025 results. Revenue was $2.5 billion, up 28.9% year on year (YoY), versus Wall Street consensus of $2.39 billion. Product revenue was $2.1 billion, up 30% YoY; service revenue was $392.1 million, up 22% YoY. Non-GAAP operating income was $1.2 billion with non-GAAP operating margin of 47.5%, up from 47.0% a year ago. Non-GAAP net income was $1.0 billion, and non-GAAP diluted EPS was $0.82, up from $0.66 a year ago.
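The reported figures are internally consistent; as an illustrative sanity check (derived values below are approximations computed from the article's quoted numbers, not company-reported figures), the growth rates imply the following prior-year baselines:

```python
# Sanity check of the year-on-year figures quoted above.
# Inputs are the article's reported numbers; outputs are implied
# approximations, not company-reported values.

q4_revenue = 2.5e9               # Q4 FY 2025 revenue, USD
revenue_yoy = 0.289              # reported YoY revenue growth
eps_now, eps_prior = 0.82, 0.66  # non-GAAP diluted EPS, current vs. prior year

implied_prior_revenue = q4_revenue / (1 + revenue_yoy)
eps_growth = eps_now / eps_prior - 1

print(f"Implied Q4 FY 2024 revenue: ${implied_prior_revenue / 1e9:.2f}B")
print(f"Non-GAAP EPS growth: {eps_growth:.1%}")
```

This backs out an implied prior-year quarter of roughly $1.94 billion in revenue and about 24% non-GAAP EPS growth, consistent with the beat described above.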
“Our Q4 results underscore the strong operating leverage inherent in our model, providing a powerful exit rate as we head into 2026,” said Chantelle Breithaupt, Arista’s CFO. “By pairing 29% revenue growth with a disciplined 47.5% operating margin, Arista achieved a historic milestone: surpassing $1 billion in quarterly net income. Congratulations to the team.”
Arista Networks Q4 FY 2025: Revenue Beat on AI Ethernet Momentum
Analyst Take: Arista’s Q4 revenue beat and Q1 guide reflect sustained AI-driven demand for Ethernet-based fabrics across hyperscale, specialty cloud, and enterprise settings. Management emphasized broader engagement with multiple model builders and cloud partners, along with architectural shifts toward scale-across topologies that elevate DCI and coherent optics. Open, multi-vendor configurations—especially in AMD-accelerated clusters—are expanding Arista’s attach across NIC, IO, and switching layers. Deferred revenue timing remains inherently lumpy amid 12–18 month acceptance windows, while memory cost pressures are prompting selective pricing actions. The setup supports continued growth in AI networking as deployments mature from pilots to multi-region production.
AI Ethernet Adoption Across Model Builders and Cloud Titans
Arista is expanding beyond early work with a small set of model builders to support a wider mix, including Gemini, xAI, Anthropic’s Claude, and OpenAI, each with differing protocol and algorithmic needs that shape network design. These deployments are increasingly distributed across multiple colocation sites and regions, often in partnership with cloud titans for co-developed AI services rather than single–data center silos. Power constraints and workload complexity are pushing fabrics that emphasize bisection bandwidth, throughput, and sustained utilization across domains. This shift favors Ethernet-based designs that unify back-end training, front-end inference, and enterprise interconnect under consistent telemetry and operations. Management’s commentary indicates Arista expects to work as both an AI-specialty partner and a cloud co-builder as adoption broadens. The broadening buyer set supports sustained AI networking demand.
Scale-Out/Scale-Across Fabrics Elevate DCI, Coherent Optics, and the 7800 Spine
As AI clusters grow, scale-across architectures are gaining traction, with high-injection-bandwidth DCI routing and coherent long-haul optics playing larger roles in multi–data center interconnects. Arista highlighted strong alignment with coherent optics vendors (while not building optics in-house) and positioned the 7800 spine chassis as the flagship platform for universal AI spine use cases. The design focus includes virtual output queuing, robust buffering, and high availability for regional interconnects. Management noted “less blue box” in these scale-across deployments, with preference shifting to the 7800 spine for resilience and performance. This evolution raises the strategic value of routing and interconnect layers in AI networks, where reliability and telemetry are critical. Expect Arista to capture higher-value spine and DCI roles as AI fabrics expand geographically.
AMD Accelerator Momentum and Open Ethernet Wins
While FY 2025 began as a predominantly NVIDIA environment, Arista now sees roughly 20%–25% of deployments using AMD as the preferred accelerator, particularly where customers want best-of-breed NIC, network, and IO choices over vertically integrated stacks. This shift aligns with Arista’s open-standards approach and EOS/CloudVision operational tooling that spans hosts and networks. Management also connected AI growth to rising front-end high-speed switching needs linked to agentic AI workloads, which are extending from cloud providers into enterprises over time. Telemetry enhancements—spanning in-network flow control and RDMA counters, plus host-level collectives and NIC-level diagnostics—consolidate visibility in CloudVision to accelerate troubleshooting. The combination of open hardware choices and integrated observability strengthens Arista’s position as AI networking scales into production. Open ecosystems are expanding Arista’s attach opportunities.
Guidance and Final Thoughts
Q1 FY 2026 revenue guidance is about $2.6 billion, above consensus of $2.45 billion, with non-GAAP gross margin of 62%–63% and operating margin around 46%. Deferred revenue recognition remains variable given 12–18 month acceptance windows and the potential for lumpy releases depending on customer timing. Management flagged memory price inflation and indicated that a one-time increase on select memory-intensive SKUs may be necessary, with memory included in purchase commitments but still in short supply. Hyperscalers are collaborating more closely with extended planning visibility, helping align capacity with deployment schedules. Collectively, these factors suggest continued momentum in AI networking while component costs and acceptance timing remain key execution variables.
See the full press release on Arista Networks’ Q4 FY 2025 financial results on the company website.
Declaration of generative AI and AI-assisted technologies in the writing process: This content has been generated with the support of artificial intelligence technologies. Due to the fast pace of content creation and the continuous evolution of data and information, The Futurum Group and its analysts strive to ensure the accuracy and factual integrity of the information presented. However, the opinions and interpretations expressed in this content reflect those of the individual author/analyst. The Futurum Group makes no guarantees regarding the completeness, accuracy, or reliability of any information contained herein. Readers are encouraged to verify facts independently and consult relevant sources for further clarification.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Other insights from Futurum:
Will Jericho4 Help Broadcom Lead the Next Era of AI Networking?
Does Nebius’ Acquisition of Tavily Create the Leading Agentic Cloud?
AI Capex 2026: The $690B Infrastructure Sprint
Author Information

Futurum Research
Futurum Research delivers forward-thinking insights on technology, business, and innovation. Content published under the Futurum Research byline incorporates both human and AI-generated information, always with editorial oversight and review from the expert Futurum Research team to ensure quality, accuracy, and relevance. All content, analysis, and opinion are based on sources and information deemed to be reliable at the time of publication.
The Futurum Group is not liable for any errors, omissions, biases, or inadequacies in the information contained herein or for any interpretations thereof. The reader is solely responsible for any decisions made or actions taken based on the information presented in this publication.