Pure Storage Announces FlashBlade //EXA for the AI Factory

Analyst(s): Camberley Bates
Publication Date: March 14, 2025

On March 11, 2025, Pure Storage announced its FlashBlade //EXA for AI training and large-scale inferencing.

What is Covered in this Article:

  • AI Factory and the new target market
  • Pure Storage FlashBlade //EXA
  • Rundown on performance

The News: Pure Storage announces FlashBlade //EXA, which targets AI training and large-scale inferencing with a new pNFS-based architecture.

Analyst Take: Pure Storage released its latest AI technology, FlashBlade //EXA, a pNFS-based architecture that reportedly can scale to meet the requirements of what the company calls the AI Factory. Building on the FlashBlade technology with pNFS, Pure Storage has disaggregated the metadata and data nodes to enable broad scale-out for high performance.

AI Factory – the target market for //EXA

Pure Storage, along with others, is using the phrase AI Factory to characterize organizations that are AI native, specializing in delivering technology for this next generation of AI: for instance, firms developing unique foundation models, specialized cloud providers, or HPC environments. This contrasts with the typical enterprise organization, which is building out and exploiting pre-trained models for inference. The differences between these markets are significant: GPU counts in the 1,000s to 10,000s, far higher performance requirements, and data sizes at exabyte (EB+) scale. This market will need traditional scale-out parallel file systems (Lustre, GPFS, etc.) or a pNFS configuration to meet performance requirements of 1+ TB/sec.
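
To make that scale concrete, here is a minimal back-of-envelope sketch in Python. The per-GPU read bandwidth figure is a hypothetical planning assumption, not a Pure Storage or Futurum number; the point is simply that at 1,000 to 10,000 GPUs, aggregate demand lands in the 1+ TB/sec range discussed above.

```python
# Back-of-envelope aggregate read bandwidth for an "AI Factory" deployment.
# PER_GPU_READ_GBPS is a hypothetical assumption for illustration; real demand
# depends on the model, the data pipeline, and checkpoint/restore patterns.
PER_GPU_READ_GBPS = 1.0  # assumed sustained read per GPU, in GB/s

for gpu_count in (1_000, 5_000, 10_000):
    aggregate_tbps = gpu_count * PER_GPU_READ_GBPS / 1_000
    print(f"{gpu_count:>6} GPUs -> ~{aggregate_tbps:.1f} TB/s aggregate read demand")
```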

Scale-out file systems have traditionally been the domain of HPC markets. With large-scale AI, we expect scale-out parallel file systems and pNFS-based file systems to increasingly compete to deliver highly performant data infrastructure at massive scale. In addition to scale and performance, the AI market will demand capabilities that optimize checkpointing performance, availability/reliability, and ease of use. The latter is not normally associated with scale-out parallel file systems. It is this market that Pure Storage is focused on capturing.
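
Checkpointing is a useful way to see why this matters: while a checkpoint is being written, expensive GPUs can sit idle. The figures below are hypothetical planning inputs, not vendor or Futurum numbers; the sketch only illustrates how checkpoint time shrinks as aggregate write bandwidth grows.

```python
# Toy illustration of checkpoint time versus aggregate write bandwidth.
# Both inputs are hypothetical assumptions used only for illustration.
CHECKPOINT_TB = 10.0                           # assumed checkpoint size for a large model, in TB
WRITE_BANDWIDTHS_TBPS = (0.25, 0.5, 1.0, 2.0)  # assumed aggregate write bandwidth, in TB/s

for bw in WRITE_BANDWIDTHS_TBPS:
    seconds = CHECKPOINT_TB / bw
    print(f"At {bw:.2f} TB/s write, a {CHECKPOINT_TB:.0f} TB checkpoint takes ~{seconds:.0f} s")
```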

Pure Storage and //EXA

Building on its hyperscaler experience (we believe particularly with Meta), Pure Storage FlashBlade //EXA adopts a disaggregated design, scaling metadata and data nodes independently. The metadata nodes use NFSv4.1 over TCP (i.e., pNFS), and the data nodes serve data over NFSv3 with RDMA. The metadata nodes will be purchased from Pure Storage with CAPEX and OPEX options. The first release uses off-the-shelf data-node hardware specified by Pure Storage; Pure Storage plans to release data nodes with its 75 TB and 150 TB Direct Flash Modules in 2H 2025. Customers will license the software for the data nodes based on usable TB. Connectivity is standard Ethernet, including NVIDIA NICs, right in the sweet spot of AI Factory architectures.
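
Because the data-node software is licensed on usable terabytes, a rough per-node capacity sketch can help with planning. The 75 TB and 150 TB module sizes come from the announcement; the module count per node and the usable-to-raw ratio below are hypothetical assumptions for illustration, not Pure Storage specifications.

```python
# Rough per-data-node capacity sketch for when Direct Flash Modules ship (2H 2025).
# MODULES_PER_NODE and USABLE_RATIO are hypothetical assumptions, not Pure specs.
MODULES_PER_NODE = 10   # assumed DFM count per data node
USABLE_RATIO = 0.8      # assumed usable fraction after data-protection overhead

for module_tb in (75, 150):
    raw_tb = MODULES_PER_NODE * module_tb
    usable_tb = raw_tb * USABLE_RATIO
    print(f"{module_tb} TB DFMs: ~{raw_tb} TB raw, ~{usable_tb:.0f} TB usable per node (licensed on usable TB)")
```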

While Pure Storage alluded to performance, it held back from announcing any benchmark statistics. Historically, the company has claimed performance numbers but without direct comparisons. In this market, we expect it will release comparatives, and we have been assured it is actively working on them. Its preliminary numbers (not validated by Futurum) are stated at 3.4 TB/sec per rack and 10+ TB/sec of read performance in a single namespace.
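
Taking the vendor-stated 3.4 TB/sec per rack at face value (again, not validated by Futurum), a simple calculation shows how many racks would be needed to reach a given aggregate read target in a single namespace, assuming ideal linear scaling.

```python
import math

# Racks needed for a given aggregate read target, using the vendor-stated
# preliminary figure of 3.4 TB/s per rack (not validated by Futurum) and
# assuming ideal linear scaling across racks.
PER_RACK_TBPS = 3.4

for target_tbps in (3.4, 10.0, 20.0):
    racks = math.ceil(target_tbps / PER_RACK_TBPS)
    print(f"~{target_tbps:g} TB/s target -> {racks} rack(s)")
```

By this arithmetic, the 10+ TB/sec single-namespace figure implies roughly three racks, assuming linear scaling.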

This is a positive announcement, one that will increase competition in the market. Read the full release from Pure Storage here.

What to Watch:

  • The AI battleground for data at large scale
  • How traditional HPC organizations adapt to bring AI technologies to their constituents
  • Increasing use of Object and/or File for AI deployments

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

SuperComputing 2024: A Playground for the Future of Technology

Pure Storage Q4 FY 2025: Double-Digit Revenue Growth Amid Strong Subscription Momentum

Pure Accelerates in London on FlashBlade

Author Information

Camberley Bates

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing; her 360-degree view of addressing challenges and delivering solutions comes from crossing the boundary between sales and channel engagement at large enterprise vendors and from running her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created the new market category “MAID” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division, growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
