Google Enhances Storage for AI

The News: Google has released a series of storage enhancements targeted at supporting AI and machine learning (ML) workloads. The announcement came at Google Next ’24 as part of a larger announcement around Google’s AI Hypercomputer. The storage updates include enhancements to Cloud Storage FUSE, Parallelstore, and Hyperdisk ML. More information is available in Google’s AI Hypercomputer announcement.

Analyst Take: As part of a series of AI Hypercomputer enhancements, Google announced multiple storage updates targeted at enhancing AI and ML. The new storage updates focus on maximizing GPU and TPU utilization to accelerate model training. The announcement includes the following product updates:

  • Cloud Storage FUSE: Google announced new caching capabilities for Cloud Storage FUSE. Cloud Storage FUSE lets you mount and access Cloud Storage buckets as local file systems so that you can read and write objects using standard file system protocols. Caching will improve access times to the buckets, though customers may look to other file systems to provide the very high speeds needed for training. Google claims that the new caching functionality improves training throughput by 2.9x and improves serving performance of Google foundational models by 2.2x.
  • Parallelstore: Google added caching to its parallel file system, which is targeted at scratch storage use. Parallelstore is based on DAOS, a key-value data store architecture written for NVMe technology and originally designed for storage class memory (SCM, i.e., Optane); the new caching layer will provide faster access. Parallelstore and its caching capability are still in preview. Google claims it can provide up to 3.9x faster training times and up to 3.7x higher training throughput compared with native ML framework data loaders.
  • Hyperdisk ML: Google is introducing Hyperdisk ML, a block storage solution targeted at supporting AI inference workloads. This would be the fourth Hyperdisk offering, though details on what is in the ML version are not yet available. Still, Google claims Hyperdisk ML can provide up to 12x faster model load times and expects it to surpass the performance and throughput of Azure Ultra Disk SSD and Amazon EBS io2 Block Express.
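To make the Cloud Storage FUSE model concrete: mounting a bucket with the file cache enabled looks roughly like the sketch below. This is illustrative only; the bucket name and paths are hypothetical, and the exact gcsfuse flags and config keys vary by version, so they should be checked against Google’s Cloud Storage FUSE documentation before use.

```shell
# Sketch only: gcsfuse config keys and flags are version-dependent;
# verify against the Cloud Storage FUSE docs before relying on them.

# Write a config enabling the local file cache (paths are examples).
cat > gcsfuse-config.yaml <<'EOF'
cache-dir: /mnt/gcsfuse-cache   # local SSD directory holding cached objects
file-cache:
  max-size-mb: -1               # -1: use all available space in cache-dir
EOF

# Mount the (hypothetical) bucket as a local file system with caching on.
gcsfuse --config-file gcsfuse-config.yaml my-training-bucket /mnt/data
```

Once mounted, training jobs read objects under /mnt/data with ordinary file I/O, and repeated epochs over the same dataset are served from the local cache rather than from Cloud Storage, which is the mechanism behind the training speedups Google cites.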

Google’s overall announcement optimizes for AI and ML across several layers of hardware and software. In the storage announcements specifically, the emphasis was on caching and keeping data close to compute resources to maximize training performance. Caching is certainly not a new concept, but it holds extra significance for AI training. Training models is a time-consuming process that relies on expensive, compute-intensive resources. Keeping data near these resources and maximizing their utilization becomes a key priority when considering storage requirements for AI.

Feeding data to GPUs fast enough during training has long been a bottleneck for HPC and AI practitioners. Many workarounds have been tried, including process changes, code adjustments, and offload to xPUs, to address the I/O problem. We expect the problem to grow as organizations take on more data and train for their specific use cases. Google is actively addressing these needs and looks to help customers further optimize their AI training.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

2023 Cloud Downtime Incident Report

Google Cloud Launches Axion and Enhances AI Hypercomputer

Public Cloud Storage Catered to AI Data in 2023

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

Now retired, Camberley brought over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm as Managing Director.

Her career spanned all elements of sales and marketing, and she gained a 360-degree view of addressing challenges and delivering solutions by crossing the boundary between sales and channel engagement with large enterprise vendors and by running her own 100-person IT services firm.

Camberley provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and grew it from $14 million to $500 million. She also built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.

