AWS re:Invent: With an Eye on AI, AWS Adds, Enhances Storage Services

The News: Amazon Web Services (AWS) enhanced its object and file storage services at AWS re:Invent, announcements that fit the event's overall theme of improving its public cloud for enterprises looking to use generative AI. The storage announcements are detailed in blog posts on the AWS website.

Analyst Take: As usual, AWS made a wide range of storage and data protection announcements at re:Invent this year. Here is the main storage news.

Object Storage

Amazon S3 Express One Zone. AWS claims this new storage class delivers up to 10 times better performance than S3 Standard for smaller objects, and that it can handle hundreds of thousands of requests per second with single-digit millisecond latency. Objects are stored and replicated – likely on flash storage – within a single AWS Availability Zone to reduce the latency between compute and storage. The Express One Zone class especially benefits smaller objects, for which per-request latency, rather than data transfer, dominates access time.
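A rough back-of-envelope sketch shows why single-digit-millisecond latency matters most for small objects: at small sizes, per-request latency, not transfer time, dominates. The latency and throughput figures below are illustrative assumptions, not measured AWS numbers.

```python
# Illustrative comparison of per-object access time for a small object.
# All latency/throughput values are assumptions, not AWS-published figures.
OBJECT_KIB = 64                 # a "small" object
THROUGHPUT_MIBPS = 100          # assumed per-request streaming throughput
STANDARD_LATENCY_MS = 30.0      # assumed first-byte latency, S3 Standard
EXPRESS_LATENCY_MS = 3.0        # single-digit ms, per the Express One Zone claim

# Time to actually stream the bytes is tiny for small objects (~0.6 ms here),
# so cutting request latency yields nearly the full speedup.
transfer_ms = OBJECT_KIB / 1024 / THROUGHPUT_MIBPS * 1000

standard_total = STANDARD_LATENCY_MS + transfer_ms
express_total = EXPRESS_LATENCY_MS + transfer_ms
print(f"per-object: {standard_total:.1f} ms vs {express_total:.1f} ms "
      f"({standard_total / express_total:.1f}x faster)")
# → per-object: 30.6 ms vs 3.6 ms (8.4x faster)
```

For large objects the transfer term would dominate instead, which is why the benefit is framed around smaller objects.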

File Storage

Elastic File System (EFS) Archive. Archive is a new storage class for Amazon EFS that keeps the coldest file data always available. At $0.008/GB per month in the US East region, EFS Archive costs up to 97% less than EFS Standard and up to 50% less than the EFS Infrequent Access storage class. EFS Archive is intended for file data accessed no more than a few times a year. Intelligent tiering can automatically move files from EFS Standard, with its sub-millisecond SSD latencies, to EFS Infrequent Access and then to EFS Archive, based on when the files were last accessed.
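The quoted savings are easy to sanity-check. The Archive price comes from the announcement; the EFS Standard and Infrequent Access prices below are assumed US East list prices, so treat this as a sketch rather than a pricing reference.

```python
# Sanity check on the quoted EFS Archive savings.
ARCHIVE = 0.008    # $/GB-month, EFS Archive (from the announcement)
STANDARD = 0.30    # $/GB-month, EFS Standard (assumed us-east-1 list price)
IA = 0.016         # $/GB-month, EFS Infrequent Access (assumed us-east-1 list price)

vs_standard = 1 - ARCHIVE / STANDARD
vs_ia = 1 - ARCHIVE / IA
print(f"Archive saves {vs_standard:.0%} vs Standard and {vs_ia:.0%} vs IA")
# → Archive saves 97% vs Standard and 50% vs IA
```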

FSx for NetApp ONTAP. AWS also enhanced its enterprise-class file system co-engineered with NetApp. Enhancements include new scale-out file systems, Virtual Private Cloud (VPC) support, and FlexGroup volume management.

Until now, FSx for ONTAP has been scale-up only, running on a single pair of servers in an active-passive high-availability configuration. The new scale-out FSx for ONTAP file system option uses from two to six high-availability pairs. While scale-up file systems support a maximum of 4 GBps of read throughput, 1.8 GBps of write throughput, 160,000 IOPS, and 192 TiB of SSD storage, scale-out file systems support up to 36 GBps of read throughput, 6.6 GBps of write throughput, 1.2 million IOPS, and 1 PiB of SSD storage.

Customers who specify 4 GBps or less of throughput get the scale-up server configuration, while customers who specify more than 4 GBps receive the scale-out configuration. Scale-up file systems can use multiple Availability Zones, while scale-out file systems are available only in a single AZ.
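As a rough check on those maxima, six HA pairs at the 192 TiB per-pair SSD limit work out to about 1 PiB. Treating the scale-out system as six independent pairs is a simplifying assumption on my part, not AWS's stated architecture.

```python
# Rough capacity check for a six-pair scale-out FSx for ONTAP file system.
# Modeling it as six independent HA pairs is a simplifying assumption.
PAIRS = 6
SSD_TIB_PER_PAIR = 192                 # per-pair SSD maximum quoted for scale-up

total_tib = PAIRS * SSD_TIB_PER_PAIR   # 1152 TiB
total_pib = total_tib / 1024           # about 1.1 PiB, in line with the ~1 PiB limit
print(f"{total_tib} TiB ~= {total_pib:.3f} PiB")
# → 1152 TiB ~= 1.125 PiB
```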

FSx for ONTAP users can now create multi-AZ file systems in VPCs shared with them by other accounts. That makes it possible to create and access highly available storage from multiple VPC virtual networks.

Users can also create, manage, and back up FSx for ONTAP FlexGroup volumes through the AWS Management Console, the Amazon FSx CLI, and the AWS SDK. Previously, they could create FlexGroups only through the ONTAP CLI and ONTAP REST API. Customers can also now create Amazon FSx backups of FlexGroup volumes. A FlexGroup volume is a scale-out NAS container that uses automatic load distribution for scalability and high performance; a FlexGroup can scale to 20 PB.

On-demand replication for FSx for OpenZFS. This feature enables customers to send a snapshot from one FSx for OpenZFS file system to another file system in their account. FSx for OpenZFS file systems are accessible from Linux, Windows, and macOS compute instances and containers through the NFS protocol.

Impact on AI

What does all this have to do with AI? Object and file storage are used mainly for unstructured data, such as the audio, video, image, and document content frequently used in AI. Amazon S3 Express One Zone keeps storage and compute closer together, and EFS Archive makes cold data more readily available. Placing compute and storage close together matters for generative AI because it keeps data near the training nodes; it also speeds up monitoring and inferencing.

These enhancements—along with the general performance gains—can be of great benefit to customers who have petabytes of data in AWS that they want to use for generative AI.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

AWS Storage Day 2023: AWS Tackles AI/ML, Cyber-Resiliency in the Cloud

AWS Serves Up NVIDIA GPUs for Short-Duration AI/ML Workloads

AWS re:Invent: AWS Unveils Next-Gen Graviton, Trainium Chips

Author Information

Dave focuses on the rapidly evolving integrated infrastructure and cloud storage markets.
