AWS Serves Up NVIDIA GPUs for Short-Duration AI/ML Workloads

The News: Amazon Web Services (AWS) launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, a consumption model that lets customers reserve NVIDIA graphics processing units (GPUs) co-located in EC2 UltraClusters for short-duration machine learning (ML) workloads. You can read the press release on the AWS website.

Analyst Take: NVIDIA has cemented its position as a leading GPU provider with its high-performance computing (HPC) and deep learning capabilities capturing significant market share, particularly among gamers, data scientists, and AI researchers. Hyperscale cloud providers are capitalizing on this demand by offering NVIDIA’s GPU-accelerated cloud instances, which cater to a wide array of workloads from complex AI modeling to graphics-intensive applications, thereby expanding access to these high-end computing resources without the upfront investment in physical hardware.

Against this backdrop, AWS has come up with a way to address NVIDIA GPU supply constraints while enabling customers to avoid long-term commitments to expensive GPUs for short-term jobs. In his blog post, Channy Yun, AWS principal developer advocate, compared the approach to making a hotel room reservation. The customer reserves a block of time starting and finishing on specific dates. Instead of picking a room type, the customer selects the number of instances required. When the start date arrives, the customer can access the reserved EC2 Capacity Block and launch P5 instances. At the end of the EC2 Capacity Block duration, any running instances are terminated.
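The search-then-reserve workflow described above maps to two EC2 API calls, DescribeCapacityBlockOfferings and PurchaseCapacityBlock. The sketch below is a minimal illustration, not AWS's reference code; it assumes boto3 with configured credentials, and the helper names (`offering_search_params`, `reserve_capacity_block`) are our own:

```python
from datetime import datetime, timedelta


def offering_search_params(instance_count: int, days: int,
                           earliest_start: datetime) -> dict:
    """Build the request used to search for Capacity Block offerings."""
    return {
        "InstanceType": "p5.48xlarge",       # NVIDIA H100-based P5 instances
        "InstanceCount": instance_count,     # 1, 2, 4, 8, 16, 32, or 64
        "CapacityDurationHours": days * 24,  # reservations run 1 to 14 days
        "StartDateRange": earliest_start,
        "EndDateRange": earliest_start + timedelta(days=days + 7),
    }


def reserve_capacity_block(params: dict) -> str:
    """Search offerings, buy the lowest-priced one (charged up front),
    and return the resulting capacity reservation ID."""
    import boto3  # imported lazily; requires configured AWS credentials

    ec2 = boto3.client("ec2")
    offerings = ec2.describe_capacity_block_offerings(
        **params)["CapacityBlockOfferings"]
    best = min(offerings, key=lambda o: float(o["UpfrontFee"]))
    purchase = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=best["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
    return purchase["CapacityReservation"]["CapacityReservationId"]
```

Once the block's start date arrives, P5 instances are launched against the returned capacity reservation; when the block ends, any running instances are terminated, as Yun's post notes.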

The usage model provides GPU instances to train and deploy generative AI and ML models. EC2 Capacity Blocks are available for Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs in the AWS US East (Ohio) Region. The EC2 UltraClusters designed for high-performance ML workloads are interconnected with Elastic Fabric Adapter (EFA) networking for the best network performance available in EC2.

Capacity options include 1, 2, 4, 8, 16, 32, or 64 instances for up to 512 GPUs, and they can be reserved for between 1 and 14 days. EC2 Capacity Blocks can be purchased up to 8 weeks in advance. Keep in mind, EC2 Capacity Blocks cannot be modified or cancelled after purchase.
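The sizing rules quoted above are easy to sanity-check: each p5.48xlarge instance carries 8 NVIDIA H100 GPUs, so the 64-instance maximum works out to 512 GPUs. A small validation helper (our own illustration, not an AWS API) makes the constraints explicit:

```python
# Constraints as published for EC2 Capacity Blocks at launch.
ALLOWED_INSTANCE_COUNTS = (1, 2, 4, 8, 16, 32, 64)
GPUS_PER_P5_INSTANCE = 8   # NVIDIA H100 GPUs per p5.48xlarge
MIN_DAYS, MAX_DAYS = 1, 14


def validate_block(instances: int, days: int) -> int:
    """Return the total GPU count for a valid reservation,
    or raise ValueError if the request breaks a constraint."""
    if instances not in ALLOWED_INSTANCE_COUNTS:
        raise ValueError(
            f"instance count must be one of {ALLOWED_INSTANCE_COUNTS}")
    if not MIN_DAYS <= days <= MAX_DAYS:
        raise ValueError("duration must be between 1 and 14 days")
    return instances * GPUS_PER_P5_INSTANCE
```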

EC2 Capacity Block pricing depends on available supply and demand at the time of purchase (again, like a hotel room). When a customer searches for Capacity Blocks, AWS shows the lowest-priced option that meets the specifications in the selected date range. The EC2 Capacity Block price is charged up front and does not change after purchase.

We see this usage model as a particularly good fit for organizations that need GPUs for a single large language model (LLM) job and do not want to pay for long-term instances. This setup is especially valuable now, with interest in generative AI peaking and GPU resources in high demand and priced at a premium.

Looking Ahead

Looking ahead, the GPU provisioning marketplace is poised for further innovation, with hyperscale cloud providers such as AWS leading the charge by offering flexible and cost-effective GPU access models akin to the EC2 Capacity Blocks. This approach not only circumvents the scarcity and high upfront costs of NVIDIA GPUs but also aligns with the growing enterprise demand for scalability and agility, especially as interest in generative AI peaks. AWS’s model, which facilitates short-term, high-intensity compute jobs without long-term commitment, is likely to become a blueprint for cloud services, offering a strategic advantage to organizations that engage in sporadic, resource-intensive tasks such as training LLMs.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

AWS Storage Day 2023: AWS Tackles AI/ML, Cyber-Resiliency in the Cloud

AWS Announces New Offerings to Accelerate Gen AI Innovation

Google Cloud Set to Launch NVIDIA-Powered A3 GPU Virtual Machines

Author Information

Steven engages with the world’s largest technology brands to explore new operating models and how they drive innovation and competitive edge.

Dave’s focus within The Futurum Group is concentrated in the rapidly evolving integrated infrastructure and cloud storage markets. Before joining the Evaluator Group, Dave spent 25 years as a technology journalist and covered enterprise storage for more than 15 years. He most recently worked for 13 years at TechTarget as Editorial Director and Executive News Editor for storage, data protection and converged infrastructure. In 2020, Dave won an American Society of Business Professional Editors (ASBPE) national award for column writing.

His previous jobs covering technology include news editor at Byte and Switch, managing editor of EdTech Magazine, and features and new products editor at Windows Magazine. Before turning to technology, he was an editor and sports reporter for United Press International in New York for 12 years. A New Jersey native, Dave currently lives in northern Virginia.

Dave holds a Bachelor of Arts in Communication and Journalism from William Paterson University.
