AWS Serves Up NVIDIA GPUs for Short-Duration AI/ML Workloads

The News: Amazon Web Services (AWS) launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, a consumption model that lets customers reserve NVIDIA graphics processing units (GPUs) co-located in EC2 UltraClusters for short-duration machine learning (ML) workloads. You can read the press release on the AWS website.

Analyst Take: NVIDIA has cemented its position as a leading GPU provider with its high-performance computing (HPC) and deep learning capabilities capturing significant market share, particularly among gamers, data scientists, and AI researchers. Hyperscale cloud providers are capitalizing on this demand by offering NVIDIA’s GPU-accelerated cloud instances, which cater to a wide array of workloads from complex AI modeling to graphics-intensive applications, thereby expanding access to these high-end computing resources without the upfront investment in physical hardware.

Against this backdrop, AWS has devised a way to work around NVIDIA GPU supply constraints while sparing customers a long-term commitment to expensive GPUs for short-term jobs. In his blog post, Channy Yun, AWS principal developer advocate, compared the approach to booking a hotel room: the customer reserves a block of time with specific start and end dates, and instead of picking a room type, selects the number of instances required. When the start date arrives, the customer can access the reserved EC2 Capacity Block and launch P5 instances. At the end of the EC2 Capacity Block duration, any running instances are terminated.
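For readers who want to see what this hotel-style reservation looks like in code, here is a minimal boto3 sketch of the search-and-purchase flow. It assumes the EC2 Capacity Blocks API calls as we understand them (describe_capacity_block_offerings and purchase_capacity_block); the dates, instance count, and platform are illustrative placeholders, not a tested configuration.

```python
import boto3
from datetime import datetime

# US East (Ohio) is the launch Region for EC2 Capacity Blocks.
ec2 = boto3.client("ec2", region_name="us-east-2")

# Step 1: search for an available Capacity Block -- the "hotel room"
# search. AWS returns the lowest-priced offering that fits the dates,
# instance count, and duration requested.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,                         # number of "rooms"
    StartDateRange=datetime(2023, 11, 13),   # placeholder window
    EndDateRange=datetime(2023, 11, 20),
    CapacityDurationHours=48,                # a two-day block
)["CapacityBlockOfferings"]

# Step 2: book it. The upfront fee is charged at purchase, and the
# reservation cannot be modified or canceled afterward.
reservation = ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)["CapacityReservation"]

print(reservation["CapacityReservationId"], offerings[0]["UpfrontFee"])
```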

The consumption model provides GPU instances to train and deploy generative AI and ML models. EC2 Capacity Blocks are available for Amazon EC2 P5 instances powered by NVIDIA H100 Tensor Core GPUs in the AWS US East (Ohio) Region. The EC2 UltraClusters, designed for high-performance ML workloads, are interconnected with Elastic Fabric Adapter (EFA) networking, providing the best network performance available in EC2.
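Once a block's start date arrives, instances are launched against the reservation. A hedged sketch of that step follows, assuming the capacity-block market type AWS documents for these launches; the AMI and reservation IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# When the Capacity Block becomes active, target its reservation ID
# (returned at purchase time) and flag the launch as a capacity-block
# purchase. Both IDs below are placeholders.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder deep learning AMI
    InstanceType="p5.48xlarge",
    MinCount=4,
    MaxCount=4,
    InstanceMarketOptions={"MarketType": "capacity-block"},
    CapacityReservationSpecification={
        "CapacityReservationTarget": {
            "CapacityReservationId": "cr-0123456789abcdef0"
        }
    },
)
print([i["InstanceId"] for i in resp["Instances"]])
```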

Capacity options include 1, 2, 4, 8, 16, 32, or 64 instances for up to 512 GPUs, and they can be reserved for between 1 and 14 days. EC2 Capacity Blocks can be purchased up to 8 weeks in advance. Keep in mind, EC2 Capacity Blocks cannot be modified or cancelled after purchase.
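Those limits are simple to sanity-check. The small helper below is plain arithmetic from the figures above, plus the fact that each p5.48xlarge instance carries eight NVIDIA H100 GPUs, which is how 64 instances reach the 512-GPU ceiling.

```python
# Valid instance counts and duration window for an EC2 Capacity Block,
# per the limits stated above.
VALID_INSTANCE_COUNTS = {1, 2, 4, 8, 16, 32, 64}
GPUS_PER_P5_INSTANCE = 8  # NVIDIA H100 GPUs per p5.48xlarge

def capacity_block_gpus(instances: int, days: int) -> int:
    """Return total GPUs for a valid request, or raise ValueError."""
    if instances not in VALID_INSTANCE_COUNTS:
        raise ValueError(f"instance count must be one of {sorted(VALID_INSTANCE_COUNTS)}")
    if not 1 <= days <= 14:
        raise ValueError("duration must be between 1 and 14 days")
    return instances * GPUS_PER_P5_INSTANCE

print(capacity_block_gpus(64, 14))  # 512 GPUs at the maximum block size
```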

EC2 Capacity Block pricing depends on available supply and demand at the time of purchase (again, like a hotel). When a customer searches for Capacity Blocks, AWS shows the lowest-priced option that meets the specifications in the selected date range. The EC2 Capacity Block price is charged up front and will not change after purchase.

We see this usage model as a particularly good fit for organizations that need GPUs for a single large language model (LLM) job and do not want to pay for long-term instances. The setup is especially valuable now, with interest in generative AI peaking and GPU resources in high demand and priced at a premium.

Looking Ahead

The GPU provisioning marketplace is poised for further innovation, with hyperscale cloud providers such as AWS leading the charge by offering flexible, cost-effective GPU access models akin to EC2 Capacity Blocks. This approach not only circumvents the scarcity and high upfront costs of NVIDIA GPUs but also aligns with growing enterprise demand for scalability and agility, especially as interest in generative AI peaks. AWS’s model, which facilitates short-term, high-intensity compute jobs without long-term commitment, is likely to become a blueprint for cloud services, offering a strategic advantage to organizations that run sporadic, resource-intensive tasks such as training LLMs.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

AWS Storage Day 2023: AWS Tackles AI/ML, Cyber-Resiliency in the Cloud

AWS Announces New Offerings to Accelerate Gen AI Innovation

Google Cloud Set to Launch NVIDIA-Powered A3 GPU Virtual Machines

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.

Dave’s focus within The Futurum Group is concentrated in the rapidly evolving integrated infrastructure and cloud storage markets. Before joining the Evaluator Group, Dave spent 25 years as a technology journalist and covered enterprise storage for more than 15 years. He most recently worked for 13 years at TechTarget as Editorial Director and Executive News Editor for storage, data protection and converged infrastructure. In 2020, Dave won an American Society of Business Professional Editors (ASBPE) national award for column writing.

His previous jobs covering technology include news editor at Byte and Switch, managing editor of EdTech Magazine, and features and new products editor at Windows Magazine. Before turning to technology, he was an editor and sports reporter for United Press International in New York for 12 years. A New Jersey native, Dave currently lives in northern Virginia.

Dave holds a Bachelor of Arts in Communication and Journalism from William Paterson University.
