Google Cloud Set to Launch NVIDIA-Powered A3 GPU Virtual Machines


The News: Google extended its partnership with NVIDIA during Google Cloud Next ’23, announcing general availability (GA) of its H100-powered A3 GPU virtual machines (VMs) and outlining plans for future collaboration. See the announcement on the Google Cloud blog.


Analyst Take: The big Google-NVIDIA news out of the conference was that the A3 supercomputer VMs will be generally available next month. These VMs use NVIDIA’s H100 Tensor Core GPUs, which are built to train and serve demanding AI workloads and large language models (LLMs). Google claims the A3 instances, combined with Google Cloud infrastructure, deliver 3x faster training and 10x greater networking bandwidth than its previous-generation instances. A3 deployments can also scale models to tens of thousands of NVIDIA H100 GPUs.

Each A3 VM includes dual 4th Gen Intel Xeon Scalable processors, 8 NVIDIA H100 GPUs, and 2 TB of host memory. The A3 VM delivers 3.6 TB/s of bisectional bandwidth between its eight GPUs via fourth-generation NVIDIA NVLink technology, while the networking gains come from Google’s custom Titanium network adapter and NVIDIA Collective Communications Library (NCCL) optimizations.
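For GCP customers, provisioning an A3 instance should look much like any other accelerator-optimized machine type in Compute Engine. The sketch below is illustrative only: the machine type name (`a3-highgpu-8g`), zone, and image are assumptions that may differ at GA, so check Google’s documentation for the actual values.

```shell
# Hypothetical sketch: provisioning an A3 VM (8x NVIDIA H100) on Compute Engine.
# The machine type name and zone availability are assumptions, not confirmed by
# the announcement; verify with `gcloud compute machine-types list` at GA.
gcloud compute instances create my-a3-vm \
    --machine-type=a3-highgpu-8g \
    --zone=us-central1-a \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --maintenance-policy=TERMINATE
```

As with other GPU VM families, `--maintenance-policy=TERMINATE` is typically required because GPU-attached instances cannot live-migrate during host maintenance.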

During a Cloud Next keynote, Google Cloud CEO Thomas Kurian and NVIDIA CEO Jensen Huang spoke of other joint generative AI projects the companies are working on. These projects include:

  • Integrating Google’s serverless Spark offering with NVIDIA GPUs and acceleration libraries for data science workloads, through Google’s Dataproc managed Hadoop and Spark service
  • Plans to put the NVIDIA DGX Cloud on Google Cloud Platform (GCP), so GCP customers can take advantage of NVIDIA’s AI cloud supercomputer
  • Co-engineering chips for data processing, model serving, networking, and software to integrate NVIDIA acceleration into the GCP Vertex AI development environment
  • Working on NVIDIA large-memory AI with DGX GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System
  • Enabling NVIDIA GPU acceleration for the PaxML framework, which Google uses to build internal LLMs

Even while launching the next generation of its own TPU custom chips for accelerating ML, Google made it clear that access to NVIDIA acceleration remains essential for serious AI contenders. That is an easy call, following NVIDIA’s strong earnings report in August, when it doubled revenue year-over-year to $13.5 billion on the strength of its AI products and services.

The winners in the generative AI wars will be the companies that can best leverage their NVIDIA acceleration partnerships, and Google is fully engaged. Google Cloud Next ’23 featured presentations from General Motors, IHOP, Fox Sports, Six Flags, Wendy’s, Estée Lauder, GE Appliances, and healthcare firms Bayer Pharma, HCA Healthcare, and Meditech detailing their use of AI on Google Cloud.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Google Cloud’s TPU v5e Accelerates AI Compute War

NVIDIA Generative AI Accelerates Automotive Innovation

Duet AI for Google Workspace Enhances Google Meet and Google Chat

Author Information

Dave’s focus within The Futurum Group is concentrated in the rapidly evolving integrated infrastructure and cloud storage markets. Before joining the Evaluator Group, Dave spent 25 years as a technology journalist and covered enterprise storage for more than 15 years. He most recently worked for 13 years at TechTarget as Editorial Director and Executive News Editor for storage, data protection and converged infrastructure. In 2020, Dave won an American Society of Business Professional Editors (ASBPE) national award for column writing.

His previous jobs covering technology include news editor at Byte and Switch, managing editor of EdTech Magazine, and features and new products editor at Windows Magazine. Before turning to technology, he was an editor and sports reporter for United Press International in New York for 12 years. A New Jersey native, Dave currently lives in northern Virginia.

Dave holds a Bachelor of Arts in Communication and Journalism from William Paterson University.
