
Google Cloud Set to Launch NVIDIA-Powered A3 GPU Virtual Machines

The News: Google extended its partnership with NVIDIA during Google Cloud Next ’23, announcing general availability (GA) of its H100-powered A3 GPU virtual machines (VMs) and outlining plans for future collaboration. See the announcement on the Google Cloud blog.

Analyst Take: The big Google-NVIDIA news out of the conference was that the A3 supercomputer VMs will be generally available next month. These VMs use NVIDIA's H100 Tensor Core GPUs, which are built to train and serve demanding AI workloads and large language models (LLMs). Google claims the A3 instances, combined with Google Cloud infrastructure, provide 3x faster training and 10x greater networking bandwidth than the prior generation. A3 VMs can also scale models across tens of thousands of NVIDIA H100 GPUs.

The A3 VM includes dual 4th Gen Intel Xeon Scalable processors, eight NVIDIA H100 GPUs per VM, and 2 TB of host memory. The A3 VM delivers 3.6 TB/s of bisectional bandwidth between the eight GPUs via fourth-generation NVIDIA NVLink technology. The improved networking bandwidth comes from Google's custom Titanium network adapter and NVIDIA Collective Communications Library (NCCL) optimizations.
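As a back-of-envelope check on that 3.6 TB/s figure: assuming each H100 exposes NVIDIA's published 900 GB/s of bidirectional fourth-generation NVLink bandwidth (an assumption from NVIDIA's spec sheets, not from the Google announcement), the number falls out of cutting the eight-GPU fabric into two halves of four and summing the bandwidth each GPU can drive across the cut:

```python
# Back-of-envelope sketch: reproduce the claimed 3.6 TB/s bisectional
# bandwidth of an A3 VM's eight-GPU NVLink fabric.
# Assumption: 900 GB/s bidirectional NVLink 4 bandwidth per H100
# (NVIDIA's published per-GPU figure); purely illustrative.

GPUS_PER_VM = 8
NVLINK_BW_PER_GPU_GB_S = 900  # GB/s, bidirectional, per GPU

# Bisection bandwidth: split the 8 GPUs into two halves of 4; each GPU
# on one side can drive its full NVLink bandwidth across the cut.
bisection_bw_gb_s = (GPUS_PER_VM // 2) * NVLINK_BW_PER_GPU_GB_S

print(f"{bisection_bw_gb_s / 1000} TB/s")  # 3.6 TB/s
```

Under those assumptions the arithmetic matches Google's stated figure exactly, which suggests the 3.6 TB/s claim is the NVLink bisection of a fully connected eight-GPU topology rather than aggregate all-to-all bandwidth.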

During a Cloud Next keynote, Google Cloud CEO Thomas Kurian and NVIDIA CEO Jensen Huang spoke of other joint generative AI projects the companies are working on. These projects include:

  • Integrating NVIDIA acceleration libraries and GPUs with Google’s serverless Spark, available through the Dataproc managed Hadoop and Spark service, for data science workloads
  • Plans to put the NVIDIA DGX Cloud on Google Cloud Platform (GCP), so GCP customers can take advantage of NVIDIA’s AI cloud supercomputer
  • Co-engineering chips for data processing, model serving, networking, and software to integrate NVIDIA acceleration into the GCP Vertex AI development environment
  • Working on NVIDIA large-memory AI supercomputers built on DGX GH200 Grace Hopper Superchips and the NVIDIA NVLink Switch System
  • Enabling NVIDIA GPU acceleration for the PaxML framework, which Google uses to build internal LLMs

Even while launching the next generation of its own custom TPU chips for accelerating ML, Google made it clear that partnering with NVIDIA on acceleration is essential for serious AI companies. That is an easy call, following NVIDIA’s strong earnings report in August, when it doubled revenue year-over-year to $13.5 billion on the strength of its AI products and services.

The winners in the generative AI wars will be the companies that best leverage their NVIDIA acceleration partnerships, and Google is fully engaged. Google Cloud Next ’23 featured presentations from General Motors, IHOP, Fox Sports, Six Flags, Wendy’s, Estée Lauder, GE Appliances, and healthcare firms Bayer Pharma, HCA Healthcare, and Meditech detailing their use of AI on Google Cloud.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Google Cloud’s TPU v5e Accelerates AI Compute War

NVIDIA Generative AI Accelerates Automotive Innovation

Duet AI for Google Workspace Enhances Google Meet and Google Chat

Author Information

Dave’s focus within The Futurum Group is concentrated in the rapidly evolving integrated infrastructure and cloud storage markets. Before joining the Evaluator Group, Dave spent 25 years as a technology journalist and covered enterprise storage for more than 15 years. He most recently worked for 13 years at TechTarget as Editorial Director and Executive News Editor for storage, data protection, and converged infrastructure. In 2020, Dave won an American Society of Business Publication Editors (ASBPE) national award for column writing.

His previous jobs covering technology include news editor at Byte and Switch, managing editor of EdTech Magazine, and features and new products editor at Windows Magazine. Before turning to technology, he was an editor and sports reporter for United Press International in New York for 12 years. A New Jersey native, Dave currently lives in northern Virginia.

Dave holds a Bachelor of Arts in Communication and Journalism from William Paterson University.
