
VMware Private AI at AI Field Day: CPUs If You Can, GPUs If You Must


Introduction

VMware presented its Private AI approach at AI Field Day, covering how to run AI on-premises and get the best out of the CPUs you already have in your data center. If I had to summarize the presentation in one sentence, it would be this: you can run large language model (LLM) inference in virtual machines (VMs) on Intel Sapphire Rapids CPUs; you don’t always need GPUs. The vital part is that Sapphire Rapids (4th Generation Xeon Scalable) CPUs added the Advanced Matrix Extensions (AMX), instructions that allow the CPU to do matrix math efficiently.
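As a quick sanity check, you can confirm from inside the guest that AMX made it through the virtualization layer. Below is a minimal sketch, assuming a Linux guest, that looks for the AMX feature flags (amx_tile, amx_bf16, amx_int8) the kernel reports in /proc/cpuinfo; the helper function is mine, not part of any tool from the presentation.

```python
# Minimal sketch: check whether the guest OS sees the AMX instruction set.
# Linux exposes AMX support as CPU feature flags in /proc/cpuinfo.

AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def visible_amx_flags(cpuinfo_path: str = "/proc/cpuinfo") -> set:
    """Return the AMX-related flags the kernel reports for this CPU."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return AMX_FLAGS & set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    found = visible_amx_flags()
    if found == AMX_FLAGS:
        print("AMX is available:", ", ".join(sorted(found)))
    else:
        print("Missing AMX flags:", ", ".join(sorted(AMX_FLAGS - found)))
        print("Check ESXi version (8.0U2+), VM hardware (v20), kernel (5.19+).")
```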

This matrix math is precisely what GPUs do at large scale, so Intel adding it to a CPU is a big deal for AI workloads. From VMware’s Earl Ruby, we heard that there are a few hurdles for the infrastructure team to clear before vSphere VMs can access the AMX instructions, simply because the instructions are so new. Those hurdles are familiar territory for seasoned vSphere administrators: a minimum ESXi version of 8.0U2, VM hardware version 20, and preferably a Linux kernel above 5.19. The good news is that support for Intel AMX is already present in many popular AI tools, such as PyTorch, so the AI development team does not need to do anything special to benefit from AMX. One element of optimization is essential: quantization, which converts the LLM from floating-point math to lower-precision integer math. Quantization balances the required precision against the resource cost of answering the question: less precise math costs less CPU and memory, but at the price of a less accurate answer.
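To make both points concrete, here is a minimal sketch of what this looks like from the AI team’s side. It assumes PyTorch plus the Hugging Face Transformers library (only PyTorch was named in the talk); the model name and prompt are illustrative, and access to the Llama-2 weights is gated on Hugging Face. On AMX-capable CPUs, PyTorch’s oneDNN backend dispatches bfloat16 matrix math to the AMX units without code changes; the quantization call shows the precision-for-resources trade described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice, matching the Llama-2 7B chatbot in Earl's demo.
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Option 1: bfloat16 autocast. Nothing AMX-specific in the code; the CPU
# backend uses the AMX tile instructions when the hardware offers them.
prompt = tokenizer("What is AMX?", return_tensors="pt")
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model.generate(**prompt, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Option 2: dynamic int8 quantization of the linear layers, trading some
# answer accuracy for lower memory use and cheaper integer math.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```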

Earl showed an interesting comparison: running the Llama-2 7-billion-parameter model as a chatbot on Intel Ice Lake CPUs (no AMX) versus Sapphire Rapids with AMX. Both ran in VMs without any GPU capability. The chatbot on Sapphire Rapids ran approximately 8x faster than on Ice Lake. Without AMX, the Ice Lake CPU didn’t deliver a responsive chatbot, and we would have needed to add a GPU for acceptable performance. On Sapphire Rapids with AMX, the chatbot was perfectly usable. Would the chatbot have run faster if the Sapphire Rapids VM had a GPU? Undoubtedly. Would the GPU’s additional cost and power consumption have delivered better value than using AMX in the CPU? That depends entirely on the value of the lower latency in your application and your business.

Earl also led us through the hurdles in making the AMX instructions available to pods running on VMware Tanzu Kubernetes. Again, these are relatively well-known hurdles for a seasoned vSphere and Tanzu administrator. Like the ESXi hurdles, these Tanzu hurdles will disappear over time as the software versions in use catch up with the AMX features added in Sapphire Rapids.

As we have seen before, Intel adds built-in accelerators to its CPUs to remove the need for add-in card accelerators. I’m old enough to remember when SSL offload cards were essential for web servers; then along came the AES-NI instructions, and now cryptography, including SSL, is baked into the CPU. Over time, I expect to see more and more AI use cases that the CPU can fulfil alone. Running mixed workloads on a shared pool of servers has always been a value proposition for VMware. Allowing AI onto this same pool of servers without requiring additional hardware is a clear win.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

VMware Explore: Making Moves with Multi-Cloud and Private AI

VMware VCF and Tanzu Post Broadcom: Lessons and Evolution

Broadcom Redefines VMware

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.
