VMware Private AI at AI Field Day: CPUs If You Can, GPUs If You Must

VMware presented Private AI at AI Field Day, covering how to run your AI on-premises and get the best out of the CPUs you already have in your data center. If I had to summarize the presentation in one sentence, it would be this: you can run large language model (LLM) AI inference in virtual machines (VMs) on Intel Sapphire Rapids CPUs; you don’t always need GPUs. The vital part is that Sapphire Rapids (4th Generation Xeon Scalable) CPUs added the Advanced Matrix Extensions (AMX) instructions, which allow the CPU to do matrix math efficiently.
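On Linux, the AMX capability is visible as CPU feature flags inside the guest, so it is easy to verify that a VM actually sees the instructions. A minimal sketch, assuming a standard `/proc/cpuinfo` layout (the helper name `has_amx` is my own, not from the presentation):

```python
# Sketch: detect Intel AMX support from Linux CPU feature flags.
# The kernel exposes AMX as the flags amx_tile, amx_int8, and amx_bf16.

AMX_FLAGS = {"amx_tile", "amx_int8", "amx_bf16"}

def has_amx(cpuinfo_text: str) -> bool:
    """Return True if all AMX feature flags appear in a 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return AMX_FLAGS <= flags
    return False
```

Inside the VM, `has_amx(open("/proc/cpuinfo").read())` should return True once the ESXi, VM hardware, and kernel version hurdles described above are cleared.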

This matrix math is precisely what GPUs do at large scale, so Intel adding it to the CPU is a big deal for AI workloads. VMware’s Earl Ruby explained that the infrastructure team must clear a few hurdles to ensure vSphere VMs can access the AMX instructions because they are so new. Those hurdles are familiar to seasoned vSphere administrators: a minimum ESXi version of 8.0 U2, VM hardware version 20, and a Linux kernel preferably above 5.19. The good news is that support for Intel AMX is already present in many popular AI tools, such as PyTorch, so the AI development team does not need to do anything special to benefit from AMX. One element of optimization is essential: quantization, which changes the LLM to use lower-precision integer math in place of floating point. Quantization balances the required precision against the resource cost of getting an answer: less precise math costs less CPU and memory, but at the price of a less accurate answer.
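The precision-for-resources trade can be seen in miniature. This is an illustrative sketch of symmetric 8-bit quantization of a weight vector; real toolchains such as PyTorch use more sophisticated schemes, so treat the numbers and method as a teaching example only:

```python
# Sketch: symmetric int8 quantization of floating-point weights.
# Each weight maps to an integer in [-127, 127] via one scale factor,
# then maps back so the rounding error is visible.

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -1.57, 0.03, 1.94]
q, scale = quantize(weights)
restored = dequantize(q, scale)
errors = [abs(a - b) for a, b in zip(weights, restored)]
# Integer storage halves memory versus float16 (quarters it versus
# float32), at the cost of a small per-weight rounding error.
```

Each stored integer is half the size of a float16 weight, and the per-weight error is bounded by half the scale factor, which is the "less precise math, less memory" trade described above.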

Earl showed an interesting comparison: running the Llama-2 7-billion-parameter model as a chatbot on Intel Ice Lake CPUs (no AMX) versus Sapphire Rapids with AMX. Both ran in VMs without any GPU capability. The chatbot on Sapphire Rapids ran approximately 8x faster than on Ice Lake. Without AMX, the Ice Lake CPU didn’t deliver a responsive chatbot, and we would have needed to add a GPU for acceptable performance. On Sapphire Rapids with AMX, the chatbot was perfectly usable. Would the chatbot have run faster if the Sapphire Rapids VM had a GPU? Undoubtedly. Would the GPU’s additional cost and power consumption have delivered better value than using AMX in the CPU? That depends entirely on the value of lower latency to your application and your business.
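Comparisons like this usually boil down to tokens per second. A minimal, hypothetical timing harness shows the shape of such a measurement; the `fake_generate` stand-in is my own placeholder, not the Llama-2 setup Earl used:

```python
import time

def tokens_per_second(generate, prompt, n_tokens):
    """Time a token-generating callable and return its throughput.

    `generate` is any function producing n_tokens tokens for a prompt;
    here it stands in for an LLM inference call.
    """
    start = time.perf_counter()
    generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

# Stand-in "model" so the sketch is self-contained and runnable.
def fake_generate(prompt, n_tokens):
    return ["tok"] * n_tokens

rate = tokens_per_second(fake_generate, "Hello", 64)
```

Running the same harness against the same model on Ice Lake and Sapphire Rapids VMs is essentially the comparison Earl demonstrated.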

Earl also led us through hurdles in getting the AMX instructions available to pods running in VMware Tanzu Kubernetes. Again, these are relatively well-known hurdles for a seasoned vSphere and Tanzu administrator. Like the ESXi hurdles, these Tanzu hurdles will disappear over time as software versions in use catch up with the AMX features added in Sapphire Rapids.
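Once the cluster-level hurdles are cleared, a common pattern for steering workloads to the right hardware is a node label plus a node selector. This is a hypothetical sketch only: the label key shown mirrors the style that Node Feature Discovery publishes for CPU features, but the exact key, pod name, and image are assumptions to verify on your own clusters:

```yaml
# Hypothetical example: schedule an inference pod onto nodes labelled
# as having AMX. Verify the actual label key on your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AMXTILE: "true"
  containers:
    - name: inference
      image: registry.example.com/llm-chatbot:latest  # placeholder image
```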

As we have seen before, Intel adds built-in accelerators to its CPUs to remove the need for add-in card accelerators. I’m old enough to remember when SSL offload cards were essential for web servers; then the AES-NI instructions arrived, and now cryptography, including SSL, is baked into the CPU. Over time, I expect to see more and more AI use cases that the CPU can fulfil alone. Running mixed workloads on a shared pool of servers has always been a value proposition for VMware. Allowing AI onto this same pool of servers without requiring additional hardware is a clear win.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

VMware Explore: Making Moves with Multi-Cloud and Private AI

VMware VCF and Tanzu Post Broadcom: Lessons and Evolution

Broadcom Redefines VMware

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.

