Deciding When to Use Intel Xeon CPUs for AI Inference, AI Field Day

Introduction

Intel presented the capabilities of Intel Xeon CPUs for AI inference at AI Field Day, filling a complete day alongside a series of Intel partner presentations on the same theme. Intel has been building workload-specific acceleration into its CPU designs for over a decade. The 4th Generation Xeon Scalable CPUs introduced an AI-specific accelerator, Advanced Matrix Extensions (AMX), alongside several other built-in accelerators, and the 5th Generation carries this capability forward. This is part of the evidence that Intel is dedicated to letting customers run AI on their CPUs rather than requiring add-in card accelerators for every AI use.

Ronak Shah presented this continuing vision at AI Field Day 4, where delegates wanted to understand the decision points between using older Xeon CPUs, 5th Generation Xeon Scalable CPUs, or adding an off-CPU accelerator such as an NVIDIA GPU. Ronak was very clear that not all AI use cases suit Intel Xeon CPUs for inference and that the decision is not clear-cut. The rule of thumb is that large language models (LLMs) with more than 20 billion parameters will seldom deliver acceptable performance on CPUs, while smaller models and non-LLM AI can often run on Intel Xeon CPUs and deliver the required latency.
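For illustration, here is a minimal Python sketch of that sizing heuristic. The function name and the hard 20-billion-parameter cutoff are my own rendering of the rule of thumb from the talk, not an Intel-published tool; real decisions also weigh latency targets, batch sizes, and cost.

```python
# Hypothetical sketch of the sizing rule of thumb described above.
# The 20B threshold is the heuristic from the presentation, not a
# hard technical limit published by Intel.
def choose_inference_target(param_count: int) -> str:
    """Pick a deployment target for an AI model based on its size."""
    twenty_billion = 20_000_000_000
    if param_count > twenty_billion:
        # Very large LLMs rarely meet latency targets on CPUs alone.
        return "gpu"
    # Smaller models and non-LLM workloads often run well on Xeon CPUs,
    # especially with AMX acceleration.
    return "cpu"

print(choose_inference_target(7_000_000_000))   # a 7B model  -> "cpu"
print(choose_inference_target(70_000_000_000))  # a 70B model -> "gpu"
```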

The AI Pipeline CPU-GPU Sandwich

Ronak outlined Intel’s view of an AI pipeline, starting with training data preparation, a CPU-dominated phase that mostly involves moving data and extract-transform-load (ETL) tasks. The next phase is model training, which is almost always GPU-dominated because the massive parallelism of a GPU can be kept continuously loaded. The third stage is inference, deploying the AI model to do its job. Ronak sees many production uses of Intel Xeon CPUs for AI inference, mainly when the AI is part of a complete business application. This pattern of CPU for data preparation, GPU for training, and CPU for inference is what I’m calling the AI pipeline CPU-GPU sandwich.
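A minimal PyTorch sketch of that sandwich appears below: data preparation on the CPU, training on a GPU when one is available, then the trained model deployed back onto the CPU for inference. The model, data, and training loop are placeholders of my own, not Intel reference code.

```python
import torch
import torch.nn as nn

# 1. Data preparation (ETL) stays on the CPU.
features = torch.randn(1024, 16)           # stand-in for cleaned, transformed data
labels = torch.randint(0, 2, (1024,))

# 2. Training runs on the GPU, where its parallelism can be kept loaded.
train_device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features.to(train_device)), labels.to(train_device))
    loss.backward()
    optimizer.step()

# 3. Inference moves back to the CPU, alongside the rest of the application.
model = model.to("cpu").eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16))  # a single request served on the CPU
```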

One of the big benefits of Intel Xeon CPUs for AI inference is that you already have them. There is no need to build specialized infrastructure just for AI; the AI application can live alongside other applications on your shared computing platform. It is essential to recognize that generative AI is not the only player in the game; most production use of AI involves much smaller models, which are ideally suited to CPUs. Notably, the AMX accelerator speeds up machine vision use cases by as much as two orders of magnitude compared with Xeon generations without AMX. In many production use cases, using Intel Xeon CPUs for AI inference makes sense.
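In practice, applications do not program AMX directly; they get it through framework CPU kernels (such as oneDNN) that dispatch to AMX when running in bfloat16 or int8 on a supporting Xeon. A hedged sketch of that pattern, assuming PyTorch and torchvision are installed and using an illustrative machine vision model:

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None).eval()    # stand-in for a machine vision model
image_batch = torch.randn(1, 3, 224, 224)

# Autocast to bfloat16 on the CPU; on AMX-capable Xeons the matrix math
# can be dispatched to the AMX tile units via oneDNN, with no
# AMX-specific code in the application itself.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(image_batch)
print(output.shape)  # torch.Size([1, 1000])
```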

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Intel’s AI Everywhere Event Unveils Strategic Moves in the Era of AI

Intel Developer Cloud: Driving AI Chip Design, Filling AI Workload Gap

Intel 5th Gen Xeon Scalable Processors Make Breakthroughs

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.
