
Deciding When to Use Intel Xeon CPUs for AI Inference, AI Field Day

Introduction

Intel presented the capabilities of Intel Xeon CPUs for AI inference at AI Field Day, filling out a complete day with a series of Intel partner presentations on the same theme. Intel has been building workload-specific acceleration into CPU designs for over a decade. The 4th Generation Xeon Scalable CPUs added an AI-specific accelerator (AMX) alongside several other new built-in accelerators. This is part of the evidence that Intel is committed to letting customers run AI on their CPUs rather than requiring add-in card accelerators for every AI use.

Ronak Shah presented this continuing vision at AI Field Day 4, where delegates wanted to understand the decision points for using older Xeon CPUs, 5th Generation Xeon Scalable CPUs, or an off-CPU accelerator such as an NVIDIA GPU. Ronak was very clear that not all AI use cases suit Intel Xeon CPUs for AI inference and that the decision is not clear-cut. The rule of thumb seems to be that large language models (LLMs) with over 20 billion parameters will seldom deliver acceptable performance on CPUs. Smaller models and non-LLM-based AI can often use Intel Xeon CPUs for AI inference and deliver the required latency, as the sketch below illustrates.
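As a toy illustration of that rule of thumb (the 20-billion-parameter threshold is Ronak's heuristic, not an Intel-published formula, and the function here is hypothetical), a deployment might first route a model to a device class by size, then validate the choice against real latency targets:

```python
# Toy encoding of the rule of thumb from the presentation: route very
# large LLMs to a GPU, serve smaller models from the CPUs you already own.
# The threshold and function name are illustrative, not Intel guidance.
def inference_target(parameter_count: int) -> str:
    """Pick a device class for serving a model of the given size."""
    return "gpu" if parameter_count > 20_000_000_000 else "cpu"

print(inference_target(7_000_000_000))    # cpu  (e.g., a 7B-parameter model)
print(inference_target(70_000_000_000))   # gpu  (e.g., a 70B-parameter model)
```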

The AI Pipeline CPU-GPU Sandwich

Ronak outlined Intel’s view of an AI pipeline, starting with training data preparation, a CPU-dominated task that mostly involves moving data and extract-transform-load (ETL) work. After data preparation, the next phase is model training, almost always a GPU-dominated task where the massive parallelization of a GPU can be continuously loaded. The third stage is inference, deploying the AI model to do its job. Ronak sees many production uses of Intel Xeon CPUs for AI inference, mainly when the AI is part of a complete business application. This use of CPU for data preparation, GPU for training, and CPU for inference is what I’m calling the AI pipeline CPU-GPU sandwich; a sketch of the pattern follows.
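To make the sandwich concrete, here is a minimal PyTorch sketch of the three stages, assuming a trivial placeholder model and synthetic data rather than anything Intel presented: data preparation on the CPU, training on a GPU when one is available, and inference back on the CPU.

```python
# A minimal sketch of the CPU-GPU "sandwich". Model and data are
# illustrative stand-ins, not Intel reference code.
import torch
import torch.nn as nn

# --- Stage 1: data preparation (CPU-dominated, ETL-style work) ---
features = torch.randn(1024, 64)          # stand-in for extracted features
labels = torch.randint(0, 2, (1024,))     # stand-in for cleaned labels

# --- Stage 2: training (GPU-dominated when a GPU is present) ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                        # tiny loop, purely for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(features.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()

# --- Stage 3: inference (back on the CPU, alongside the application) ---
model = model.to("cpu").eval()
with torch.no_grad():
    prediction = model(features[:1])      # serve a single request on CPU
```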

One of the big benefits of Intel Xeon CPUs for AI inference is that you already have them. There is no need to build specialized infrastructure just for AI; the AI application can live alongside other applications on your shared computing platform. It is essential to recognize that generative AI is not the only player in the game; most production use of AI involves much smaller models, which are ideally suited to CPUs. Notably, the AMX accelerator speeds up machine vision use cases by as much as two orders of magnitude compared with 3rd Generation Xeon Scalable CPUs, which lack AMX. In many production use cases, using Intel Xeon CPUs for AI inference makes sense.
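As a hedged illustration of how software reaches AMX, the PyTorch sketch below runs a typical vision model on the CPU in bfloat16, one of the datatypes AMX accelerates. On a Xeon with AMX, PyTorch's oneDNN backend can dispatch the matrix math to the accelerator; on other CPUs the same code simply runs without the speedup. The choice of ResNet-50 is illustrative, not drawn from Intel's presentation.

```python
# CPU inference in bfloat16, a datatype AMX accelerates. The model choice
# (ResNet-50 from torchvision) is illustrative; any vision model works.
import torch
from torchvision import models

model = models.resnet50(weights=None).eval()   # typical machine-vision model
image_batch = torch.randn(1, 3, 224, 224)      # stand-in for a decoded image

# autocast runs eligible ops in bfloat16; on AMX-capable Xeons the oneDNN
# backend can route those matmuls and convolutions to the AMX tiles.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    scores = model(image_batch)
```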

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Intel’s AI Everywhere Event Unveils Strategic Moves in the Era of AI

Intel Developer Cloud: Driving AI Chip Design, Filling AI Workload Gap

Intel 5th Gen Xeon Scalable Processors Make Breakthroughs

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.

