Intel Enters the AI PC Race With Its NPU-Powered Core Ultra Processor
The News: At its third annual Innovation event, Intel unveiled an array of technologies to bring AI everywhere and make it more accessible across all workloads, from client and edge to network and cloud. “AI will fundamentally transform, reshape and restructure the PC experience,” Intel CEO Pat Gelsinger shared with the audience, “unleashing personal productivity and creativity through the power of the cloud and PC working together.”

This new AI PC experience, as Gelsinger calls it, arrives with the upcoming Intel Core Ultra processors, code-named Meteor Lake, featuring Intel’s first integrated neural processing unit (NPU) for power-efficient AI acceleration and local inference on the PC. Gelsinger confirmed Core Ultra will launch December 14. Read the full press release on the Intel website.

Analyst Take: Core Ultra represents an inflection point in Intel’s client processor roadmap. For starters, it is the first client chiplet design enabled by Foveros packaging technology. (Foveros lets Intel stack compute tiles on top of each other rather than placing them side by side, allowing for better performance within a smaller footprint, and it lets Intel mix and match compute tiles to optimize for cost and power efficiency.) Second, Intel 4 process technology is helping Intel deliver significant power-efficient performance improvements (roughly 50% compared with 13th-generation Raptor Lake), gains Intel badly needed. Third, the new processor brings discrete-level graphics performance with onboard Intel Arc graphics. And last but not least, Core Ultra comes equipped with Intel’s integrated NPU: the built-in, on-chip AI accelerator and inference engine that provides the logic and control needed to execute machine learning (ML) workloads locally.

Gelsinger was joined onstage by Jerry Kao, CEO of Acer, for a sneak peek at an upcoming Core Ultra-powered laptop. “We’ve been co-developing with Intel teams a suite of Acer AI applications to take advantage of the Intel Core Ultra platform,” Kao said, “developing with the OpenVINO toolkit and co-developed AI libraries to bring the hardware to life.” Gelsinger also introduced several new AI PC use cases, such as Deep Render, which uses AI to significantly compress files (up to 5x); Fabletics, which helps users create size-accurate virtual avatars for more reliable apparel shopping; and Rewind.ai, an app that transcribes whatever audio or voice prompts it captures (including generative AI queries). Gelsinger’s approach here is clever, as PC users need to look to a much richer AI horizon than the one available today to buy into Intel’s vision for the future of AI PCs.

The Business Case for AI PCs

The obvious question Intel is trying to preemptively answer here is why PC users need on-device AI at all. Why not just let AI live in the cloud? A few obvious answers come to mind: PCs are not always connected to cloud services, or might be suboptimally connected, and cloud services can go down. Being locked into a cloud-only AI ecosystem would only work some of the time, and that is not going to be good enough for most enterprise and individual users. In addition, not all AI-enabled processes need to be, or should be, cloud-based. For privacy reasons, giving PC users the ability to keep their generative AI queries and AI-enabled workflows on-device comes with clear benefits. Finally, for scores of use cases, inference can be handled on devices more economically than in the cloud, so moving inference to devices wherever and whenever possible makes good business and resource management sense.
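The economics argument above can be made concrete with a back-of-the-envelope sketch. All figures below (per-query API pricing, NPU hardware premium, marginal energy cost) are hypothetical placeholders for illustration, not Intel or cloud-provider numbers:

```python
# Back-of-the-envelope comparison of cloud vs. on-device inference costs.
# All figures are hypothetical placeholders, not measured pricing.

def cloud_inference_cost(queries_per_day: int, days: int,
                         cost_per_query: float) -> float:
    """Total spend when every query is billed by a cloud API."""
    return queries_per_day * days * cost_per_query

def on_device_inference_cost(hardware_premium: float,
                             energy_cost_per_query: float,
                             queries_per_day: int, days: int) -> float:
    """One-time NPU hardware premium plus marginal energy per query."""
    return hardware_premium + queries_per_day * days * energy_cost_per_query

# Example: 200 queries/day over a 3-year (1,095-day) laptop lifetime.
cloud = cloud_inference_cost(200, 1095, cost_per_query=0.002)
local = on_device_inference_cost(hardware_premium=100.0,
                                 energy_cost_per_query=0.00002,
                                 queries_per_day=200, days=1095)
print(f"cloud: ${cloud:.2f}, on-device: ${local:.2f}")  # cloud: $438.00, on-device: $104.38
```

Even with generous assumptions for the cloud side, high-volume inference amortizes the one-time hardware cost quickly, which is the core of the business case for pushing inference to the device.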

To some extent, on-device AI is not exactly new. AI has already been enabling enhanced digital assistant, voice-to-text, gaming, photography, and video and audio processing features for some years. Generative AI as a category, however, has been mostly a cloud play so far, with some accelerating but limited overlap in mobile. PCs have tended to lag a bit in comparison, and Intel sees an opportunity to step in and help PCs catch up.

AI PCs are therefore the natural evolution of on-device AI adapting to the requirements of generative AI-enabled applications. That is why I am not concerned about Intel, Qualcomm, AMD, or any of their OEM partners making the case for the category’s value to users and organizations. As AI capabilities, on-device or otherwise, become ubiquitous, what we call AI PCs today will simply be called PCs in a few years. On-device AI will be assumed in the same way that Wi-Fi, Bluetooth, and other on-device features have become core PC technologies. The question is, what types of new applications and features will AI PCs unleash?

Intel’s Hardware-First Approach to AI-On-PC Innovation

Michelle Johnston Holthaus, executive VP and GM of Intel’s Client Computing Group, provided a glimpse into Intel’s approach to that question. Her emphasis on the plan to deliver a high cadence of products, millions of units next year, and “billions of TOPS that developers can design to” speaks to Intel’s ecosystem strategy: create an open playground for developers to build the next killer app. This approach signals that Intel is not just looking beyond the ChatGPT phase of the generative AI gold rush but wants to play an integral part in helping developers bring about whatever comes next. This strategic current obviously extends well beyond PCs, but Intel’s deliberate inclusion of PCs in its layer-cake approach to building an agile AI-enabled ecosystem underscores the importance of thinking about AI as both a cloud strategy and an edge/on-device strategy.

Rebooting Performance Benchmark Models for the Era of AI PCs

These are still the very early days of AI PCs, so I want to be cautious about my timeframe expectations. On the one hand, competition with AMD and Qualcomm in the AI PC space should push Intel to move fast and differentiate itself in the market (and will hopefully motivate Intel to take its vision of what an AI PC could be – or should be – in novel directions). On the other hand, Intel could find itself mired in a linear competitive benchmark race with its rivals, as it has in the past, and with mixed outcomes.

To that point, I caution against reading too much into the role that benchmarks might play in communicating value or real-world performance in this market segment, assuming they will play any role at all. It is likely that each chipmaker and OEM in the AI PC space will deliver such differentiated approaches to AI-enabled on-device performance that benchmarks will become far more brand-specific than brand-agnostic. (Useful when comparing successive generations of products from the same chipmaker and OEM, but limited for brand-versus-brand competitive analysis. TOPS, for example, are already losing steam as an empirical performance benchmark, with power-efficiency metrics gaining relevance.) Performance needs also tend to vary wildly depending on the application, making one-size-fits-all benchmarking difficult to apply to on-device AI performance comparisons. Keep that in mind when Intel releases its first NPU benchmarks in a few weeks, whatever those benchmarks suggest about the state of Intel’s progress.
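To illustrate why raw TOPS and power efficiency can rank the same two chips differently, here is a minimal sketch with entirely hypothetical chip figures (MAC counts, clocks, and power draws are invented, not actual Intel, AMD, or Qualcomm specifications):

```python
# Illustrative TOPS vs. TOPS-per-watt comparison.
# All chip figures below are hypothetical, chosen only to show the inversion.

def peak_tops(macs_per_cycle: int, frequency_ghz: float) -> float:
    """Peak tera-operations per second; 1 MAC counts as 2 ops (multiply + add)."""
    return macs_per_cycle * 2 * frequency_ghz * 1e9 / 1e12

def tops_per_watt(tops: float, power_watts: float) -> float:
    """Power efficiency: the metric gaining relevance over raw TOPS."""
    return tops / power_watts

chip_a = peak_tops(4096, 1.5)   # 12.288 TOPS, but drawing 15 W
chip_b = peak_tops(2048, 1.2)   # 4.9152 TOPS, drawing only 3 W

# Chip A "wins" on raw TOPS, yet Chip B is roughly twice as efficient:
print(tops_per_watt(chip_a, 15.0))  # ~0.82 TOPS/W
print(tops_per_watt(chip_b, 3.0))   # ~1.64 TOPS/W
```

The same pair of chips ranks in opposite order depending on which metric you pick, which is exactly why one-size-fits-all benchmarking is so hard to apply to battery-constrained AI PC workloads.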

Overall, Intel’s entry into the AI PC space is an exciting and significant – albeit unsurprising – new chapter in the race to complement cloud-based AI applications with entirely new layers of on-device AI capabilities, particularly in consumer and enterprise PCs.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Delivering AI-Ready PCs at Scale with Intel Partners – Six Five On the Road

Intel Meteor Lake: Placing Bets on On-Device AI

5G Factor VRN: Qualcomm Raises On-Device AI Processing Awareness

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies. Read Full Bio.
