Analyst(s): Olivier Blanchard
Publication Date: July 29, 2025
While AI PCs, mobile handsets, smart glasses, and other AI-enabled devices proliferate and grow increasingly capable of delivering AI assistant and agentic AI experiences locally, most of those experiences remain, disappointingly, cloud-based. Although this may not matter to most users at first, the gap between value proposition and real-world execution could negatively impact device OEMs looking to capitalize on the promise of agentic AI to drive device refresh cycles.
Key Points:
The asymmetry between the availability of AI-enabled device hardware and the still-limited availability of AI-enabled on-device software is worrisome. It signals to savvy technology buyers—both in the consumer and commercial segments—that they may be wise to wait before investing in fully AI-capable devices.
- As new, powerful AI capabilities promise to enhance new smart features in PCs, mobile handsets, tablets, TVs, speakers, cameras, wearables, drones, vehicles, smart glasses, and the XR segment, both the consumer and commercial technology segments are racing to deliver entirely new types of experiences for technology users.
- One critical promise of AI-enabled devices is that they will be able to deliver similar experiences and features on-device, regardless of network connectivity or bandwidth.
- For most technology users today, most AI-enabled assistant and agentic features experienced through devices remain cloud-based. If this doesn’t change soon, it could negatively impact the pace and scale of adoption of advanced AI-enabled devices, especially in flagship and premium price tiers.
- Rather than investing in advanced, cutting-edge systems capable of delivering offline AI-enabled features, consumers and IT decision-makers are continuing to lean toward lower-priced devices with limited local AI capabilities, knowing that they will continue to access assistant and agentic features via cloud-based services.
- Two years after the introduction of truly AI-enabled devices, a significant chunk of the device ecosystem still seems more focused on shoehorning “on-device AI” into products, presumably to remain relevant and edgy, than on actually leveraging AI and semiconductor innovation to deliver substantial, let alone remarkable, new features and utility.
- The gap between the value proposition and real-world execution could negatively impact device OEMs looking to capitalize on agentic AI’s promise to drive device refresh cycles.
Overview:
Figure 1: AI PC Adoption Forecast: Breakdown by NPU TOPS
Where Are the On-Device AI-Powered Killer Apps for AI PCs?
For a little more than a year now, PC vendors and their OS partners have been promising that AI-capable PCs would bring about a revolution in productivity and UX improvements. The launch of Copilot+ PCs in particular was positioned as an inflection point for AI, effectively enabling cloud-based assistant, agentic, and generative AI experiences to be delivered on-device. Among the primary benefits of this expansion of the AI ecosystem were improved data security, faster inference, reduced dependence on connectivity and network bandwidth, and lower inference costs.
But to date, most of the truly remarkable assistant, generative, and agentic features accessed by device users remain cloud-based. And while Copilot+ PC laptops are incredibly capable pieces of hardware, they currently bring few tangible on-device AI capabilities or on-device productivity gains. If not for the all-day battery life, Copilot+ PCs might not be AI PCs at all. Nearly every generative and agentic task available through these advanced PCs could just as easily be performed on less capable ones, so long as their browser is up to date and a reliable network connection is available. More than a year after the introduction of Copilot+ PCs, that’s a problem.
AI-Enabled XR Is Stalled
Few technology categories show more potential for multimodal AI assistants and agentic AI than XR. In particular, smart glasses and MR (mixed reality) headsets hold the most promise for hands-free, context-aware, natural-language agentic features. Smart cameras and microphones capture what the user sees and hears. The device knows where it is, where the user is going, and how fast. No other portable AI-enabled device category can capture as much data simultaneously, contextualize it properly to understand prompts, interpret them in real time, and provide useful, immediate, relevant responses while delivering hands-free operation.
But two years into the integration of voice-enabled AI into the category, device OEMs seem to be struggling to bring tangible innovation and value improvements: This year’s smart glasses deliver essentially the same features that were already generally available 18–24 months ago, only with incremental improvements such as better cameras and improved noise cancellation. This isn’t just tragic, it’s dangerous: There may come a time, if the promise of on-device AI doesn’t materialize, when “future-proofing for AI” will stop being a credible reason for anyone to invest in devices that, while capable of delivering unique, differentiated, remarkable features, fail to live up to that promise.
Mobile Handsets Signal the Arrival of an Agentic Binary Model
Samsung’s release of the Galaxy S25 earlier this year still feels like an inflection point in the overall on-device agentic roadmap. A standout innovation in the S25 is the binary approach to AI assistant and agentic solutions that Samsung is currently experimenting with: It pairs a cloud-based assistant and agentic layer (in the S25’s case, Gemini) with a far more secure, entirely local, hyper-personalized layer of AI assistants and agents (in the S25’s case, powered by Samsung’s clever Personal Data Engine, or PDE).
The decision to approach agentic AI in this way challenges the notion that AI assistants and agentic AI should be both ubiquitous and device-agnostic. Ideally, AI assistants would be wherever we need them to be: Walk into a room, climb into a cab, step into an office, run into the kitchen, speak to the always-listening assistant through whatever device happens to be nearby, and let the assisting begin. Samsung, however, questioned that assumption: What if AI assistants and agents weren’t always device-agnostic? What if some of them needed to be hyper-personalized, therefore hyper-secure, and therefore hyper-local? What if users needed their AI assistants and agents to be both ubiquitous and not, depending on the environment, situation, and context?
The fact that the question was asked at all, and that fundamental assumptions about AI assistants were challenged, reflects the type of innovative thinking that has been mostly missing from the AI PC space: Samsung’s product teams asked the critical questions that every device OEM and their software partners should be asking as well.
- How do we deliver a hyper-personalized agentic experience to our users while keeping their most sensitive data entirely secure?
- How do we design and package AI assistants and agents that will deliver meaningful value to users?
- How do we create unique, articulatable, valuable differentiation in the market?
- How do we leverage AI technologies to solve problems, eliminate friction points, and do both better than our competitors?
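To make the binary model concrete, the split described above can be sketched as a simple routing policy: requests that touch hyper-personal data stay on-device, while everything else goes to the cloud layer. This is a minimal, hypothetical illustration; the topic names, classes, and routing rule below are assumptions for clarity, not Samsung’s actual PDE or Gemini APIs.

```python
from dataclasses import dataclass

# Hypothetical: topics assumed to involve sensitive, hyper-personal data.
# A real implementation would classify requests far more granularly.
PERSONAL_TOPICS = {"calendar", "messages", "health", "contacts"}


@dataclass
class AgentRequest:
    topic: str   # coarse category of the request
    prompt: str  # the user's natural-language prompt


def route(request: AgentRequest) -> str:
    """Binary routing: hyper-personal requests are handled entirely
    locally; general-knowledge requests go to the cloud assistant."""
    if request.topic in PERSONAL_TOPICS:
        return "on-device"  # e.g., a PDE-style local engine
    return "cloud"          # e.g., a Gemini-style cloud assistant


# Usage sketch
print(route(AgentRequest("calendar", "When is my next meeting?")))  # on-device
print(route(AgentRequest("weather", "Will it rain tomorrow?")))     # cloud
```

The design choice the sketch captures is the one the S25 makes: security and personalization are decided per request, not per device, so the same assistant surface can be ubiquitous for general queries and strictly local for sensitive ones.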
Automotive AI Assistants Are Improving, However
I have been concerned about the state of AI integration in vehicle systems, particularly cockpit interfaces. Over the past year, I sat through several lackluster cockpit AI integration demos, all inexplicably focused on bringing generative AI to in-car experiences. One demo showcased how the system enabled a driver to voice-prompt the vehicle to create an image of a dog on a skateboard. Aside from the fact that the cockpit’s AI ultimately failed to actually create the image, my questions to the system engineers were, predictably:
- What purpose does this feature serve in a car?
- What problem or pain point does it solve for a driver?
- What value does this add to the vehicle?
Unsurprisingly, no one could provide a coherent answer. The prevailing sentiment was essentially “car cockpits, but with generative AI!”
Despite system engineers not always understanding the use cases for their IP, many automotive manufacturers have focused on thoughtful execution when it comes to designing remarkable form factors, UX, and AI-powered systems: Instead of useless generative AI integrations, they are optimizing navigation system UX, fine-tuning clear natural-language interfaces for onboard assistants, and pursuing differentiated system design with an actual point. The common denominator across these remarkable examples of AI integration isn’t the technology itself: It is the design doctrine that AI isn’t the feature. AI is merely the enabling and optimizing technology behind every feature. There is a critical lesson there for all device OEMs.
Because cars have far more flexibility than form-factor-constrained devices when it comes to space and battery autonomy, but less reliable (or consistent) access to 5G bandwidth, expect onboard AI assistants and agentic features in the automotive segment to begin outpacing other popular personal device segments, such as smartphones, which have thus far led the way in these types of user experiences.
The full report is available via subscription to Futurum Intelligence’s Intelligent Devices IQ service—click here for inquiry and access.
Futurum clients can read more in the Futurum Intelligence Platform, and non-clients can learn more here: Intelligent Devices Practice.
About the Futurum Intelligent Devices Practice
The Futurum Intelligent Devices Practice provides actionable, objective insights for market leaders and their teams so they can respond to emerging opportunities and innovate. Public access to our coverage can be seen here. Follow news and updates from the Futurum Practice on LinkedIn and X. Visit the Futurum Newsroom for more information and insights.
Other insights from Futurum:
Meta and Oakley Launch Performance AI Glasses With 3K Video and Built-in Meta AI
New Categories of High-Performance AI PCs Are Here to Do What Data Centers Can’t
Can Arm’s Zena CSS Reshape Automotive AI Development Timelines?
Author Information
Olivier Blanchard is Research Director, Intelligent Devices. He covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating their potentially negative effects. Follow his extended analysis on X and LinkedIn.

