Analyst(s): Olivier Blanchard
Publication Date: August 16, 2024
The News: Google just unveiled its next generation of Pixel phones at its “Made by Google” event. Among the products announced at the event were Google’s second-generation foldable handset – the Pixel 9 Pro Fold – as well as the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL. But despite the emphasis on Google hardware, the other major topic of discussion at the event was AI, and specifically Google’s vision for a hybrid model of integrated solutions in mobile, which combines on-device (on-chip) AI capabilities with cloud-based AI solutions to deliver a broad spectrum of next-gen AI features and user experiences. Google’s hybrid approach to AI integration through the Pixel ecosystem may provide the clearest hints yet about its AI monetization strategy in mobile. Read more about the event here.
How The Pixel 9 Launch Clarifies Google’s Expanding AI Monetization Strategy
Analyst Take: AI isn’t just transforming the data center and cloud services anymore. It is also reinventing how we interact with our devices, first among them our phones. If you have been following our coverage of Qualcomm’s Snapdragon-powered mobile handsets for the past several years, you will probably recall that we pointed out the potential of on-device AI capabilities long before generative AI became the disruptive force it is today. And if you have been paying attention to Apple’s latest announcements around AI, you no doubt understand just how critical this new technology is to the Apple ecosystem’s ability to remain competitive against Android (mobile) and Windows (PC) devices. It was therefore not the least bit surprising to see Google, which plays on both the hardware and software sides of the AI equation, showcase its own AI capabilities and features during the Made by Google event that introduced its new Pixel 9 lineup.
The growing importance of AI on devices may also have played a part in the event’s scheduling. Rather than wait for the autumn launch season to reveal its new products, Google likely wanted to get ahead of Apple’s fall rollout of the Apple Intelligence beta program and make the case that it can deliver a richer mix of AI capabilities and services to consumers who might be dissatisfied or bored with the iPhone, or simply curious about other options now that AI is transforming mobile UX.
The Future of AI Is Hybrid, and That Is Great News for Google and Its Pixel Strategy
The future of AI integration in the mobile handset space is hybrid, meaning that handset OEMs plan to blend on-device AI capabilities with cloud-based AI services to deliver a broad spectrum of AI use cases and experiences. Qualcomm’s Snapdragon mobile platform started this trend by introducing powerful on-device AI capabilities – first focusing on system optimization, then quickly expanding to more visible UX features that vastly improved photography, video, gaming, security, live translation, and many of the generative AI applications we are familiar with today. As the company behind the Android platform, and an AI company in its own right, Google was also there from the start, helping develop these solutions for the Android market. All of this to say that Google is not new to this technology trend.
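To make the hybrid model concrete, here is a minimal, purely illustrative sketch of what on-device-versus-cloud routing logic might look like. The compute budget, task names, and routing policy below are my own assumptions for discussion, not a description of Google’s actual implementation.

```python
# Hypothetical sketch of hybrid AI workload routing. The compute
# budget, task names, and policy are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AIRequest:
    task: str                   # e.g., "live_translate", "video_upscale_8k"
    est_compute_tflops: float   # rough estimate of compute the task needs
    privacy_sensitive: bool     # does the task touch personal data?

ON_DEVICE_BUDGET_TFLOPS = 5.0   # hypothetical NPU headroom on the handset

def route(request: AIRequest) -> str:
    """Decide whether a workload runs on the handset or in the cloud."""
    if request.est_compute_tflops <= ON_DEVICE_BUDGET_TFLOPS:
        # Lightweight tasks stay local: lower latency, better privacy.
        return "on_device"
    if request.privacy_sensitive:
        # Heavy but sensitive tasks might take a hardened cloud path
        # (or a degraded local model); simplified here.
        return "cloud_private"
    return "cloud"

# Light translation stays on-chip; an 8K upscale is offloaded.
print(route(AIRequest("live_translate", 1.5, privacy_sensitive=True)))      # on_device
print(route(AIRequest("video_upscale_8k", 40.0, privacy_sensitive=False)))  # cloud
```

The point of the sketch is simply that a router, not the user, decides where each workload lands, which is what would let an OEM tune the on-device/cloud mix to a given price tier.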
One of the principal reasons Google decided to transition to its own silicon several years ago was to more easily align its mobile platform with its own ecosystem and product roadmap. The timing was especially important, as the challenge of how best to bring AI to mobile experiences pointed to a mix of on-device and cloud-based solutions. Perhaps because Tensor G4 isn’t quite on the same level as Snapdragon 8 Gen 3 when it comes to overall on-device performance or AI capabilities, Google’s vision for mobile AI experiences is likely to rely on a more hybrid mix of device and cloud – similar to Apple’s own model (at least as articulated thus far).
In Pixel’s price tier, this approach makes sense. For starters, Pixel has always been the phone that does more with less (or for less), and the brand’s scrappy, clever approach to solving UX problems is consistent with how the Pixel 9 is tackling the AI problem in a space dominated by Snapdragon (and Samsung) at the very high end of the price and performance range. Pixel understands its position in the market, its value proposition, and the lane it occupies, and articulating that position through consistent UX design decisions clearly continues to pay off. Also, because Pixel works as a direct interface with Google services, it is an ideal portal through which Google can sell “premium” cloud-based AI services and solutions. In other words, why would Google try to compete at the very high end of on-device (on-chip) AI capabilities when its business model leans more toward monetizing cloud-based solutions?
For those who have lamented Tensor’s past inability to compete head-to-head with flagship-tier mobile SoCs from Qualcomm, I say stop and reset your thinking. Google is a different company from Qualcomm (or even MediaTek, which also designs AI-enabled chipsets for mobile handsets): Google’s AI monetization play in mobile is a well-calibrated bet that is far more likely to look for ways to incorporate cloud-based services at scale than to try to put out the highest-performing SoC in the industry. That’s because Pixel doesn’t need to have the best mobile chipset on the market. Pixel just needs precisely the right cost-to-performance chipset to take full advantage of Google’s full range of hybrid AI solutions designed for the mobile segment.
With AI, Google’s Pixel Strategy Increasingly Feels Like a Direct Play Against Apple
Google’s approach to AI integration in mobile is similar to Apple’s in many ways: Develop your own silicon, build your own hardware and software stack, and lean on a mix of on-chip and in-cloud AI to deliver a (hopefully) seamless AI user experience. And where on-device AI workloads don’t quite match Snapdragon’s capabilities, lean harder on the cloud. This works better for Google, however, and here are two reasons why: (1) Pixel’s price point may make offloading some Snapdragon-friendly AI workloads to the cloud seem more palatable to users than Apple doing the same despite its higher price points (and arguably better specs). (2) Google’s cloud-based AI solutions ecosystem is far more mature than Apple’s, and may feel less cobbled together. (Yes, ChatGPT integration is nice because of ChatGPT’s name recognition, but Apple’s AI strategy still feels a bit more outsourced than it could be.)
This may help explain why Pixel’s approach to AI integration and overall UX feels more like a play against Apple than a play against its Android competitors, especially now. That hypothesis certainly aligns with Google’s timing relative to Apple’s autumn launches.
How Gemini May Give Google a UX, Privacy, and Security Edge Against Apple in the Mobile Segment
Late last year, Google introduced Gemini, promising a phased rollout that immediately integrated basic versions of the platform into its products: Gemini Pro into Bard (Google’s AI-powered chatbot) and Gemini Nano into Pixel 8 Pro phones. Google’s stated strategy was to use Gemini to make Bard more intuitive (including being more predictive in planning applications), and to help Pixel 8 Pro users enjoy even better task automation than Pixel’s already slick suite of AI-powered time-saving features.
Fast-forward to the Pixel 9 introduction: Google’s Gemini-powered assistant already feels far more conversational than its predecessor and more capable of multitasking. Tensor G4 enables many of these functions to be processed on-device instead of being pushed out to the cloud, which gives user-assistant interactions a far more natural conversational pace (far less latency) and delivers obvious privacy and security advantages.
The live demo of the Gemini assistant struggled a bit more with complex LMM-based queries than with LLM-based ones (an LMM, or large multimodal model, differs from an LLM, or large language model, in that it analyzes media inputs such as images and sound in addition to language), but that is to be expected this early in the game. AI is a continuous improvement play, and these models will improve at a rapid pace. Kudos to Google for having attempted a live demo of a bleeding-edge feature that I believe will perform considerably better just a few short months from now.
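For readers who want to see the LLM/LMM distinction in practice, below is a minimal sketch using Google’s public Gemini API as a stand-in (the on-device Gemini Nano model on Pixel phones is not scriptable this way; the model name, API key, and image file are illustrative placeholders).

```python
# Sketch of the LLM vs. LMM distinction using Google's public Gemini
# API as a stand-in for on-device Gemini. The model name, API key,
# and image file are placeholders, not Pixel's actual integration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # multimodal-capable

# LLM-style query: language in, language out.
text_reply = model.generate_content(
    "Summarize the Made by Google event in one sentence."
)

# LMM-style query: mixed inputs (image + text) in, language out.
photo = Image.open("event_photo.jpg")              # illustrative file
mixed_reply = model.generate_content(
    [photo, "Which products are shown in this photo?"]
)

print(text_reply.text)
print(mixed_reply.text)
```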
What’s important to note here is that Google’s AI integration – from chip to cloud – seems more contained and proprietary than Apple’s, meaning that it appears, for the time being, more secure. ChatGPT, for example, is external to Apple, while Gemini is Google’s own product. Google also has its own robust cloud infrastructure to lean on. Because Apple Intelligence relies on ecosystem partnerships to deliver many of its new cloud-based AI features, it appears to find itself at a privacy, security, and differentiation disadvantage against Google’s more integrated model, at least for AI features that require a connection to cloud-based solutions.
Google’s Hybrid Approach to AI Integration Into Device UX Drops Hints About Its Monetization Strategy
Echoing my earlier hypothesis about Google’s AI monetization strategy, the more advanced Gemini Assistant features will be available through a monthly subscription (free for one year for Pixel 9 buyers). In a similar vein, Pixel users will also be able to upscale their 4K videos to 8K through a cloud-based service, suggesting that we may see many more freemium-to-premium tiered AI options from Google in the future.
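As a thought experiment, the tiering described above could reduce to a simple entitlement check like the sketch below; the tier names, feature list, and gating logic are hypothetical, not Google’s actual implementation.

```python
# Hypothetical sketch of freemium-to-premium feature gating. Tier
# names, the feature list, and the gate are illustrative assumptions.
from enum import Enum

class Tier(Enum):
    FREE = "free"        # on-device, good-enough everyday features
    PREMIUM = "premium"  # unlocks cloud-backed premium services

PREMIUM_FEATURES = {"video_upscale_8k", "advanced_gemini_assistant"}

def can_use(feature: str, tier: Tier) -> bool:
    """Gate premium, cloud-backed features behind a subscription."""
    return feature not in PREMIUM_FEATURES or tier is Tier.PREMIUM

# A free-tier user is steered to the upsell for the 8K upscale;
# a subscriber's request is offloaded to the (hypothetical) cloud service.
print(can_use("video_upscale_8k", Tier.FREE))     # False -> show upsell
print(can_use("video_upscale_8k", Tier.PREMIUM))  # True  -> run in cloud
```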
Pros and cons of that strategy: For occasional users of very high-end AI features, the option to enjoy good-enough everyday AI features for free, with occasional spend on premium features, is going to be a great way to get the best of both worlds: AI on a budget. But for daily users of premium AI features and services, this strategy may push them toward higher-end silicon (like flagship Snapdragon phones). As Pixel seems to prioritize the former over the latter, at least for now, that model makes sense.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Google Tensor G3 SOC Elevates Pixel 8 and Pixel 8 Pro – The Futurum Group
Google Pixel Watch 2 Showcases Wear OS 4’s Complete Feature – The Futurum Group
T-Mobile Offering Free Google Pixel 8 Products – The Futurum Group
Author Information
Research Director Olivier Blanchard covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating its potentially negative effects. Follow his extended analysis on X and LinkedIn.