
Why The New AI-Enabled Pixel 9 Lineup Is Google’s Strongest Play Against Apple Yet

The News: Google has just unveiled its next generation of Pixel phones – under the Pixel 9 designation – at its Made By Google event, several months ahead of the usual October unveiling. Among the products announced at the event were Google’s second-generation foldable handset – the Pixel 9 Pro Fold – as well as the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL.

Analyst Take: This year’s Made by Google event gave us not only a clear view of Google’s Pixel product roadmap and strategy, but also invaluable insights into how the mobile handset segment (and its connected device ecosystem, which also includes hearables and wearables) is approaching AI-enabled UX design. On the one hand, the event still focused on the usual mobile handset specs: the camera, connectivity, gaming, apps, and bleeding-edge hardware features. On the other hand, it was difficult to miss just how much AI dominated the UX discussion.

AI isn’t just transforming the data center and cloud services anymore. It is also reinventing how we interact with our devices, and first among them our phones. If you have been following our coverage of Qualcomm’s Snapdragon-powered mobile handsets for the last five years or so, you will probably recall that we pointed out the potential of on-device AI capabilities long before generative AI became the disruptive force it is today. And if you have been paying attention to Apple’s latest announcements around AI, you no doubt understand just how critical this new technology is to the Apple ecosystem’s ability to remain competitive against Android and Windows devices. It is therefore not the least bit surprising to see Google, which plays both on the hardware and the software side of the AI equation, showcase its own AI capabilities and features during the event.

The growing importance of AI as it pertains to devices may also have played a part in the scheduling of the event. Rather than wait for the autumn launch season to reveal its new products, Google likely wanted to get ahead of Apple’s fall rollout of Apple Intelligence’s Beta program, and argue that it may be able to deliver a richer mix of AI capabilities and services to consumers who might be dissatisfied or bored with iPhone, or simply curious about other options now that AI is transforming mobile UX.

Building on Pixel’s Unique Value Proposition in the Handset Segment

As a longtime Pixel user, I can vouch for the quality of the images these phones produce. Even the base models have always made a point to absolutely crush the photography angle, giving Pixel a clear and sustained value proposition for consumers looking to create amazing photos without necessarily breaking the bank. And thanks to its complete tech stack of hardware, software, and services, including frictionless Google Assistant integration into the Pixel experience, Google also delivers an almost Apple-like ability to give Pixel users a clean, bloatware-free Android experience, with easy attach motions with Pixel hearables, wearables, and Chromebooks.

Despite Google’s market power, distinctive design aesthetic, and unique advantages, Pixel phones remain a bit of an underdog in the mobile handset ecosystem. Pixel has only managed to carve out roughly 5% of the US handset market since the category launched nine generations ago. To be fair, Pixel’s appeal has grown in recent years, and the fact that its share has nearly doubled over the last three years speaks volumes about the product’s progress and momentum. Google is absolutely on the right track with Pixel.

And yet, the two principal challenges standing in the way of Pixel achieving double-digit market share in the US, in my view, can be summed up by two sides of the same value prop question: (1) Why Pixel? and (2) Why not iPhone or [insert Android brand here]? I believe that AI might help Google answer those questions for the market, and after watching Google’s string of August 13 announcements and product demos, I suspect that Google feels the same way.

It may also be useful to consider that Google likely sees Pixel 9 as both a distribution mechanism and handheld showcase for the expansive menu of AI technologies and services that the company hopes that technology users will use every day. “We are obsessed with the idea that AI can make life easier and more productive for people,” Google’s Rick Osterloh explained this week, hinting yet again at the role that AI-enabled user experiences will play in Pixel’s competitive future.

Will Google push AI features far enough to create unique value and differentiation to answer the why and why not questions that stand between Pixel and double-digit market share? The Pixel 9 announcement may hold some clues as to whether or not Google is on the right track.

Breakdown of the Pixel 9 Lineup’s Most Important Features and Specs

First, some basics: The Pixel 9 lineup is made up of the Pixel 9 Pro Fold (starting at $1,799), Pixel 9 Pro XL (starting at $1,099), Pixel 9 Pro (starting at $999), and Pixel 9 (starting at $799, a notable $100 increase over last year’s base model). All models are naturally powered by Google’s new Tensor G4 chipset. Memory configurations follow a standard progression, from 12GB of RAM with 128GB or 256GB of storage for the Pixel 9, to 16GB of RAM with 128GB, 256GB, 512GB, or 1TB of storage for the Pro and Pro XL. The new Fold ships with 16GB of RAM and either 256GB or 512GB of storage.

On the camera front, the Pro and Pro XL sport 50-megapixel (wide), 48-megapixel (ultrawide), and 48-megapixel (5x telephoto) cameras, while the base model sticks to a simpler 50-megapixel (wide) and 48-megapixel (ultrawide) setup. The new Fold delivers a 48-megapixel (wide), a 10.5-megapixel (ultrawide), and a 10.8-megapixel (5x telephoto) array. Video capture looks consistent across all models, with 4K on device but with the intriguing option of enhancing videos to 8K by way of an AI-enabled cloud service.

This is a good effort from the Pixel team to work around the 8K video capture of Samsung’s latest flagship Galaxy phones (one of the Snapdragon 8 Gen 3 SOC’s performance advantages), while also making a play against iPhone’s 4K video capture (even on the iPhone 15 Pro Max). Google has a knack for shooting for the middle ground between premium performance and competitive price points.

Expect a 4,700 mAh battery on the 152.8 x 72 x 8.5 mm (6 x 2.8 x 0.3 in) Pixel 9 and Pixel 9 Pro, and a larger 5,060 mAh battery on the 162.8 x 76.6 x 8.5 mm (6.4 x 3 x 0.3 in) Pro XL, with the two smaller models weighing in at around 199g (7 oz) and the Pro XL coming in at a slightly heftier 221g (7.8 oz). The battery on the new Pixel 9 Pro Fold – which measures 155 x 150 x 5.1 mm (6.1 x 5.9 x 0.2 in) open and 155 x 76.2 x 10.16 mm (6.1 x 3 x 0.4 in) closed – is a somewhat disappointing 4,650 mAh (smaller than the previous generation’s 4,821 mAh battery), but the device comes in at 257g (9.1 oz), a considerable weight reduction from last year’s 283g (9.98 oz) model. The Pixel team clearly prioritized portability here, which makes sense given that perceived heft likely impacts foldable adoption more than battery life does.

The Pixel 9 Pro Fold naturally has two main displays: The internal display is an 8-inch OLED with 2,152×2,076 pixels and a 1-120 Hz variable refresh rate (LTPO), while the 6.3-inch cover display is an OLED with 2,424×1,080 pixels and sports a 60-120 Hz variable refresh rate. Pixel density is 373 ppi and 422 ppi, respectively.

Displays on the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL, respectively, are 6.3-inch OLED with 2,424 x 1,080 pixels and a 60-120 Hz variable refresh rate, a 6.3-inch LTPO OLED with 2,856 x 1,280 pixels and a 1-120Hz variable refresh rate, and lastly a larger 6.8-inch LTPO OLED with 2,992 x 1,344 pixels and a 1-120Hz variable refresh rate. Pixel density is also 422 ppi, 495 ppi, and 486 ppi, respectively.
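Those density figures are easy to sanity-check: pixels per inch is simply the diagonal pixel count divided by the diagonal screen size. A minimal Python sketch using the resolutions and diagonals quoted above (note that marketing diagonals are rounded, so the computed values land within a few ppi of the official numbers rather than matching them exactly):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Resolutions and (rounded) marketing diagonals from the specs above.
displays = {
    "Pixel 9": (2424, 1080, 6.3),                   # quoted: 422 ppi
    "Pixel 9 Pro": (2856, 1280, 6.3),               # quoted: 495 ppi
    "Pixel 9 Pro XL": (2992, 1344, 6.8),            # quoted: 486 ppi
    "Pixel 9 Pro Fold (inner)": (2152, 2076, 8.0),  # quoted: 373 ppi
    "Pixel 9 Pro Fold (cover)": (2424, 1080, 6.3),  # quoted: 422 ppi
}

for name, (w, h, d) in displays.items():
    print(f"{name}: {ppi(w, h, d):.1f} ppi")
```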

All Pixel 9 phones ship with Android 14 and will, as before, enjoy seven years of Android support – an economically useful feature for handset users who either push their phones to the absolute limits of their longevity or depend on being able to hand down reliable older phones to family members with every upgrade cycle.

Why Google’s Hybrid Approach to Mobile AI Integration Sets It Apart from Even Apple

The future of AI integration in the mobile handset space is hybrid, meaning that handset OEMs plan to mix on-device AI capabilities and Cloud-based AI services to deliver a broad spectrum of AI use cases and experiences. Qualcomm’s Snapdragon mobile platform started this trend by introducing powerful on-device AI capabilities – first focusing on system optimization but quickly expanding to more obvious UX features that vastly improved photography, video, gaming, security, live translations, and many of the generative AI applications that we are familiar with today. As the company behind the Android platform and an AI company in its own right, Google was also there from the start, helping develop these solutions for the Android market. All of this to say that Google is not new to this technology trend.

One of the principal reasons why Google decided to transition to its own silicon several years ago was to more easily align its mobile platform with its own ecosystem and product roadmap. The timing was especially important as the challenge of how best to bring AI to mobile experiences pointed to a mix of on-device and cloud-based solutions. Perhaps because Tensor G4 isn’t quite on the same level as the Snapdragon 8 Gen 3 when it comes to overall on-device performance or AI capabilities, Google’s vision for mobile AI experiences is likely to rely on a more hybrid mix of device and cloud – similar to Apple’s own model (at least as articulated thus far).

In Pixel’s price tier, this approach makes sense. For starters, Pixel has always been the phone that does more with less (or for less), and the brand’s scrappy, clever approach to solving UX problems is consistent with how Pixel 9 is tackling the AI problem in a space dominated by Snapdragon (and Samsung) at the very high end of the price and performance range. Pixel understands its position in the market, its value proposition, and the lane it occupies, and clearly articulating that through consistent UX design decisions continues to pay off. But also, because Pixel works as a direct interface with Google services, it is an ideal portal through which Google can sell “premium” cloud-based AI services and solutions. In other words, why would Google try to compete at the very high end of on-device (on-chip) AI capabilities when its business model leans more toward monetizing cloud-based solutions?

For those who have lamented Tensor’s past inability to compete head-to-head with flagship-tier mobile SOCs from Qualcomm, I say stop and reset your thinking. Google is a different company from Qualcomm (or even MediaTek, which also designs AI-enabled chipsets for mobile handsets): Google’s AI monetization play in mobile is a well-calibrated bet that is far more likely to look for ways to incorporate cloud-based services at scale than to try to put out the highest-performing SOC in the industry. That’s because Pixel doesn’t need to have the best mobile chipset on the market. Pixel just needs precisely the right cost-to-performance chipset to take full advantage of Google’s full range of hybrid AI solutions designed for the mobile segment.

Google’s approach to AI integration in mobile is similar to Apple’s in many ways: develop your own silicon, build your own hardware and software stack, and lean on a mix of on-chip and in-cloud AI to deliver a hopefully seamless AI user experience. And where on-device AI workloads don’t quite match Snapdragon’s capabilities, lean harder on the cloud. This works better for Google, however, and here are two reasons why: (1) Pixel’s price point may make offloading some Snapdragon-friendly AI workloads to the cloud seem more palatable to users than Apple doing the same despite its higher price points (and arguably better specs); (2) Google’s cloud-based AI solutions ecosystem is far more mature than Apple’s, and may feel less cobbled together. (Yes, ChatGPT integration is nice because of ChatGPT’s name recognition, but Apple’s AI strategy still feels a bit more outsourced than it could be.)

This may help explain why Pixel’s approach to AI integration and overall UX feels more like a play against Apple than a play against its Android competitors, especially now. That hypothesis certainly aligns with Google’s timing relative to Apple’s autumn launches.

How Google’s AI Announcements Surrounding the Pixel Launch Position the Company’s AI Innovation and Monetization Strategy

Late last year, Google introduced Gemini, promising a phased rollout with basic versions of the platform (“Nano” and “Pro”) immediately being integrated into Bard (Google’s AI-powered chatbot) and Pixel 8 Pro phones. Google’s stated strategy was to use Gemini to make Bard more intuitive (including being more predictive in planning applications), and to help Pixel 8 Pro users enjoy even better task automation than Pixel’s already slick suite of AI-powered time-saving features.

Fast forward to the Pixel 9 introduction: Google’s Gemini-powered assistant already feels far more conversational than before, and more capable of multitasking. Tensor G4 enables many of these functions to be processed on-device instead of being pushed out to the cloud, which gives user-assistant interactions a much more natural conversational pace (far less latency) and delivers obvious privacy and security advantages.

The live demo of the Gemini assistant struggled a bit more with complex LMM-based queries than with LLM-based queries (an LMM, or Large Multimodal Model, differs from an LLM, or Large Language Model, in that it also analyzes media inputs such as images and sound rather than just language), but that is to be expected this early in the game. AI is a continuous improvement play, and these models will improve at a rapid pace. Kudos to Google for having attempted a live demo of a bleeding-edge feature that I believe will perform considerably better just a few months from now.

Echoing my earlier hypothesis about Google’s AI monetization strategy, the more advanced Gemini Assistant features will be available through a monthly subscription (free for one year for Pixel 9 phone buyers). In a similar vein, Pixel users will also be able to upgrade their 4K videos to 8K through a cloud-based service, suggesting that we may see a lot more of freemium-to-paywall-premium tiered AI performance options from Google in the future.

Pros and cons of that strategy: for occasional users of very high-end AI features, the option to enjoy good-enough everyday AI features for free, with occasional spend on premium features, is a great way to get the best of both worlds: AI on a budget. But for daily users of premium AI features and services, this strategy may push them toward higher-end silicon (like flagship Snapdragon phones). As Pixel seems to prioritize the former over the latter, at least for now, that model makes sense.

Another impressive AI feature introduced at the event was “Magic Editor,” which allows users to easily add a person into a photo, or change the photo’s landscape or background. While these types of use cases make me cringe a little because they essentially create moments that never actually happened, I do value the ability to include the designated photo taker in group and family photos that they otherwise wouldn’t be in. For those of us who often miss out on being in vacation photos, it’s a nice feature to have for those once-in-a-lifetime moments. From a purely technological and UX innovation standpoint, it’s an impressive feature to include in a phone or camera, and an effective way to highlight Pixel’s continued cutting-edge camera innovation and unique value proposition.

For instance, I still love that Pixel made a point to make its camera capable of adjusting lighting and contrast to match a wide range of skin tones in group photos a few generations back. So seeing the product team continue to innovate around making camera features as inclusive as possible (in as many ways as it can) signals that Pixel innovation remains on a very healthy path to capturing more users and market share.

Pixel 9 phones start shipping August 22.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Image Credit: Google

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.
