
Qualcomm Creates Opportunities for the On-Device Generative AI Market

The News: At Mobile World Congress (MWC) Barcelona, Qualcomm’s AI Research arm is showcasing a range of important on-device AI advancements, including large multimodal models (LMMs) and customization of large vision models (LVMs) running on Android smartphones and Windows PCs, and more. The demonstrations highlight both the increasing importance of on-device AI capabilities and Qualcomm’s market opportunity in bringing these capabilities to its OEM partners. Read more about Qualcomm’s on-device AI releases at MWC 2024 on the Qualcomm media page.

Analyst Take: Qualcomm is hitting the gas on its on-device use case offensive for AI, as well it should. When generative AI took the world by storm a little over a year ago, most of the discussions we were having were about data center capacity and scale to support large language model (LLM) training and inference workloads for apps such as ChatGPT. Fast-forward to today, and the discussion has shifted from “how do we scale generative AI in the data center?” to “how much AI training and inference can we assign to a device?” Just as NVIDIA had the right product at the right time with its data center GPUs, Qualcomm had the right platform at the right time with its Snapdragon family of SoCs and its intellectual property (IP).

I remember pointing out that Qualcomm was an AI company long before anyone else noticed, and it feels good to see that observation validated. Qualcomm’s focus on building more AI capabilities into its platforms has accelerated in the past year, for obvious reasons, redefining the parameters of Qualcomm’s market relevance for the next decade and opening up entirely new growth opportunities for its platforms. The company is leveraging MWC 2024 in Barcelona not only to showcase its latest advancements in on-device AI features but also to drop some important hints about how the future of AI-powered user experience (UX) is likely to be shaped by the on-device capabilities of handsets, PCs, IoT, XR, automotive, and more.

Mobile

For the first time running on an Android smartphone, Qualcomm AI Research is demonstrating Large Language and Vision Assistant (LLaVA), a 7+ billion parameter LMM that can accept multiple types of data inputs, including text and images, and generate multi-turn conversations with an AI assistant about an image. This LMM runs at a responsive token rate entirely on device, which improves privacy, reliability, and personalization while reducing cost. LMMs that combine language understanding with visual comprehension enable many use cases, such as identifying and discussing complex visual patterns, objects, and scenes.
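For readers who want a concrete feel for what a multi-turn image conversation involves, the sketch below exercises the same class of model (LLaVA 1.5, roughly 7 billion parameters) using the open-source Hugging Face release. The checkpoint name, prompt format, image filename, and desktop-GPU target are my assumptions for illustration; Qualcomm’s demo runs its own optimized stack on the Snapdragon NPU, not this code path.

```python
# Minimal sketch: multi-turn Q&A about an image with an open-source LLaVA-class 7B model.
# Assumptions: the public "llava-hf/llava-1.5-7b-hf" checkpoint, its USER/ASSISTANT prompt
# format, and a desktop GPU -- this is NOT Qualcomm's on-device Snapdragon runtime.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("street_scene.jpg")  # any local photo (hypothetical filename)

# Turn 1: ask a question about the image.
prompt = "USER: <image>\nWhat is happening in this photo?\nASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=128)
reply_1 = processor.batch_decode(output, skip_special_tokens=True)[0].split("ASSISTANT:")[-1].strip()

# Turn 2: carry the first exchange forward as conversational context.
prompt_2 = (
    "USER: <image>\nWhat is happening in this photo?\n"
    f"ASSISTANT: {reply_1}\n"
    "USER: What should I pay attention to in this scene?\nASSISTANT:"
)
inputs_2 = processor(images=image, text=prompt_2, return_tensors="pt").to(model.device, torch.float16)
output_2 = model.generate(**inputs_2, max_new_tokens=128)
print(processor.batch_decode(output_2, skip_special_tokens=True)[0].split("ASSISTANT:")[-1].strip())
```

The notable part of Qualcomm’s demo is that this entire loop, vision encoding plus 7+ billion parameter token generation, now fits within a smartphone’s compute, memory, and power budget.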

Qualcomm AI Research is also highlighting its first demonstration of Low-Rank Adaptation (LoRA) on an Android smartphone: running Stable Diffusion with LoRA, users can create high-quality custom images based on personal or artistic preferences. LoRA reduces the number of trainable parameters in AI models, enabling greater efficiency, scalability, and customization of on-device generative AI use cases. Beyond enabling fine-tuned large vision models (LVMs) for different artistic styles, LoRA is broadly applicable to customized AI models, such as LLMs, to create tailored personal assistants, improved language translation, and more.
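To see why LoRA is such a good fit for on-device customization, here is a minimal, generic sketch of the core idea: the original weights stay frozen while a small, trainable low-rank adapter is added alongside them. The layer size and rank are arbitrary illustration values, and this is the textbook technique rather than Qualcomm’s specific implementation.

```python
# Minimal sketch of Low-Rank Adaptation (LoRA): a frozen linear layer plus a small
# trainable low-rank update. Generic illustration, not Qualcomm's on-device code.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the original weights
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)    # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the scaled low-rank correction.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,} ({100 * trainable / total:.2f}%)")
```

With a rank of 8 on a 4,096 x 4,096 layer, the adapter adds well under 1% of the original parameter count, which is what makes shipping and swapping many small, personalized adapters on a phone practical.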

Qualcomm is showcasing a range of flagship commercial AI smartphones powered by its Snapdragon 8 Gen 3 Mobile Platform at MWC Barcelona, including the HONOR Magic6 Pro, the OPPO Find X7 Ultra, and the Xiaomi 14 Pro. These devices include impressive new generative AI features like AI-generated image expansion (Xiaomi), AI-powered video creation and AI-powered calendar creation (HONOR), and image object eraser (OPPO), which together establish a pretty solid baseline for everyday generative AI applications on smartphones.

PC

Qualcomm’s accelerating focus on PCs is also on display at MWC Barcelona with the new Snapdragon X Elite PC platform and its impressive 45 TOPS NPU, built for on-device AI. Using GIMP (the popular and free image editor) with a Stable Diffusion plug-in, Qualcomm Technologies is showing that on-device generative AI can create an image on a PC in as little as 7 seconds, 3x faster than x86 competitors. I have been waiting for a gauntlet moment in the emerging AI PC arms race… and this feels like it.
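As a rough point of reference for that 7-second figure, the sketch below times a single Stable Diffusion generation using the open-source diffusers library on a desktop GPU. The checkpoint, step count, and CUDA target are my assumptions; Qualcomm’s GIMP demo runs an optimized model on the Snapdragon X Elite NPU rather than this code path.

```python
# Minimal sketch: timing one Stable Diffusion text-to-image generation with diffusers.
# Assumptions: the "runwayml/stable-diffusion-v1-5" checkpoint, 20 denoising steps,
# and a CUDA GPU -- an approximation of the workload, not the Snapdragon NPU demo.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

start = time.perf_counter()
image = pipe("a lighthouse at sunset, oil painting", num_inference_steps=20).images[0]
elapsed = time.perf_counter() - start

image.save("out.png")
print(f"generated 1 image in {elapsed:.1f} s")
```

Wall-clock numbers like this depend heavily on hardware, step count, and quantization, which is exactly why Qualcomm frames its comparison against x86 competitors at the platform level.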

Qualcomm AI Research is also showcasing the world’s first on-device demonstration of a 7+ billion parameter LMM on a Windows PC that can accept text and audio inputs (music, bird calls, and so on) and then generate multi-turn conversations about the audio.

Automotive

Qualcomm is also leveraging its industry-leading AI hardware and software solutions in the automotive space, applying traditional and generative AI capabilities to the Snapdragon Digital Chassis Platform. While the company looks to “deliver more powerful, efficient, private, safer, and personalized experiences for drivers and passengers,” creative and differentiated implementations outside of privacy, safety, and personalization have thus far been a little thin. To be fair, it is still very early in the automotive design cycle for in-vehicle generative AI, so it may take a year or two before truly compelling use cases emerge. (There isn’t necessarily a lot of value in asking the car to generate images of dogs on skateboards while trying to find a parking spot or a restaurant, but there could be value in generating customized music to match a mood, instantly building out themed itineraries for a date night, or generating a customized city tour to make the most of a day trip.)

IoT

In the consumer IoT space, Qualcomm is showcasing Humane’s AI Pin, which, if you didn’t know, runs on a Snapdragon platform. The device offers users the ability to take AI with them everywhere in an entirely new, conversational, and screenless form factor.

Modem-RF

Qualcomm is also showcasing its new Snapdragon X80 Modem-RF System, which integrates a second-generation 5G AI processor to enhance cellular performance, coverage, latency, and power efficiency. (Look for separate coverage with details about that solution.) Qualcomm is also introducing its new FastConnect 7900 Mobile Connectivity System, the first AI-optimized Wi-Fi 7 system, which leverages AI to deliver adaptable, high-performance, low-latency, low-power local wireless connectivity. (Look for separate coverage of this connectivity solution as well.)

Infrastructure

Finally, on the infrastructure side of the business, since this is MWC Barcelona after all, Qualcomm is showcasing three groundbreaking AI-based enhancements for network management: a generative AI assistant for radio access network (RAN) engineers that simplifies network and slice management tasks, an AI-based open RAN application (rApp) that reduces network energy consumption, and an AI-based 5G network slice lifecycle management suite.

Beyond simply being aware of these developments, it is important to consider the impact that these increasingly powerful on-device AI capabilities will have on device refresh cycles, particularly for phones and PCs, both as a reset and as an acceleration vector. Look for a more thoughtful analysis of this topic very soon.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

On-Device AI, Part 2 | The AI Moment, Episode 6

Qualcomm Raises Bar for On-Device Generative AI at Snapdragon Summit

Qualcomm Snapdragon 8 Gen 3 Brings Generative AI to Smartphones

Author Information

Olivier Blanchard

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.
