Qualcomm AI Demonstrates Stable Diffusion on Android Phone

The News: Qualcomm’s AI Research division has for the first time successfully demonstrated Stable Diffusion – an innovative text-to-image generative AI model – running on a handheld Android smartphone at the edge, rather than only in a laboratory. Stable Diffusion, a popular AI foundation model, lets users generate photorealistic images from written text prompts in less than 20 seconds per image. Qualcomm AI Research unveiled the Stable Diffusion demo on February 23 at Mobile World Congress in Barcelona, Spain. Read the full Qualcomm OnQ blog post about the Stable Diffusion demonstration on the company’s website.

Analyst Take: Qualcomm AI’s live Stable Diffusion demonstration on an Android smartphone is an impressive moment for Qualcomm and its AI Research team, which is again taking the opportunity to show how it can bring difficult ideas to life in the real world.

The beauty of AI is that it can enable many tasks that were once impossible or took far too long, but at its core, AI is still difficult and demands a huge amount of research, testing, and trial and error to get right. That is where the Qualcomm AI Research division comes in, taking on that hard work and demonstrating proofs of concept that can lead to commercial products and innovation for consumers and businesses.

Foundation models for AI are large neural networks trained on vast quantities of data, which then deliver high performance across a broad range of tasks. Stable Diffusion is one of those large models, with more than 1 billion parameters. So far it has been used mostly in the cloud, but Qualcomm AI Research applied full-stack AI optimizations through the Qualcomm AI Stack, which enabled this first live demo at the edge on an Android smartphone.
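
For readers who want a concrete sense of why that optimization work matters, here is a minimal, illustrative sketch using the open-source PyTorch and Hugging Face diffusers libraries. It is not Qualcomm’s code, and the Qualcomm AI Stack uses its own tooling; the checkpoint name is an assumption. The sketch simply shows the general idea of compressing the model’s largest component (the UNet) to INT8 so a 1 billion-plus parameter network becomes more practical on constrained edge hardware.

```python
# Illustrative only: Qualcomm has not published this code, and its AI Stack
# performs its own full-stack optimizations. This generic PyTorch pass just
# conveys the idea of compressing Stable Diffusion's largest component (the
# UNet) to INT8 so a 1B+ parameter model is more practical on edge hardware.
import torch
from diffusers import StableDiffusionPipeline

# Public checkpoint name is an assumption for illustration.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

fp32_bytes = sum(p.numel() * p.element_size() for p in pipe.unet.parameters())
print(f"UNet weights before quantization: ~{fp32_bytes / 1e9:.1f} GB")

# Post-training dynamic quantization: swap the UNet's Linear layers for
# INT8 equivalents, trading a little accuracy for memory and speed.
pipe.unet = torch.ao.quantization.quantize_dynamic(
    pipe.unet, {torch.nn.Linear}, dtype=torch.qint8
)
```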

Talk about rethinking the uses of divergent technologies and bringing them together in fresh new ways – because that is just what Qualcomm is doing here, and it is exciting.

Right now, this first demonstration produced dreamy images of a “super cute fluffy cat warrior in armor,” but the fact that those images could be generated entirely on a smartphone opens up immense possibilities for businesses and consumers. This is huge.
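
To make that step concrete, the sketch below shows what the same text-to-image generation looks like using the open-source diffusers library on a workstation GPU. The checkpoint name and step count are assumptions for illustration; Qualcomm’s on-device demo replaces all of this with its own optimized, Snapdragon-accelerated model.

```python
# A reference sketch of the text-to-image step, run on a workstation GPU
# with open-source tooling rather than Qualcomm's on-device stack.
# Checkpoint name and step count are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The same prompt shown in Qualcomm's MWC demonstration.
prompt = "super cute fluffy cat warrior in armor"
image = pipe(prompt, num_inference_steps=20).images[0]
image.save("cat_warrior.png")
```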

Why the Qualcomm AI Demo is Important

The Qualcomm AI Stable Diffusion demonstration means far more than generating and displaying a cute cat image on a mobile screen.

What makes it notable, I believe, is that the demonstration shows – and proves – that on-device processing at the network edge, far from data centers, can be achieved with a model of this size. This will lead to an immense number of new uses, many of which have not yet been dreamed up. Qualcomm says this kind of on-device edge AI processing is important because it delivers greater reliability, lower latency, improved privacy, more efficient use of network bandwidth, and lower costs.

The potential uses of Stable Diffusion for businesses and consumers are at the heart of Qualcomm’s research in this area, promising new services, features, and revenue streams for commercial service providers. With those goals in mind, Qualcomm AI Research says those use cases could include image editing, inpainting for artwork restoration, style transfer, super-resolution, and more.
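
As a hedged illustration of one of those tasks, the sketch below performs inpainting with the open-source diffusers library, repainting only a masked region of an image. The checkpoint name and file paths are assumptions; an edge deployment on Qualcomm hardware would run an optimized model rather than this workstation code.

```python
# Illustrative inpainting sketch using open-source diffusers, not Qualcomm's
# stack. Checkpoint name and file paths are assumptions for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# The mask is white where the artwork is damaged and should be repainted.
image = Image.open("artwork.png").convert("RGB")
mask = Image.open("damage_mask.png").convert("RGB")

restored = pipe(
    prompt="restore the missing brushwork, matching the original style",
    image=image,
    mask_image=mask,
).images[0]
restored.save("artwork_restored.png")
```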

What Qualcomm AI’s Work with Stable Diffusion Means

I believe that these are amazing possibilities, and that this is only the beginning of how Stable Diffusion foundation models can be used in business and everyday life.

Also notable is that this Qualcomm AI Research demonstration highlights the immense power of our handheld smartphones, which we often forget because we use them without thinking about them. In this case, the Android device shows how this kind of heavy computing can be done at the network edge using an ordinary smartphone. It is amazing to think about and even more incredible to watch these images appear before your eyes on a screen.

The beauty of Stable Diffusion in Qualcomm’s research is that, having been trained on tremendous amounts of data, it can generate almost any imaginable picture – and now it can do so at the edge. That means it can be used across a broad range of edge devices and use cases, such as laptops, extended reality (XR) headsets, IoT devices, and other devices powered by Qualcomm Technologies’ Snapdragon 8 Gen 2 and other Snapdragon chips. It also addresses a big problem with today’s approach: running this kind of AI processing in the cloud is expensive. By moving it to the edge, the processing becomes more efficient and affordable.

Stable Diffusion Overview

In my view, these are major strengths for Qualcomm’s Stable Diffusion research, which show why this smartphone demonstration is a dramatic tease for what will be possible and even expected from devices and services in the future.

Think about this – a short, easily transmitted text prompt can be sent far afield, where Stable Diffusion quickly turns it into rich images right at the edge. That makes these workflows simpler, more efficient, and less costly, which will drive more use cases every day. The potential for enterprise and consumer applications to come is enormous, and edge processing built on Qualcomm technologies and research is making it all possible.
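
A quick back-of-the-envelope sketch makes the bandwidth point concrete. The image size used here is an assumed typical value, not a measurement from Qualcomm’s demo.

```python
# Back-of-the-envelope bandwidth comparison. The image size is an assumed
# typical value for a 512x512 PNG, not a measurement from Qualcomm's demo.
prompt = "super cute fluffy cat warrior in armor"
prompt_bytes = len(prompt.encode("utf-8"))
assumed_image_bytes = 400 * 1024  # assume ~400 KB for a generated PNG

print(f"Prompt payload:  {prompt_bytes} bytes")
print(f"Image payload:  ~{assumed_image_bytes:,} bytes")
print(f"Savings factor: ~{assumed_image_bytes // prompt_bytes:,}x")
```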

I believe this is an exciting innovation from Qualcomm Research that will further expand the possibilities of AI at the edge and drive even broader thinking about how these technologies can be used in the future.

Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.

Other insights from Futurum Research:

New Qualcomm AI Stack Powers the Connected Intelligent Edge

Qualcomm AI Research Quietly Working to Make AI Ubiquitous in a Wide Range of Business Fields, Including Wireless, Automotive, Extended Reality, IoT and Mobile

MWC 2023: Qualcomm Dramatically Raises 5G FWA Game with FWA Platform Gen 3

Image Credit: GamingDeputy
