The Six Five On the Road with Qualcomm’s Ziad Asghar at Snapdragon Summit 2022

The Six Five On The Road at Snapdragon Summit 2022. Hosts Patrick Moorhead and Daniel Newman sit down with Ziad Asghar, VP of Product Management, Snapdragon Roadmap, at Qualcomm, for one of many conversations here at the #SnapdragonSummit. Their conversation covers:

  • New features in Gen 2, specifically in AI
  • Snapdragon Sight, in collaboration with partners from the ISP (Image Signal Processor) side
  • Hardware accelerated ray tracing, spatial audio, and the benefits for users
  • Snapdragon Secure, how Qualcomm will ensure the highest level of security for the mobile platform

Be sure to subscribe to The Six Five Webcast so you never miss an episode.


Disclaimer: The Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we do not ask that you treat us as such.


Patrick Moorhead: Hi, this is Pat Moorhead and The Six Five is live in Maui at the Qualcomm Snapdragon Summit. We don’t always do our shows in Maui, but I think we may in the future. I’m here with my amazing co-host, Daniel Newman. How are you my friend? It is the day after the big announcement, right?

Daniel Newman: You going to let me talk?

Patrick Moorhead: There we go. You go.

Daniel Newman: How are you? But then it just keeps on going. No, actually I’m great. I am basking in the glow, in the sun, in the warmth. And like I said, I always have a lot of fun here at Snapdragon Summit. There’s a ton of technology being announced, and of course so many of our friends, colleagues, competitors, so many of us here, all converging on Maui to talk about the future of… Well, now it’s not just mobile. It used to be, and now it’s so much more.

Patrick Moorhead: That’s right. But there’s so much focus on mobile, this show. And what I love about the Snapdragon Summit is literally you get all the technologies, all the features, all the benefits that you’re going to see in all the top phones out there globally. But hey, let’s introduce our guest. Ziad, how are you doing, my friend?

Ziad Asghar: Very good. How are you guys?

Patrick Moorhead: Good man. Are you ready to talk tech?

Ziad Asghar: I am very ready to talk tech.

Patrick Moorhead: Because we want to dive in and see all the great tech that’s out there that’s driving these incredible experiences.

Ziad Asghar: Perfect. We have a lot to talk about with every technology. There’s some amazing stuff that we are doing. So look forward to discussing it with you guys.

Daniel Newman: Let’s go. Yeah, super excited to have you here. Let’s start with something that’s on everybody’s mind and now part of everybody’s life and that’s AI. So a big focus over the last few years we’ve seen at the Snapdragon Summit events and in your various launches has been AI. So what are some of the big new and important announcements around AI this week?

Ziad Asghar: Awesome. So for the Snapdragon 8 Gen 2, we are basically purely focused on AI from the perspective of hardware, software, and what we’re talking about now is our AI stack.

So we have a complete story from a tools perspective, and also from a hardware and software perspective. Just on the hardware side, we are increasing the AI performance all the way up to a factor of 4.35x.

Patrick Moorhead: Approximately.

Ziad Asghar: Approximately, indeed. We could have gone further down the decimal places, but we stopped there. And then essentially, what we are also doing along with that is introducing some very key features.

So this is the first commercial design on the edge that can do INT4 processing. Now, the problem people usually find with INT4 is that when you go to these low precisions, you lose accuracy. So what we’ve done along with that is introduce our AI Studio and AI Stack, which provide you with all the tools to modify the model such that you can actually retain accuracy while you go to INT4.

But now just imagine: something that you were doing in 32-bit floating point, if you run it in INT4, that’s 64x lower power consumption. So it’s an immensely huge advantage, and we’re very focused on it. That’s one of the big ones we are focusing on this time around, and we’ll show some data around that.
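To make the INT4 idea concrete, here is a minimal, generic sketch of symmetric 4-bit weight quantization in NumPy. This is not Qualcomm’s AI Studio or AI Stack; the function names and toy weights are invented for illustration. It shows why naive rounding to 16 levels loses accuracy, and why tooling that tunes scales (per-channel scales, quantization-aware retraining) matters.

```python
import numpy as np

# Toy symmetric INT4 quantization: 16 levels, signed range [-8, 7].

def quantize_int4(weights: np.ndarray):
    scale = np.max(np.abs(weights)) / 7.0   # map the largest magnitude to level 7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale                          # q occupies 4 bits on real hardware

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.9, -0.35, 0.07, -0.7], dtype=np.float32)
q, s = quantize_int4(w)
# Rounding error is bounded by half a quantization step (scale / 2); smarter
# scaling and retraining are what keep that error from hurting model accuracy.
err = np.max(np.abs(w - dequantize(q, s)))
```

The power win itself comes from the hardware side: a 4-bit integer multiply moves and switches far fewer bits than a 32-bit float multiply, which is where headline figures like the 64x number come from.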

Secondly, we also have what we call micro tile inferencing. Micro tile inferencing is a key one. People have tried to solve this problem where, in a traditional network, you run one layer, you write the results to memory, then you go back, you run the next layer, and you do it over again. But all the power that you’re losing in those writes is a big problem. So what we’re doing is called depth-first processing, with very, very small tile sizes along with it. What you’re able to do is process a tile all the way through the neural net, not just layer by layer. And you’re able to pack it very well together because of the micro tiles. So again, a huge advantage comes from that on the power side.
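A back-of-the-envelope way to see why depth-first micro-tile execution saves power is to count DRAM traffic under each strategy. The sketch below is a toy model with made-up numbers, not Qualcomm’s scheduler; it only shows why carrying a small on-chip tile through all layers beats writing every layer’s full activations out to memory.

```python
# Toy DRAM-traffic model: layer-by-layer vs depth-first micro-tile execution.
# All sizes are invented; only the ratio matters.

def layer_by_layer_traffic(activation_bytes: int, num_layers: int) -> int:
    # Each layer writes its full output activations to DRAM, and the next
    # layer reads them back in: two full trips per layer.
    return 2 * activation_bytes * num_layers

def depth_first_traffic(activation_bytes: int, tile_bytes: int) -> int:
    # A micro tile small enough to stay in on-chip memory is carried through
    # every layer; only the final output is written to DRAM once.
    # (Real schedulers also pay for small tile overlaps, ignored here.)
    assert tile_bytes < activation_bytes
    return activation_bytes

act = 8 * 1024 * 1024   # pretend each layer produces 8 MB of activations
layers = 10
ratio = layer_by_layer_traffic(act, layers) // depth_first_traffic(act, 64 * 1024)
print(ratio)  # 20
```

Since off-chip memory accesses cost orders of magnitude more energy than on-chip ones, cutting that traffic is exactly the kind of thing that shows up as a large power reduction.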

So hardware-wise, we feel we’ve made huge strides this year. And that’s going to light up use cases like multiple language translations. I mean, we started by doing it in Mandarin, but now you’ll see that we can do multiple different languages. And you can see, in the video conferencing use case, you could be talking to four or five different people all speaking different languages, and essentially they are hearing exactly what you said in their own language.

Daniel Newman: Totally real time, right?

Ziad Asghar: Totally real time on the device.

Daniel Newman: That’s amazing. So many barriers are going to be broken down with that.

Ziad Asghar: So a very, very powerful story there. And along with that, because of these techniques, we are actually reducing the power consumption by 60% while doing it. So an amazingly powerful story there.

And then we are introducing a full new Qualcomm AI Studio: all the tools needed for model preparation, model deployment, and how to use quantization to benefit from 4-bit integer, all baked into this tool.

So we’re going to have that available. We are launching kits that we’ll be offering to different universities. All of those tools are basically meant to allow people to use these capabilities.

And then of course, you’ll see some very key use cases. You’ve probably already seen them. We were talking about being able to have AI bots compete with you in a game. AI bots that are learning and improving over time, and all of that inferencing is happening on the device.

So a very, very superb story for always-on, always-available peak use cases. And of course we have a Sensing Hub that handles continuous use cases too, which I can cover.

Patrick Moorhead: It’s incredible how much the AI landscape has changed. I mean, it really started off as something in the cloud, and I think there was a theory that it could all be done in the cloud, but there’s way too much latency, like 10 to 13 hops to make it happen. Even some NLP workloads really leaned on the cloud, and then they realized they really had to make this local, and on top of that you add the privacy benefits that come along with it, and even the security benefits. It really has been a big deal. And watching, again, I like to nerd out on this stuff, watching your evolution to 4-bit, but finding a way to make it accurate in the end and cut the power, is pretty awesome.

Ziad Asghar: It’s unbelievable, right? Exactly what you said in the beginning: both inferencing and training were happening in the cloud. We basically brought all the inferencing to the edge, for all the reasons you mentioned why it’s so much better to do inferencing on the edge. And what we’re talking about from a research perspective now is starting to do some degree of learning on the device in the future, so that we bring that part over too. Again, for privacy reasons, for immediacy reasons, for all of those factors, the edge continues to be the right place to do AI.

Patrick Moorhead: Hey, I’m looking forward to bringing that power to the PC. Now Microsoft, with a new drop of Windows, showed some incredible features, I think it was called Studio Effects.

Ziad Asghar: That’s right.

Patrick Moorhead: Where Qualcomm processors actually outperformed and could do more functions inside of video than even AMD and Intel. But let’s move on. I mean, AI for AI’s sake is one thing, but applying it to use cases is what customers can relate to.

And still, the number one feature for these phones, or at least in the top three, is the camera. The camera seems to be this epicenter of interest. Why we can’t get enough, I don’t know, but I know I like to jump into it with all of these different lenses… No, I know.

Daniel Newman: Take a selfie.

Patrick Moorhead: All right. I’m not the best person to take a picture of. But I do love the effects and you… Can you talk a little bit about the ISP and how you’ve integrated and improved and stepped up the experience with Snapdragon 8 Gen 2?

Ziad Asghar: I think what we’ve been able to accomplish with the camera on the smartphone has been truly amazing. The ability to use other engines inside the smartphone, like the AI engine and other aspects, has made the capability of the smartphone far in excess of any other device.

So what we are talking about this year, with the Snapdragon 8 Gen 2, is what we’re calling the cognitive ISP, which means it’s capable of understanding exactly what it’s taking a picture of. And the way we’ve done that is we’ve actually integrated a hardware solution in there to do hardware segmentation. So, looking at the picture right now: there are backlit situations, and we are able to identify fabric from skin, from hair, from plants. And as you segment to all those different levels, you can treat each of those portions very differently, and then your pictures come out even better. That’s really the power of it.
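As a rough illustration of what treating segmented regions differently can mean, here is a toy NumPy sketch that applies a different gain per segmentation label on a grayscale image. The label set and gain values are invented; a real cognitive ISP would tune denoising, tone, and sharpening per region rather than a single multiplier.

```python
import numpy as np

# Toy per-region tuning: each pixel carries a label from a segmentation
# model, and each label class gets its own (invented) gain.
SKIN, HAIR, FABRIC, BACKGROUND = 0, 1, 2, 3
GAIN = {SKIN: 1.05, HAIR: 1.20, FABRIC: 1.10, BACKGROUND: 1.00}

def tune_by_segment(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """image: uint8 array; labels: same-shape int array of class IDs."""
    out = image.astype(np.float32)
    for label, gain in GAIN.items():
        out[labels == label] *= gain          # boolean mask selects one region
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

The point is that once the hardware knows which pixels are skin, hair, or fabric, each region can get processing tuned for that material instead of one compromise setting for the whole frame.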

A DSLR can never do that, because it does not have that AI capability in it, for example. And to be able to do that, we had to do something else, which we call direct link. What it allows you to do is have camera data go to our Hexagon processor without needing to go through memory. And with that, you’re able to leverage that engine seamlessly for all sorts of AI processing.

So a very, very solid upgrade for us this time. On this side, we are also partnering with both Sony and Samsung for sensors that are customized for our solution. So you have the 200-megapixel ISOCELL sensor from Samsung that’s optimized for Snapdragon 8 Gen 2, and two new Sony sensors that are optimized for 8 Gen 2 also. And these are capable of doing HDR, but with four different exposure times. So typically you’d have long and short, but you can do four different ranges, which means you can get even better HDR effects.
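In generic terms, capturing several exposure ranges and combining them is multi-exposure HDR merging. The sketch below is a textbook-style weighted merge, not the on-sensor implementation: each frame votes for the pixels it exposed well, and clipped or near-black pixels are down-weighted.

```python
import numpy as np

# Toy multi-exposure HDR merge. frames are linear images scaled to [0, 1];
# each was captured with a different exposure time.

def merge_exposures(frames, exposure_times):
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust clipped or black pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)   # each frame's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

# A scene point of radiance 0.5 seen through a 0.4s and a 1.0s exposure:
frames = [np.array([0.5 * 0.4]), np.array([0.5 * 1.0])]
merged = merge_exposures(frames, [0.4, 1.0])   # recovers ~0.5
```

With four exposure times instead of two, the same loop simply takes more frames, widening the range of scene brightness that lands in some frame’s well-exposed band.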

So really a very, very full roster of capabilities this time again, on the camera side.

Patrick Moorhead: Well, one of the things I appreciate about what you do on the camera, too, is that you balance the different compute units, whether it’s CPU, GPU, NPU, DSP, all working together. And that is not easy. And some of your tool sets really come in handy for your ISVs. What you’re not doing is just shoving out hardware; you’re going all the way through the ecosystem, even to the sensor. That’s cool. I can’t wait to see that 200… the Sony and the Samsung sensors. But I appreciate that, because what ends up happening is four or five months later it’s there and it actually works.

Ziad Asghar: That’s right. And by the way, that’s what allows you to get to a good power number, by optimizing exactly how you’re saying, across all those different engines. And in this one we also have the Sensing Hub, where basically we have an always-sensing camera. So just like you guys were taking a picture right now: if you’re sitting over here looking at the camera and somebody happens to come up behind you, the always-sensing camera can see that other face and actually turn it off. So we’re actually trying to enhance privacy further by using some of those capabilities. Or you can use the always-sensing camera to always keep the… I don’t know if you guys have this, but if you’re lying down, sometimes your phone keeps flipping between landscape and portrait mode.

Daniel Newman: Yes, it drives me insane.

Ziad Asghar: Well, with that, it actually looks at your face and keeps it exactly in the right mode that is perfect for you. So we’re bringing a lot of these kinds of capabilities into our device as-

Daniel Newman: Well. When I just think about it, how many videos and photos I don’t want to be in, that we’ve probably collectively ended up in nowadays, because you’re standing behind people on the beach and they’re taking the photo, and you’re there and you’re like, “That wasn’t my best moment.” Don’t worry, Facebook is indexing that right now. So that’s out there. I’m just wondering how you can put the ISPs, GPUs, NPUs all together with the sensors and make this selfie look good. I don’t know if that’s possible.

Patrick Moorhead: I think that means his team has just been given that challenge. …

Daniel Newman: Can you do the blur?

Patrick Moorhead: I know you need help. I mean…

Ziad Asghar: Of course.

Daniel Newman: The blur, anyways. So we’re going from trend to trend, from strength to strength, and of course the camera’s been a big trend.

Another thing that tends to always be a hot topic here is gaming. Mobile gaming on a global scale: huge, huge growth. And obviously, premium-tier devices tend to be what powers that mobile gaming. But talk a little bit about what you’re doing here with hardware-accelerated ray tracing, because I imagine the gamers’ eyes are going to light up a little bit.

Ziad Asghar: Oh yeah. I mean, these are the capabilities, just think about it, right? Until recently you could only do this on a desktop.

Daniel Newman: That’s right.

Ziad Asghar: And now-

Patrick Moorhead: By the way, and not that well.

Ziad Asghar: And not that well indeed, that’s right. And now, within a very, very short period of time, we have demos, multiple demos. We’re working with multiple partners to actually bring games that enable ray tracing, hardware-based ray tracing, on the device.

And I think that’s the true test of it. You need to be able to commercially deploy it; just talking about it doesn’t cut it. And we are really super excited. I mean, I’ve seen the demos, I’ve seen how the games are showing, and truly, if you’re able to follow a light source, light coming from there, reflecting true reflections, exactly true shadows, it just completely changes the experience that you have in that game.

And again, all of that fitting in the palm of your hand, with the power consumption to make that happen. That’s where we’re truly excited. At the same time, we have upped the graphics performance by 25% and improved power efficiency by 45%. Designing these blocks all from the ground up is how you get this advantage.

Patrick Moorhead: Increasing performance by 25 and reducing power by 40. That is not easy.

Ziad Asghar: It’s not easy.

Patrick Moorhead: There was another processor maker that was really impressed with themselves that they increased die area by 25% and increased performance by 25%. I’m like, “I wouldn’t be proud of that. Okay.” But it’s harder to squeeze more out of power efficiency than it is to just throw transistors at a problem. How did you do that, by the way?

Ziad Asghar: Oh, it is basically starting the design from scratch, to architect it in a way that you can actually-

Patrick Moorhead: Oh interesting, a brand new. Okay.

Ziad Asghar: Yeah. So this is an updated architecture from last time. But to your earlier point, if you notice, we don’t mention this many billion transistors. Because the target that I give my teams is: you’ve got to give this functionality in a fewer number of transistors. That’s how you win. It’s not by doing more transistors and throwing more area at it.

Patrick Moorhead: Imagine that, not following another company gets you the win. Huh. Last time I checked, Snapdragon has the most SoCs on the planet for mobile design.

So we talked about the camera, we talked about AI, we talked about ray tracing. Let’s talk about one for the ears: audio. Can you talk about some of the spatial audio features that you brought out in Gen 2?

Ziad Asghar: So we’ve actually enabled dynamic spatial audio in this particular product. And this is the really cool thing where literally you can place different audio sources, and the sound that you get basically mimics that. You can sense the distance, you can sense the direction. It makes what you’re listening to very, very realistic. And what we mean by dynamic is that as you move your head, it has head tracking built in to be able to adapt to that.

Because many a time, if it’s static, you move your head and the effect doesn’t seem real anymore. So we actually introduced that. We worked with all our partners to bring that effect to any applications that use audio. So we are super excited, and with the right earbuds you can actually get the full effect end to end with Qualcomm. So again, if you’re gaming-

Patrick Moorhead: Is this more for XR or is this for mobile users… Who benefits from it?

Ziad Asghar: So for a gaming-like scenario that we were just talking about, right? You have ray tracing and you have an enemy coming from one particular side. Well, spatial audio allows you to actually understand where the enemy or your-

Patrick Moorhead: Even if I’m moving, I might be slightly moving my head for some reason.

Ziad Asghar: We can track your head’s movement.

Patrick Moorhead: That’s the dynamic part of it.

Ziad Asghar: That’s the dynamic part. So this will really change your gaming experience. Once you’re doing it on your device, you might be sitting in a crowded space, but you’re in a world of your own.

And then you’re right, you could extend it to metaverse-like use cases too, because again, spatial audio is going to play a big role over there too.
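The head-tracking idea discussed above can be sketched generically: a source that stays fixed in the room is re-rendered each frame relative to the listener’s current head yaw, so turning your head moves the sound image the way a real source would. The code below is plain constant-power stereo panning with invented function names, not Qualcomm’s spatial audio renderer.

```python
import math

# Toy head-tracked panning: the source stays fixed in the room, and each
# frame we re-render it relative to the listener's current head yaw.

def relative_azimuth(source_deg: float, head_yaw_deg: float) -> float:
    """Source angle relative to where the head points, wrapped to (-180, 180]."""
    a = (source_deg - head_yaw_deg) % 360.0
    return a - 360.0 if a > 180.0 else a

def pan_gains(azimuth_deg: float):
    """Constant-power left/right gains; azimuth clamped to the frontal arc."""
    az = max(-90.0, min(90.0, azimuth_deg))
    theta = math.radians(az + 90.0) / 2.0    # 0 (hard left) .. pi/2 (hard right)
    return math.cos(theta), math.sin(theta)  # left, right; L^2 + R^2 == 1

# An enemy 30 degrees to the right: turn your head 30 degrees toward it and
# the sound collapses to dead center, which static rendering gets wrong.
print(relative_azimuth(30.0, 30.0))  # 0.0
```

A static renderer skips the `relative_azimuth` step, which is exactly why the illusion breaks the moment you move your head.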

Daniel Newman: So with all this incredible horsepower going through gaming, we’re talking about AI, talking about data and photography, and one of the things that probably doesn’t come up as much as it should, but needs to be talked about, is security.

Of course, we’ve seen another pretty well-known phone maker really lean into security and privacy, and I think it’s landed pretty well. But you’ve worked very hard, along with the Android ecosystem, to make these premium devices and the devices in the portfolio very secure. Talk about your perspective, the Qualcomm Snapdragon Android perspective, on Snapdragon Secure: where you’re taking it, and how people can feel confident that their device is-

Ziad Asghar: Yeah, we’ve always taken security extremely seriously, and we take pride in the fact that our solutions are the most secure. We have had secure processing units inside our solutions for the longest time. And we have trusted execution environments where you can operate and keep an app protected inside that environment, not accessible, and the data not accessible to anybody else.

And this time around, we’re bringing you a very good, key use case using that trusted execution environment, where literally you can do object or face detection. People try to do face unlock, and what happens many a time is you can actually take a 3D image created of you, put it in front of the device, and it unlocks.

So what we’ve done is we’ve actually partnered with trinamiX, and what we are able to do now is detect for liveness: that what the camera is looking at is actually a live person, rather than a 3D likeness that somebody created so the device would unlock.

Patrick Moorhead: Is that how light shines off the bone structure? Is that how you’re doing that?

Ziad Asghar: It’s light, and also there are changes in the skin, right? There is a little bit of movement, there is change in color. There are many different aspects that basically detect and confirm that this is actually a live person. And that’s how we are able to do it and make sure that that data, again, stays protected in a trusted execution environment. Just like your password, you don’t want your face getting out either. So essentially, in the same way, we want to keep all of that data protected. So we’re very excited about this application. We also brought in a new hardware root of trust. Again, it’s not as visible and, as you know, not as sexy a feature, but what it allows you to do is have a new baseline for all of your security functions within your device. And it just gives us an additional level of protection that nobody else has.

Daniel Newman: I think security though is often one of the things that’s done best when you don’t notice it.

Ziad Asghar: Exactly.

Daniel Newman: And so I think in the end, building a reputation for being incredibly secure is almost something that is a table stakes, right?

Ziad Asghar: That’s right.

Daniel Newman: When you buy a premium smartphone from any of your Qualcomm OEMs that you know that that security is handled. And of course things like you mentioned trusted execution environment, those are really important because it’s what’s happening in the app and the data that’s being created. That’s where a lot of risk and vulnerability comes up. And it sounds, by the way, it sounds like the mobile ecosystems really nailed it, where we still see a lot of risk in PCs, desktops and cloud.

Patrick Moorhead: It’s why you see a lot of PC architectures adopting mobile-like technology. By the way, if I can editorialize: that other company that does the security and privacy, I do find it ironic that they now have a multi-billion-dollar advertising business.

Daniel Newman: I’m a hundred percent with you. I just want to be very clear, I’m merely talking about the way the public perceived it. I actually made a comment.

Patrick Moorhead: I know, I know.

Daniel Newman: But in fun, for the sake of our audience out there, I actually made a comment at a dinner the other day. I said there’s a pretty high probability they might become the world’s largest advertising company with what they’ve done. I think your intention is very different. And I like the fact that the intention here really is all about making security transparent.

Patrick Moorhead: Well, and the challenge, I mean, what’s harder: one brand of smartphone, or 120 or 150 brands of smartphones working as an ecosystem together? But anyways, no, you have set the bar for Android security in the way that you’re doing these things. And I think it’s to be applauded, and it has to be the ground floor for all the fancy, sexy, cool features. Otherwise, businesses and consumers lose trust.

Ziad Asghar: That’s right.

Patrick Moorhead: So yeah, I can talk about this forever.

Ziad Asghar: Me too.

Patrick Moorhead: It’s been a great conversation and maybe off camera we can keep the geek going. I would love to do that. Great conversation.

Ziad Asghar: Likewise.

Daniel Newman: Yeah, it’d be a lot of fun. So yeah, this is your moment to sign off, but thanks so much for joining us here at the Snapdragon Summit.

Ziad Asghar: Thank you for having me.

Daniel Newman: We really love it. I think it’s back to the beach for all of us though, Pat.

Patrick Moorhead: We’re kind of here right now.

Daniel Newman: And so yeah, that’s maybe the overhead shot of where we’re sitting. But in all seriousness, everyone out there, really, thanks a lot for tuning in. Please hit that subscribe button and join us for more of these episodes. We did nine videos here at the Snapdragon Summit, and all of them are really great. We appreciate you-

Patrick Moorhead: Soft acclaim they’re doing great [inaudible 00:20:11].

Daniel Newman: They are. We appreciate you all tuning in. Time to say goodbye though. We’ll see y’all really soon.


Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

