On this episode of the Futurum Tech Podcast – Interview Series I am joined by Erez Dagan, Executive Vice President, Product and Strategy at Mobileye. Erez is responsible for overseeing all parts of planning, development, and production of the products that are helping shape the future of mobility — a vital role in today’s business climate.
Our discussion centered on the announcements Mobileye made at CES 2022 and where the autonomous vehicle industry is headed in the future.
The Challenges of Scaling Autonomous Vehicle Growth
My conversation with Erez also revolved around the following:
- An exploration into how autonomous driving solutions are slowly going mainstream
- How organizations like Mobileye are overcoming the challenges presented by the chip shortage
- How Mobileye is approaching safety in the industry
- How REM mapping is a unique differentiator for Mobileye
- A look into how data will be responsibly managed
The autonomous driving industry is growing at unprecedented speeds. This episode is a must-listen for anyone interested in the technology that will likely become a part of our everyday lives in just a few short years.
Watch my interview with Erez here:
Or listen to my interview with Erez on your favorite streaming platform here:
Don’t Miss An Episode – Subscribe Below:
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Daniel Newman: Hi, everybody. Welcome back to another episode of the Futurum Tech Webcast. I’m your host, Daniel Newman, principal analyst, founding partner at Futurum Research.
Excited for this interview series. We have Erez Dagan of Intel’s Mobileye, and we’re going to be talking about what’s going on in enabling the mass consumer autonomous vehicle market, and so much more. Erez, welcome to the webcast.
Erez Dagan: Thank you very much. Hi. Pleasure to be here.
Daniel Newman: So, first and foremost, where are you right now? I always like to ask that question of people.
Erez Dagan: In Israel.
Daniel Newman: In Israel. So, for everyone out there, of course, you may be watching this at a later time than I am, of course, recording it, but it’s early in the morning here and I’m guessing it’s pretty late in the day there. So, appreciate you taking the time and joining me at the end of the day there, Erez.
If you don’t mind, would you do a quick introduction of yourself and the work you’re doing over at Intel’s Mobileye?
Erez Dagan: Sure. My name is Erez Dagan. I’ve been with Mobileye since 2003, and today I’m acting as the EVP of Products and Strategy.
Daniel Newman: Pretty big role right now, Erez. There’s a ton going on in this space. I just got back from CES. I’m guessing you were either there or you were watching. Were you there?
Erez Dagan: Watching from afar.
Daniel Newman: Watching from afar. Of course, with all the craziness going on with COVID still and travel and restrictions, I can’t blame people for being a little cautious about making international trips.
I was at this show. Got there, spent some time live. Of course, it was very scaled back from a typical CES show, where usually halls are packed. You can’t get a taxi or an Uber. You’re waiting forever in line. Can’t get anything to eat or you have to make reservations well in advance. So, it was a little better from a mobility standpoint because you could get around. Of course, there were also many fewer people to actually talk to, to see, to spend time with.
A lot of companies pulled out and only had virtual displays. But one thing I did notice, okay, was that this was a huge event for automotive. From the chip makers and semiconductor companies, AI, autonomous vehicles, electric vehicles. And of course, your traditional OEMs and Tier 1s. All were there, many making big announcements. This is clearly a really important space. And Mobileye, of course, was included.
It feels to me like ADAS and autonomous solutions are really evolving into the mainstream. I think the step of taking AVs into the broad market, scaling them, is here. It’s coming, and if people haven’t recognized that, you’re hearing about it more and more in the news.
Talk about it through your lens. What does this mean as we try to bring this to market, to scale, from a technology perspective?
Erez Dagan: Great. We look at autonomous driving at scale as something that carries three core challenges, if we want to try to map it out. The regulatory challenge. The geographic challenge: how do you bring it to drive anywhere you want, not just in geofenced areas as we see today, for example? And the cost: if you want to proliferate this type of solution, you need to bring it into a mass market cost envelope.
These are the challenges that we’ve designed our system to deal with, from the ground up, actually. In quick sentences: we have REM, the crowdsourced mapping, tackling the problem of geographic scalability, so we are able to drive our autonomous solutions anywhere.
We have the RSS model for mediating and promoting the regulatory discussion of what is safe enough for autonomous vehicles. So, the regulatory hurdle is a bit more addressable this way, with the correct language, the correct terms, and the correct approach to it.
In terms of cost, we’ve designed our system, in many aspects, to be very cost sensitive. We carry a lot of background from ADAS, as you mentioned, which has that cost sensitivity trait. It guides our solutions in that space as well, in the consumer AV space as well.
Daniel Newman: Yeah, as I’ve been kind of watching the market space, those are a few of the different things that have been on my mind.
You’ve got the technological challenges, Erez, of being able to develop technology that’s going to be safe. That’s going to improve mobility, improve the experience of driving. Of course, you have policy and regulatory challenges. Tons and tons of environmental challenges, right? Different cities, roads, maps. How do we enable these vehicles to successfully navigate all these different spaces? So, you mentioned a bunch of different things, and I’d love to start by talking a little bit about the tech, of course, given the background and what you guys do specifically.
From a chip and compute standpoint, where do we start? What are the requirements to get over the hurdles of those things you just mentioned?
Erez Dagan: Excellent question. What we’re seeing is that there’s a metric that’s very popular: there’s an attempt to take the very complex question you just asked and project it onto a one-dimensional axis of TOPS. And this is a huge oversimplification. The versatile set of compute workloads that needs to be carried in order to accomplish this undertaking of autonomous driving, and actually also ADAS, is much more complex than a one-dimensional figure. It’s a combination of thread-level parallelism, data-level parallelism, instruction-level parallelism, all sorts of very intricate compute acceleration models. The only way that we can talk about compute efficiency, the way that I look at it, is to look at the actual value produced per watt of power. And if you can drive a vehicle in five, six different cities, different geographies, using two EyeQ5s with double-digit watts of power consumption, that’s the only way of going around this multidimensional question of compute efficiency. What we announced at CES recently, just to pinpoint that, is EyeQ Ultra, which packs all of our experience of what’s needed to drive a consumer autonomy solution. It builds on the nomination that we announced with Zeekr, which packs six EyeQ5s, and we exceed that by far with EyeQ Ultra while maintaining a 100-watt power envelope.
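To make that point a bit more concrete for readers: the argument is that a chip’s headline TOPS figure collapses many kinds of parallelism and acceleration into one number, while what matters is how much useful driving work the silicon does per watt on the real workload mix. Here is a minimal, purely hypothetical sketch of why those two rankings can disagree; none of these chips, numbers, or workload shares describe EyeQ or any actual product.

```python
# Toy illustration: a peak-TOPS ranking vs. a "useful work per watt" ranking
# can disagree once you account for the workload mix. All numbers invented.

# Sustained throughput (tera-ops/s) of two hypothetical chips on three
# workload classes, plus power draw and the datasheet peak-TOPS figure.
CHIPS = {
    "chip_a": {"sustained": {"cnn": 45.0, "fusion": 2.0, "planning": 0.3},
               "watts": 30.0, "peak_tops": 120.0},
    "chip_b": {"sustained": {"cnn": 25.0, "fusion": 15.0, "planning": 8.0},
               "watts": 30.0, "peak_tops": 60.0},
}

# Assumed share of the driving stack's operations in each workload class.
WORKLOAD_MIX = {"cnn": 0.30, "fusion": 0.35, "planning": 0.35}

def effective_tops(chip):
    """Mix-weighted throughput (harmonic combination: time adds up per class)."""
    total_time_per_op = sum(share / chip["sustained"][w]
                            for w, share in WORKLOAD_MIX.items())
    return 1.0 / total_time_per_op

for name, chip in CHIPS.items():
    print(f"{name}: peak {chip['peak_tops'] / chip['watts']:.2f} TOPS/W, "
          f"effective {effective_tops(chip) / chip['watts']:.3f} TOPS/W")
# chip_a looks twice as good on peak TOPS/W, but chip_b delivers more useful
# work per watt on this mix because no single workload class bottlenecks it.
```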
Daniel Newman: Yeah. There was a pretty exciting set of announcements. And just to provide some clarity to everybody out there: at CES, the company basically delivered what would be considered the market’s first AV-on-chip, an autonomous-vehicle-on-chip offering, and that’s what they’re calling EyeQ Ultra. Now, EyeQ is the series and Ultra is the name of the product. And, of course, just a few weeks before that you guys were pretty excited to announce, I believe, the hundred millionth EyeQ.
Erez Dagan: Correct.
Daniel Newman: So, you guys had hit some thresholds, you’d hit some pretty big numbers. This, of course, was part of that big story that came out late last year about Mobileye, in partnership with and working closely with Intel, going to market as Mobileye and having an IPO later this year.
I’m not going to ask you to comment on that, but I do think it’s important to at least bring it up right now. And I think a big part of the reason that’s going to happen is because there’s so much value in Mobileye that’s not necessarily going to be unlocked while it’s sitting inside a giant company that’s so diversified, with AV and the interest in the automotive space being so significant. These advancements from EyeQ to EyeQ Ultra, the adoption rates that you guys have had, and really being out there on the road and being able to say, hey, we aren’t just talking about this. You’ve got crazy valuations of automotive companies building the AVs and EVs of the future that have barely sold a unit yet. You guys have real, marketable experience, numbers, metrics, and that’s great. So, with the technology and the next generation being launched, you’re going to make it simpler for companies that want to go down this path with the AV-on-chip technology. So, that’s great.
Something I also want to talk a little bit about, with you being a product and strategy person, is the sensing side of this. There’s been a ton of discussion about sensing. There are different approaches: there’s vision, there’s radar, there’s lidar, there are all kinds of different technologies, and we’re hearing about them. And, of course, I think some of the ability for us to get to full autonomous driving, because a lot of what’s being done with EyeQ today is really L2+, but you, of course, are doing things with Moovit and robotaxis that are fully autonomous. And moving from that L2+ to full autonomy comes down to a lot of the sensing technology. Making sure you’re covering long range, short range, right? Near field. Being able to get all the mapping correct.
Safety’s a huge issue. I mean, talk about what’s going on in the sensing space, how Mobileye’s approaching it, and how you’re going to be able to differentiate and get to that full autonomy with the technology you’re building.
Erez Dagan: Very good. So, when you come to break down the problem of sensing a bit, you can look at the environment model that you need to build as comprised of four major categories of elements. Okay? You have the other road users. You have the boundaries of your drivable area, the road boundaries. You have the geometry of the paths within that drivable area. And you have a big basket of semantics that guides your driving. It could be an explicit semantic, like a traffic light or traffic sign, or an implicit semantic, like a pedestrian looking into a smartphone, something that indicates the way you should make your decisions. So, all of these four categories of elements are comprehensively covered by visual sensing.
Let’s start with that. The difference between driver assistance and autonomous driving is the failure rate, or the safety, as you mentioned. In autonomous driving, you are going to replace the driver, and in order to do so, you need to demonstrate a favorable failure rate, a higher mean time between failures than the human driver.
To that aim, we are using, first and foremost, the crowdsourced mapping that I mentioned earlier, which contributes a lot to robustifying our perception of all of the static elements of the environment. Okay? All the things that you drive by that are static and not changing, you can really gain a lot from mapping these elements and serving them into the vehicle.
The second element, which is very important, is robustifying the computer vision performance with additional sensors, and these additional sensors we also announced at CES right now: the advancement of our development of two such sensors, an imaging radar that we demonstrated, with what I think are very, very extreme capabilities as far as the angular resolution of that radar, and an FMCW lidar that we announced already last year, which we are developing; we demonstrated the compute chip that is in that device, which was sampled, and the silicon photonics chip that propels it.
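For readers who want the four categories Dagan lists above in a more structural form, here is one way a perception output could be organized around them. This is a generic illustration with invented type and field names, not Mobileye’s actual interfaces.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point2D = Tuple[float, float]  # (x, y) in the vehicle frame, meters

@dataclass
class RoadUser:            # 1. other road users (vehicles, pedestrians, cyclists)
    kind: str
    position: Point2D
    velocity: Point2D

@dataclass
class DrivableBoundary:    # 2. boundaries of the drivable area (curbs, barriers, edges)
    polyline: List[Point2D]

@dataclass
class PathGeometry:        # 3. geometry of the paths within the drivable area (lanes)
    centerline: List[Point2D]
    width_m: float

@dataclass
class SemanticCue:         # 4. semantics guiding the drive, explicit or implicit
    kind: str              # e.g. "traffic_light_red" or "pedestrian_distracted"
    position: Point2D
    explicit: bool

@dataclass
class EnvironmentModel:
    """One perception frame decomposed into the four element categories."""
    road_users: List[RoadUser] = field(default_factory=list)
    boundaries: List[DrivableBoundary] = field(default_factory=list)
    paths: List[PathGeometry] = field(default_factory=list)
    semantics: List[SemanticCue] = field(default_factory=list)
```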
Daniel Newman: Yeah, I know this year you guys certainly were able to show that you’re going to use all the sensing technologies concurrently to deliver the optimal experience and, of course, safety profile. You partnered with Luminar on some of it, and I know you’re building on Luminar on the lidar side, for instance, and you’re building your own, and I think there’s a plan over the next couple of years for you to roll that out through Mobileye. Now, I did have a chance at CES to see some really interesting demonstrations with Luminar, where they set up difficult situations to show lidar, and then they took a Tesla and showed the exact same situation. And, of course, that’s their position, and you’re working side by side right now. But I do really believe, as I’ve continued to watch some of these different examples, Erez, that if safety is the focus, which you have to imagine it would be... You’ve seen how much the world is focused on safety with things like masks.
Well, injuries and casualties related to driving are remarkably high and avoidable. And that was one of the things that was very interesting seeing some of these demonstrations: with computer vision, radar, and lidar altogether, there are avoidable accidents, and avoiding them could save lives, and that’s really important. And sometimes it makes me wonder that we’re talking price on this stuff when the bottom line is lives. But safety’s been, for the longest time, one of those challenges, right? It goes all the way back to seat belts: getting people to wear them, and it’s like, eh, it’s your choice. Well, they save lives. So, we can definitely keep moving here, but I like what you guys are focusing on. And I think it’s really important for everyone listening to understand that approaching this across the different spectrums of sensing is going to be critical.
Another thing I wanted you to touch on, though, that makes Mobileye unique in my perspective, and I think in the market’s perspective, has been REM mapping. First of all, could you just explain what that is a little bit, and why is it unique and a differentiator for Mobileye?
Erez Dagan: Sorry. Okay. I thought I lost you. Sorry. So, yes, as I mentioned briefly earlier, REM is crowdsourced, high-definition mapping, which leverages the fact that we have millions of driver assistance cameras traveling out there that can harvest information for us and help us dynamically curate a very detailed map of the driving environment. And it contributes dramatically to the robustness with which we can perceive those elements. Just for example, in Europe alone, we mapped 2.5 million kilometers of road. The size of the fleet that we’re talking about is immense, and the amount of data that we are digesting in this machine, in this cloud machine, is unprecedented as far as dynamic mapping.
It’s also very, very important to say, regarding all the important things you said about safety and the avoidability of car accidents: much of the value that we are developing for the autonomous driving future is tappable. You can tap into it in the driver assistance domain as well, and that’s what we do. We announced Travel Assist 2.5 with Volkswagen at CES, which demonstrates how REM, the dynamic mapping system I just described, can help push forward the envelope of driver assistance to provide a fuller assist that really takes into account all of these intricate semantics that you can aggregate as you drive through the environment.
In very short terms, it kind of makes any road a familiar road. When you’re a driver, there’s a big difference between driving on a familiar and an unfamiliar road. REM makes any road familiar for us, and we have the foresight and the stability of understanding the static environment.
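As a rough illustration of the curation step Dagan describes, crowdsourced mapping works because many passing cars each contribute a slightly noisy observation of the same road element, and the cloud fuses those observations into a stable map entry. The sketch below is purely illustrative; the data, structure, and fusion rule are assumptions, not Mobileye’s actual REM pipeline.

```python
from statistics import median

# Toy illustration of crowdsourced map curation: many cars report slightly
# noisy positions of the same landmark; aggregating them yields a stable
# map entry. Entirely invented data, not Mobileye's REM format.

# (x, y) positions in meters, as reported by different passing vehicles.
reports = {
    "stop_sign_17": [(103.2, 45.1), (103.4, 45.0), (103.1, 45.3), (103.3, 44.9)],
    "lane_mark_88": [(210.0, 12.6), (209.8, 12.4), (210.1, 12.5)],
}

def curate(observations):
    """Fuse many noisy reports of one element into a single map position."""
    xs = [p[0] for p in observations]
    ys = [p[1] for p in observations]
    return (median(xs), median(ys))

road_map = {element: curate(obs) for element, obs in reports.items()}
print(road_map)
# More passing cars means more reports per element, so the curated map keeps
# improving and can be refreshed quickly when the road actually changes.
```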
Daniel Newman: And I think, as a kind of practical example, if I may, you look at the fact that a lot of companies are doing these different mapping technologies. Maybe we’ve all seen the Google vehicles or the Apple vehicles with their lidars running around, taking pictures and sensing environments. And they’ve got one or two cars running around a city creating the map, right? And it’s not necessarily just for the road, it’s also for buildings and a lot of other reasons they’re using it. But one of the things that’s interesting when you have all these vehicles on the road is that you’re collecting all this data. I’m not going to quite get to the metaverse, where you talk about being able to actually simulate that, but that’s a scalable model, because you have so much data that you could actually create a digital twin.
But another thing that’s pretty interesting to me is conditions. It’s not static, Erez. It’s not like these cars are in a static environment. Say, for instance, you’re in an environment that has four seasons. You have snowy seasons. Seeing how drivers perform and how automobiles are able to handle the road conditions in a changing situation, that data could then be utilized to give the car better information in terms of how much space to keep between vehicles, knowing that it’s a snowy, cold night versus a dry, sunny day. And all I’m saying is, these are nuances. These are small nuances, but with training, with data, with repetition, these nuances can create that safety profile, can create better road experiences and reduce the congestion that’s caused by accidents. And this data means a lot.
Erez Dagan: So, there are two important points that you touched upon here that I want to reflect on. The first is the comparison to the highly equipped mapping vehicles that we’re all familiar with from other companies. We are talking about a system that transmits the amount of data of, I don’t know, two YouTube videos over a year.
Daniel Newman: Yeah.
Erez Dagan: Okay. So, the harvesting vehicles are very lean in what they harvest. They semantically decompose the video into the actual essentials for mapping and driving the environment afterwards.
And the second element you touched upon, which is very, very important as well, is these intricate semantics. For example, what is the common speed on the road I’m driving, at this time of day, this time of year? It could correlate very well to the chances of meeting, I don’t know, black ice, or there is a traffic jam ahead of me and the common speed is low for a reason, or I’m about to enter a very sharp curve that the crowd knows about and slows down for. And I can inform my driver assistance or autonomous driving to do the same.
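The “common speed at this time of day” example can be read as a crowd-derived prior: if the fleet consistently drives well below the posted limit on a segment at a given hour, that is a useful warning even before an individual car can see why. A toy sketch of that idea follows; all numbers and thresholds are invented for illustration.

```python
from statistics import median

# Crowd-reported speeds (kph) on one road segment, bucketed by hour of day.
# Entirely invented numbers for illustration.
crowd_speeds = {
    8:  [52, 55, 50, 53, 54],   # dry morning commute
    22: [31, 28, 33, 30, 29],   # late night: crowd slows well below the limit
}

POSTED_LIMIT_KPH = 60

def crowd_speed_hint(hour: int) -> str:
    """Flag segments where the crowd drives much slower than the posted limit."""
    typical = median(crowd_speeds.get(hour, [POSTED_LIMIT_KPH]))
    if typical < 0.7 * POSTED_LIMIT_KPH:
        return f"caution: crowd typically drives {typical} kph here at {hour}:00"
    return f"no anomaly: crowd typically drives {typical} kph at {hour}:00"

print(crowd_speed_hint(8))    # no anomaly
print(crowd_speed_hint(22))   # caution flag for the planner or ADAS
```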
Daniel Newman: Absolutely. And with the connected vehicle to cloud, or C-V2X, that we talk about a lot, this actually gives the opportunity for each individual vehicle to know, but also for the vehicles to talk to one another. Because you have different traffic density, for instance, at different times of day, and that density of traffic is important so that these vehicles make different decisions. Braking, turning, everything that they do can become more intelligent. They connect to the city, they connect to one another, they connect to emergency services. And then, of course, there is so much more data, if we just understand the power and the possibility.
So, we only have a couple of minutes left here. So, I’d like to maybe summarize a little bit and come up with a big, broad leading question where you can talk a little bit about what’s ahead.
You’ve got the road information, you’ve got the sensors. One of the big challenges, though, is still the decision making of the vehicle, right? You always hear that sort of dilemma situation where the vehicle’s going to crash, it’s just, how does it crash? The stroller, the mother, the coffee shop. You hear these all the time. So, we’ve come a long way, and ideally, all the technology that we’ve talked about today, just using safety as the example, is going to eliminate a large number of these. But there are always going to be circumstances that are completely unavoidable.
And AI, for instance, the ethics, the decision-making policy, kind of going back to the beginning of where we started, guidance there. What’s ahead for this? How are you seeing the future in terms of managing all this data, managing the advancement of these systems to make sure that they’re as safe as possible, but that AI is done in a way that’s thoughtful and equal and democratized across the world?
Erez Dagan: Yeah. So, I’ll attend to the question about driving policy.
Daniel Newman: Big one. It’s a big one.
Erez Dagan: It is a big one, but it’s great. A great set of questions. I’ll be glad to poke at it a bit. The very important distinction that we make in what’s called driving policy, the decision making of the vehicle based on its perception of the environment, is a very clear distinction between comfort-related decisions, decisions that relate to the comfort of the driving, and decisions that affect the safety of the driving. Okay? This disambiguation is very important. And what I can tell you for sure is that because you cannot learn black swans, machine learning is not the tool for the safety part of it. Machine learning can build very expressive and interesting strategies and tactics of driving policy, but when it comes to the safety of it, we have devised a formal, explicit model, quote unquote, the digital twin of the duty of care.
How do you make sure that the vehicle put on the road has a contract with which it complies, that gives you 100% clarity as to the boundaries of its decision making? What informs its decision making, and how is it maintaining the boundary between assertive driving and dangerous driving? This is the red line we humans usually cross, and if we were programmed to comply with a digital contract of duty of care, no accidents would happen. That’s the future possible with autonomous digital vehicles traveling around. This is, I think, one of the most important elements of the safety, the backbone of the safety of our system. It drives a lot of things as far as the design of the rest of the system, and most importantly, it allows us to deliver a very compute-lean decision-making process. This is a bit too long to go into right now, but it’s one of the very important rewards of this RSS system, this formal model.
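For readers unfamiliar with RSS, the model Dagan is describing is documented in Mobileye’s published papers, and its best-known rule is the minimum safe longitudinal gap: the following car must leave enough room to cover the worst case in which the lead car brakes at its hardest while the following car keeps moving during its response time and then brakes at only its guaranteed minimum rate. The sketch below follows that published formula; the parameter values are illustrative choices, not Mobileye’s calibration.

```python
def rss_min_longitudinal_gap(
    v_rear: float,              # speed of the following car, m/s
    v_front: float,             # speed of the lead car, m/s
    rho: float = 0.5,           # response time of the following car, s (illustrative)
    a_accel_max: float = 3.0,   # max acceleration during the response time, m/s^2
    a_brake_min: float = 4.0,   # guaranteed min braking of the following car, m/s^2
    a_brake_max: float = 8.0,   # assumed max braking of the lead car, m/s^2
) -> float:
    """Minimum safe following distance per the published RSS formula.

    Worst case: the lead car brakes at a_brake_max while the following car
    keeps accelerating for rho seconds and then brakes at only a_brake_min.
    """
    v_rear_after_rho = v_rear + rho * a_accel_max
    d = (
        v_rear * rho
        + 0.5 * a_accel_max * rho ** 2
        + v_rear_after_rho ** 2 / (2.0 * a_brake_min)
        - v_front ** 2 / (2.0 * a_brake_max)
    )
    return max(d, 0.0)

# Example: both cars at 20 m/s (72 kph) still requires roughly a 43 m gap
# under these illustrative parameters.
print(round(rss_min_longitudinal_gap(v_rear=20.0, v_front=20.0), 1), "m")
```

Because the rule is an explicit formula rather than a learned model, checking whether a planned maneuver respects it is cheap, which is one way to read Dagan’s point about a compute-lean, verifiable safety layer.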
Daniel Newman: Erez, I want to thank you so much for taking the time here with me today. I think I’ll conclude all of this by saying the space is very exciting. There’s a ton of growth. I recently did a picks-for-2022 piece on MarketWatch, and one of the things I had picked out for 2022, for people that still have an investment appetite given all the chaos in the market, is that Mobileye is going to be a very important company to watch. It’s going to be a very strong growth case. But I love the fact that you could go into these kinds of details, breaking out and separating the difference between comfort and safety, the challenges of AI.
What I do know for sure is this is where the market wants to go. This is where we are heading: safer, better experiences, more connected. The vehicle is definitely the next frontier for mobility. And I’m sure we’re going to have opportunities to talk a lot more about that, because it’s not just about driving. It’s really going to be about creating and driving mobility in the future. And I know that’s something Mobileye is focused on, beyond the places where you see lots of vehicles, because mobility is a challenge everywhere in the world.
Erez, thanks so much for joining the Futurum Tech webcast.
Erez Dagan: Thank you very much for having me. Thanks.
Daniel Newman: So, we’ll have to have Erez back, because this is a story that is not even close to over yet. There’s so much more technology to come, growth to come, and it looks to be very exciting. As you can see, there’s a lot of technology, a lot of innovation going on, and it’s just exciting. And if you’re in the vehicle space and you like this kind of technology, I can assure you, we’re going to be talking about it more.
Hit that subscribe button. Join us for future episodes of the Futurum Tech webcast. We have a lot of great and exciting interviews with executives across the technology space. For this episode, though, I have to say goodbye. We’ll see you soon.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and on hundreds of other sites around the world.
A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.