On this episode of The Six Five – Insider, host Daniel Newman welcomes Sumit Sadana, Executive Vice President and Chief Business Officer at Micron Technology, at CES 2024. They discuss the rapid advancement of AI and what it means for the memory and storage industry.
Their discussion covers:
- Micron’s view on AI
- The major impacts AI workloads have on data centers and how they will continue to change
- AI coming to devices and the role memory and storage play
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Daniel Newman: Hey, everyone. Welcome back to another episode of The Six Five Podcast, Daniel Newman here. Six Five is on the road. We are here in Las Vegas, and we are joined today by Micron’s Sumit Sadana. He is the Chief Business Officer and EVP. We’re going to be talking about trends in AI and memory, one of the biggest and most important components to making all this AI stuff work. It’s been a year since we had Sumit on the show, and it’s exciting to bring him back. So without further ado, Sumit, welcome to the podcast.
Sumit Sadana: Thank you, Daniel. Thank you for having me. It’s good to be here again with you.
Daniel Newman: It’s great to have you back. It’s been a ferocious 14 months since the advent of ChatGPT, and I think everybody felt it at slightly different times. Some of us felt it immediately when that trend line hit. I remember a moment in Redmond, maybe February, at a Microsoft event where they announced Bing with ChatGPT in it, and I’m like, “Oh my gosh, this is real.” And whether you’re a PC maker, a data center or GPU player, or in the software space, all year, every event was all about AI, and CES here is no different. So let’s start there. Let’s talk about the Micron perspective. You heard me tee this up about memory being one of the most important components, but how is Micron thinking about and looking at this really important inflection point with AI?
Sumit Sadana: It’s a great question, because AI has been around, as you know, for decades. Research has been going on and progress has been made over this time, and processing, memory, and data have all been important elements to make AI work, along with improving algorithms. But something special happened with generative AI. I think this whole wave that we are on now with generative AI and transformer technology is really going to usher in exponential growth in the capabilities of these systems. And this exponential growth is going to drive new applications. It’s going to be very disruptive. It’s going to start out in certain areas, but then spread to all different parts of the economy.
And it’s going to start out in the data center, then spread to edge devices as well, and all of it is going to require a lot of memory, because, as you know, all of the data in these large language models needs to be analyzed and moved in and out of the processors, and high bandwidth memory and high capacity memory DIMMs are going to be super critical for that. So, it’s a very exciting time. It’s going to be very disruptive for the economy, with a lot of growth ahead, and we are very excited.
Daniel Newman: And if I were in your shoes, in the memory business, I would say that after a couple of more challenging years, this is a really exciting moment. And by the way, it’s across the portfolio, because it really doesn’t matter if it’s the device or the data center: memory is going to be growing, and it’s going to have a very significant symbiotic relationship with all this compute. Last year it was all about the data center. In jest, I sort of say ’23 was the year of the GPU. That was really it. If you’re in the AI space, there were a couple of other companies that really saw benefit, but unless you were selling GPUs, and really almost only one company was doing that, everyone else was sort of trying to find their AI legs. What we know now, and what our research is showing, is that this is going to be a year of implementation. So, we are moving on from building out the infrastructure. We’ve heard leaders like Cisco’s Chuck Robbins come out and say, “Last year people were buying a lot of gear. Now we’ve got all this backlog.” It’s an implementation year. So, talk about how the data center is going to transform, how we get to implementation, and how Micron sees data centers changing because of this AI growth.
Sumit Sadana: AI growth is really in its extremely early innings. Thus far, in 2023, as you have mentioned, there has been a lot of focus on training, so training infrastructure in the cloud, a lot of GPUs and the memory that goes along with them deployed in the cloud. As these trained systems become capable and have applications, they’re going to start moving into inferencing. A lot of that inferencing will take place first in the cloud, and it will have a lot of growth, as you can imagine. Take a simple thing like Copilot for programming. That’s one particular use case, and there are going to be a lot of them, but it alone requires a lot of inferencing infrastructure to be deployed over time for programmers around the world to be able to use that capability.
And after all of the training and inferencing in the cloud, you are going to see growth on the edge, in devices. Smartphones will have their own smaller versions of large language models, call it 10 billion parameters or so. To implement those, you’ll need about 50% more DRAM capacity on average, four to eight gigabytes more of DRAM in a smartphone, and similarly four to eight gigabytes of extra DRAM in a PC, to be able to run a lot of applications locally on the device without needing a cloud backend to do all of the inferencing; a lot of it can be done on the PC and on the smartphone. Those products are going to start coming out in the second half of calendar ’24, and calendar 2025, which we have said we expect to be a record TAM year for memory, is going to be the first full year where we would have those new products on the PC and smartphone side to address these models, as well as ongoing growth in the cloud for both training and inferencing. So, we see ’25 really building on the momentum of ’24 and becoming a really, really big year.
Daniel Newman: You sort of gave me the gamut, the whole continuum. We started in the data center, and I love what you said about inference. You heard my maybe simplistic take on ’23 being the year of the GPU. ’23 was about training. Most of the spend went into putting the infrastructure in place to train all these LLMs, and it was really fewer than 20 companies that you were doing business with, the ones buying the massive infrastructure: AI players and traditional hyperscalers, a little bit of enterprise, but just the biggest of the big. And then you have everyone else, and everyone else who wants to consume AI. When you consume AI, it’s inference. And to do that, you have to put a significant amount of computing in place. You’re hearing about ASICs, you’re hearing about these special chips that are going to be designed for AI.
The big talk of CES here is the NPUs. I’ve had the chance to sit down with a number of the leaders of the PC businesses at the OEMs, and you didn’t say the words, Sumit, but AI PC, those are the words. I don’t know if you’re allowed to say it, but I’m going to say it, and this is a super cycle. The reason you’re predicting the massive TAM is a combination of more inference in the data center and more inference on the device. And by the way, the device is going to have to navigate this: say a 200 billion parameter model might run on the device, but for these trillion parameter models, it doesn’t matter, you’re not going to be able to put enough compute or memory on the device to successfully do low latency inference. But talk about the CES pivot, because we’re here at a consumer show that’s all about the AI PC. I’ve kind of teased it a bit, but this has to be a massive opportunity for Micron right now to attach its innovation, some of the things you’ve been working on over the last few years, to really grab the market and the imagination, and to take some credit for all these cool things we’re going to be able to do on PCs.
Sumit Sadana: The PC market is going to get rejuvenated, the smartphone market is going to get rejuvenated by this-
Daniel Newman: Smartphones.
Sumit Sadana: Because even smartphones, we have AI-capable smartphones that will be coming out later this year. And just like PCs, if I take PCs and smartphones together as a category of consumer electronics devices that are going to become a lot more capable, you’re definitely going to see the extra memory and the extra compute capability in these platforms to be able to run these smaller versions of LLMs on the device. It’s going to really energize these markets, because we had a big boom in PC unit sales at the time of COVID, when they went up all the way to 340 million units from roughly 260 million units a year, and then they have come down so far that PC sales are now running below pre-COVID levels on an annualized unit basis.
So, you’re getting into 2024 and 2025 when a replacement cycle is going to start, and how good is it that we’ll have all these new capabilities in AI PCs as a catalyst for people to upgrade the PCs they bought in 2020 and 2021. It’s going to kick off a replacement cycle with this higher capacity, more capable hardware in late ’24 and into ’25 and ’26, and that’s going to be a big growth driver. PCs and smartphones make up half of the memory market in terms of consumption, so that’s going to be a big tailwind. Smartphones, for their part, had unit sales in 2023 that hit 10-year lows. The average age of the smartphones in the hands of consumers is getting pretty old now, so again, consumers need a reason, a catalyst, to want to upgrade their smartphone.
And this is a great catalyst, because a lot more capable hardware will be coming later in 2024, and we believe it will rejuvenate that market. So, a lot of that edge device capability will get kickstarted, and again, the exciting part is that it’s only in the very early innings. While all of this is happening at the edge with early models of applications that can help consumers, even in training and inferencing in the cloud you are going to start seeing a lot more specialized models get trained. For example, let’s say you’re in the legal profession: an LLM focused just on the legal profession can be very, very deep in that area. And as you know, every country has its own laws, so there’s a lot of work in that vertical alone to create a very sophisticated capability.
Or take the medical field; again, a very deep capability can be created where a model is trained just on medical data alone. So, there are a lot of verticals where training will happen, and then there’ll be a lot of inferencing when these models are deployed to large numbers of users, whether it’s doctors in the healthcare field, consumers who want to take ownership of their healthcare, or access to medicine in countries where doctors are in short supply. A lot of that is ahead of us, so we are very, very early in this. We are looking at a 10-year or 20-year growth cycle here.
Daniel Newman: All of those conceptual use cases, and of course we were using AI for those things before generative. Generative has kind of been the killer app for AI, right?
Sumit Sadana: Yes.
Daniel Newman: But it’s going to drive the need for more powerful and more frequent upgrades. And you’re seeing this across the board, by the way. It’s devices, it’s going to be phones, because the reason phone sales hit all-time lows is that, even not as an analyst, I’ll say it as a consumer, I can’t figure out what I get in terms of an experience. You see companies come out with their new spatial computing technologies and you go, “Okay, here’s an app. Is this a killer app? I don’t know.” Maybe there’s 10, 20, 50 million units for a Vision Pro or something like that. But the point is, the AI PC, the AI-enabled phone, the locally run LLM that can handle your personal concierge services, that’s game-changing, when you go, “Well, you can’t do it on this one anymore, you can’t upgrade it.” And by the way, cars. You didn’t really mention cars. I know that’s part of the Micron business, and CES is kind of an auto show.
Sumit Sadana: There are a lot of auto companies here, really cool technology, really fun stuff, to see how automobiles are evolving. And you’re right, it’s a big business for Micron, because we are number one in the world in automotive memory, and we have very high share in automotive compared to our overall supply share in DRAM. It’s a very fast-growing business for us; the automotive business has been hitting new records on a quarterly basis for several quarters, even as the rest of the business was challenged by the market environment in 2023. So, this continues to be a growth area, because we have the trends of EVs, which have a lot more electronics content in the car, as well as all of the infotainment. And even if you don’t think about autonomous vehicles, just the ADAS capabilities that are improving in the car, even at L2, all of that uses dramatically more memory content.
Daniel Newman: The software-defined vehicle is creating-
Sumit Sadana: Exactly.
Daniel Newman: … It’s orders of magnitude more silicon in each vehicle. So, you don’t actually need more vehicles to be created or sold to sell more chips.
Sumit Sadana: That’s right.
Daniel Newman: And so the full stack-
Sumit Sadana: It’s a content increase.
Daniel Newman: It’s a content increase, which is tremendous, of course-
Sumit Sadana: Tremendous.
Daniel Newman: … The overall disaggregation of silicon, the NPU, VPU, DPU, IPU, CPU, and memory of course, is creating the need for more disparate silicon, and as the TAM and the overall units grow, memory grows as a corollary piece.
Sumit Sadana: And GenAI is going to come into the car, too, and it’s going to create a lot of capabilities and make it more natural for drivers and passengers to interact with the car.
Daniel Newman: I saw a few demos here and they were pretty cool. I always wondered what that light was that was on, and my wife’s like, “You should probably check that out,” and I’m like, “I’ll just drive until the wheel falls off.” Cars have been pretty smart, but not that smart. So, we have just a couple of minutes left, Sumit. Where does this go from here? What’s the next wave?
Sumit Sadana: I think the industry in 2023 went through a lot of challenges on the supply versus demand side, and a lot of supply has been cut, in the form of CapEx reductions and under-utilization, which is now turning into lower structural wafer fab capacity. So, coming into this upturn, supply has definitely been reduced. 2023 was a very rare year, the first in the history of the industry where bit supply growth was negative, because of all of these actions. And in 2024, until profitability gets back to robust levels, the appetite to invest in CapEx is still low, because we have to be disciplined about this. Micron is certainly continuing to see reductions in 2024 wafer fab equipment spending compared to 2023. So, we are staying very disciplined on the supply side, and all of the dynamics we discussed on the demand side are really helping drive improved pricing. We expect pricing to continue to improve throughout 2024, which would then drive improved financial performance as well.
Daniel Newman: Absolutely. I see it’s going to be a memory moment as some of that pricing power comes back, and anyone that follows the cycles of semiconductors understands this. Perhaps one of the biggest cycles we’ve seen, if not the biggest, at least in my life, is going to be powered by this AI movement. So Sumit, I really appreciate you spending some time here with me at CES 2024.
Sumit Sadana: Thank you, Dan. I appreciate it.
Daniel Newman: All right, everybody, hit that subscribe button. Join us for all of our Six Five coverage here at CES 2024, and of course, join us for all of our episodes of The Six Five: we’ve got On The Road, Insider, In The Booth, and our Six Five Summit. We appreciate you tuning in, but for now it’s time to say goodbye. We’ll see you later.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and across hundreds of other outlets around the world.
A 7x best-selling author, including his most recent book, “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.