On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Unit at AMD, for a conversation on the open data center infrastructure ecosystem for AI and AI hardware.
Their discussion covers:
- What AMD’s customers are excited about around AMD and its AI vision as this AI inflection point continues to grow
- AMD’s emphasis on the need for an open AI ecosystem (software, hardware) and how customers benefit from open ecosystems
- The strategy behind combining CPUs, GPUs, and memory into a single package for AI and HPC workloads
- A behind-the-scenes look at AMD’s work on exascale-class supercomputers powered by AMD
- Where AI data centers may go in the future and the “next big thing”
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is on the road at AMD’s Advancing AI event, and there have been some huge announcements all the way from the data center to the client computing and everything in between. Dan, it’s great to see you.
Daniel Newman: Yeah, it’s great to be here. Today was a high energy day. Right off, in the first five minutes, Lisa Su, CEO of AMD, was on stage, and she was moving and covered a lot of ground. But this isn’t just about today, Pat. This is the day that the market’s been waiting for, a lot of people out there have been waiting for in the AI and silicon space. This may be one of the biggest days of 2023.
Patrick Moorhead: Yeah, and it’s incredible on a lot of things, and if nothing else was reinforced, it was the need for openness, right? Open models, foundational models, open software, open networking lanes that I know you and I have talked about networking a lot as that missing piece that not enough people were talking about, but we got it all today. And I’m really pleased to introduce Forrest Norrod, who runs the data center business at AMD. Forrest, great to see you.
Forrest Norrod: Great to see you Pat. Good to see you, Dan.
Daniel Newman: Yeah, it’s good to have you here. You must be smiling ear to ear even though you might look a little stoic. Pat, we’ve been around enough events to know the amount of tension, the buildup, the excitement. Also-
Patrick Moorhead: He was rocking on stage. I’m going to tell you the education that I got too. I took some notes. And as an analyst, sometimes we don’t admit when we take notes, but I was- particularly on the opus and networking, so I appreciate that.
Daniel Newman: As a modern, young millennial myself, I note things by tweeting and then I go back and look at them later.
Patrick Moorhead: And then we do the long form like analysts do, but we did put a lot out there.
Daniel Newman: I’m joking.
Patrick Moorhead: I know.
Daniel Newman: All right, we’re good. So AI is at this massive inflection and it was a theme of the day. It was the theme of the year really since about November 30th, 2022, but you’ve been working on it a lot longer than that.
Forrest Norrod: Correct.
Daniel Newman: And it’s at this inflection point, and here you are entering a market, you’re being looked at as the rising competition in the data center, very compelling numbers, very compelling metrics. How are you viewing this moment in time and how are the conversations with your customers? Are they feeling confident? Are you feeling confident that you’re prepared to compete and that they’re really willing to go all in? It seemed that way from today, but interested in how you’re seeing that.
Forrest Norrod: Yeah, we definitely think today was just an incredible day. It was the culmination from our point of view, not just of what’s been going on this last year, but quite candidly, we introduced the MI300. We’ve been working on it for over five years. People sometimes don’t appreciate the complexity of these chips and what we’re trying to do so that these multi-generational roadmaps get set in place years in advance. And particularly for the MI300A, which has some really interesting technology, the chiplets, the mix of processes, the large package, on the MI300A, the combination of CPU and GPU together. We’ve been working on it for a long time. And so for us it was an incredible day to have that culminate in not just a launch, but in really strong customer acceptance and the excitement of our customers that we’ve got. Not just the silicon, but all the work that we’ve done on the software as well has come together at one point where the excitement of AI is there. That’s the thing we didn’t know five years ago, right? We didn’t know. We believed that this AI inflection point would happen at some point, but you don’t know when. And so when we started this journey, we didn’t know that the whole world would be revolving around generative AI today, and the whole world would be saying thank heavens that AMD is offering us a really solid, complete AI solution, hardware and software that gives me choice and then also helps foster innovation across the industry.
Patrick Moorhead: Yeah, Forrest, a couple thoughts there. So first of all, thanks for reinforcing what I’m trying to reinforce when it comes to silicon, which is, again, software is not easy, it’s really hard, but you don’t have to start five years out and plan it. But with semiconductors, you’re locking in on architecture five years in advance, and then, oh my gosh, once you even get the silicon back, you’ve got at least another year to get it into high volume. But when you add the complexity of chiplets and then you layer on the software, it really is a big day. So I talked a little bit in the run-up about the discussion around openness, and everybody has their definition of open. Some people might say, well, when it’s not open, it goes faster, when you look at full stacks. But can you talk about … and it’s a little bit of a leading question in that Dan and I are both of the view that more competition is better, but can you talk about the value of open in the context of what you said on stage?
Forrest Norrod: Well, I think, first off, we think open is absolutely crucial, and as a company that’s long been the orientation-
Patrick Moorhead: It has been the hallmarks of AMD.
Forrest Norrod: Yeah, we’ve always been open, we’ve always been partnering. And the reason is that, quite frankly, we think that’s where the most value and the most innovation flows from: open ecosystems where others can come together and add value and add new ideas to your platform. And so the question, to my mind, is open versus completely proprietary and locked down … if you’re proprietary and locked down, that’s a statement that you believe that your engineers are smarter than everybody else’s.
Patrick Moorhead: The entire industry combined.
Forrest Norrod: That’s right, and we certainly have never been arrogant enough to think that, and so we think open’s critical, and we also hear that in spades from our customers. They want open platforms so that they can add value, so that they can add their own innovations around the ecosystem, and also so that they’re not locked in to something proprietary that maybe has unfavorable economics, if you want to say.
Patrick Moorhead: Yeah, no, I hear you. And again, at AMD Hallmark for a long time. And the reality is sometimes open doesn’t go as quickly, but it absolutely looks at least by … and in the end, the response from your customers and your partners, it was really a tour de force today that reinforced that.
Daniel Newman: If you look at, I think just yesterday ahead of your event, AMD’s part of this new IBM- and Meta-led AI Alliance, which is not just tech companies. It’s laboratories, institutions, universities. It’s up and down the stack. It’s security, it’s SaaS companies. And that really does come down to the fact that I think people understand the critical nature of getting this right, and that having too few control too much would be like one or two companies ending up controlling the entire internet. Now, I’m sure someone will argue that that did in some way happen, but really, it’s been more and more democratized as time has gone on. AI is going to be similar, it’s going to change the world. It’s changing our path. And like I said, Lisa alluded to how much it’s changed everything in the past year. Something else has changed a lot for us in the past few years, and that’s been the packaging conversation. A little pivot there. We were real deep, but now we’re going to get to packaging. We’ll go from being big picture guys to chip guys. But packaging, in terms of bringing together all your memory and compute into a single package, seems to be a trend line in the industry, and today it seems to definitely be something that you’re leading with. Talk about how that’s enabling you to advance, to innovate, to drive next generation designs, and of course to stay competitive.
Forrest Norrod: People have talked for most of the last decade about how Moore’s law is under threat and it’s slowing down. The traditional way that we used to get more capability and more price performance was to just rely on the process, just get more transistors. And they’ve been talking about how that’s been slowing down. At AMD, I’d say we looked at it very dispassionately almost a decade ago and said, yeah, you’re right. And so therefore, and that’s the step that a lot of people didn’t take, therefore, we have to do it differently. We have to embrace chiplets. We have to look at how certain types of logic and other functions scale differently as the process generations continue. And so we’re going to want to have chiplets of different processes mixed together in one package in order to continue to deliver the best price, performance, and power efficiency. And once you accept that you’re going to have to do that, packaging becomes the obvious new critical thing. It was almost an afterthought in years past, but now it’s central to how you design these chips, how you design these systems. And so we invested massively, and I do think we’re generally four or five years at least ahead of most of the industry in terms of embracing this technology. You see Intel now with Sapphire Rapids has gone to a four-die topology, very, very similar to what we did in the first generation EPYC products back in 2017. They’re advancing rapidly as well, but they’ve only begun that journey. We think everybody doing high performance parts is going to go after advanced packaging, 2D and 3D, over time, but we think we’re ahead.
Patrick Moorhead: Yeah. Well, first of all, even when I was in the business nearly 13 years ago, packaging was an afterthought. It was something you threw over the wall. Was it organic, PGA, BGA? And that was almost the extent of it, but now it is an equal partner with chip design. And I remember the big bet on Ryzen, because there was this discussion about, oh, MCM was always slow because you just couldn’t get the interconnect fast enough, and you couldn’t have it sucking too much power, right? So hats off to you. I think it started with Ryzen, with Infinity Fabric pulling this together, and then you changed the topology and improved it with EPYC for data center applications.
Forrest Norrod: Right.
Patrick Moorhead: And then with the Xilinx acquisition, they had a lot of HBM, multi-die designs as well, so you pulled that capability in. And then here on the data center GPU side, the accelerator, I think you did a really good job with the MI300A and the MI300X showing all the different pieces and how they come together. So yeah, you took the huge bet, I think you said a decade ago, and it has paid off in spades. You’ve increased market share with EPYC, you’ve increased market share with Ryzen, and we’ll see about data center GPUs. I want to shift the conversation a little bit to high performance computing. You have won some very major national lab contracts, and these things get won off a piece of paper and a belief in the technology. And you were chosen, AMD was chosen, the solution was chosen to power these. Can you talk a little bit about these two, I’ll call them exascale-class compute wins?
Forrest Norrod: Yeah. So the first one that we’re super proud of is Frontier, which took over the number one position on the Top500 about a year and a half ago.
Patrick Moorhead: Right.
Forrest Norrod: And that was MI250 and a third generation EPYC derivative part. And you’re right, we really won that deal three or four years earlier, right? And you’re working with the labs, you’re working with your design partner, in this case, for both Frontier as well as El Capitan, the other one we’ll talk about in a minute, it was Hewlett Packard Enterprise. Well, actually it was Cray to begin with at that time, and then Hewlett Packard Enterprise. But you’re doing advanced design, and the customer has to bet on the credibility of what you’re saying long before anything exists.
But we viewed both projects as incredibly important for AMD in terms of our ambitions in HPC, our ability to have flagship design wins to anchor our roadmap as well as drive us, and quite frankly, in terms of getting partners, particularly in the HPC realm, aligned to our software ecosystem to help flesh out the overall solution. So we were super proud of Frontier, and it’s still number one on the Top500 a year and a half later. It’ll be number one for at least two years. We think that-
Patrick Moorhead: I haven’t seen that before, but maybe I’m not paying attention long enough where something has been there for so long.
Forrest Norrod: I think Summit was on there as well for quite some time, but yeah, that’s a little unusual, and we think the next big system certainly from us is El Capitan based on MI300A. And that’s a really cool part because it combines both the CPU and the GPU together in one package, which the strong feedback from the HPC customers was that really can help speed up their applications. And I think that’s really why they entrusted that design to us, and we’re building it now, and so I can’t wait to see it go live next year.
Patrick Moorhead: Yeah, I’d love to see … AMD spearheaded an initiative called HSA, Heterogeneous System Architecture, which was about shared memory. This was, I think, 13 years ago, right? It was amazing for me. By the way, I wrote a white paper on it. I think it was my second at my analyst company. But seeing that concept of shared memory come to fruition is pretty cool. And by the way, it’s been adopted even by smartphone vendors too. So anyways, congratulations on that.
Forrest Norrod: Thank you.
Daniel Newman: Yeah, it was interesting. Out in Denver at Supercomputing this year, the show was all the rage. People were talking about pre-2020, before everything got shut down, when the show had been very limited, a bit narrower: researchers, institutions. And this year you couldn’t get in the door. It was jammed wall to wall, and my joke was FLOPS to TOPS. This was the year that it all fit-
Forrest Norrod: Well, yeah, no, it has become … it’s the de facto AI hardware conference as well as Supercomputing conference, so it’s been great to see.
Daniel Newman: The pivot was palpable this year. So as we wrap up, for someone that sits in a position like yours, leading a lot of the charge from a development standpoint, and also oftentimes in front of customers, working closely with this partner ecosystem, you had some really impressive names on stage with you. Where does it go? How fast does it move? I mean, I know the law of diffusion of innovation would suggest that these periods continue to get shorter, but there’s not much half-life left with humans. I’m watching the machines, but we need more … our GPUs need to slow down a little bit.
Forrest Norrod: I’d say the pace is incredible. And by the way, it is a challenge, I think, for many institutions. The pace of AI hardware, I think, is picking up too. You’ll see annual introductions now of new AI hardware, which is akin to the PC industry.
Daniel Newman: Yep.
Forrest Norrod: But the machines, the systems that these are going into, are fantastically more complex than that. And it’s difficult, I think, for many organizations, customers and partners, to absorb that rate of change. What I think you’ll see is many customers will skip a generation. They’ll still deploy every other year, but they’ll be out of phase with each other. Overall, I think customers are struggling to deal with this rate of innovation, but they’re so excited at the promise of what AI can bring to everything and the productivity enhancements that we can see that it’s an absolute imperative for everybody across the industry to move as fast as they possibly can. And the things that we’ve seen from the productivity side … even internally, we’ve got about 100 AI projects underway. By that, I mean projects where AMD is using AI, and we’re seeing massive-
Daniel Newman: Things like hiring, things like HR, like sales or-
Forrest Norrod: All of those, but also on the engineering side for design, for validation, for compatibility. There’s all sorts of different applications, and we’re seeing massive productivity enhancements, which is helping us design faster, so to maintain this fast rate. But it also gives us a lot of confidence that this is not a bubble, this is not a flash in the pan, that the productivity enhancements that are being promised out of AI really are coming and justify increased investment in IT. So I think it’s going to be an exciting few years, for sure.
Daniel Newman: Well, Forrest, thanks so much for sitting down with us today.
Forrest Norrod: Well, thank you guys. Really appreciate it, and thanks for coming.
Patrick Moorhead: Thanks.
Daniel Newman: All right, everybody, hit that subscribe button. Join Patrick and myself for all of the episodes here at the AMD Advancing AI event in San Jose, California. Big day, Pat, for AMD, big day for the AI space. We appreciate you all tuning in. We’ll see you all soon.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC, Bloomberg, the Wall Street Journal, and hundreds of other sites around the world.
A 7x best-selling author whose most recent book is “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.