
AMD’s AI Business Update with Six Five Media – Six Five Media at AMD Advancing AI

It’s Six Five Media at AMD Advancing AI, with our hosts Daniel Newman and Patrick Moorhead joined by AMD’s SVP of Artificial Intelligence Vamsi Boppana and Brad McCredie, SVP of Data Center GPU and Accelerated Processing, to discuss the chipmaker’s full-stack data center solutions, EPYC processor market share, and the evolving roles of GPUs and CPUs in AI.

Their discussion covers:

  • The evolution of AMD’s strategy from providing compute engines to offering comprehensive data center solutions
  • The integration of AI, Instinct Accelerators, networking compute, and EPYC CPUs into full stack solutions amid changing customer preferences
  • The significant market share gain by AMD’s EPYC processors, customer feedback, and how AMD continues to execute its roadmap effectively
  • The emerging landscape of AI, with a focus on the balance between GPUs and CPUs in advancing AI technologies and what customers should consider as they venture into AI

Learn more at AMD and AMD Advancing AI.

Watch the video below, and be sure to catch our full coverage at https://sixfivemedia.com/amd-advancing-ai/

Or listen to the audio here:

Disclaimer: Six Five Media at AMD Advancing AI is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On the Road here in San Francisco at AMD’s second annual Advancing AI event. It’s been a pretty incredible day here. We love AI, and when it spans the data center, CPU, GPU, networking, client computing, and pretty much everything in between, it’s a great day.

Daniel Newman: Yeah, it was a lot to digest, Pat, but we know how important AI is. It’s not just technology, it’s really changing the world. And so when companies are talking about it, there’s kind of these different talk tracks. But what I really liked about today was it wasn’t just, hey, here’s more numbers and here’s more parts and here’s more pieces. It was really a very system approach. It really started to put the whole story together and I think as analysts we want to see that. We want to see how it’s coming together and how it’s driving productivity and how it’s helping drive businesses and people’s lives forward.

Patrick Moorhead: Yeah, I mean so much was talked about, but really what everybody wants to know is what is going on with GPUs, what’s going on with software, the ecosystem. Vamsi, Brad, great to have you on The Six Five for the first time. Hopefully not the last, but thanks for coming on.

Brad McCredie: Thanks for having us.

Vamsi Boppana: Delighted to be here.

Patrick Moorhead: Yeah, you guys crushed it.

Vamsi Boppana: Thank you.

Patrick Moorhead: Congratulations.

Brad McCredie: Thank you for saying that.

Daniel Newman: Yeah, it’s been big. And Pat, nothing is probably more on people’s minds than this AI number. Lisa came out with a bang, with a new 60% CAGR over five years. Was it 500 billion? I think the two of you, where you sit, have to be particularly optimistic when you see that. That’s really significant growth. I mean, I think that’s more than twice what we’re forecasting. And what I’ve been saying is that’s the acceleration of demand for accelerators. It’s going really, really quick. Brad, I’d love to start with you though. As I said, congrats on the day, but your data center strategy has evolved quite a bit, from parts to solutions. Talk about that journey a little bit from where you sit.

Brad McCredie: Yeah, I mean the truth is, has it evolved? Yes. Why has it evolved? It’s customer driven. The scale at which our clients are doing things is at the data center level now. It is just there. And so when they come to us, if we just show up with a GPU and say, here, have at it, they’d look at us kind of funny. So we’ve now gotten to the point where we’re delivering entire solutions across the whole rack, across the data center. Like you said, we have the CPU, the GPU, and the network as part of the company. Those are acquisitions we’ve made. One of the most recent acquisitions we made to get there was ZT Systems. That’s a very big piece of us being able to deliver an at-scale solution. And that capability, combined with a lot of the IP and the know-how of AMD, lets us provide data center scale solutions.

Patrick Moorhead: Yeah, it’s incredible the way the industry evolves. We start off with piece parts and we do our best to bring them together, with the time it takes to get them to market and to try to increase reliability. And what we saw with AI is that it broke a lot of things, whether it was trying to do a training run and the networking didn’t work, or we burned out a GPU doing something. Then you layer on all the important software that goes into it, and we’re shifting from piece parts to complete systems. That doesn’t mean that everybody wants to buy these, but they at least want to make sure that thought was put into it. So Vamsi, this question is for you: what are you doing to pull all these players together? You’ve got your internal offering that you need to pull together, but you’re also working with a lot of partners out there. AMD stands for open, so you’re dealing with a lot of people in the open ecosystem.

Vamsi Boppana: Yeah, great question, right? When we think about the ecosystem, it has so many elements. There are the hardware elements, partnerships that we have with the OEM community, for example, to build efficient systems. When we think about networking, it’s the consortia, whether it’s UALink or the Ethernet consortium. But I’m quite passionate about software, as you guys know, so I’ll spend a lot of time on that, because that’s where we spend a lot more of our energy as well. It’s basically a strategy that we identified about two years back: we said software would be a core pillar underpinning our overall AI strategy.

Patrick Moorhead: Right.

Vamsi Boppana: And we felt like if we hitch our wagon to the open community, it just goes so much faster, and AI is innovating so fast. We felt like that would be a really, really good approach for us to go to market with. And within that, what we said we would do is, obviously we have to do the foundational pieces of the stack, but everything that we do above the closest-to-metal pieces, we are going to put out in the open so that the community can co-innovate on top of what we produce. And that’s actually proving out, whether it’s PyTorch in the frameworks world, or the model hubs like Hugging Face that are starting to work with us. And you saw in the developer community event today, the lead people that put together these amazing frameworks like Triton and vLLM and so on, they’re all very happy to work with us, and it’s just such a positive feedback cycle.

Patrick Moorhead: Well, I mean there’s two ways to approach it. There’s closed and there’s open, and more times than not, open is the accelerator for innovation. It lowers costs and just moves things more quickly. I have to mention, what you did with ROCm the past couple of years has been pretty amazing. ROCm had been around for a long time, had been primarily HPC focused, and I knew last year when Meta got up on stage and gave you compliments about your software, okay? I mean, they’re some of the hardest graders in the industry.

Brad McCredie: They are.

Patrick Moorhead: Invent…?

Vamsi Boppana: We get grades pretty frequently.

Patrick Moorhead: No, no. Good feedback. Good feedback.

Vamsi Boppana: Good feedback.

Patrick Moorhead: I noted right then that something had changed. They didn’t say, this is great. They said big improvements, okay? And what we heard from them on stage today just validated the accelerated growth that you’re driving there through hardware and software. So I had to note that.

Daniel Newman: As I was listening to Vamsi talk about it too, I was thinking about the parallels between Meta and AMD, and how it seems you’re sort of thinking about architectures a bit the way they’re thinking about software and applications, and that’s probably a reason why you’ve had so much success working together. Of course, that 405B announcement was pretty stunning. I mean, as an analyst, I was looking at that and I kind of did a double take; that was instant tweetable material. Like I said, anytime I’ve seen it go outside of what’s been the only acceptable way AI gets done, I like to share. And so it was a great example. Brad, I know you’re on the GPU side and on the Instinct side, but for the data center as a whole, there’s this relationship between CPU, GPU, and networking. I feel like we can’t not talk about EPYC a little bit. It’s north of 30% market share. I mean, who’d ever thought you’d be there like five years ago, when possibly you got to 10 at one point, right? And that was a big…

Patrick Moorhead: We were in the 20, I think it was 27%.

Daniel Newman: You got there. Well, I only discounted you by 17. The…it was a long time ago. I was in high school. But thirty-something percent overall, almost 50, 60, up to 80% at some of the largest cloud providers. AMD, talk about how that’s impacting your strategy, the customer approach, and of course how that impacts your part of the business.

Brad McCredie: No, it’s a good question. And actually it’s been a place where we’re continuing to learn about this amazing and fast-evolving workload. First of all, I always say, any time I get a chance, that I have never in my career seen a workload that absorbs every piece of the computer like AI. I mean, it uses every gigabyte per second of network bandwidth you give it, uses every gigabyte per second of memory bandwidth, but also CPU-to-GPU bandwidth, a big piece of it. We store so much of the data on the CPU side and the memory attached to it. You’ve got to move it back and forth quickly. So the co-development of the CPU and the GPU is very important.

Then another thing we’re seeing is that the performance of the CPU itself is having an impact. Forrest alluded to it a little bit in his talk today, but we’re seeing a double-digit impact on the performance of the whole AI solution just based upon the head node, which is largely the CPU. There you’ve got the frequency of the CPU impacting kernel launch latencies and things like that. So, as this workload continues to improve in performance each day, each year, designing all the pieces together is going to be the backbone of being able to continue that trend of performance increase.

Patrick Moorhead: Yeah, Dan alluded to this before, but there’s the whole tracking down of fact versus fiction, and one of the conversations is that the importance of the CPU is diminished: it’s a head node, it’s like a controller, it’s like a traffic cop, it’s moving stuff back and forth. And on the hardware side, you’re super leaning into that. A question, Vamsi, and Brad, you can chip in here too. What is this going to look like in the future, right? Is the CPU going to become more important? Is it becoming less important? What does the future look like?

Brad McCredie: I don’t want to contradict you on this, but I have a definite opinion.

Vamsi Boppana: Why don’t you go first and I will follow?

Brad McCredie: I think it’s going to become more important. I think that’s an easy answer, and it really just goes back to the prior point I made. I’ve never seen a workload with so many programmers, so many brilliant programmers, quite frankly, working on it every day. I mean, like you were alluding, you’ve done CPUs before. It used to be in the old days, we got to do all the innovation we wanted to do as long as they didn’t have to change the code.

Patrick Moorhead: Yes.

Brad McCredie: Here, 50 people show up every meeting, ready to change code if you give them more performance. So as you give them performance in the CPU, they, being the model developers, are going to find a way to use it. So I think anything that is offered anywhere in the system is eventually going to keep getting soaked up. And I agree with Dan, I assume you’re talking about Dan McNamara, that it’s kind of an old wives’ tale that it’s just going to be a traffic cop. We’re seeing the impact on performance, and I just think it’s a matter of time until, as the models develop, they’re going to be like, oh, there’s some flops over here. I’m going to go use them for some useful purpose.

Patrick Moorhead: All right Vamsi-

Brad McCredie: You can argue with me.

Patrick Moorhead: Is he speaking the truth here or what?

Vamsi Boppana: No. Look, my view is largely the same as Brad’s, but I might just say it slightly differently. I think this workload is much, much different from anything we have seen in the past. It’s all about co-design, right? Now, co-design is, well, we’ve done co-design forever. People use that word, but this is nothing like anything we’ve seen before. The co-design here is truly just remarkable. And so, what gets run on the CPU? How does the orchestration work? How does an agentic pipeline actually get stitched up, right?

Patrick Moorhead: Yes.

Vamsi Boppana: All that is actually just in front of us, right? Yet to evolve. And so I do feel like having a thoughtful system design, with some elements that are more scalar compute and some elements that are more parallel compute, I think is very much so… by the way, another completely different perspective that I’ve gained is from our embedded portfolio. Some of you I’ve spoken with as well; we had Versal some time back, where we had a scalar compute Arm complex together with an FPGA, which was doing insanely parallel things. Then we actually had an AI Engine, a dedicated dense VLIW processing machine, all in a heterogeneous system. That’s proving super valuable for certain use cases.

Patrick Moorhead: Sure.

Vamsi Boppana: So…

Patrick Moorhead: I love it.

Daniel Newman: Well, Vamsi and Brad, I want to thank you so much. I know it’s been a really busy day here. Thanks for diving into all of this. Thanks for the technical breakdown. I love getting the software-hardware perspective. I know it’s all sort of becoming this one plane. I’ve never seen it come together more. And by the way, Pat and I like to say it’s nice to be back in an era where chips are cool again.

Brad McCredie: Yes.

Patrick Moorhead: It is!

Vamsi Boppana: Hennessy and Patterson, right? It is the golden age of architectures again.

Daniel Newman: Yeah, it’s good to be back. It’s cool to be a chip guy again. Pat, I was worried about you.

Brad McCredie: Job security.

Daniel Newman: Brad, Vamsi, thanks so much. Look to have you back again soon.

Vamsi Boppana: Thank you so much.

Daniel Newman: And thank you very much for joining us and being with us here at Six Five On the Road in San Francisco at AMD’s Advancing AI. Subscribe, be part of our community. Join us for all the content from this week’s event, but we got to go for now. We’ll see you all soon.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
