AMD Advancing AI Event

The Six Five team discusses the AMD Advancing AI Event.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: All right, so let’s talk about AMD Advancing AI. I’m going to do the highlight reel and then I’m sure you’ll like to get into the mud with it because you’re always really good at that stuff. First of all, this day was a big day that had been in the works for a couple of years. We’ve been hearing about the MI series. We’ve been hearing that AMD is going to have a strategy to compete with NVIDIA. We’ve been hearing the market needs a competitive data center GPU. We were hearing that maybe it was an inference powerhouse, and then we heard maybe it will be training and inference. On this day, AMD was able to march out with one very, very competitive data center GPU for the cloud, the MI300X.

Second, they were able to march out with partners that are incredible validators of what they are doing: Meta, OpenAI, Microsoft on stage with them talking about utilizing their new MI300X as part of their go-to-market strategy for different uses. Some for all uses, some for inference uses, but nonetheless, using the AMD products. The company was able to come out and talk about higher layers of abstraction in their ROCm 6, which is the critical get-it-right that AMD needed to be able to lure in a broader part of the ecosystem to be developing for their hardware, and had some very positive momentum around ROCm. They were also able to announce an on-prem-friendly offering for HPC and accelerated computing. They had the likes of Dell and others on stage that they were partnering with on this.

Then, Pat, they really ran the gamut and brought the PC in and they’re like, “Well, let’s not just make it a data center show. Let’s do PCs too, while we’re here,” and were able to announce the newest version of their AI PC. You and I got to talk to their entire executive team. We talked to Lisa Su, had a great interview, can’t wait to share that with all of you. We also talked to their head of data center, Forrest Norrod, and their CTO, Mark Papermaster, and got to talk to their AI PC team. Very, very exciting week. Pat, here’s the question. This is the question everybody asks. There are two questions. One, is AMD competitive with NVIDIA? Two, is AMD going to be able to…? Are the cloud providers going to be a partner or a competitor, and when does that happen? Those are the two questions that everybody asked me this week, on both US and international CNBC, and we both did some of that and those were the questions.

One, Pat, this is not a finite race that’s been run and it’s not over. Lisa mentioned that. You look at the TAM expansion, you look at the demand, you look at the supply chain, you look at the need for alternatives. AMD is in the race. They’ve got a multi-billion dollar pipeline, and I think that pipeline will expand. I think this announcement is validation. By the way, I still think NVIDIA’s got a really big lead and they’re going to continue to innovate. It’s not A or B, it’s A and B. The other thing is, look, I think that the cloud providers are going to continue to be good purveyors of merchant silicon. They will build vertical integrations. They will partner up with all of the silicon providers, NVIDIA, AMD, Intel, and they will do that for as long as it makes sense. This is collaboration and competition at its best.

And by the way, they don’t even talk to the same people about it all the time. It’s just not the same thing. But the fact of the matter is that you’ve got companies that are deploying workloads in the cloud that want Intel, some want AMD, some want NVIDIA. They are going to be successful because they’re building a good product with good specs. Pat, last thing I want to say, because I can talk for a long time about this, but I’m trying to keep this on time, is that I was really impressed at the boldness of AMD. They’re not always the company that wants to really come out and punch the competitor, but to some extent I thought they were very bold coming out. I know there’s a new GPU coming from NVIDIA, but look, you can only compare what’s out and available in the market right now. AMD was able to take advantage of the moment comparing with what’s in the market and show impressive training and really incredible inference capabilities on the new MI300X, which, let’s be candid, was the star of the show.

Patrick Moorhead: Dan, there is so much to talk about here. You’re right, we could do an entire show, but tune into all the interviews we did with the senior executives, Lisa Su and three-

Daniel Newman: Forrest Norrod, Mark Papermaster.

Patrick Moorhead: Exactly. Victor Peng. I was on CNBC last night, like you. It’s funny, they should just bring us on at the same time. That’d be hilarious. But here’s where I am. It took AMD many years to field a credible AI-focused GPU, and it’s the MI300X. This is not just about NVIDIA having a 52-week lead time. This is about AMD bringing in some killer hardware, and it’s also on its sixth generation, I think after 13 years. One of the first white papers I wrote and we published was on ROCm. ROCm is essentially like NVIDIA’s CUDA, an abstraction layer that sits below a framework like PyTorch. You can write directly to it, but the point is it is now competitive for AI. I would say ROCm 5 was competitive for machine learning. ROCm 6, and based upon what people said on stage…

And even Meta, people don’t give Meta enough credit for the research, the science and the code that they delivered. They fricking invented PyTorch, folks, and they’re very good. They have LLaMA models now that are open source to really shake up this industry. And for them to say anything nice about ROCm 6 I think is a huge accolade for the company. Now, they didn’t say, “We’re using it for sure, a hundred percent, not just going directly to PyTorch,” but for them to say anything nice about that, I thought was a big deal. You can’t argue on X with Azure and OCI and Dell Technologies and Lenovo and Supermicro.

By the way, Lenovo and Supermicro especially equate to people asking for AMD. Dell doesn’t sell what Dell customers don’t ask for. They do not push products. That’s just not their thing. Lenovo did, in the last three to four years, help create markets for AMD. In some ways, I look at them as kind of the replacement for HP and HPE as it relates to partnering with AMD.

Final word on AI PCs. It’s interesting, I really didn’t even talk a whole lot about the fact that the company has this Ryzen 7040. Not a lot of people were talking about it, and quite frankly, it’s because of the software. What struck me is that AMD’s PC strategy is very similar to Intel’s in the beginning, which is, we’re going to leverage CPU, GPU and NPU to deliver the AI magic. That’s a challenge, and it takes resources, because it’s harder to program across CPU, GPU and NPU than it is to just go to the NPU. That’s where I think Qualcomm in the middle of the year is going to have an advantage on this whole thing. It’s going to be interesting how this plays out. But at a minimum, AMD will be competitive with AI PCs in 2024 as long as they can get some key ISVs on board with optimizations in the beginning for this CPU, GPU, NPU approach that doesn’t heat up the system.

Then secondly, they didn’t say in their roadmap that they were going to up the NPU, but it’s just kind of an obvious thing that’s going to happen. Qualcomm’s got 40 to 45 TOPS showing up mid-year and it’s just so easy to program.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
