
AMD Datacenter & AI Event

The Six Five team discusses AMD Datacenter & AI Event.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Pat Moorhead: So AMD had a big data center and AI event in San Francisco, which I was invited to but wasn't able to attend. But I copiously took notes and tweeted as Lisa Su and company went through all of the announcements. And I've got to tell you, it was a big one. I mean, literally they covered at least eight very meaty topics. First of all, on the data center side they talked about three flavors of EPYC, Genoa, Bergamo, and Genoa-X, and how they were doing in those areas. They also talked about the next-generation Pensando DPU. But what investors particularly wanted to know, and quite frankly the entire market is AI crazy, is what is the company doing in hyperscaler data center AI?

And AMD did not disappoint. They made a bunch of announcements. First, they gave an update on the Instinct MI300A that's targeted at HPC markets. That is essentially a CPU and a GPU with a ton of memory on it, really for HPC markets. It may or may not end up competing with Nvidia's combination with the Grace platform. They also brought out the MI300X, which I think was really the star of the show, what everybody was waiting for.
It is for large language model inference, sampling in Q3 with 192 gigabytes of memory, where most of Nvidia's cards have 80 gigabytes, which essentially equates to being able to do more work with fewer GPUs. I'm super interested to see, though, how it competes with Nvidia's 188-gigabyte H100 NVL solution.

They showed a 40-billion-parameter Hugging Face model running on one card, and this was super impressive. They also brought out a new platform called the AMD Infinity Architecture Platform. That's eight MI300X cards pulled together on one platform with Infinity Fabric as its interconnect. So cool stuff. Now let me boil all this tech speak down to where I think AMD is in AI. But I want to first caveat that I think the way to look at AI is holistically, going all the way from the smallest IoT endpoint to the hyperscaler and everything in between. That is the full capability: it's training, it's inference; it's CPU, GPU, NPU, FPGA across multiple types of platforms. But when we narrow in on this hyperscale AI accelerator market that AMD CEO Lisa Su says will be $150 billion, up from $30 billion this year, we have to auger in on that.
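To make the memory claim concrete, here is a rough back-of-the-envelope sketch (my own illustrative numbers and overhead assumption, not AMD's or Nvidia's): a 40-billion-parameter model in fp16 is about 80 GB of weights alone, which is why it fits comfortably on one 192 GB card but is a squeeze on an 80 GB card once runtime overhead is counted.

```python
import math

def weights_gb(params_billions, bytes_per_param=2):
    # fp16 = 2 bytes per parameter; weights only, decimal GB.
    # KV cache and activations add more on top of this.
    return params_billions * 1e9 * bytes_per_param / 1e9

def min_gpus(params_billions, gpu_mem_gb, bytes_per_param=2, overhead=1.2):
    # overhead=1.2 is my own rough fudge factor for KV cache,
    # activations, and framework/runtime memory -- an assumption.
    needed = weights_gb(params_billions, bytes_per_param) * overhead
    return math.ceil(needed / gpu_mem_gb)

print(weights_gb(40))     # 80.0 GB of fp16 weights
print(min_gpus(40, 192))  # 1 -> fits on a single 192 GB card
print(min_gpus(40, 80))   # 2 -> spills past one 80 GB card with overhead
```

The point of the sketch is only the inequality, not the exact figures: at 192 GB per card, whole model classes that previously required multi-GPU sharding become single-card inference jobs.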

And AMD did not disappoint with this MI300X. Now, did I expect AMD to come out with some statement that they were going to land a knockout blow on Nvidia? Absolutely not. You'd have to be a fool to expect that. And some short-term investors sold AMD stock based on this. I view this as the beginning of the AI data center move that AMD is making, not some midpoint. Now, there's been a lot of work on high performance computing, but don't confuse that with AI. Sometimes they cross, but many times they don't. I believe that in the end, I would say within the next six months, we will see a major hyperscaler make an announcement with AMD, and they will drive some serious volume. Now, is that based on some mistake that Nvidia is making?

Absolutely not. But here's the thing, these hyperscalers want choice, and Nvidia has 95% market share in the hyperscaler data center AI accelerator space. So I think AMD can pick up 10 to 20% share over the next four years. But that still means Nvidia can drive a heck of a lot of volume. And by the way, I also think Intel is in the mix. So again, anybody who thinks this is going to be a winner-take-all scenario doesn't understand technology ecosystems, or for that matter, business logic. And then by the way, if all that doesn't work, as we saw with Microsoft in 1998 and Google and Amazon in this decade, the global regulatory folks show up on your doorstep. But again, I don't think that's going to happen. I think AMD will get 20% of that market by 2027. But AMD first needs a major hyperscaler to show up with big support that leads to big volume and revenue. If not, all bets are off. Check out my Forbes article where I auger into 1,200 words of pure joy and analysis on AMD's event.

Daniel Newman: You’re done?

Pat Moorhead: I’m done.

Daniel Newman: Anything else?

Pat Moorhead: Nope.

Daniel Newman: No one? All right, listen, that's really good analysis. I'm sitting there and you did hit a lot of the talking points one-to-one that were on my mind. So I just want to make a few reiterating points about what happened yesterday. First of all, this was a seminal moment, because what we just saw was the first company that's really declaratively showing a competitive roadmap against Nvidia right now. And yes, I'm talking mostly about data center GPU and hyperscale AI. And this has to happen. I want to be very clear about that. Technology does not move at the pace it's capable of if in fact you have a single sole-source supplier, or a lack of competition, you could call it. Let's be very clear, right now when it comes to the AI stack for enterprise, it is a monopoly. There's really no other way to look at it. You have-

Pat Moorhead: By definition it is a monopoly, anything over 50% market share in a defined market. The only question then is, are you abusing it?

Daniel Newman: Right, and that's more or less what the tom-toms on the street are suggesting: as the company has gotten bigger, it has had more power in terms of how it bundles its solutions. It's had more pricing power, margins up in the mid-seventies now, the kind of power that Intel perhaps once enjoyed when it had a much more significant market share than it has today. And let's face it, everybody called that into question and called out potential abuse of that power. And to some extent, that actually limited it. And you see over time, one mistake or two mistakes or a few mistakes, and suddenly that monopolistic power, or even just that market dominance, can quickly fade. But even setting aside some of the regulatory concerns, AI is not… We are still in the infancy of seeing the applications and the power of AI.

We're in the earliest days of seeing the number of workloads. For a long time we've thought about data centers through the lens of compute data centers for running the traditional workloads: applications, ERP, the things we're thinking about for our business. There's a whole new data center build-out on a global scale that's going to need to take place for AI. And so you're going to walk into these Equinix facilities and you're going to turn right, and there's going to be traditional data center compute, and you're going to turn left, and there's going to be AI data centers with racks and racks and racks, or what do you call them? Rock 'em, sock 'em. There's going to be racks and racks and racks of compute power for AI workloads and networking for those workloads. And that's going to be the next big boom of opportunity.

So Pat, when you talked about a 20% market share claim, I think they could get 20% without actually taking any revenue share. Meaning Nvidia could continue to grow substantially while AMD ends up with 20% of the market, because there is just so much market, and the capacity for one vendor to take care of all the business opportunity is unlikely. The other thing is these hyperscalers: you already see that they're moving dependence away from the traditional semiconductor and fabless manufacturers, and a lot of them are teaming up with Arm or others to build their own ASICs or chips, whether that's been Meta (Facebook), whether that's been Alibaba. So anyway, that's another big thing that's going to happen.
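That "20% share without taking revenue share" point is simple arithmetic if the market grows as forecast. A hypothetical sketch using the $30 billion and $150 billion TAM figures mentioned in the episode (the share splits are my own illustrative assumptions, not anyone's guidance):

```python
# Illustrative TAM math: a market growing 5x lets the leader cede
# share while still growing revenue substantially in absolute terms.
tam_2023 = 30e9    # hyperscale AI accelerator TAM this year (per the episode)
tam_2027 = 150e9   # projected TAM by 2027 (per the episode)

nvda_share_2023 = 0.95  # ~today's share, cited in the discussion
nvda_share_2027 = 0.75  # assumption: Nvidia cedes 20 points
amd_share_2027 = 0.20   # assumption: AMD reaches Pat's 20% call

nvda_2023 = tam_2023 * nvda_share_2023  # ~$28.5B today
nvda_2027 = tam_2027 * nvda_share_2027  # ~$112.5B, nearly 4x growth
amd_2027 = tam_2027 * amd_share_2027    # ~$30B at 20% of the market

print(f"Nvidia: ${nvda_2023/1e9:.1f}B -> ${nvda_2027/1e9:.1f}B")
print(f"AMD 2027: ${amd_2027/1e9:.1f}B")
```

Under these assumed splits, AMD's entire 2027 slice roughly equals the size of today's whole market, while Nvidia still quadruples its accelerator revenue despite losing 20 points of share.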

But I said this yesterday, Pat, and I think I'll end on this note. Yesterday was a big moment because what the market needs is more than one. The market needs more than one, and it also needs an open-source player to come in. Right now, an open-source approach means much more of an ecosystem-friendly play: you can bring together the networking, you can bring together the compute, you can bring together the applications and the frameworks, and you give the community a chance to build. Do that, and you will see a certain share of the market. I think your predictions are probably in the right ballpark.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
