The Six Five team discusses Computex 2024.
If you are interested in watching the full episode you can check it out here.
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Daniel Newman: I think I did have the opportunity to attend what will be the most formidable Computex possibly in my lifetime. There’s a lot to cover. It started off with-
Patrick Moorhead: … so much to cover. I mean, so much to cover.
Daniel Newman: Yeah, I’m going to pick a couple things, because we got to move. I got a hard stop. I got real work to do, you got to work. Listen, it started with a non-keynote of Computex, but it was the biggest keynote of Computex. And what do I mean by that? Well, Jensen Huang went out to the market. He riled up the town, he rented out the NTU gym, the stadium, whatever you want to call it. And did, for 4,000 people, what I would say was a modified version of the keynote that he gave at GTC. There wasn’t a ton of new there. I’m going to be candid about that. But I think in Taiwan there’s one thing that became immensely evident. Jensen also has… His pseudonym is David Lee Roth. He is the David Lee Roth of AI.
And that was even more evident when he started signing the chest area of women on video. And that got out there, that was a super interesting moment. But overall, he’s a rock star. I think he had a bigger security detail and more bodyguards and people running around with him in Taiwan than any star. I think he’s like Ronaldo here. I think he’s like the Ronaldo of AI now. I’m trying to think of what Dan Ives would say other than the Godfather. Because he likes to do that stuff. But you know what, Pat, this show wasn’t really a data center show, in my opinion. Yes, there was data center technology, but this show was the amalgamation of a January keynote that you gave with six or seven of the biggest OEMs on the same stage at CES, and said the super cycle for AI PCs, now known as the Copilot+ PC, is coming.
And let’s just be clear, that’s what this show was all about. It was all about leapfrogging and hurdles. Qualcomm came out with a bang, following the Build release, with Satya Nadella giving them the backing that they were the Copilot+ PC partner. Built on their Oryon architecture following the Nuvia acquisition, some really, really positive designs. But this show was all about Intel coming out and saying, “Wait, wait, wait, we’ve got something too.” And then AMD coming out and saying, “Wait a minute, we’ve got something too.”
And what it really became was a really exciting competitive situation among AMD, Intel, and Qualcomm. And then you could say there was a little bit of a standing ovation for Arm, MediaTek, and NVIDIA on their own little end of the spectrum, all talking about what the future looks like. What do AI PC, Copilot+ PC experiences look like? And we, of course, had The Six Five there doing a bunch of exclusive coverage.
Patrick Moorhead: Very jealous, man. Watching you and Ryan replace me. And I got to tell you, man, I-
Daniel Newman: Yeah, it was comfortable.
Patrick Moorhead: I almost jumped on a plane.
Daniel Newman: Jumped. I’m going to stop there. Because like I said, I could talk for two hours about this. The net/net TLDR, as I like to say in my tweets. The TLDR is effectively, this was the moment, this was the coming out. This was all about client. Commercial client, consumer client, but about this next generation of mobile PCs powered by AI experiences. Call it what you want. That was my big takeaway.
Patrick Moorhead: I’m going to do very quickly, probably one comment per company. NVIDIA, a pretty good job segmenting their AI PC play. It’s different from the Copilot+ PCs, but I like the way they segmented light AI, heavy AI, and cloud-scale AI. They did come in with RTX AI laptops. And let’s not forget that NVIDIA was the first company to introduce accelerated AI on any PC platform, which the market needs to give them a little credit for. But does NVIDIA really need any more credit? They also introduced a developer toolkit called the RTX AI Toolkit, to outline how ISVs would leverage all this goodness. And by the way, you’re looking at 1,300 TOPS on the highest-end NVIDIA GPU compared to 45 TOPS battery-powered, super-duper optimized. I can’t wait to see what actually pops out of the hatch. Can’t run Recall on it yet, but we’ll see.
Qualcomm was really a reaffirmation, blew my mind. I’ve been in the industry going on 34 years, my gosh, almost to the day. I’ve never seen a company land this hard, this fast, with a new class of notebook processors. AMD, again, I’m focusing on the client stuff here. Zen 5 made a showing, with a 16% IPC uplift. IPC means everything when it comes down… Almost everything when it comes down to doing CPUs. AMD got back in the game with its Zen architecture, and has been making incremental improvements to it since.
Now, what I do want to see is the crawl chart though, to show where those advances come from. And again, IPC is not about clock speed, it’s instructions per clock. Ryzen AI 300, classic AMD by providing more. That’s 50 TOPS. That’s what AMD does, they give more for the same. Very rarely do they give… Well, sometimes they give more for less. But classic AMD move, really interested to see the efficiency of those 50 TOPS.
Daniel Newman: That was the hot topic, Pat, was great performance. But everybody, every press person I talked to, asked how efficient? What is the performance per watt?
Patrick Moorhead: And it’s not that people don’t believe it, but the numbers are big and impressive. I cannot wait. I’m hoping Signal65, a sister company of ours, does do the testing on it. Intel, Lunar Lake, Pat Gelsinger came out guns blazing. Basically said, “Lunar Lake running in our labs today outperforms the X Elite,” that’s Qualcomm’s, “on the CPU and the GPU and in AI performance. Delivering a stunning 120 TOPS of total platform performance.”
Again, can’t wait to see the power testing, can’t wait to see the efficiency testing. Ryan Shrout, if you’re listening, you probably aren’t. You’re probably watching Apple. But if you are, let’s get one of those into your labs. And I love Pat Gelsinger coming out and just going after it. He had a comment about Jensen and Moore’s Law as well. Finally, Arm. Rene Haas came out and said basically there’s no reason he couldn’t see Arm getting 50% PC market share in five years.
Man, I love… I don’t know if you’re too young to know this, but GOBOSH, go big or stay home. I don’t know if millennials use that, but it was big for us Gen Xers. Not to be confused with boomers. But anyways, I appreciated that. And the other thing that Arm came out and talked about was the claim of 100-plus billion Arm devices ready for AI from the cloud to the edge by 2025. Makes a lot of sense, given that most AI is done on the CPU. But anyways, whether it’s Arm, Intel, NVIDIA, AMD, or Qualcomm, it was just an absolute chip extravaganza.
Daniel Newman: Yeah, it really was, Pat. And by the way, our newest intelligence data actually surfaced this really interesting, conflicted data point, because it talks about how much Intel is being used in AI. When we talked to decision-makers about where AI is going to run, the CPU workload volume was still super high. And I was looking at it, I’m like, if you share this and you don’t contextualize it, most people won’t believe it. Because I think you and I and three other people on the planet are talking about the fact that AI is still being done on a lot of traditional CPUs. We’ve gotten super caught up in everything being on a GPU, which is great, but it’s not really the case. It’s kind of interesting, just because I think it’s going to take time, but it also shows how much market there is left to be had in terms of the pivot that’s going on. Pat-
Patrick Moorhead: Well listen, it totally makes sense. I mean CPU, GPU, NPU, FPGA, they all have different roles. And I can’t ever imagine the CPU just becoming a dumb controller, or something to move data around like the memory controller. I just don’t see that happening. It is so easy to program.
Daniel Newman: Does it become the hierarchy of need? Meaning, depending on the type of compute and the workload. I think it’s also extremely wasteful. But, I don’t know. Remember, Pat, CEO math. The more you buy, the more you save. So we buy GPUs, buddy.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.