Intel’s Computex Report Card – Six Five on the Road

On this episode of the Six Five on the Road, Daniel Newman and Patrick Moorhead break down the latest developments and key messages shared by Intel at Computex 2024.

Their discussion covers:

  • Intel’s showing at Computex 2024
  • The announcements of Xeon 6 for the data center, AI accelerators Gaudi 2 & 3, and AI PC processor Lunar Lake
  • The execution of Lunar Lake and Xeon 6 processors (which are on time or even ahead of schedule)
  • Intel’s AI PC strategy
  • The need to show more real-world use cases for Gaudi in the data center

Learn more at Intel at Computex 2024.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is back, and we are on the road, well, virtually, from Computex in Taipei, and we are here to give what Dan and I are lovingly referring to as an Intel Computex report card. Dan, you arrived back from Taipei a little over a week ago. How are you doing?

Daniel Newman: First of all, I’m doing well. I’ve actually been through the timezones. I went to Taiwan; excellent event. It was probably the biggest Computex in many a decade. This inflection with these new next-generation PCs, which we’ll talk about, was a huge moment. It was a great event. There was tons and tons of action because of how many different companies were really trying to break through.

But Pat, I’ve actually made it back to the States, over the pond to Europe and back. Again, I don’t even know what day it is right now, but what I know for sure is that we have a grade to give, and that’s why we’re here. We’re going to give the grade, and lovingly so, because that’s how we do it. But Pat, I mean, look, it was a big week and it’s time for us to break it down.

Patrick Moorhead: Yeah, I think first and foremost, Pat Gelsinger got up on stage and really talked about the overall theme, which was AI everywhere. That’s essentially, from data centers in the cloud to PCs, Intel’s role in all of that. And my first impression was, “Yeah, that makes sense.” We could pick it apart for smartphones: “Hey, Intel’s not inside of smartphones,” but smartphones don’t have very good utility unless they’re connecting to the cloud. So is that a smartphone play? Maybe.

What’s your take on that, Dan?

Daniel Newman: Well, yeah. I mean, look, Intel has a mandate for both market perception and to fully realize the impact of the work that it’s done, that it has to be seen as a company that’s providing AI from edge to cloud. And so like you said, mobile devices themselves may or may not require a bit more explanation, but as a whole, Intel is supplying the compute for PCs, for data centers, for the edge, and it’s connecting everything.

And right now, the way the market wants to perceive it, and the way it’s developing, is that it is doing AI everywhere. And of course it wasn’t a big part of its focus here, but Foundry is also a big story for Intel. And when you’re in Taiwan, it’s hard not to be thinking about that as well. So mission accomplished; it definitely made the point, reiterated the point, and it had to.

Patrick Moorhead: Yeah, so maybe let’s jump in; maybe we’ll do easiest first, I don’t know, hardest first, depending on where your brain is. But Intel came in guns a-blazing on Lunar Lake processors. This is Intel’s second generation of AI PCs. The headline was basically super power efficiency, a 40% reduction in SoC power consumption compared to Core Ultra, boosted compute, enhanced AI, with an exclamation point, and graphics.

Daniel Newman: So why don’t we take a quick moment and get a clip from Pat on the stage.

Pat Gelsinger: So let’s dig a little bit more into why Lunar Lake is such an important step for the industry enabling this next generation of thin and light AI PCs. First, it starts with a great CPU, and with that, this is our next generation Lion Cove processor that has significant IPC improvements and delivers that performance while also delivering dramatic power efficiency gains as well. So it’s delivering Core Ultra performance at nearly half the power that we had in Meteor Lake, which was already a great chip. The GPU is also a huge step forward.

It’s based on our next-generation Xe2 IP, and it delivers 50% more graphics performance. And literally we’ve taken a discrete graphics card and we’ve shoved it into this amazing chip called Lunar Lake. Alongside this, we’re delivering strong AI compute performance with our enhanced NPU, up to 48 TOPS of performance. And as you heard Sachin talk about, our collaboration with Microsoft and Copilot+, along with 300 other ISVs: incredible software support, more applications than anyone else.

Daniel Newman: Yeah, so Pat, listen, I first of all think that Intel played the whole AI PC movement pretty well. Now, look, the company doesn’t lack criticism, so maybe call me a techno-optimist, maybe call me an Intel bull. I think Intel gets scrutiny for the sins of its past and is not necessarily being properly credited for things that it’s done recently. One of the things that it did around this whole AI PC movement was pull forward timelines meaningfully from Meteor Lake to Lunar Lake. Now it’s got this conflict where it wants to sell lots of Meteor Lake, it needs to sell out of Meteor Lake, but also create this kind of forward-looking demand.

And now, almost immediately, because it’s only September that it’s going to hit the shelves, it has this next-generation capability with Lunar Lake that is more competitive with that first version of Copilot+ PCs that came out from others. And I think the company needed to make very clear that it has a product now, it’s selling lots of this product already in Meteor Lake, it had a huge number of design wins for Lunar Lake, and the Lunar Lake product is going to deliver on that lower power usage. I think they’re talking about 40% lower. So it’s more powerful.

It’s got the ability to run lots of AI features. It’s got a significant and growing set of independent software vendors. It’s got features, and of course it’s got AI models that are going to run on what is called the Core Ultra platform, one of Intel’s many, many names. That naming is one of the things people can always poke at, Pat, but I think the company has done well to create some inertia and to de-risk a little bit the expectations that they were going to be too late to the party.

Patrick Moorhead: Yeah, Intel brought its game, and it brought what it had been talking about for a while, and what Intel brings is that scale. For the Six Five Summit, I interviewed Intel’s head of developer relations, and we talked a lot about the hundreds of millions of units that Intel brings to the table. And I think, I don’t know, I want to make sure I have my facts here, we can edit this out afterwards, I think Intel is the only one that’s committed to an AI PC number so far. I have not heard that from any other company out there. The other thing that’s unique: Qualcomm obviously led with the NPU. They’ve got a very capable CPU and GPU, but what Pat did is get up on stage and talk about the full platform approach. It’s connecting the CPU, the GPU, and the NPU, delivering 120 platform TOPS, which is where he went right after the competition, basically saying, we have the highest-performance AI out there.

And I do think, first of all, that is how ISVs do their optimization. I don’t want to talk out of the other side of my mouth, but the CPU is the easiest platform and the least efficient way of doing AI; the most efficient, which hasn’t really happened a lot on the PC yet, is the NPU; and the GPU is kind of right in the middle. And I think Intel’s approach is, “Hey, I’m going to give folks multiple ways to optimize where they are,” which is difficult in that all the tools aren’t there yet, but Intel is providing its own tools and performance analyzers to go in and optimize for that. So a lot of puts and takes, right?

Daniel Newman: So let me give a couple of by-the-numbers data points on Lunar Lake and why I think Intel has reason to be optimistic that it didn’t miss, that it wasn’t going to get completely outflanked. First of all, Boston Consulting has a data point: they said that by 2028, about 80% of the PCs in the market will be these AI PCs. The second thing was that generation-to-generation improvements were really palpable for Intel. So forget the other competition; just within Intel, I think it’s about 4x the AI compute on the NPU, 48 TOPS is what they call it to the market, tera operations per second for you technical gurus. It is the introduction at scale of this new kind of P- and E-core, which are performance and efficiency cores, which is supposed to help them operate more efficiently. Power is going to be a big one, because we saw other companies come out with really top-heavy devices but they weren’t talking power, and people were very inquisitive about what that means.

And they’ve got a couple of other things that are interesting, like these low-power islands, Pat, that it talks about. People don’t give a lot of credit, and this is why I think I’m over-rotating to credit here, Pat, but they’ve got this compute cluster, an Intel innovation, to take on background and productivity tasks at higher efficiency to extend battery, because in the end, long battery life is why a lot of people are super interested in the other architectures: they’re known for mobile, they’re known for long battery life. If Intel can do this on x86 with Lunar Lake, I mean, they’ve got the numbers, and Pat, that is the number. They’ve got 20 OEMs, and they’re expecting 40 million Core Ultra processors in market in 2024. So that’s a pretty big number. It’s hard to look away from.

Patrick Moorhead: Yeah, that’s a scale number coming in. And it’s interesting that they have the P- and the E-cores, which is something that AMD has not adopted yet. So Dan, it was more than PCs. Intel talked a lot about the data center. They talked a lot about Gaudi, of course, which is their data center accelerator, non-GPU, more of an ASIC in design. And one really interesting thing came up, for the first time as far as I’ve been tracking: real pricing. We’ll maybe talk about why other people don’t do that, but eight Intel Gaudi 2 accelerators at $65,000, they said, is one third of the cost of what NVIDIA can do, which, first of all, is pretty amazing, but second of all, how can they do this? And why are we just talking about pricing now, when data center accelerators have been out there forever?

Daniel Newman: Yeah, I mean, look, I think it was a very interesting moment to reflect on the market and the pricing. Of course, there are door openers for Intel and others to enter this space that are somewhat substantial. First, it’s just availability. So Intel has a huge opportunity if it can prove out its Gaudi-based systems, especially the Gaudi-plus-Xeon package. We talk about Gaudi individually, and of course we talk about the CPU maybe being inefficient, but there are a lot of workloads, AI entangled with traditional compute, that benefit from having both together. Intel doesn’t talk about that as much, because I think it drives criticism around GPU availability, but I think the two together is actually a very powerful combo.

So you have the availability issue, you’ve got the price issue, you’ve got systems resellers; I think they have six new systems resellers joining the biggest OEMs in selling these Gaudi systems. But at the same time, Pat, you’ve got a lot of enterprises, some smaller cloud providers, companies that want to get in on AI infrastructure, that simply can’t get in. That alone is a market. The second part of the market is you have lots of companies that have deployed a lot of Xeon. They can benefit by quickly ramping up and getting that accelerated computing on top of their traditional computing with Xeon. And by the way, economics do matter. I mean, sure, to the five largest hyperscalers in the world, perhaps price doesn’t matter. Perhaps they can get enough value out of it, depreciate it quickly enough, and make it work on their balance sheets.

But whether it’s banks, healthcare companies, or others, they want to be able to use existing compute infrastructure, add accelerated compute infrastructure, and do it at an efficient price level. And again, every workload needs to be judged, because Gaudi, being an ASIC, does not have the same level of flexibility as an NVIDIA GPU. But on certain workloads, the purpose-built workloads where Gaudi is efficient, you see in the MLPerf numbers that it works pretty well, in some cases better. And you can do it at a price that’s compelling.

Patrick Moorhead: Yeah, it really is. So just kind of by the numbers here, sometimes people ask, “Hey, who’s using Gaudi?” It has to start off with the OEMs and ODMs, and then it gets to the broader end-use market. Intel has a lot of people on their developer cloud using Gaudi, and I’ve seen a few major installations of Gaudi out there. But it all starts with OEMs and ODMs, and ASUS, Foxconn, Gigabyte, Inventec, Quanta, and Wistron, the Taiwanese makers, are joining the Dells, the HPEs, Lenovo, and Supermicro in planning to offer Gaudi 3.

That’s pretty comprehensive and, by the way, really interesting given where we are with the private cloud and this notion that enterprises want to run AI where their data is, and 75 to 80% of that data is still on-site or on the enterprise edge. It just makes sense that you would bring that compute to where the data is. And that’s not saying you won’t do AI in the public cloud; you will. But I think a lot of people are thinking you won’t do any AI on-prem or on the private cloud, which just isn’t accurate.

Daniel Newman: If you listen to those OEMs, Pat, the Dells, the VMwares on the software side, talking about AI, they’re all recognizing that it’s actually a huge opportunity to do it on-prem. And a lot of that volume is going to private data center and on-prem deployments. So it’s not zero. It’s actually somewhat consequential, or very consequential.

Patrick Moorhead: Yeah. Dan, you talked about the benefit of the combination of Gaudi and Intel Xeon, and I think maybe we should drill in on that. So you have Xeon, the processor, or let’s call it the server SoC, and then you have Gaudi, which is the accelerator. The reality is that you have different workloads over a period of years that need a different way of operating. For industrial-strength training, I think we can pretty much say that you need a GPU or a hardcore accelerator to do that. When it gets to inference, and actually running the program that the inference is feeding, it gets a little murky, right?

If it’s a lower AI requirement and latency is a big deal, there’s a benefit to having it run on the CPU. And by the way, it’s not just running it on the CPU core itself; it could be getting accelerated with something like AMX, as an example, which is an accelerator built into the processor. And if you use both Gaudi and Xeon, the value prop says, “Hey, you can use the same software stack across those.” So let’s say you start with Gaudi and then you move it over to the processor, or you want to run it on both. Let’s say you’re running your data center at 50% capacity. Why not take the other 50% and do AI on it, maybe do some training on it?

That would just make a whole lot of sense. And with Xeon 6 with E-cores, and by the way, they’ve got a version with P-cores, you can take the power savings that you’re getting from the E-cores and maybe slosh that over to your accelerator. I know some people don’t like the terminology and me calling it power sloshing, but what it’s doing is saving energy with your E-cores so you’re able to give that energy to your accelerator. So yeah, for a lot of enterprises, this is going to make sense.

Daniel Newman: Yeah, I think that’s a pretty nice way to tie that one-off, but I agree 100%. You’re going to have both architectures. We’re not going to go to just these AI data centers. We have tons of workloads, tons of traditional compute, tons of SAP workloads and Oracle workloads that are still running on CPU architectures, Pat, and adding accelerated computing to it is where the efficiency is gained. And when you can simplify the software stack, that’s also where you gain efficiency in terms of the deployment.

So all those things together definitely create a compelling case, Pat, and then you add the pricing and the availability, which allows the enterprises and cloud providers that are looking to move quickly to potentially move quickly. And it’s probably parallel in nature; these companies are going to be moving forward on different architectures, different compute. But I think Intel has a strong case to make, and I think it did a good job at Computex. So let’s sum this up. What do you think?

Patrick Moorhead: Well, before we sum it up, I do want to hit on execution. And I think, Dan, you made the astute comment that people are focusing on what Intel got wrong in the past versus what they’re getting right today. And what that does is color the ability to internalize and give them credit for what they might do in the future. When I look at Lunar Lake and I look at Xeon 6, that is a complete turnaround in execution. Sapphire Rapids, I don’t know if it was three years late, maybe more, but for the most part, Xeon 6 is on time. And as you mentioned on Lunar Lake, they actually pulled in that schedule. Lunar Lake wasn’t even supposed to show its face until the first quarter of 2025.

And here we could very well see it in units in September, late September, early October. So I think people need to recognize what the company is doing. It’s a lot easier to be a detractor than it is to look at it holistically. And by the way, I’m not saying that Intel is hitting on all cylinders, but it’s hitting on most cylinders here. And the fact that they’re doing all this without having a data center GPU until Falcon Shores, say, in 2025: pretty good. So how did they do? I thought they did pretty well. I don’t know if we want to grade it. I mean, B+, A-? Is that what we’re doing here, Dan?

Daniel Newman: Yeah, I think you give the report card. I always liked it when my kids got satisfactory, but I like to see A’s more. I mean, I think you hit that really nicely, Pat, in balance. Look, we have this over-rotation societally about names. We have names that can do no wrong, and then we have names that can seemingly do no right. Intel for a long time was the former. In recent years, it’s felt more like the latter. And I think we are here as analysts to be the arbiters of that. And as the arbiters of that, it’s like, look, we can recognize challenges in past performance and also recognize and give credit to successes in recent ones.

On the PC side, on the AI PC side, I rate it really highly. I think you’ve got a solid A, A-. Of course, more designs, more wins, more units shipped, and more efficiency are all things that people want. But you’ve got a 48 TOPS part in market this year with improved efficiency. We’ll have to let Signal65, in the lab, look at how these things perform on a device-to-device basis. But it’s compelling, and they’ve already got the distribution and channel, so you’ve got to give a lot of credit for that, because they should be able to move units quickly. On the data center side, I think it’s a story and a lack of visibility. People need to see it; they need real-world use cases put into motion.

They need to see the use cases, hear the case studies, see the customers, because the pricing’s compelling, the availability is there, and they’ve got the right OEMs, but the market’s not hearing enough about it. And that’s where I’d give them probably a B on that one. But the point is, you average it out with Lunar Lake, and yeah, you’d land on about a B+, A- on my end. A lot of proving to do. But Pat, to the credit of Intel, it deserves some credit. And that’s why I think we are here to provide the balance. We are judging, analyzing, and arbitrating the entire industry. And so it was a positive moment for the company. But the proof will come in the numbers. It’ll come out over the next several quarters.

Patrick Moorhead: That’s right.

Daniel Newman: All right, well, with that, let’s wrap this up, Pat. I mean, look, it was a big week. Next year you go, I stay home. That’s how this works. But everyone out there, we appreciate you tuning in, checking us out, and listening to us give our perspective on Intel’s Computex this year. It’s really more than just Computex; it’s Intel’s strategy in market. And it is Intel bringing AI everywhere. And I think in many ways our conclusion is yes, with work to be done and numbers to be evaluated. That’s what we do as analysts. But that wraps it up for this one. Appreciate you tuning in. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
