HPE Discover 2024

The Six Five team discusses HPE Discover 2024.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: HPE Discover, I mentioned the Sphere, but there was a lot going on. Oh, sorry. Not financial advice, people, don’t do anything we say. All right, onward, HPE.

Patrick Moorhead: Yeah, let’s dive in. So as you would expect, the entire conference was about AI. And to be more specific, it was about private cloud AI. And you can essentially look at enterprise AI infrastructure and services in two buckets. You have public cloud plays and then private cloud. And private cloud can be on-prem in an enterprise data center, it can be in a colo, it can be a sovereign cloud, but essentially it is governed and managed by the enterprise itself. And it’s interesting, HPE was in a very difficult position, coming in as one of the last enterprise infrastructure companies to do this, right? We saw IBM, we saw Lenovo, we saw Dell Technologies, and then all the enterprise software folks. So they had to come out bold. What they came out with was NVIDIA AI computing by HPE. And a couple key points here.

First of all, it’s very focused on integration. It is a very integrated solution with software from two vendors: NVIDIA, which brings some of its AI Enterprise software, and HPE, with AI Essentials, the Data Lakehouse, and the HPE private cloud control plane. So it’s literally turnkey. And the goal here is simplicity. And simplicity means AI time to market. And I think, Dan, some of your research has suggested that time to market is a very big deal here. The other thing, the company talked about three clicks to be up and running, and I did a booth tour, I don’t do a lot of those anymore, but I literally saw them start over, boot this thing up, and literally it’s all there, three clicks and you’re there. We’re going to be doing a detailed writeup on this. You can buy it on-prem or run it as a managed service via GreenLake.

Oh, and there’s four sizes, small, medium, large, and XL, where small is inference only, going all the way up to XL, which is inferencing plus RAG plus fine-tuning. So yeah, it looked very simple. While I would’ve appreciated getting a pre-briefing a lot earlier to think about it, it took me about three to four hours for it to really sink in how this was distinctive out there. And I will say NVIDIA AI computing by HPE is a differentiated solution. GA is in the fall, which means they must have been working on this for a very long time. So you and I both got the chance to talk with HPE’s CEO, Antonio Neri, afterwards, and some interesting things came out. First off, they are targeting enterprise only with what I’ll call an integrated appliance with NVIDIA, and Antonio was very, very clear on this.

Now AMD and Intel and Red Hat and VMware and Cloudera are very much also part of an experience if somebody wants to piece-part this together, but clearly he has burned the boats and this is an NVIDIA-only solution for now. And I think what it would take for AMD and Intel to get in there and be part of this is they need to have software that can plug into the stack, I hate to say it, like NVIDIA’s. So a couple questions, going to leave some oxygen for you. Where’s the training done? If small to XL doesn’t include any training, where is this happening? By the way, GreenLake for LLMs has been put on the back burner, not a lot of customers. I get it. The big action here is on inference, RAG, and fine-tuning. What’s the connective tissue to the public cloud? If that’s where the models are being created, how do I have some interchange between the public cloud and HPE’s private cloud? And the name choice was interesting. NVIDIA is the first name in this solution and not HPE. It’s not HPE AI computing with NVIDIA. It’s NVIDIA AI computing by HPE.

Daniel Newman: Yeah, that was an interesting decision, Pat, and I’m going to double click on that a bit with you because I agree. First of all, this will be a theme with me for a bit, but we are at this interesting inflection where there is no question that the front end, the CapEx, the build out of infrastructure for AI, is in full effect. Where we’re starting to see the shift, I believe, is companies trying to figure out how to take the product to market and create consumption at the other end of the spectrum. Take Meta, for instance: Meta is using it for itself so that when you open up WhatsApp and you’ve got your little AI assistant in there, you are using it, and you’re increasing use, and that’s driving more data and better-quality advertising, and that’ll create revenue.

So they’re spinning up their own vertically integrated AI experience and they’re spending a lot of money to do it, but they are also creating so much cashflow that that’s the big bet they’re willing to make. The hyperscalers, for instance, are building this stuff out to be able to make it consumable the way we did compute in the first era. The private cloud comes down to this: there’s so much complexity in data and so many unknowns in data that are going to continue to create a challenge for the industry to be able to implement AI. You and I talked a lot about IBM last year and their end-to-end stack that included AI, data, and governance capabilities. Well, that type of capability is what I believe we have to bring to scale. And so HPE, basically Antonio Neri, the CEO we talked to, told us, “Look, we’re only going to focus on the hundreds of thousands of enterprises that could benefit from the consumption of AI.” Remember that. That was the demarcation between-

Patrick Moorhead: Dan, say that one more time because I think it’s super important.

Daniel Newman: So Antonio said they only want to focus on the hundreds of thousands of enterprises that could benefit from the consumption of infrastructure and AI. And so that’s where they’re focused. These T-shirt sizes, these out-of-the-box AI solutions, really have nothing to do with hyperscalers. Now of course, they said for those 10 or 25 companies that want to consume and buy as many GPUs as possible, they’ll sell those; they’ll sell supercomputers to labs that want to buy supercomputers. But they’re talking about the average enterprise, hospital system, bank, university that is literally implementing their infrastructure for AI. Pat, those folks can buy off the shelf, three clicks, like you said, turn it on, and start to make meaningful progress in their AI journey with GreenLake. That’s what they’re talking about. By the way, that’s what Dell’s doing in its own way.

That’s what Lenovo’s doing in its own way. You even have IBM, which I mentioned, and VMware and what they’re trying to do. So this private AI thing is basically about connecting data that has to have residency on-prem, data that cannot cross borders, sovereignty; you’ll hear about sovereign clouds a lot. They’re trying to solve for that problem. And of course the cloud providers are going to try to solve for that problem too. So do not mistake for a minute that the cloud is not going to try to compete; they have hybrid options and offerings. But the truth is, Pat, we’ve said this endlessly on this podcast, 70 to 75% of enterprise data lives on-prem. So these companies have a good opportunity to provoke the market with AI that can be consumed easily. And that is what I believe HPE’s mission was. Ending this topic because I want to keep moving, but the choice of leading with NVIDIA is interesting to me, and Antonio didn’t mince words.

They’re not going to build these out-of-the-box solutions with anyone else. And so I guess in making your bet on HPE, we’ve had a continuum of companies wanting to make the whole bet and companies making the minimum bet. So far, the companies that made the biggest bet have done the best. Over the long run, though, I still believe AMD is a player. I still believe Intel is going to have a say. I was quoted on this, Pat, in my Asia CNBC segment; they started me off asking about our friend Dan Ives, a former guest of the show. He loves to make these comparative metaphors. He calls it a party. It’s 9:00 AM and this party goes till four. I jokingly said, “This is Coachella. The party hasn’t even started and this thing’s going to run for multiple days.” The thing is, we’re in the earliest innings. There’s so much AI left to go, and so this is not a short play for HPE. It’s a long play.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA holder and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
