HPE Discover, Pure Storage Accelerate, MongoDB, and Enterprise AI: The Next Frontier for Business – Infrastructure Matters Webcast

In this episode of the Infrastructure Matters Webcast we review the latest conferences, including HPE Discover, Pure Storage Accelerate, and MongoDB .local NYC. Plus, we explore the impact of generative AI on the industry – it is more than hype, and it is here to stay. The Futurum Group’s Steven Dickens, VP and Practice Lead, Hybrid IT, and Camberley Bates, VP and Practice Lead, Data Infrastructure, give their views on how this market is evolving and what to expect in the coming months.

Topics include:

  • HPE Discover 2023 highlighted new GreenLake offerings for private cloud and large language models.
  • Pure//Accelerate 2023 showcased new FlashArray//X and FlashArray//C models with performance and capacity improvements, as well as a new ransomware recovery SLA.
  • MongoDB .local NYC 2023 highlighted the company’s growing traction in the cloud native database market and its focus on AI.
  • Enterprise AI is the next frontier for enterprises:
    • Enterprise AI and generative AI have the potential to deliver significant value to all enterprises, raising this topic to the top of the list
    • Many vendors are launching new products and services, with much more to come in the next six months. Recent entries in the market include Dell’s Project Helix and HPE GreenLake for Large Language Models
    • The challenge for enterprises will be to choose the right AI solutions for their specific needs and to mitigate the risks of bias and misuse
    • Steven and Camberley agree that the C-suite will need to be involved in the decision-making process around AI adoption

Be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Facing the Unknowns of Generative AI, Key Industry Players Collaborate, Incubate, and Back Startups

The Six Five On the Road: A look at MongoDB .local NYC

Industry SnapShot: Pure Accelerate 2023

Transcript:

Steven Dickens: Hello, and welcome to the second episode of the Infrastructure Matters podcast. I’m joined this week by Camberley Bates. Hey, Camberley. How are you?

Camberley Bates: Hey, good afternoon.

Steven Dickens: Looking forward to digging in. I’ve got the pleasure this week of being on the road and spending some time with you, and it was good to catch up in person, but let’s dive straight in here. We were both out at HPE Discover this week. Great event, lots to unpack, a whole bunch of announcements, but I’ll give you the floor. Maybe you can kick us off here, and then I’ll dive in.

Camberley Bates: Well, I’ll start with one that’s probably not at the top of everybody’s list, but it was at the top of ours. It was the GreenLake for Private Cloud Business Edition. They already had GreenLake for Private Cloud, and they did announce some expansions of that, which included Red Hat OpenShift and a couple of other things. But the big thing that came out with the Business Edition is that it’s a self-managed environment, which is different from a bunch of their other offerings. It’s on-prem, or it can also be in Equinix, which is really cool.

Basically, they’re staging these systems in Equinix so you can just flip a switch and go, creating that cloud-like kind of environment. And as I said, it is self-managed, so it’s consumption-based, but it’s still self-managed. It’s based upon technologies they’ve had for a long time, more HCI-based, that they’re bringing out. So it’s a really good, solid product. It’s not like brand new code or anything. And so it should take off very well, especially for clients that may be migrating off of some older equipment over to the new environment. But I also-

Steven Dickens: I was really interested in the Equinix piece. I thought that made really good sense to me. You want to get out of your data center, but maybe for sovereign cloud reasons, maybe the application’s got some kind of private cloud characteristics, but you don’t want it in your data center. I thought that made perfect sense to me.

Camberley Bates: Well, that aligned with one of the big things that Neri stated. He talked about intelligent edge, which is a lot about Aruba but also about bringing the cloud to where your data is, et cetera. So recognizing how many points of presence we are going to have over the long haul. But the second one that they talked about was hybrid cloud by design. And there is a bit of push-pull right now, and that’s for another podcast, about repatriation and all that, things that seem to be exploding all over the place. But this concept of hybrid cloud by design is to say, “Give me the ability to have something that I can stand up and go run pretty quickly.” And so they were calling it the VM Vending Machine, which was kind of cute, what they were offering out there. And essentially they’re pre-positioning these things in Equinix… And I think there’s seven locations. Is that what they said?

Steven Dickens: Mm-hmm.

Camberley Bates: Okay. You’d be able to go in there and flip the switches if you were setting up a private cloud someplace. So, yes, it should be a very nice offering. I haven’t seen the pricing, and it always comes down to that, so we need to take a look at the pricing. So what else did you get?

Steven Dickens: Yeah. The other one from HPE Discover for me was the GreenLake for Large Language Models. Unsurprisingly, HPE made an AI announcement, but this one was a little bit different. I’ve got a call, a briefing this week that the team is trying to set up to go into more detail on it. But what I took away from the initial briefing and then a couple of the one-to-one discussions was that they’ve packaged up compute and GPU access, put this initially into a pretty sustainable data center in Quebec, and they’re going to be able to offer a consumption-based AI model in the cloud.

Now, there’s a smaller start-up, which I think we both found out about, that they’re partnering with for the large language models piece. But I think foundationally, this, for me, is going to be access to GPUs in a consumption model. And I think if you strip this down to its basics, and I’ve got to understand more about the offering, but I see a shortage over the next 12 months for H100 and A100 GPUs. So if you can provide those as a service and people don’t have to get in a very long line to buy them as CapEx, it’s going to be interesting. Again, no pricing, so we don’t know how this is going to be priced. They’re talking about node instances rather than a full shared model. So that’s going to be interesting. To your point earlier as well, that’s always where the rubber hits the road, but I think it’s going to be interesting to see how that model takes off, for sure.

Camberley Bates: Well, and the other piece of that is they are using and leveraging their Cray expertise, which is really big. Over the years, they’ve picked up a couple of companies: Pachyderm, which is data workflow, and Determined AI, which is machine learning modeling. That layer of enablement software should simplify the use and the leverage of GreenLake for Large Language Models. And I think we’re going to talk a little bit more about that later, with all the announcements that are happening. But that was the third layer that Neri talked about, which was the usable data, enabling the AI environment for the entire market. And it’s probably a precursor to what we will see as supercomputing as a service over time.

Steven Dickens: Yeah. They were hinting towards that, and we’ll have to see that come out, but they’ve certainly got the HPC chops to be able to do that. I mean, my takeaway from the event overall was that they’re really starting to lean into where the market is going. They’ve been early on consumption models. I think they were the first of the big vendors to really pivot the entire company towards GreenLake. Lots of work going on around rebranding and simplification from a marketing point of view. So I think we’re going to see a slicker message from them. But Antonio Neri is definitely in his stride. He was on main stage saying he’d done six HPE Discovers. You could just tell, and I think we talked about it, there’s just a comfort there: he picked the strategy early. He’s got into private cloud. He’s got into consumption. He’s pivoted the company over maybe the last four or five years. Now they’re starting to see that really gain traction. Would you agree? Is that what you picked up? I know we spent some time talking about it.

Camberley Bates: Well, he did cite some numbers up there. I believe it was $10 billion in contracts and $1.1 billion in ARR, which are pretty significant numbers for GreenLake. It’s the question we’ve had: is it all consumption-based? Is it all managed? It’s a combination of all of that that people are latching onto depending upon their financial metrics and what they’re looking at. But there is that piece of it: once you supposedly lock in as a service, it’s that renewal process that is probably a little bit more difficult to unwind. Or, the flip side of it, as long as you are delivering well on the service and the service level agreements, you’ve got a very strong probability of renewals. Which was another interesting thing that came back on that private cloud offering: the Business Edition private cloud offering, as well as the private cloud offering for GreenLake, period, all have six-nines SLAs. That’s a significantly bigger number than you will find on AWS or Azure. So it’s their commitment to delivering the service to the client.
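For context on that six-nines figure, here is a minimal back-of-the-envelope sketch (an editorial illustration, not something from the episode) of how an availability percentage translates into a yearly downtime budget. The labels and formatting are assumptions for illustration only; real SLAs also define measurement windows, exclusions, and credits.

```python
# Rough downtime budget implied by an availability SLA (illustrative only).
MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [
    ("three nines (99.9%)", 0.999),
    ("four nines (99.99%)", 0.9999),
    ("six nines (99.9999%)", 0.999999),
]:
    print(f"{label}: ~{allowed_downtime_minutes(availability):.2f} minutes/year")
```

Run as written, this shows roughly 525 minutes of allowed downtime per year at three nines, about 53 minutes at four nines, and only about half a minute at six nines, which is why a six-nines commitment stands out.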

Steven Dickens: Yeah. And I think that’s one of the criteria for me. If you’re looking at private cloud availability, there’ll be data sovereignty. There’ll be other non-functional requirements. But I think if you’re demanding from an availability point of view and you really want to get into the guts of how that data is replicated and the clustering and how the hardware works, and you do want to get your hands somewhat dirty, something like a GreenLake’s going to make a lot of sense for that, I think.

Camberley Bates: Yeah. The other show that we were at recently, I did not attend, but one of our peers, Randy Kerns, was there at Pure//Accelerate. They made some big announcements there with the FlashArray//X, it’s never easy to say, and the FlashArray//C. Basically, there were big updates with Sapphire Rapids from Intel, PCIe Gen 4, DDR5 memory, et cetera. So with that new update, they have 40% performance and 30% compression increases with the DirectCompress accelerators. What’s really cool about that is that a lot of clients have the Evergreen contract with them, meaning that if you have the high-level premium support contract, every three years you get an upgrade of your controller. So you’re not paying anything extra for that 40% performance boost. You’re already paying for the premium. So that’s a real nice kick up. They also added a bigger FlashArray//C model. The C model is their big QLC model that they’ve come out with, frankly, basically saying, “Hey, guys, we can take over the HDD line,” which is now their latest and greatest discussion in the market space. Lastly is the FlashBlade//E, which is supposedly priced to take on disk-based systems directly. And it’s one to four petabytes raw. So I mean, we’re going to see some rattling of the sabers out there like we’ve seen before. It has a 75 terabyte DirectFlash Module, a significantly big module that they’re developing themselves.

I mean, remember, I’m not sure if you were watching Pure when they first came out many years ago, but they were all about off-the-shelf components. And now what they are is all about custom design, because they have figured out that they can get a whole lot more out of the systems when they do the custom design. So all in all, a good show. In talking to Randy, the clients were very engaged. There was a whole lot more energy this year, but I think everybody finally crawled out of the COVID cages and is back out there. Although, folks, I’ve got a friend that had COVID this last week and she’s really super sick, so-

Steven Dickens: Oh, God.

Camberley Bates: … I’m watching for this stuff. Anyway, so that was the stuff with Pure//Accelerate.

Steven Dickens: So bigger, better, faster was the takeaway, you think, from the show? I mean, lots of product announcements that sound like they’re performance. Anything architecturally changing or just same as before but just bigger and faster?

Camberley Bates: I think it’s more bigger and faster. They did do some things around hardening for ransomware. One, they announced a ransomware recovery SLA for Evergreen//One. What that is, it’s an add-on. You have to pay for it, but Pure will ship a clean storage system to you the next day so that you’ve got an isolated recovery environment. So let’s say you go belly up on your… You didn’t have your data protection and security wrapped down, you got locked down, et cetera. You can’t unlock that box. They’re going to ship that box to you next day for you to get stood up and going, to set it up. So, a nice offering. It’s, again, your insurance policy if you want to spend the money for it and it’s…

Steven Dickens: I mean, I think some of the big banks and the telcos and the retailers may well do that. I mean, I think belt, braces, suspenders, everything from a security point of view, I can see that getting traction.

Camberley Bates: Yeah, yeah. So that’s that one. So we’re on to maybe we get a breather for a couple of weeks before we have to go get back on the road again. So I think we’re-

Steven Dickens: Yeah. July 4th, people maybe slow down. I know we’re both-

Camberley Bates: Do you know what July 4th is?

Steven Dickens: I’ve heard about it. I’ve heard about it. It may be best if I don’t comment with this accent on July 4th.

Camberley Bates: No, thank you.

Steven Dickens: So I mean, in the latter part of the week I was out at MongoDB. That gives us a nice segue into some of the stuff we’re seeing around the whole AI space. So I’ll maybe summarize MongoDB first. They’ve changed their event. I’ve been going to MongoDB World, or whatever it’s been called, since 2015, and instead of a couple of big events around the world, and I’ve been to it in London as well, they’re now taking it on the road with MongoDB .local. So it was an action-packed one-day event in the Javits Center in New York. Really good to see all the content in one place. Lots of announcements, vector search, obviously a lot of focus on AI and on where they’re going to fit in the stack.

I think one of the biggest things, and I tweeted about this as part of Dev’s keynote, was just the sheer volume of traction that they’re getting. I’d have to check the numbers, but I think it was 43,000 customers now. I mean, I’ve been tracking MongoDB, as I say, since 2015. They’re now a really established, big database company, really taking share. I think as people think about new applications and cloud native, they’re really starting to bolster their presence. And some of the names and vendors that they were talking about partnerships with, a lot of that work now is coming onto Atlas from a cloud perspective. So they’ve seen that explode. I think it was 2017 or ’18 that they launched Atlas. That’s now over 50%, maybe even over 60%, of their revenue.

So I think developers are looking to launch those cloud native applications. MongoDB is starting to become at least one of the default choices as you’re looking for that database architecture to support. So fascinating event, lots of announcements. I wrote a piece which we can put in the show notes that covers that. As always, lots and lots of product drops, new releases, new functionality, but lots to take away from that event, I think.

Camberley Bates: Yeah.

Steven Dickens: So that gives us a segue. Enterprise AI, I just literally published a piece on Forbes. There’s a lot going on in this space. We just talked about what HPE is doing with large language models. Maybe we spend a bit of time, just, this space seems to be evolving on a daily basis. There’s a new product announcement, somebody’s launching something. What’s your take, Camberley?

Camberley Bates: I don’t believe it’s a flash in the pan. That’s the one thing I’m going to say. I don’t think it’s a flash in the pan, although there is the hype that we’ll have, and it will have a tapering effect. But one of the things that Antonio Neri talked about in the private Q&A, and I can’t quote the numbers but I will allude to it, and I’m sure he’d be happy that I did, was the increase in their pipeline over the last 120 days that is directly related to AI, and it’s a significant uptick. I believe if we went to Dell, we’d see the same thing. I think you got the same kind of feeling coming out of Lenovo.

Steven Dickens: Yeah, I did. I did exactly.

Camberley Bates: We’re hearing that coming out of Microsoft, and I’m sure that IBM would chime in on the same thing. So there is this rush that’s going on. We think this is an easier path than where we have been over the last four or five years of trying to get to AI. If you recall, probably five years ago, I was talking to one of my buddies who’s a CIO advisor. What he was seeing is that you had these centers of excellence for AI, and they were more or less putting up environments just to figure it out, find out what was going on. And there wasn’t a whole lot of success coming out of them. We were carrying some of the success stories. And I know that IBM had their Garage events, or I think that’s what they were called, that they were doing to help customers identify something that they could get to in a quick 90 days, something that was deliverable.

But it’s been a real struggle to get to something that has been truly of value to the enterprise. This has got the potential to move very quickly as we’ve streamlined how we interact with the technology. We’ve figured out that we can take a huge amount of data that we already have, do something with it, and train on it. Now, the training models are going to take a bit to get out there and get solid, and we’ve already seen and heard about a couple of not necessarily ChatGPT failures, but failures along this line that had to get pulled back. I suspect we’ll see a couple more coming out in the fall as companies get ready to launch them, put them out, and then say, “Hey, let me pull back.” But that’s based upon the amount of training it has to do. So I’m going to stop. I can keep on talking if you want me to-

Steven Dickens: No, no. I mean, it’s interesting.

Camberley Bates: … or you can interrupt me and say, “Hey, I’ll take it on from here.” I’ll say one more thing. There is a little bit of debate. Dell launched Project Helix, which we can talk about. You had HPE bring out their GreenLake for Large Language Models. Project Helix has a focus on maybe smaller language models, containers that I can set up in my own environment. HPE is looking at it and saying, “I have to have my large language models. They’ve got to train in this very, very large environment.” That’s going to be an interesting debate.

Steven Dickens: Yeah. I mean, I think my take on it so far, and it’s evolving, is every vendor’s jumping on the opportunity to launch something. I don’t think that is just getting on a bandwagon. I think they’ve genuinely had these technologies in back rooms being developed over the last couple of years. Maybe over the last three months they’ve turned the speed up to a level to get them out. As we saw with the GreenLake stuff from HPE, they haven’t invented that and brought it to market in the last two weeks. It’s been in development for quite some time. Yes, Antonio has probably lit a fire under the product management team to get it out for Discover, and maybe more naturally it would have been the back end of the year. And the same from Dev Ittycheria at MongoDB. I think they probably would have naturally launched some of the vector search and some of the AI stuff later in the year, but they’ve probably turned the speed up.

But I think all this was coming anyway. I think what we’re going to see, and as I say in this Forbes article, I think every vendor in the next 60 to 90 days is going to launch something for AI. But it’s going to be interesting to see how that enterprise AI stack starts to firm up. Every vendor’s going to have to find their place in the stack, put their flag on it and go, “We’re going to do this.” Dell and HPE and Lenovo are probably going to go after that infrastructure piece. People like Elastic and MongoDB are going to go after maybe search and some of the data components.

I think all the attention is on public generative AI. I think what you and I are getting access to is all of the big players establishing a role and a position in enterprise AI. I think we’re going to see Google and Microsoft and maybe one or two others fight it out in that space. I heard over the weekend that one of the big social media networks spent a billion dollars with Nvidia in the last 90 days. So I think we’re going to see that public generative AI space, my kids asking questions on Snapchat AI. So we’re going to see that space evolve, but I think where the big money in this space is going to be spent is going to be on those enterprise models, and that’s going to be fascinating.

Camberley Bates: Yeah, absolutely, totally fascinating. And I have that belief, and I know I’ve said this before, that if a CEO is not looking into how this impacts their P&L or their balance sheet, they are behind the game. I think it goes back to when we started implementing, I mean, okay, I’m going to age myself again, when we started implementing transaction systems. Transactional systems had a huge impact. We went from batch systems to transaction systems that were really, truly online in real time. Now we’re going to something that is not only real time, but is trained by our information and the data that we’ve gathered over the many years. And then hopefully, if they do it right, with human guardrails, because there are a lot of questions and issues; as smart as the computers can be, they’re only as smart as zeros and ones. And how we train that and the more data we give it, the better it’s going to be over time.

Steven Dickens: Yeah, it’s interesting you mention that. I literally had a briefing today from the Red Hat OpenShift team. They made some announcements at their event a few weeks back, and I got a chance to double-click on that today, around OpenShift AI. And the team were doing the sort of classic feature drop, “It does this. It does this.” And on page five, they had a piece around detecting bias in models, and I stopped them and went, “Guys, that can’t be on chart five on the fourth bullet. That’s got to be right up at the top.” Because I think, as you say, people are going to start to get paranoid about bias in these models and being able to… This was the plumbing of how a model gets put into an enterprise and containerized. So it was a pretty technical sort of space, but I think it’s going to become a C-suite concern.

Especially one of the examples we were talking about is mortgage processing. If you’ve got a mortgage approver/denier AI platform, and it’s denying people in underrepresented groups for reasons that it shouldn’t, that’s front-page news if they get found out with that type of stuff. So I think increasingly the C-suite is going to want to understand where these models come from, how they were trained, and how they were deployed. And I think it’s less about some of the plumbing of how that happens and gets put in a container; it’s increasingly going to become about ethics and bias, and whether we’re a good steward of AI and our corporate data. So I think it’s going to be fascinating as we see this space evolve in the next… And it’s evolving on a daily basis. It seems every day there’s a new announcement that we need to track.
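As an editorial aside to the mortgage example above (not something discussed in the episode, and not tied to any vendor’s tooling), one of the simplest checks teams run on an approval model is comparing approval rates across applicant groups. The DataFrame, column names, and data below are all hypothetical.

```python
# Minimal, illustrative demographic-parity check on hypothetical model decisions.
import pandas as pd

def approval_rates_by_group(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval rate per group; a large gap between groups warrants deeper review."""
    return df.groupby(group_col)[decision_col].mean()

# Hypothetical decisions from a model (1 = approved, 0 = denied).
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B"],
    "approved":        [1,   1,   0,   1,   0,   0],
})

rates = approval_rates_by_group(decisions, "applicant_group", "approved")
print(rates)                                   # per-group approval rates
print("max gap:", rates.max() - rates.min())   # crude disparity signal
```

A large gap is not proof of bias on its own, but it is the kind of signal that would trigger the deeper review of training data and deployment described above.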

Camberley Bates: Well, when we were with Dell, I was talking to John Roese about some of the things that they have already done to clean up their data, if you will, data being documentation, code, development, et cetera… There are a lot of terms that we use that maybe are not politically correct nowadays, if you will, or could be misconstrued. White hat, black hat would be one. Master/slave, which we’ve always used in terms of talking about databases, right? And there’s a myriad of other terms that they’ve been through, and they started going through them quite a few years ago to clean up their data and their environments.

So that is one of the things: before it starts getting put into the systems, that kind of scrutiny has to be… People have to go through and scrutinize the information they have before they put it in the system. Because once you train it, it’s going to be a little difficult to untrain, I think. I mean, I’m not a data scientist by profession, but I think it would be better to clean it up before you drop it in than to have to clean it up after the fact. So we have some time.

Steven Dickens: Yeah. I mean, I think on this podcast over the next few weeks, as we cover the infrastructure of AI, it’s going to be a rapidly evolving space.

Camberley Bates: Yeah, yeah.

Steven Dickens: Well, that’s this week’s episode. You’ve been listening-

Camberley Bates: There we go.

Steven Dickens: There we go. We’ve covered it. It’s amazing how fast these go, Camberley. So really great to talk to you, as always. We’ll be together next week. We seem to be on the road together at the moment, which is great. Thank you very much for listening. This is the Infrastructure Matters podcast. Please click and subscribe, and we’ll see you on the next episode.

Camberley Bates: And you can find us on LinkedIn as well, Camberley Bates.

Steven Dickens: And Steven Dickens on LinkedIn and Twitter for me as well. So-

Camberley Bates: There you go.

Steven Dickens: … we’ll see you next time.

Steven Dickens: Thank you very much for listening.

Camberley Bates: Thank you, guys.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.

Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.

Her career has spanned all elements of sales and marketing, and she gained a 360-degree view of addressing challenges and delivering solutions by crossing the boundary of sales and channel engagement with large enterprise vendors and her own 100-person IT services firm.

Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing the company from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.

She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
