Curious about how the interplay of hybrid cloud and gen AI is shaping the future of enterprise?
Hosts Patrick Moorhead and Daniel Newman are joined by Hewlett Packard Enterprise‘s Menaka Sundar, VP, NA Enterprise Architecture, as he reveals how pivotal technologies like edge computing and Generative AI are driving HPE’s innovation strategy ahead of HPE Discover 2025.
Key takeaways include:
🔹Navigating Unified Private Cloud Architectures: Discover how HPE is tackling the complex architectural challenges and opportunities within the Unified Private Cloud, highlighting their multi-hypervisor capabilities and Broadcom VMware integration.
🔹Enterprise Architecture for Scalable AI: Explore the evolution of enterprise architecture specifically designed to support large-scale, secure AI workloads, featuring insights into HPE’s AI Factory and Private Cloud AI solutions.
🔹Cloud-to-Edge Integration: Understand the critical importance of architectural coherence between core cloud systems and distributed edge environments, with a spotlight on HPE Aruba Networking and OpsRamp integration.
🔹Driving Successful AI Deployments: Learn the key factors that distinguish successful AI projects and how HPE embeds these essential drivers into its enterprise architecture strategy for practical, sustainable AI adoption.
Learn more at HPE Discover.
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is back, and it is a couple of weeks until HPE Discover 2025, and the Six Five will be there non-stop with all of our analysts. Daniel, how are you, my friend?
Daniel Newman: Doing great. The road to Discover begins, and our team with the Six Five could not be more excited, Pat. We are just a few weeks out, and it is time to start looking at what everyone who is going to attend, the partners, the customers, the, you know, analysts and everyone else, should all be expecting.
Patrick Moorhead: Yeah. So let’s dive in here. You know, I’ve been a big fan of the hybrid cloud going on 10 years. They called me the cloud denier, but we kind of knew that this is the way that enterprises wanted to do it. And now we have AI. But hybrid cloud and AI, do they cross? Let’s have that discussion here. I can’t imagine a person better to have this discussion with than Menaka from HPE. Menaka, great to meet you, and welcome to Six Five for the first time.
Menaka Sundar: Thank you very much for having me here. As you said, the road to Discover began a long time ago. So we are super excited for this event and to bring all the innovation to our customers.
Daniel Newman: Yeah, so we’ve been following HPE’s journey for a long time, and you know, it feels like it’s only been a few, but it’s been several years now since Antonio Neri got on stage and said we’re going to move everything to as-a-service. Locked in on GreenLake, really focused on that kind of consumption experience for on-prem and hybrid environments. But in 2025, with the advent of AI and this acceleration, hybrid cloud is still a central theme in your innovation narrative. Talk a little bit about the kind of architectural challenges and the opportunities that you’re seeing that are driving enterprises to, you know, continue with hybrid cloud and adopt even unified private cloud solutions.
Menaka Sundar: Yeah, definitely. So a little bit of background about myself, just to, you know, give the audience where I come from. For the past 15 to 16 years I worked in leadership roles at a public cloud provider myself. Just a few, you know, months ago I joined HPE, and the reason why I joined HPE is exactly that. Because 10 years ago we were advising customers to go all in on cloud and drive innovation from the platform and above. Quickly we realized that all in on cloud is just not possible. The reason for that is, if you see the temporal nature of the workloads, some are containerized, some need high performance computing, some require extreme in-memory capabilities, it is extremely diverse. And not only that, if you look at the other characteristics, like databases running on a container versus databases running on Exadata, the proprietary nature of the workload was very diverse.
So fast forward to today, customers have realized that the future of enterprise is hybrid. Now, while customers are just realizing it, HPE thought about this 10 years ago. That is why we have an industry-leading product today. Because if you had only started thinking about it yesterday, you wouldn’t have a product; the innovation had to happen long ago. That’s why we have a product, and the product is in a market-leading position right now. The beauty of where we are today is, if you look at the complexity of AI, data residency is a huge consideration. Customers, even if they are in the process of going hybrid or looking at hybrid, have to continue to stay hybrid because of the data residency challenges. Because that fuels AI. Because AI is really a data challenge. So what HPE provides is the best of both worlds. One is providing the customers the flexibility and the option. Also providing the customers the cloud modules. If a customer wants to be on cloud, but they still want to have a manageability solution, HPE provides that, along with the co-innovation that we are driving with Nvidia and several other key partners in the AI space. So those are the things that make me very excited, bringing both the hybrid and the AI together.
Patrick Moorhead: Yeah. What my team and I have commented on about your approach is that optionality is good, but the easy button is good too. Right? Because the reality is you’re going to have some customers who want the easy button: hey, I’m going to go full stack. But then they might want different modules of that stack as opposed to going with the full stack. Maybe they have a lot of technical debt, or they need to stick with the virtualization provider that they’ve had for a while. Or maybe they want to delay any changes along those lines. So enterprise IT, the CIO agenda, is all about modernization. It’s all about AI as well. And there have been some debates on whether the two shall meet. How do you see the role of enterprise architecture helping to take AI to scale? Because so far, enterprises have had a lot of nice experiments with AI. They’ve done some POCs, and they are starting to see some value here in 2025. Some have scaled and some haven’t. But what kind of architecture do you need? Or what is the role of that architecture with generative AI and even agents?
Menaka Sundar: Yeah, I mean, it started with generative AI, but the complexity is growing more and more with agentic AI, and then robotics is going to add even more complexity, right? So I’ll talk a little bit about the complexity of AI from an architectural standpoint, and I’ll tell you how hybrid is going to marry with the AI infrastructure, right? If you look at the complete AI stack that we are building, we start from the high performance networking capability we need in order to run tokenization at the throughputs that we are talking about. Because the token is the new currency, we need to make sure that we have those high-performing networking capabilities, and that’s exactly what we provide. Then if you come above the networking stack, we are talking about high performance computing. For the past 10 to 15 years we have been leaders in the space of high performance computing, so that experience is really paying off when it comes to GPUs and GPU acceleration. Then there is the firmware that we need to integrate with the GPUs, because every use case requires the necessary acceleration that we can drive all the way down to the firmware layer. Now, if you go a step above that, it’s the data problem. You have a unified data fabric, and we co-innovated with Nvidia on exactly that: we have the high-performing data fabric layer that matches up with the Nvidia data platform reference architecture, which we recently launched and which customers are adopting. So from the network to compute to the data layer, we’ve got it all, right? And then if you go a little above that in the stack, we have a control plane. The reason why we need to have a control plane is because AI systems are an extension of IT, right? And if you’re managing an AI workload, you need to look at the enterprise as a holistic workload ecosystem.
So some workloads sit in the cloud, some sit in your private cloud, and some sit in the AI stack. All of that is orchestrated through a control plane, and that’s what we have with Morpheus. We have the control plane, and then we have the software stack that sits on it, right? So we have the option of a third-party software stack and also our own ML operations software stack. That system, with the common control plane, now integrates and sits side by side with the AI ecosystem as well as with the IT infrastructure that customers have already invested in. Now, it doesn’t stop there, right? The biggest hurdle to AI is power, because a high-density rack requires high power. Those are capabilities that you don’t easily get access to in the public cloud because of its multi-tenant nature. Coming to HPE, we’ll be able to provide you a full stack, or even a modular one if it needs to be modular, sitting in a legacy data center where we can’t acquire the power. We can build the AI infrastructure in a modular fashion and, leveraging the control plane, integrate it with the existing infrastructure that customers have and provide that seamless interface between their AI investment and their existing IT investments. One thing that really strikes me in the current era is that any new innovation that happens is always an optimization of the prior investment. I look at cloud as a prior investment, so customers will optimize cloud spend with the help of hybrid IT and take that investment and put it into AI. That’s how I see the future going.
Daniel Newman: I really appreciate that you hit on a few of the big challenges there as well. I mean, Pat, remember last year, or maybe it was two years ago at Supercomputing, I said “flops to tops.” These companies like HPE that have this pedigree, this provenance, in high performance computing have made a really powerful transition to AI. The other thing that you brought up, and something that’s very much in HPE’s DNA, is the network. The network is going to be one of the rate limiters. And of course there are many ways to attack the power and thermals challenges. You know, we hear a lot about thermals and liquid cooling; of course, we hear about total power and capacity as being big challenges. And I think it’s really a combination of all of them. We need to be as efficient as possible per rack. We also need to figure out how to get the power developed to scale. Then we need to figure out how to develop software that is most efficient. That could be things like test-time compute, doing inference in real time, things like that, where you can make models more efficient. Another thing, though, that’s coming really fast and furious is the edge. This is something that HPE has been buying into in a big way for a long time. It did this industrially, but I laughingly say things like XR and IoT kind of went out of style, and with AI you’re seeing them quickly coming back into style. All that data, that sensing, that edge, that real time is bringing a new opportunity for the problem that wasn’t quite ready to be solved yet. So talk a little bit about that with edge AI and the speed. Talk about how HPE is making sure that its cloud-to-edge strategy is reborn for this AI era.
Menaka Sundar: See, that’s a wonderful question, right? Because people have looked at edge from just a computing standpoint. Give me two nodes, give me one node, give me four nodes. I have an oil and gas ecosystem out there, I need to deploy in my oil fields, give me a two-node rack. So the prior era of edge was all about just computing, but it’s no longer that, right? Because the inferences that I drive in my cloud should also be taken to the edge to drive more intelligence at the edge as well. So the architecture that we are building with the larger form factor in the cloud should also be the architecture as you reduce the form factor and push it to the edge. Edge-to-cloud is really not only about our networking, our interconnectivity, our switches and gateways, but also about virtualization. Because the older narrative of virtualization has been standardized on VMware. Now look at the opportunity we have with our VM Essentials and the control plane with the Morpheus Enterprise stack. The same architecture that you deploy in a data center can now be reduced in form factor, accelerated on a GPU with the same cloud modules, and be pushed to the edge, as close to the customer as possible, for computing. That uniform, network-integrated architecture is what edge-to-cloud really stands for. Take a retail customer, for example. They’re not just going to be giving an AI experience to their customer after the customer has purchased something. They will give that experience to the customer when they are live in the store, right? So giving that faster processing time and taking the same models and getting them close to the customer, giving them an in-store experience, is just going to be vast and differentiating for our retail customers as well. That level of experience is what we want to offer them.
Patrick Moorhead: Yeah. Nothing is ever new, and the edge is going to get exciting again and everybody’s going to start talking about it. Daniel, it’s the accordion: there’s aggregation and disaggregation. But what history shows is that over a set period of time, compute goes to where the data is created. And as long as you have a management structure to be able to manage and secure everything on the edge, it’s the best place to do this. Because typically it’s the fastest response, and it’s also typically cheaper, as opposed to having, you know, 14 hops up to your data center or to your public cloud. So it just makes sense. Some of these things are easy to call. I see no reason why the edge isn’t going to be this new trillion-dollar opportunity out there. So I talked a little bit about the map of where the enterprise is on its AI journey, and it’s nascent, right? I mean, you have POCs that went on, you have some scaling that’s happening here in 2025. There are some who are adopting enterprise SaaS, which is a different way to consume this as well. But what are you seeing with your customers? What differentiates or distinguishes a success from a failure?
Menaka Sundar: Yeah, I mean, there are four varieties of AI use cases that we see. The first variety is large Fortune 50 enterprises actually building a high-scale AI platform and then opening it up to their lines of business and developers to bring a third-party tool or an open source tool, where they can bring any models they want, customized for their industry. So we see a huge platform play. The second thing we see is the lines of business and the developers supporting them. They don’t care about the infrastructure. They want the privacy, the sovereignty, but they don’t want to manage the infrastructure. For them, it’s all about “give me a fully loaded platform; all I want to do is deploy the LLM and get the outcomes faster.” That’s why we have Private Cloud AI, which is the PCAI stack. Then the third use case that we see is sovereign AI use cases. I mean, you know, if you look at the telco providers, recently they have announced that they will be providing the telco environment for GPU as a service to their customers. So we see sovereign AI coming in as very, very predominant.
The fourth, and last but not least, is basically AI for R&D. Because if you look at complex enterprises, nobody is going to deploy a multi-billion-dollar rollout just like that without investing in R&D. So lots of investment is going into the labs to make sure that they have really hardened the ecosystem, and to make sure that ethical AI is followed in the labs before it’s deployed out. So we address all these four use cases with a mindset of return on investment for our customers, and we make sure of this simple formula, which is high performance at low cost; that is what our platform is really all about. We give you high performance and low cost when it comes to processing a token in all four use cases, whether it’s edge or cloud. The better we do this, and the better the ROI, the better the customer is at converting POCs into production. When they’re not able to link that, that’s when POCs stay as POCs and never go to production.
Daniel Newman: I like that you brought up sovereign, because I think we’ve heard some of the various announcements in places like the Middle East, where you see these giant spends, and also throughout Europe, which has very specific and narrow data policies. And I think one of the big opportunities still to come for infrastructure and for premises-based deployment is going to be the sovereign opportunity. I also like that you reminded everybody about trust. Sometimes I think, at the speed at which we’re moving, people have sort of, just like we did in the app ecosystem, stopped reading the policies, and at the cost of moving quickly, we’ve sort of forgotten. And I think about the risk of enterprise data and how it needs to be managed, protected, and utilized. Because remember, most of the data from the enterprise hasn’t touched AI yet, and that’s a big opportunity that still exists. We like to think that the world’s Internet data, you know, the public data, has been used for AI, but the stuff that we own, that high-value enterprise data, has still not been used at scale.
So we only have a few minutes left. Really interesting conversation. We’re excited for this road to Discover. But this is also part of the strategy: the strategy is going to be a series of fast-paced announcements. Now, if I’m a customer, or an analyst, which I am, sometimes I am fatigued at the pace, the volume, the number of announcements and things being thrown at me day in and day out. I would love for you to tell us all, everyone out there, the people that are going to join us at the event: how are you recommending and helping these enterprises to absorb, prioritize, and integrate so much coming at them at one time, so they can be successful with this important, maybe the most important, technological shift that most of us will experience in our careers?
Menaka Sundar: Yeah, I would boil it down to three different things that the enterprise needs to look at. They need to start looking at the enterprise in terms of optimizing their spend, because if you don’t optimize your spend, you’ll not be able to fuel the innovation that’s coming in AI. Now, optimizing the spend really starts with very good observability of their workloads, regardless of the placement. They have some in the colo, some in the data center, some in the cloud, and it’s all over the place. So they really need to drive that enterprise observability. Only if you know what is there and how well it’s being utilized will you be able to optimize. Now, optimization leads to cost savings, and cost savings lead to fueling the next generation of innovation, which is AI. When it comes to AI, there are two sets of ecosystems that customers will really need to bring in and think about. One is the data sensitivity to the AI: how are they building that knowledge base of data, and how are they securing it? The second is driving those use cases and the adoption that returns an investment to the corporation. It’s not AI for the sake of AI, right? It has to return value. And of course, ethics comes in as a critical piece; security and ethics are both critical pieces.
Last but not least is looking at the data center and the investments and trying to figure out what the next five years of that modernization are going to look like, from the network stack all the way to the platform stack. Because we are talking about really cutting-edge technology, and customers have really legacy data centers. So they need to look at it from all angles: cost observability, driving innovation through the platform, building that data fabric layer, followed by making sure that the data centers are capable of handling the next five years of innovation. Bringing all this together is what Discover is all about: Aruba, VMware, virtualization. I mean, everything is all packaged together now. Customers think, oh my God, that’s actually hundreds and hundreds of products. But from a strategic standpoint, what we like to see is: pace it out, phase the innovation, engage with us. And we are here to really help our enterprises transform in this new era.
Daniel Newman: Yeah, make it consumable, outcome-driven. That was why I was very positive when we wrote one of the earliest research pieces on why GreenLake would work and, you know, why HPE’s pivot would work. So congratulations on all the progress. We’re very excited to join you on the road to Discover. It’s always been a great event. Pat, we have been coming to this event for at least a decade now, previously when it was one HP, then HPE, then the GreenLake era, but so much progress. We’re excited to continue to stay with and watch how the company continues to transcend and transform in the AI era. Let’s have you back on again soon. Let’s make sure this isn’t a one-and-only, and we will see you.
Menaka Sundar: In Vegas, 100%. See you guys in Vegas. And I am excited to meet with many of our customers and be a part of their problem-solving strategy and drive this adoption. Thank you very much, guys.
Daniel Newman: Absolutely. And stick with us. Subscribe to be part of the Six Five community. We will be at HPE Discover; there will be many conversations, interviews, partners, customers, and executives, all happening on the ground there. But for this show, this episode, you’re going to have to take this sneak preview, and you’re going to have to wait for the rest. So subscribe and be in our community. Can’t wait to see you more. Got to go for now. Bye-bye.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A seven-time best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.