Exploring Scalable AI Journeys With Dell Tech’s Server PowerEdge XE – Six Five On The Road

Just starting your AI journey or scaling up? 🚀

Dell Technologies is working to provide solutions for customers scaling or adopting AI and HPC systems.

Patrick Moorhead and Daniel Newman are joined by Dell's Director of HPC Product Management, Armando Acosta, at Dell Tech World as he walks through the XE portfolio, designed to support a wide range of AI workloads — from early experimentation to full-scale production. Tune in to the full conversation to learn how Dell's solutions remain competitive and at the forefront of addressing the dynamic demands of AI and high-performance computing.

Key Takeaways:

  • Flexibility and Scalability Are Key: Customers' needs vary, which highlights the need for flexibility, ease in transitioning to production, and scalable infrastructure backed by robust data science tools.
  • The Need For Speed: They explore the critical role of network fabric and the architectural considerations imperative for optimal AI performance and scalability – often lacking in legacy systems.
  • Built Different: The XE Compute and L11 offerings from Dell are built AI-ready to serve as foundational elements for organizations starting their AI ventures.
  • Tackling Power & Consumption Challenges: Dell’s approach to addressing power and cooling within data centers, introducing innovative solutions like direct liquid cooling (DLC) and heat capture cabinets.
  • The AI Arms Race: Armando underscores the unique value propositions of the Dell XE portfolio in catering to the burgeoning needs of AI workloads.

Learn more at Dell Technologies.

Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On The Road here in Las Vegas, Nevada for Dell Technologies World 2025. Dan, it's infrastructure, it's software, it's services, from the client to the data center and everything in between.

Daniel Newman: And there’s that little pesky thing called AI. And it’s been in every part of it, injected, included. It’s in, it’s on, it’s around, and it’s for the customers, which has been a big focus here at Dell Technologies World 2025.

Patrick Moorhead: It is good. You know, we've kind of kept pretty high level about the infrastructure out there, but it was great on stage today, right? Michael dove in, he's talking about giant rack servers, with Grace Blackwell, with AMD, with Intel, pretty much all of the above. And I'm glad we're going to get into this. And here we have Armando from Dell. So let's geek out. Welcome to The Six Five.

Armando Acosta: Thank you for having me. It's a pleasure to be on stage with you. I've seen you a lot, so it's great to finally be here. Yeah, it's great.

Daniel Newman: Yeah. It’s great to have you here with us. And by the way, I’m not going to just gloss over hearing your voice crack. Congratulations.

Patrick Moorhead: Thank you. I’m going through puberty.

Daniel Newman: That’s a big moment for you.

Patrick Moorhead: Yes.

Daniel Newman: It has been a long day. We’ve been with you most of the day. I think this is our 9th or 10th conversation, all of which have been great. Armando, excited to have this one with you.

Armando Acosta: Thank you.

Daniel Newman: You're really involved in product planning across the board. You're looking at a number of servers that you own, that you're responsible for. We're seeing a lot of new stuff rolling out. Kind of curious, you know, you're one of the people at Dell that's really helping sort of understand demand, drive demand. We know that you're part of the business growing really fast. What are you hearing from the customers?

Armando Acosta: It's a really interesting time in AI, right? I think right now, when you look at AI, you know, it's much further along than it was five years ago. You know, models are better, infrastructure is better. But really what we're trying to do at Dell is, really, our customers are always going to be our guiding light. We want to listen to them, learn, and really just bring the products to market that they want to have. But what's interesting is, you know, not all customers are created equal and not all data centers are created equal. So really what we try to focus on is trying to meet our customers wherever they're at in the AI journey.

Patrick Moorhead: Right.

Armando Acosta: And as you know, one size doesn't fit all. So the beautiful thing about DTW is that I get to be here for three days and I get to talk to 40 different customers and really understand their needs. You know, from, hey, how much power per rack can you bring? Right. Essentially, you know, what type of AI use case do you want? Right. Everybody, the rage is large language models. Right. Everybody wants to do LLMs with 20 billion, 30 billion, 40 billion parameters. But that's not the only AI use case out there. Right. There's still image recognition, those types of use cases, healthcare use cases. So we really try to make sure that we essentially have a product for all those different use cases.

Patrick Moorhead: Yeah, and that's a challenge. I mean, we just had a discussion with the SMB group a couple conversations ago, and the difference between what they want and what a hyperscaler wants is very different. And then you have the enterprise. Right. We needed a brand new GPU from Nvidia, announced today or last night, to be able to fit the cooling, reliability and software compatibility that they're looking for in inference. But a lot of different trends are going on, a lot of pressure and feedback. Are there some that are driving change bigger than the others?

Armando Acosta: Yeah, I mean, when you look at the market, you do see this bifurcation, right? So you have your tier one CSPs. You know, they’re going to be a certain animal, they’re going to want certain things. But really what we’re trying to focus on is enterprise and commercial. And really what we want to do is lower the barrier of entry. Right. You know, you know, we talk about, hey, you have GB200 and you can have 72 GPUs talking to each other, which is amazing. And we have a customer set for that. But when you look at our enterprise and commercial customers, they’re really not there yet. Right, right. And so what we want to be able to do is come to them with a point of view to say, hey, where are you in your AI journey? Where are you starting from? And then essentially, let’s meet you there. Right. So for example, you know, we just announced our AI PCs. You know, this. A data scientist doesn’t start just running a model in the data center, right? They have an ideation phase, right. They want to do a little bit of data, you know, munging, you know, data cleansing. They run a small model and then once they see some value there, hey, they get a little bit bigger. Maybe they want to run it on one server. Right. Hey, once it gets a little bit bigger from there, more data, hey, they see more value, hey, let’s run it on five servers, hey, maybe let’s start with the PCIe GPU first, right? That might be good enough. Maybe then let’s go to something like NVLink or something that AMD has where the GPUs can communicate with each other. And then eventually, hey, we want everybody to run hundreds or thousands of racks at a time. But you know, it takes you time to get there.

And so for us, enterprise commercial, lower barrier of entry, come in with a point of view. And then not only that, you know, you have that build versus buy spectrum with the enterprise commercial customers. You really see them to where they say, hey, I want you to come with a point of view. But not only that, solve all the hard problems for me, right? Make sure all the open ecosystem tools are there so that I can develop my use case, right? Hey, make sure I have the right server building block, right? Make sure I have the network building block, make sure I have the right storage, and then, hey, I layer the software on top of it and make sure that I'm going to be able to execute that use case. Because here's the deal, I talk to a lot of customers and they say, hey, I want to have 20 use cases in production next year. And I'm going to say, hey, let's start with one, right? Let's get that first one right, let's hit a home run, and then, hey, we can get to the next one. So that's really what we're trying to come in with: a point of view, a guided view of where you need to get to. And then not only that, let's help you define the use case and then let's help you execute that use case.

Patrick Moorhead: And that's a much more mature conversation, above the box, that I think is refreshing for Dell to have. And in the end they're looking for an advisor for their business. Not all people show up on the doorstep with that. A lot of people show up on the doorstep with a generic server, but the value around that is not always there.

Armando Acosta: Exactly.

Daniel Newman: We've entered the era of all companies are tech companies. We sort of said that over the last decade of digital transformation, but in reality now every company that's going through this process, if they're doing product management, they're basically looking at, this is how a process used to work, and we want to develop a new product, a new service that our customers can consume, where they get the benefit of AI. I mean, I was thinking a lot about this XE partnership with Nvidia, kind of the Blackwell footprint, but it's small. Whether it's the footprint of the desktop devices on that client side, and then of course, now you're building these servers that are really inference and tuning, you know, like, hey, you may have a small rack and you've got a lot of data on prem. You want to do a certain amount of the AI here. You want to get the AI, the infrastructure, close: edge cases, small data center use cases. And it seems like XE is kind of perfectly designed to help start that journey for companies that are going to evolve to be doing probably a hybrid, because enterprises do hybrid. I mean, they just do. It's not all on prem, it's not all in the cloud. So how are you sort of telling that journey of XE being kind of the get it started for AI in the enterprise?

Armando Acosta: Well, one of the biggest things we focus on with XE is really, how do we make sure it's the best building block for AI, right? And the other big thing we try to do is offer flexibility and choice. You know, when you look at GPUs, you know, we have Nvidia, which is a market leader, but we also have interest in AMD GPUs. You know, we were just talking about Intel Gaudi, you know, those opportunities. So really what we're trying to do is address that silicon diversity, give our customers flexibility and choice on which silicon they want to use. And then not only that, we test and certify these building blocks. Right? We have strong partnerships with Intel, AMD and Nvidia. And not only that, we do tests and certification to make sure that that is the right building block. We do benchmarking. We talked about benchmarking earlier, right? Hey, how well can you train a Llama model on this? How fast can you expect it to go, right? Not only that, we go into the details of, okay, hey, how many data scientists do you have? How many concurrent users do you have? How far do you want to scale? How big is your data? And so when you have those types of conversations, we want to make sure we have the right building block for every one of those. And it's not just, hey, here's a server with a bunch of GPUs, here you go, have at it. We really want to be able to show them, hey, here's your right building block. And if this is the way you want to scale, here's the path you take. All right, if you want to do GB200, here's the path you take. If you want to do a PCIe version, here's the path you take. And so that's what we're trying to do with XE. Do we want everybody to get there at once? Yeah, we want to get you there, but we know that it takes time, and so you just can't start there. So we want to give you the right building blocks.

Patrick Moorhead: So one of the biggest challenges of late that we found with AI infrastructure is the amount of power draw and the ability to cool it. And you're looking at a 6x increase in power per rack going from the H series to Grace Blackwell, and it could be 10x from H200 to the next generation after that. And we're looking at chilled water or special non-chilled water. I know you just made an announcement today on that, but how are you approaching what looks like power draw increasing up and to the right? Ten years ago this was looked at as exotic, nobody wanted to touch it, particularly in the enterprise. Only high performance computing was even using water cooling.

Armando Acosta: Right. I mean, it's interesting. Right. You know, there's a large spectrum, but really, we're going to still continue to do air cooled systems. We're still going to do a 19 inch form factor. Yeah, everybody wants DLC, the 21 inch form factor, the ORv3 rack, you know, those good things. But for us, we're focused on both air cooled and liquid cooled. Right. The biggest thing that we really try to do is, you know, for example, in Europe, you know, some places in Europe can only bring 15 kilowatts per rack. You know, they're landlocked, they can't go up. You know, if you look at Europe, the buildings are very, very old. Right. And so what we try to do is really understand the customer's needs. Right. You know, we talked about the enclosed rear door heat exchanger today. You know, there's some big news around that. When you look at that type of news, really what we're trying to do is, if you look at GB200 and running a full rack of that, that's 136 kilowatts.

Patrick Moorhead: Yes.

Armando Acosta: All right. Not everybody has 136 kilowatts, but with this enclosed rear door heat exchanger, we're actually able to up your density by 20%. And hey, you can run it in an 80 kilowatt rack versus a 136 kilowatt rack. Right. And then when you look at the innovation, you know, rear door heat exchangers, we talked about HPC, they've been around for a long time, right? But what we did is, hey, we enclosed it, so that when you're blowing that hot air, you're not blowing it onto the next row of servers. Right. With that heat exchanger and that enclosure, we're reusing that hot air, we're re-chilling it, and we're making sure that it stays in your capsule, and that's how you get that efficiency of scale and scope. You know, we have stuff like liquid-assisted cooling, those types of things as well, that we're looking at. So it's an interesting time. But like I said, we're going to do both. You know, we want to do air cooled, we want to do direct liquid cooling, because there's going to be a need for both. And I do really see enterprise and commercial customers sticking with air cooling as long as possible. Right. Because that's how their data centers are built. Eventually there's going to be that tipping point where you go to direct liquid cooling. But we want to make sure that we meet those customers halfway, not say, hey, all you can do is DLC, and if you don't want DLC, we don't have anything for you. No, we're not going to be that. We're going to give you everything that's good.

Daniel Newman: So, Armando, as you’re sort of talking to customers, you’re helping plan this journey, you’re helping them traverse the biggest systems, bringing enterprise a little bit closer out to the edge. What are a couple of the key recommendations you and your team are making to these enterprise customers in terms of getting on the journey and partnering up with Dell?

Armando Acosta: The number one thing we say is focus on the use case. Okay, what is the use case, what are you trying to accomplish, and what are essentially the business results you want? We always start from there and then we work backwards. Right. What we always try to do is, okay, how big is your use case, how big is your data set? You know, how many GPUs do you actually need? What are essentially your data center barriers that you need to work against? And then from there we'll work backwards and we'll build the right solution for you. Right. The other big thing that we also try to do is we really try to hone in on the AI use case. Right. So, hey, okay, what type of model do you need to actually use? Okay, hey, how big is your data set? Where is your data? Okay, is your data clean, cataloged and tagged, so you can actually go and do something with that data? And it's really breaking down those things, because, you know, everybody has that great building block, and that's what we do really, really well. But what we understand is you still have to have the model management and governance, you still have to have the data management, the data governance. You still essentially have to have the data protection, right? You still have to have the security. And then, oh, by the way, those things all have to be interoperable with each other, they all have to communicate, and they have to give you one single version of the truth. And that's where we're guiding our customers towards.

Daniel Newman: Smart. Armando, I want to thank you so much for joining us here at Dell Technologies World 2025. Let’s have you back some time on The Six Five and have a great event.

Armando Acosta: Thank you very much and it’s a pleasure talking with you.

Patrick Moorhead: Thank you.

Daniel Newman: And thank you everybody for tuning in to The Six Five. We're On The Road here at Dell Technologies World 2025. We're going to step away for a few, stick with us.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
