Looking for practical advice on successfully deploying enterprise AI? 💡
At Dell Tech World 2025, hosts Patrick Moorhead and Daniel Newman are joined by Dell Technologies' John Roese, Global CTO & Chief AI Officer, who delves into Dell's firsthand experience with AI, sharing insights into the company's journey, challenges, and strategies. Tune in for a unique peek into how leading organizations are navigating the implementation and integration of AI into their operations.
Key takeaways include:
🔹Overcoming AI Implementation Hurdles: Roese shared Dell’s own journey and strategies for navigating the common challenges organizations face when implementing AI, emphasizing the importance of understanding AI maturity frameworks.
🔹The Evolving AI-Infrastructure Nexus: The conversation explored the dynamic relationship between AI and technology infrastructure, anticipating AI’s future trajectory and its profound impact on business and society.
🔹Practical AI Advancement for Businesses: Dell’s insights offered actionable advice for companies looking to advance their AI endeavors, highlighting the critical role of AI upskilling and proactive engagement with new use cases.
Learn more at Dell Technologies.
Watch the full video at Six Five Media, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is On The Road here in Las Vegas, Nevada. We’re at Dell Tech World 2025 and we’re talking morning, noon and night about AI. Whether it’s on the edge with PCs, the industrial edge, small scale data centers or even hyperscaler data centers. It’s AI all the time here.
Daniel Newman: Yeah, I mean we knew that would be part of the plan, but I think there were some surprises here. Sometimes these shows can be very product focused: this is the newest thing we're launching, this is the newest device, the newest silicon, this is the newest networking switch. This was very outcome focused. I think the keynotes were very much about bringing value. And so I think that's a pivot we're going to see more of. I'm not surprised at all to see Dell Technologies leading that conversation, but that really was the biggest theme for me: we know we're going to get new servers, we know we're going to get new pieces, but hearing how they're being put to work is what I think everyone's been waiting for.
Patrick Moorhead: It is, it’s what we’ve been waiting for. We called 2025 the year when AI really makes an impact. And even our summit is themed after that.
Daniel Newman: Which Michael Dell will be keynoting.
Patrick Moorhead: Very happy about that. But hey, let's jump in here and have a chat with John Roese. John has two jobs now. You knew him as Global CTO, and now he is Chief AI Officer as well at Dell Tech. John, welcome to the show.
John Roese: Great to be here.
Patrick Moorhead: Yeah, yep, absolutely. It's great. Like you're getting paid twice as much. But it is funny how a lot of people at Dell have two jobs on the executive staff.
John Roese: Well, you know, it's this intersection between technology and business. In my world as a CTO, I mean, I started a lot of our AI journey like nine years ago.
Daniel Newman: Right.
John Roese: And nobody noticed for about seven years, and then generative AI happened. But that was the responsibility for the long-term view of technology. The Chief AI Officer job is very much about the immediate need of AI inside of the company. And so this idea of having the same person who actually understands where we're going and is responsible for making sure we do the right things today makes a good bit of sense given these kinds of topics. So they aren't two entirely distinct jobs, but they do have very different outcomes. They're all related to, quite frankly, navigating the AI journey, which is going to define the industry for the next couple of decades. So it makes sense.
Daniel Newman: The good news is the extra compensation for John is all long-dated treasuries. Exactly 2% yield.
John Roese: Yep.
Daniel Newman: So, John, you know, for years we've been having various versions of this conversation, and I've always really appreciated you trying to think five and 10 years out into the future. But the organizations right now, I mean, we've done large studies: CEOs all want to do AI, but they're not really quite sure how it's going to work out. We're seeing, I think, this endless appetite for data center construction. Capital's pouring in. But when you look at where we all get nervous, it's the consumption. Who's using it? How are they using it? How much are they paying for it? Your role gets you working closely with the teams, helping them. So talk a little bit about what is motivating enterprises and how you're helping them actually get from "we want to do it" to "we are doing it."
John Roese: Yeah. I mean, here's the deal. The goal of an enterprise with AI, whether they understand it or not, is to get into production, which means you have applied AI to the most impactful processes and the most important parts of your business, and you have changed in a positive way the productivity of your company. You have made more money, you have improved your profit, you have reduced your cost, you have become more competitive. And so fundamentally, you know, people may not fully understand that's the goal, but that really is the outcome. That's what we've done at Dell. That's what some of the bigger players, like JP Morgan yesterday, were talking about. The ones that feel like we're kind of over the first finish line can declare that we're in production. And in production isn't a technical thing. It is, have you made a business impact? Now the challenge is to get there. There are these two questions you have to be able to answer, and many people haven't answered both of them and they're struggling. The first one is, what are you actually trying to do? What business process are you fixing? What changes with AI that makes you a more productive, more effective business? And then the second one is, if you knew that, then how are you going to do it? And that's the mechanical part of it. What does your organization look like? Which technology are you going to use? Where are you going to run that technology? How is it going to be instantiated? And so these are not trivial questions, but if you can't answer them, you get stuck. And what we've seen is the vast majority of enterprises are kind of in this POC prison, where they have interesting technology activity but they haven't quite figured it out with any kind of laser focus.
Where am I trying to bend the curve on productivity? Where is the impact to my business actually going to be that I can measure and focus all my energy on? And then even if they have that: okay, well, how am I going to do that? Because people are confused. Do I have to use my multi-cloud strategy as the foundation for AI? I personally think that's a terrible idea. You made your multi-cloud decision five to ten years ago, before AI existed. What makes you think that is the right infrastructure for AI? Well, that's a pretty good discussion to have. In the case of the business process, you know, just simply achieving goodwill and happiness is probably not a great outcome, but actually bending the curve on financials will be. And you know, at Dell we have taken a very structured approach to this, and we decided we wanted to achieve financial impact on the company. We believe that the sources of that were our supply chain, our sales organization, our services organization, and engineering. And we believe that the only way to actually achieve this is to understand our processes deeply and target AI in the process areas that are the most inefficient and can be improved. That's the exercise I've been driving for about a year, and we've gotten over the finish line: with the Dell sales chat that we just rolled out about seven weeks ago, all four of those places are now in production at scale, achieving significant ROI. That's the journey people have to get to, and the impediment to getting there all comes back to: do you know what you're really targeting and how it'll impact your business? And do you have a clear point of view about mechanically how you'll go about doing that?
Patrick Moorhead: John, a follow up to that. Are most of your largest customers following the same path that you are following?
John Roese: Here's the interesting thing. I've used this analogy before. We're in year three of the gen AI cycle. ChatGPT happened a little over two years ago, in November. Year one was a wasted year, and it wasn't if you were CoreWeave or X, because there was a lot of model building. But for enterprises, year one, the only tool we had was ChatGPT. That is not an enterprise tool. Okay. We had a lot of ideas, but we didn't accomplish anything. Year two was the do-it-yourself year, because even if you knew what to do, there were no off-the-shelf tools anywhere. And so fundamentally you had to build everything yourself. Most companies couldn't do that unless you were us, unless you were a very big company with a lot of technical resources. It wasn't efficient, but it was necessary because there weren't turnkey products. We're in year three now, where fundamentally a couple things have changed. Two big things: we have more and more of the AI stack consumable as a product. You can buy an AI factory, you can buy Cohere, you can buy a coding assistant. And that makes it far easier for an enterprise to go from here to in production. And the second condition that's true is you have examples now of companies like Dell and J.P. Morgan and others that have actually gotten into production. And we're all sharing what we learned, which means you as, you know, an enterprise CIO or chief AI officer without immense resources do not have to be at the bleeding edge. You can follow other people. And so I think when we talked to our customers a year ago, there were very few that were at the same stage we were at. There were a couple, and we were kind of keeping track of each other and having these conversations. This show, what you're seeing, if you noticed who we brought up on stage, who we've been talking to: think about how many customers stood up at this show and didn't just say AI is cool. They said, this is what I'm actually doing with it and what the impact is, and I'm in production. That is a marked change from a year ago. So now I would argue probably 95% of enterprises are still before that stage, but we're seeing that tipping point. To your opening comment, we're seeing the enterprise start to activate, not because everybody did more work, but because it became simpler to do that work and there were archetypes and patterns and processes that they could learn from. So I'm super bullish. Year three is the year that we actually see the enterprise start to take off. And that is good for everybody.
Daniel Newman: It seems like another driver too. And I know this is a question I've asked about agents, but I really ask this about all AI: where does it start in the enterprise, meaning that, you know, the data lives on prem, not in the cloud. The cloud has sort of historically been a little bit faster at making instantly consumable tools versus "build your own AI." Because like you said, two years ago it was, buy your own server, build your own model. There are a lot of people that got about an inch down the road and were like, wow, I just wasted millions of dollars.
John Roese: Yep.
Daniel Newman: But the third thing that happened in the last year was kind of all the ISVs and SaaS companies saying, don't build anything, or don't build much. And I'm guessing that you're kind of looking at that, like, in the real enterprise environment, it's probably a little bit of A plus B plus C, yes, but you are kind of getting these pressures. And if you're an enterprise, you're like, where do I start? Do I bolt it into my Salesforce instance, or do I need to kind of get back to the data I have on prem?
John Roese: Yeah. The important thing for an enterprise to navigate that world that you just described, which is very accurate, is that you have a lot of sources where AI can materialize to help you do stuff. Some of it is stuff you own, some are things you rent, some are things you use. That is all true. But the most important thing you have to work out to really navigate it is to understand where the sources of competitive differentiation are for your company that you need to control. And if you don't understand that and make AI decisions anyway, let me give you a really terrifying example. With agents, you should think about agentic as the process of outsourcing work to a machine. That's what it does. It's not a chatbot. It doesn't just make a human better. It's literally doing the work; you're taking a skill and doing it in a machine. If you, I don't know, use a machine that you don't own, that you just get to use, that someone else fully controls, and you are just a consumer of it, and you inadvertently move a piece of work. Like, I don't know, let's say at Dell, we took the expertise necessary to build RAID algorithms in storage, and somebody gave us a compelling offer that if we would just use their agent in a third party that we had no control over, and we gave them our data, they would provide that service back to us. That would be a terrible decision, because if they fail, then we no longer have the capability to do RAID algorithms. That's a very extreme example. But when you start thinking about this ecosystem, if you know that your source of competitive differentiation is X, Y and Z, in this case, we have a very exceptional engineering capability that is differentiated in the market. We do not want that to be done by a third party in a way that we lose control of, because that unique skill set makes us us. On the other hand, if you go and look at, I don't know, let's take HR or IT. Having a unique approach to HR in the AI world at Dell probably will not differentiate us. Having no AI in HR will also probably put us at a disadvantage. But having just best-in-class, standardized ways of using AI to make HR better is probably something that is not a distinguishing factor between us and other companies. You want to be at the front end, but you don't need to be unique. For those, you know what, there are plenty of third parties that will probably provide us with great tools, and we should pick them carefully and we should use them and make that function better. But in a place like sales or services or supply chain or the engineering organization, it's a completely different discussion.
So that first question, what are you trying to do with this, is so important because it describes where you want to preserve value and where AI can differentiate you, and it also describes where it won't. And as soon as you know that, you know where you should do the heavy lifting and really control your destiny, and you know the other places where, honestly, you just need to be good enough: go with the pack, use the tools, keep the investment at the lowest level to get the biggest return. And honestly, you know, it just makes you more productive, but it doesn't make you unique. On the other hand, if I make my engineering organization significantly better than anyone else's, I am unique. I have a sustainable competitive differentiation. So it's weird, we never thought about technology at this kind of aggregated level, that it was the source of competitive differentiation or could be the vehicle through which you lose it if you get it wrong. And that's why it is so important to think about this strategically.
Patrick Moorhead: Yeah. The argument that you brought forward is very pragmatic in that it augers in on what you do best and where your strategic advantage is, and then you outsource or lease everything else, every other capability. By the way, it's very similar to the conversation on whether you should write your own application.
John Roese: Yes.
Patrick Moorhead: That was always typically the answer. Sticking with augering in, I want to auger in. I want to go about three or four clicks deep into the conversation. I want to talk about infrastructure. So Dan and I, at our company, we have the luxury as analysts, and I know it's not a luxury, you do this as a CTO, of understanding silicon roadmaps, foundry roadmaps, core technology and universities, and looking forward, not just one to two years out but, you know, dare I say, anything beyond five when you're doing a foundry is probably too far and you're going to get it wrong. But we're in a situation now where we've got 600-kilowatt racks that are using direct liquid cooling. The technologies in foundries, the nodes, are becoming a lot more expensive and they're becoming a lot slower. In fact, node shrinks are barely actually happening. We just call them nanometers, because I don't know why we do this.
Daniel Newman: Markitechture.
Patrick Moorhead: Markitechture. But seriously, how is this industry going to keep pace with the future of AI? Let's just pick an arbitrary five years from now. We're headed to 600-kilowatt racks and probably 1-megawatt racks.
John Roese: No, not probably. We are heading to. Okay, I just.
Patrick Moorhead: It’s not a fact.
John Roese: There’s no debate about that. That’s what I’m hearing.
Daniel Newman: I’m gonna let John say it.
Patrick Moorhead: Yeah, no, no, seriously.
John Roese: No, it's a when, not an if. Trust me. I mean, and by the way, beyond there too. And we know that because we are gonna continuously densify these systems. We haven't run out of juice on the semiconductor. We have lots of tools at our disposal. Some of them are actually silicon geometry, other ones are around silicon architecture. How do you build the systems? How do you build better interconnects? There are lots of levers to pull. They are not infinite. But at a system level, when you bring this up to rack and row scale, you have a lot of opportunities to continue to densify and increase the overall aggregated capacity. So I'm not worried about us running out of juice in the near term here. That being said, we have to think beyond just that data center architecture about what the footprint of AI is. The first thing we know, and this is especially true with agentic, is that it becomes a distributed system more and more. The idea that everything runs in that data center, especially on inference, doesn't make sense when you are moving into an agentic environment. And because of that, you now have to look at what infrastructure is. Is infrastructure just the large racks in the data center, or is it broader than that? In fact, we firmly believe now that your edge infrastructure, which is really a distributed set of compute that is potentially more compute than the data center if you add it up, but constrained to units of small nodes, is part of that footprint, and then you expand it out even further to the client devices. We have this thing called an AI PC. Why are we doing this? We're not doing it because we just want to improve background performance on a video call. We're doing it because we think that if we put 50 or 100 or a couple hundred TOPS on every device, we can distribute some of the agentic framework directly out onto those devices and actually move and share that burden with the data centers, which kind of bends the curve a little bit in our favor. We've seen examples in agentic where, if you actually look at distributing the functionality so that the agents on the device play a meaningful role, up to 80% of the compute necessary to accomplish the task doesn't happen in the data center. And by the way, that's not going to hurt our data center business, because there's almost infinite demand there. And now we have another level of capacity, which is edge nodes, and then we have another level of capacity, which is the computing node.
And the nice thing, back to your point about silicon geometry and improvement: the cycle that the AI PC is in, in terms of densification and performance, is way earlier than the cycle of data center GPUs. So we've got a lot more headroom to innovate. And you saw us announce a discrete GPU in a laptop with some pretty hefty performance and very low power. That's like the first one of those. We're going to see lots more, which means that we're going to have many levers to pull to increase the total aggregate TOPS capacity available to get to an outcome. But if we don't change the AI architecture from classic monolithic to distributed agentic, we can't take advantage of it. These things are all very interrelated. But looking out further, I'm not one of the people who thinks that we're going to run out of juice and hit a wall. If we think about it as a system, we have a lot of levers to pull in terms of technology architecture, where compute lives, optimization techniques; all of these things are really nascent right now. Which, you know, maybe when we come back five years from now, we'll say, okay, there seem to be some walls materializing, but we're probably good for a while here, right?
Daniel Newman: Well, John, all we've got to do from here is build enough power and energy to support everything that you just said. I know we're going to get there. We do have to leave it here. I want to thank you. I appreciate the conversation through both of your roles as Global CTO and Chief AI Officer. Enjoy those long-dated bonds. I hope they yield a big payout in the long run. But seriously, John, thanks so much for joining us here at Dell Technologies World 2025. Let's do it again soon.
John Roese: Yep, absolutely.
Daniel Newman: And thank you, everybody for being part of The Six Five On The Road at Dell Technologies World 2025. We’re going to go on a break. We will see you back here shortly.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, most recently of "Human/Machine," Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.