On this episode of Six Five – On the Road, host Patrick Moorhead and Daniel Newman are joined by Dell Technologies’ Varun Chhabra, SVP Product Marketing, for a conversation on the collaboration between Dell and NVIDIA known as the Dell AI Factory. This innovative initiative was one of the highlights at the recent NVIDIA GTC event, showcasing how both companies are at the forefront of driving enterprise adoption of AI technologies.
Their discussion covers:
- The unveiling of the Dell AI Factory in partnership with NVIDIA at GTC, and its significance for the AI industry.
- Insights into retrieval-augmented generation (RAG) and its role in enterprise adoption of AI technologies.
- How Dell’s collaboration with NVIDIA, including the integration of NVIDIA’s Blackwell series, is shaping the future of AI solutions for customers.
- A sneak peek into what Dell Technologies has planned next in the field of AI and emerging technologies.
Interested in learning more? Watch the State of AI Adoption with Dell Technologies – Six Five – On the Road with Kyle Dufresne and Ihab Tarazi.
Learn about how Dell Technologies accelerates your journey from possible to proven by leveraging innovative technologies, a comprehensive suite of professional services, and an extensive network of partners at https://www.dell.com/ai.
Register for #DellTechWorld in Las Vegas to hear how Dell is unleashing the #AI revolution and igniting the power of technology at https://dell.to/3PCBdBR.
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: The Six Five is back and we are doing post GTC wrap-ups all over the place. I’m here with my bestie, Daniel Newman. Dan, isn’t it amazing that an event like GTC is kind of like the Hallmark card, it just keeps on giving, right? I think since we’ve been on TV, we’ve been pontificating, our teams have been writing a lot of analysis about it. This has been great. What did you call it, or somebody call it? The Woodstock of tech?
Daniel Newman: Yeah, that was the AI Woodstock. I think it was our friend Dan Ives that said that, maybe on CNBC, but for some reason I said it, as I quoted him, but then I got credit for saying it. So you know what, I’ll take it, Pat. Sometimes the best ideas land on the second try. But yeah, it was a great event. And by the way, when you think about selling out big stadiums and venues, you generally think about bands, you think about sports, you don’t think about tech conferences. But this particular one ran riot.
And I think we knew this was going to happen, it’s been a great year. It’s been super-exciting to see all of this AI excitement come to life. There’s a pragmatism and there’s implementation, but then there’s also this excitement and enthusiasm. Hopefully, not too much exuberance, though, but yeah, what a great event.
Patrick Moorhead: Yeah. One company that seemed to get a call-out, I don’t know, preferential treatment, from Jensen-
Daniel Newman: Deserved, maybe?
Patrick Moorhead: Deserved, yes. And we’re going to talk about that: it was Dell Technologies. Michael Dell was the only person mentioned with a camera shot; he was in the front row, with a little bit of a nice golf wave there. And then we saw Michael post this pretty awesome video of Jensen going through the Dell Technologies booth, and what did he say, Daniel?
Daniel Newman: I think he said, basically, “If you’re building AI,” and don’t quote me because I know Michael watches some of the stuff we do, but, “if you are building, implementing AI, you can get what you need. You need compute, you need network, you need implementation services, consulting. You can get it all from Dell Technologies.” Jensen the spokesperson. Now, if he starts selling vehicles, I’m going to worry. But this was about the best I’ve ever seen, in terms of an endorsement, from NVIDIA CEO, Jensen Huang.
Patrick Moorhead: Yeah. So let’s dig underneath. What was all the excitement about? Let’s bring in our guest, Varun, great to see you again.
Varun Chhabra: Hey guys.
Patrick Moorhead: I’ve lost count of how many times you’ve been on the show but thanks for coming on. And I want to talk about what got everybody excited here.
Varun Chhabra: Hey Pat. Hey Dan. Thanks for having me. It’s always a pleasure. I love our conversations all the time. Yeah, as you said, even though GTC was a few weeks ago, it still feels like such an amazing… The glow from the event, having attended the event, just seeing the innovation that’s happening across the industry and how you had everything from concept cars to robots. It was amazing, it was eye-opening and so inspiring, as someone who’s in the tech industry.
But yeah, just really, really exciting event. And of course, as you mentioned, we have a really, really strong partnership with NVIDIA that, as it turns out, showed up really well, both in the keynote and at the booth and other places as well.
Daniel Newman: Varun, it really is a testament to the work that you’re doing to get that kind of recognition. It was not something that came about lightly. And while Pat put out one of the fun tweets, jokingly showing when he was on CNBC next to Jensen… Nobody got on stage with Jensen, to be very clear. It was his stage, it was his event, and it was the moment. And really, back in 2022, when NVIDIA and the whole market were dwindling, both Pat and myself were saying at times, you’ve got to watch this AI trend. This trend is going to be huge. And now that everybody has caught up, we’ve seen all that enthusiasm, but why? Varun, give us the rundown: what was Dell talking about at GTC, and what is the Dell AI Factory that you’re building with NVIDIA?
Varun Chhabra: Yeah, there’s a lot to unpack, Dan, but let’s start, as you said, with the AI Factory. I think the genesis for the AI Factory with NVIDIA really comes down to one question we hear from enterprises again and again: help me simplify this. Most organizations see the potential, but when they start getting into implementation, looking at the different piece parts that have to come together, the questions that have to be answered, which are not just technology questions but strategy and stakeholder-alignment questions, there’s a lot, as both of you know, to actually get done. And going from concept, to selecting a use case, to actually implementing and then deploying to production is hard in a rapidly evolving ecosystem.
And the amount of stitching together of different piece parts you have to do is quite significant. What the Dell AI factory with NVIDIA really aims to do is simplify adoption of AI technologies or AI workloads within enterprises specifically. And the way it does it is it combines Dell and NVIDIA infrastructure. So if you think about the infrastructure layer, compute, storage, networking, even laptops and workstations, because let’s face it, you can have very large factories, you can have small factories out of the edge. And then also with capabilities to be able to extend these things into off-premises environments or off-premises hybrid scenarios.
It takes all of the infrastructure capability there, packages it together with NVIDIA’s AI enterprise software, and then also has professional services from Dell that are jointly developed by Dell and NVIDIA to help customers wherever they are in their journey to be able to deploy and get the most value out of their AI workloads.
Patrick Moorhead: Yeah. One thing I appreciated, listen, I love infrastructure. I’ve been doing that for a long time, not just pretend, but actually doing infrastructure in my past. But I like that you started off with the use case, that you have services that you’ll sit and go through this. My first reaction was, wow, isn’t this what McKinsey does, or somebody like that? And they do and so do companies like Bain, but here’s the deal again. I ran corporate strategy for a company and managed Bain and McKinsey and all these relationships. They don’t typically understand the technology.
So it would seem that with a Dell, you would get the tech and the process of, hey, where do I start here? So I really liked that more consultative approach, as opposed to waiting for a GSI or waiting for a McKinsey to make some recommendation. I like the threading, and I would have to imagine that you’re going to get to a result a lot quicker. Now, is that true, or am I making things up here?
Varun Chhabra: No, look, we have an ecosystem that we play with, right? It’s never one size fits all, and certainly the GSIs that you mentioned are in our customer accounts and they are getting advice from them as well as they rightfully should. But as you also pointed out, I think there’s a different level of trust and trusted advisor status that you get when you’ve actually implemented these things yourself. Right? So one of the common things that we talk about within Dell, since we’re going on our own on AI journey, is that, well, when you start whiteboarding and saying, where are places where Gen AI or AI in general can help, you could have 500 or 600 use cases in a large organization. Where do you start? Even a question like that, we get that question from enterprises all the time.
Well, I know I can do 500 different things. I can work on my supply chain issues. I can improve my customer experience. I can help my marketing teams with content generation, et cetera, et cetera. Where do I get started? So as a very specific example, we have consulting engagements where you can not only work on your AI strategy, but, and I think both of you will get a kick out of this because you’re both practitioners, we actually have workshops that drive stakeholder alignment. Right? Because in a large company, you can agree on one thing to do, but on how you do it, or even what you want to focus on, there can often be different opinions. So it goes from the very start of that journey all the way through: okay, I know what my four use cases are, or my first use case is; how do I figure out what part of my data estate to use?
There’s data in the cloud, there’s data increasingly out at the edge, there’s data in my three or four data centers, et cetera. Well, which data should I use? What’s the data preparation pathway, or how do we prep the data? What’s good data? What’s bad data? Et cetera. Then you go down to, okay, I’ve got my use cases figured out, I’ve got my data strategy figured out. How do I implement this? As you were mentioning workloads, if I’m developing a chatbot and I’ve decided I want to use retrieval-augmented generation, what are the piece parts I use to bring that together? What software modules do I use? Do I know if they work with the infrastructure I’ve chosen? How much storage do I need? To scale to this number of users, how many GPUs should I be looking at? All of these things packaged together, not just services, but software and infrastructure, that’s really what the Dell AI Factory with NVIDIA is trying to do.
Patrick Moorhead: Yeah. One of the biggest things that came up in the conversations that I’m having with enterprises and also people who are helping enterprises is with 75% of the data on-prem or on the edge, and by the way, that’s 15 years after the introduction of the public cloud, how do I light up this data? Right? And RAG seems to be the, I don’t know, the latest darling of a way to light that up.
Varun Chhabra: Yeah.
Patrick Moorhead: What are some of the recent conversations that you’re having that might shed light on this? Is this a snipe that we’re looking for in the forest, or is this the real deal that you believe is going to help enterprises make responses better for their generative AI implementations?
Varun Chhabra: Yeah. Retrieval augmented generation, or RAG, is something that has a lot of interest from companies and there’s a good reason for that. So I would say there’s two things driving that, right? First is, as you said, there is a desire, understandably so, to use proprietary data, internal knowledge bases, knowledge about your end users, knowledge about supplier base, et cetera, et cetera, to create better, more relevant, accurate answers or accurate input for applications that are using Gen AI, even if they’re not directly talking with end users. So there is that desire.
Now, the question then becomes how do you do that? There is either you build a model from scratch, which in some cases, it could be an industry that has a lot of proprietary terminology or processes, like take the legal industry, for example. It may make sense to build a model from scratch. It’ll probably be a very small model, very specific niche model. You can do that. Or you can take an off-the-shelf model and tune it with your data. It turns out both of these options are computationally pretty expensive, as you both know. Right?
The GPU capability you need, the power that’s needed and the amount of data that you need to actually do all of this is inherently very expensive. In some cases, again, it makes sense. It could be financial services industry or healthcare industry where you have regulations that you have to be very careful about and make sure that the data doesn’t enter the public cloud’s sphere, et cetera, or public domain. So it may make sense for you to tune a model or create a model from scratch, and you may decide for those reasons that those are competitive advantages for you. So that’s totally happening in the industry.
But like you said, for 75% of the organizations, what we’re finding is that as they look at what RAG can do, which is, as the name suggests, you take an off-the-shelf model, maybe you’ve done a little bit of tuning on it, maybe you haven’t, and you just augment it with your knowledge base. You point it to documents. We played around with it in my space. It could be five documents on a specific product, or you could actually just point it to your massive knowledge base. And it can actually work and give you those domain-specific answers almost as well as, or in some cases better than, tuned models, and certainly at much less cost. So time-to-market, cost, and the ability to bring in your knowledge base and tailor it to specific things, that’s why we are seeing such a big interest in RAG when it comes to enterprises.
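The pattern Varun describes, retrieving relevant passages from a private knowledge base and handing them to an off-the-shelf model as context instead of retraining it, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration only: the word-overlap retriever stands in for a real embedding-based vector search, and the resulting prompt would be passed to whatever generation model you have chosen. None of the function names are Dell or NVIDIA APIs.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    A production system would use embeddings and a vector index instead."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a RAG-style prompt: retrieved context plus the question.
    The returned string is what you would send to an off-the-shelf LLM."""
    context = retrieve(query, documents)
    context_block = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using only this context:\n"
        f"{context_block}\n\n"
        f"Question: {query}"
    )
```

The point of the sketch is the shape of the workflow: the base model is never modified, so the knowledge base can be updated at any time, which is why RAG is so much cheaper than tuning or training from scratch.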
Daniel Newman: Yeah, there’s so much interest in figuring out a way to unlock the power of the proprietary data that exists within the enterprise, Varun, and different mechanisms and methodologies to attack the data. Because again, what’s available to the public is becoming table stakes.
Varun Chhabra: Yeah.
Daniel Newman: These sort of open-source models. And so the value, the differentiation, the ability for companies to unlock the multi-trillion-dollar AI opportunity is, hey, what do we have that’s unique? And so different mechanisms, different methods or modalities, we’ll just start with them, that we can utilize to get that moving. Now, listen, the star of the show is always the next chip, right? The star of any big show is always the next chip. And so it’s fun to kind of have the leapfrogging, NVIDIA announces something, AMD announces something, NVIDIA announces something. Dell fortunately can do it all. You have the ability because of the great partnerships.
But Blackwell, right? That was the big thing. I did love the part when Jensen announced it and he said, it’s okay, Hopper. And he held out the two chips, and then he went through the specs of Blackwell. But we of course know that this is what’s next. More powerful, faster, trillion parameter models. There’s a lot of discussion on pricing and everything else, but in the end, people are going to be consuming this through Dell. Dell is one of the avenues. And that’s how they’re going to actually be able to buy these systems. The market loved it when they were able to discern how much AI horsepower you had last quarter. You got that huge bump when people were seeing all those servers moving, that backlog building. Talk a little bit about how Dell is working with NVIDIA to bring Blackwell and other future innovations to market.
Varun Chhabra: Yeah, Dan, absolutely. I think that was the moment everybody was waiting for. It was much rumored. So when Jensen walked on stage with the chips, it was really amazing. And so absolutely, as you said, we’re partnering closely with NVIDIA to bring the B100 and the B200 to market. And let’s not forget, Dan, we also have the H200, the upgrade to the Hopper series, coming soon as well. So across all three of those, the H200, B100, and B200, we’ve got servers that can help customers. And then there’s the PowerEdge XE9680, our flagship eight-way GPU, 6U form factor, air-cooled design. That’s something we’ve been working on with NVIDIA for a long time.
And one of the things actually I should say, is speed of light engineering. That’s a term that we use a lot when we’re talking about our work with NVIDIA. Being able to bring these innovations quickly is absolutely one consideration. The other one is how do you make sure that the box itself is providing value to customers, whether it’s advances that make cooling more efficient, and other things. The one thing that’s really interesting about the XE9680, even with the H100s, right, that’s available right now, is that it was designed with NVIDIA for many years, actually, so that it is actually able to handle the cooling needs with air cooling itself. Right? So it doesn’t need DLC, and it can actually be a much, much more energy efficient option for customers. But as you start looking at what’s coming with the Blackwell series, there is obviously a need to be able to support multiple thermal profiles, et cetera.
So what we announced was our eight-way server, the XE9680, when it supports the B200, it’s going to be available with liquid cooling options for the first time. For the B100, we’re going to continue to have the air-cooled system. And then even with the H200, which is coming to market, we’re going to have capabilities for customers. So across the H100 today, the H200, the B100, the B200, we have essentially the same platforms, right? The same design, the same PowerEdge platform that helps customers.
One of the things that we hear a lot, I’m sure you hear this as well, with the pace of change for GPUs, customers are saying, hey, I’m buying these GPUs now. I’m going to get them in a few months. Well, what happens when the new version comes out? Right? How do I think about all of that? And what they love when we tell them is you could have racks that are designed for a specific platform, and you could rely on Dell to have exactly the same platform for all of these different GPUs, right?
So your cooling, your power profile, et cetera, all of these things you can actually standardize on with Dell, because we have that GPU-to-GPU compatibility, if you will, with the same PowerEdge platform underneath it. And then the other thing I would say is it’s not just about the B in Blackwell, it’s also about the G, right? The Grace. So the GB200, which NVIDIA calls their superchip, that’s huge. Jensen talked about the huge amount of horsepower, throughput, and energy efficiency that comes with it. So we also announced that we’re going to support the GB200 superchip when it comes out, with our own compute platform and a rack-scale, rack-level redesign, with the ability to have up to 40X lower TCO for inferencing and 20X better data performance. The GB200 needs a complete redesign in terms of how you think about compute platforms, and we’re partnering with them on that.
And then finally, I know I’ve been talking for a while, but it’s also very important when you’re talking about GPUs, as Jensen reminded us all, that networking is kind of the hidden bottleneck when it comes to AI. Right? So we announced that we’re expanding our partnership with NVIDIA on networking. We’re already working with them on InfiniBand, but when you think about BlueField-3 ethernet capabilities, as well as Spectrum-X, the ethernet AI fabric, we’re working with them to bring that to customers so you can really unlock some of those hidden bottlenecks. All of this stuff, by the way, rolls up into the AI Factory framework, right? It’s all under that umbrella. There’s a lot to unpack there, though. So I’ll pause and see what reactions you folks have.
Patrick Moorhead: Yeah, you’re going to be busy. Team’s going to be busy for the year, and I’m sure your roadmap goes out multiple years that you’re working on. Is there anything next, what’s coming next that you can talk about? I’m not asking you to spill your roadmap. If you’d like to, you can, but it’s probably not the best place to do that.
Varun Chhabra: I’ll give you a two-for-one. First, I want to make sure I talk about one capability that we didn’t necessarily mention, otherwise my team’s going to kill me. Storage, right? We talked about compute, we talked about networking. Storage is such an important part of this as well. If you’re training these models, the ability to read and write data at really, really high throughput is so important. You can’t just use any storage platform for it.
So another area where we’ve been partnering with NVIDIA has been with bringing PowerScale to AI infrastructures. So we actually were very, very excited to announce that the PowerScale platform, our file and object platform, storage platform is going to be now available certified for the NVIDIA DGX SuperPOD. In fact, it is, Dan and Pat, it is the first ethernet-based storage platform that’s available as a SuperPOD option. So, very, very excited about that.
Now, to answer your question, Pat, yes, we’re busy working on the roadmap. In fact, as both of you know, and we’ll see you there in a month or so, we’re going to have Dell Tech World. And we’ve already announced that Jensen is going to join Michael on stage. You can imagine there will be some fun announcements there as well. So what I would say is, in terms of what’s coming next, we’ll have a lot to say in a few weeks at Dell Tech World, and we’re certainly excited to catch up with both of you there.
And then also just delivering on the things we even talked about, GTC is something that we’re continuing to work on. If customers want to learn more, I think the best place always to learn about these things is dell.com/ai. And of course, we encourage all of your viewers to register for Dell Tech World. It’s going to be a lot of fun. It’s all about AI. As you can imagine, Dell Tech World, we’re obviously going to talk about multi-cloud edge, et cetera, but the central theme is going to be AI and how we make AI easier for enterprises.
Daniel Newman: Well, Varun, The Six Five will be there. Both Patrick and I will be there as analysts. And then of course we’ll have our team there.
Varun Chhabra: That’s awesome.
Daniel Newman: And we will be chatting. I would not be surprised if you’ll be back on the program and we’ll be able to talk. I was hoping, I’m not going to lie, that you were going to give all the pre-announcements here on the show today. We love to be the place where news breaks. Having said that, this is really all about the analysis of what’s happened and going deeper, and we appreciate, Varun, you taking the time, getting up early out there on the West Coast to spend a little time with Patrick and I recapping GTC. And yes, Dell Tech World sounds great. We’ll put some links in the show notes so that everybody can check out more of what happened and what’s happening. We appreciate the partnership. Thanks so much for joining us here today, Varun.
Varun Chhabra: Likewise. Pat and Dan, thank you so much for having me. As always, it’s such a great pleasure, and I can’t wait to see you guys again in a few weeks.
Daniel Newman: We will see you soon. And we hope to see everyone out there really soon as well. We would like you to hit that subscribe button and join us for all of our episodes here on The Six Five. We covered a lot of ground today, but for this show, for this episode, for Patrick Moorhead and myself, it’s time to say goodbye. We’ll see you all later.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A seven-time best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.