Bring AI to Your Data – The Six Five On the Road

On this episode of The Six Five – On the Road, hosts Daniel Newman and Patrick Moorhead are joined by Dell’s Travis Vigil, Senior Vice President, ISG Portfolio Management, and Varun Chhabra, Senior Vice President, ISG and Telecom, at Dell Technologies’ “Bring AI to Your Data” event for a discussion on how Dell is empowering AI adoption and a closer look at their latest announcements.

Their conversation covers:

  • Where Dell customers are heading with Generative AI and if the adoption rate is increasing
  • What challenges customers are experiencing
  • What Dell is doing differently and how Dell is driving Generative AI forward
  • How this strategy is supporting what Dell announced earlier this month and today

Be sure to subscribe to The Six Five Webcast, so you never miss an episode.


Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

SPONSORED BY DELL TECHNOLOGIES

Transcript:

Patrick Moorhead: Hi, The Six Five is live at the Dell Technologies Bring AI to Your Data event. We’re here remote, we’re here virtual, but Dan and I, we are glad to be here. How are you doing, buddy?

Daniel Newman: It’s good to be here, Pat. And it’s been a year of AI, and bringing AI to your data, bringing the best AI to your data, is something I think is top of mind for every company. And I’m excited to have this conversation today.

Patrick Moorhead: Yeah, it’s been a real whirlwind. And whether it’s IaaS, PaaS, SaaS, on-prem, on the edge, there’s always laws of physics that govern all of this. And bringing compute closer to the data has been a theme over the last 35 years, and it’s likely not changing over time. But we’ll see.

But hey, we brought two incredible folks here, Varun and Travis, welcome to the show and congratulations on the event. Just amazing things going on at Dell Technologies. Seem to be changes all the time, every day around taking advantage of AI and helping customers with that. And Varun, welcome back. I mean, this might be your fourth time. I lose count at three, but great to see you again. Travis, welcome to the Six Five.

Travis Vigil: Thanks. Long time listener. First time caller.

Patrick Moorhead: Yes, there we go. I love that.

Varun Chhabra: Thanks, guys. It’s good to be here again.

Daniel Newman: It’s good to have both of you, and we do love our multi-timers, but I think soon enough we can make that happen for both of you. So let’s hit it. I mean, Dell is at the forefront of so many customers’ IT experiences. Your broad portfolio and your global reach have Dell inside of just about every enterprise on the planet and so many consumers. AI’s changed the game. It’s changing the go-to-market, it’s changing product roadmaps, it’s changing company strategy. I don’t really care if you’re a three-person organization or a 300,000-person organization, from November of last year, your business changed because of AI. It wasn’t just November, but in November it was like the light suddenly went on for everyone.

But I’d like to get your take, and Travis, I’m going to start with you. What is Dell seeing from your customer interactions as it pertains to this Gen AI adoption? And is the rate increasing, is the reality matching the hype that we’ve seen throughout 2023?

Travis Vigil: Yeah, it is an exciting time to be at Dell. It’s an exciting time to be providing solutions for generative AI. I think you said it exactly right. November, the change started, March and April it started to accelerate, and we have seen enormous adoption, especially around our compute portfolio. So the XE9680 is the fastest-ramping product in the history of Dell’s Infrastructure Solutions Group.

And it’s been interesting to watch the evolution of the demand. Early on, we saw a lot of interest and adoption from GPU as a service providers really focused on servicing the needs of the .AIs, lots of focus on model training, lots of focus on availability. And that trend has continued. But what we’ve seen start in parallel has been an increasing interest from enterprises.

And you said it, there’s not an enterprise out there that at the C level isn’t having a generative AI discussion, isn’t looking for opportunities either to do more or do things more efficiently. And what we’ve seen is even here at Dell is that folks are honing in on those use cases that really matter, whether that be customer support, whether that be speeding or enhancing the efficiency of coders, whether that be helping our sellers in terms of automating some of the work that they have to do to pursue opportunities.

Everybody’s getting it down to three or five big use cases. And what we’re seeing is that enterprises are starting to purchase equipment and pick models. It’s moved from a focus on training to a focus on fine-tuning and inferencing. And from my perspective, what we’ll see going into the second half of the year and into next year is that those enterprises are really going to start to adopt, really going to start to deploy.

Patrick Moorhead: I really appreciated, Travis, on the last earnings call, Dell giving a lot of disclosures on how the business is doing. And a lot of people that I talked to were very surprised, not that Dell was in the game, but at just the amount of traction the company is having here. Because from our little old analyst viewpoint, we’re seeing the public cloud folks and the SaaS folks talk a lot about this, so it was great to see you come out with that a few months ago.

So in the end, technology for technology’s sake is one thing, but it’s also about solving problems. But solving problems, it’s not going to be easy. I mean, if nothing else, we’re looking at the configuration of data that was never brought together before. The size of some of these models, “Hey, do I create my own LLM? Do I ground it with high intensity grounding? What do I do here?” So I’m going to give this question to Varun. What are some of the challenges that your customers are experiencing getting real value out of this incredibly cool tech?

Varun Chhabra: It starts with, before we even get to the challenges, just to reiterate one part of what Travis said, it really starts with the opportunity. To your point, it’s not technology for technology’s sake. So if you think about use cases, Travis mentioned some of them, it’s a sales and pre-sales tools, it’s customer support, it’s content generation, whether it’s technical content or marketing, it’s internal processes, et cetera.

Every single company in every industry has some of these use cases. So they’re looking at that, and whether it’s the C-suite, the LOB units, or the decision makers in IT, as they grapple with the opportunity that gen AI presents with these workloads, what we’re finding is that there are some common challenges and trends we’re seeing with customers, things like data security.

Pat, you mentioned, what do I do with my data? How do you actually take the piece parts of different parts of the stack, from the compute to the storage, to which model do you pick? Do you start with a model from scratch, or do you take an existing model and tune it? If you want to tune it, well, how do I tune it? Do I have the know-how in my organization? That’s how high the technical complexity of the task is. We’re still in the early stages of that adoption curve. So it’s technically very complex.

Once these start to get adopted in the company, in the organization, what’s the governance model? How do you make sure that proprietary data stays within the confines of what the company wants to do? How do we make sure it doesn’t get into an AI provider or an AI model unintentionally? What are the costs? So think about the full impact of all of that on cost. And then finally there’s also this notion of ethics and responsibility.

Things are moving so fast. I mean, I’m guessing we all follow the same circles in terms of what news we look at, what sources. Our social media feeds are probably similar and not a day goes by that I don’t end up texting someone like Travis or a friend or a friend outside of work or even family when-

Travis Vigil: I think the implication was that I wasn’t a friend. I think-

Varun Chhabra: No, no, I said a friend outside of work. That’s why I said it, because I knew Travis wasn’t going to let me get away with it.

Patrick Moorhead: Friends and also Travis.

Varun Chhabra: That’s right, that’s right.

Patrick Moorhead: Sorry Varun, go ahead.

Varun Chhabra: No, no, no. All good. So whether it’s colleagues, people in your personal life or even family like my dad or my mom who are maybe not as tech-savvy as a lot of people we work with, we all send these things where we’re like, “Look at this insanity, look at this thing that just makes your jaw drop”, and it’s amazing. But at the same time, you start to see instances of what can happen when the data you used to train a model has bias in it.

There’s all sorts of hallucinations that we’re seeing, et cetera. And the consequences of those, as these start to get pulled in, are pretty intense too. And I’d say we’re probably at the start of people understanding those concerns. But everywhere you look, there’s things that people have to be worried about. So just to summarize: data security, how to make sure that your proprietary data, basically a company’s crown jewels, stays in the right place?

The complexity of stitching all this together and getting the know-how into your organization, the governance models once these are rolled out, the cost of doing all of this, and how you do it responsibly and in a way that limits bias and disinformation. These are all things that are being figured out at warp speed by organizations. And that’s really where I think the biggest opportunities are in all of these areas. And that’s really where Dell is putting a lot of effort: the things we’ve rolled out earlier in the year, the things we’re rolling out now, as well as what’s to come are really going to attack these challenges and help customers with them.

Daniel Newman: So Varun, I absolutely agree on almost all of your points, which makes for an incredibly boring podcast. Having said that, what I tend to believe is that the challenges are becoming pretty well understood. And Pat, you and I can validate this, as we’ve sat on no less than a thousand vendor briefings since Gen AI’s advent, been on endless councils and CEO calls, and talked to the end customers too. And Varun, so you have a tremendous opportunity at Dell, a company that’s a hundred billion dollars in revenue and is plugged in, as I mentioned, at all these customers.

Now Travis, I don’t want to be tough on this pod, because Pat’s normally the one that asks the hard questions. I’m normally the softball guy, but I’m going to be a little bit tough here. Where I’m really starting to have a difficult time as an analyst, who’s often asked to comment on who are the winners and losers, who are the companies rising, who are the companies following, is that Dell is in this situation: you’re not building large language models, you’re not necessarily developing the chip at the silicon level, you’re using the chip.

How do you rise and stand out? How do you come out differently here so that these clients kind of say, “Look, we could go to the software layer and we can get our help from our CRM provider, our ERP, our productivity, our collaboration, or we can work with our DevOps company.” Dell does a little bit of all of it. How do you stand out so that companies say, “We want to turn to Dell, we’re going to entrust our AI strategy and work side by side with Dell to implement our future and to meet our board’s demands on all things gen AI?”

Travis Vigil: Yeah, it’s a great question. When I look at the strengths that we have at Dell in our portfolio: an amazing server franchise, an amazing storage franchise, at-scale service and support, at-scale consulting capabilities, a broad pre-sales and sales organization. I think those are the right ingredients to address the issues that Varun was talking about.

Y’all have been doing this for some time. While the issues are becoming well understood, what customers are coming to us to ask is, help me make it real. And making it real means working through those details, following those blueprints, and doing it again and again so customers can deploy an enterprise-grade generative AI infrastructure and end-to-end stack.

And if you look out across the industry, I would contend that Dell is one of very few that have all of those pieces of the puzzle and can bring it together. Now going back to the question you asked earlier on customer demand, the conversations very much started like, why is your server great?

And having been in the server business for a long time, it was nice to be talking about servers again. But frankly, what customers want to talk about as it moves more into the enterprise space is bring it together. Help me stitch it together. Hey, maybe I might even work with you to help me stand it up. Or maybe I’m going to use your blueprint on how you guys stood it up in your labs.

That’s what customers are looking for. They’re looking for recipes. They’re looking for end-to-end solutions. They’re looking for overall enterprise-grade generative AI infrastructure solutions that bring all those pieces together. And really they want to focus on the project, which is, how do I implement that sales or pre-sales assistant? How do I start to use generative AI to more effectively generate content? How do I use generative AI to speed the development or increase the productivity of my developers? That’s where they want to focus. Dell, you work with us so that we can address those issues that Varun was talking about and bring that infrastructure stack together in a way that makes it easy for us to do that.

Varun Chhabra: Can I add a couple of things to what Travis said?

Patrick Moorhead: Sure.

Varun Chhabra: I think everything Travis said, absolutely salient advantages. And I think the two ingredients that make this even more special that I would add to the recipe is the importance of data in gen AI throughout the lifecycle. Whether you’re building a model from scratch or tuning a preexisting model off the shelf with your data or deploying something at scale, our franchise in terms of helping customers with their data journey, whether it’s on the block side or transactional side or primary storage or unstructured data or data protection and archiving side, we have that experience. We have that insight into how these data sources are distributed increasingly across on-premise and cloud.

So that’s another special ingredient. And I think the other one is our heritage and demonstrated capability of working with partners and putting together an ecosystem on behalf of our customers. So Dan, you mentioned, hey, there’s a lot of innovation happening at the silicon level. There’s a lot of innovation happening at layers in the software stack as well, some of which Dell is helping drive.

But to put all of this together across that stack, infrastructure, models, tool sets like PyTorch, et cetera, model lifecycle, ISV applications sitting on top of it, customers trust us to build and help drive those ecosystems on their behalf. So I think that’s going to be a very, very critical differentiator for us as well. Just like we saw in multi-cloud, we don’t have a vested interest in saying, “Oh, you got to go do this because this is all we sell.”

If a customer wants to go with a specific workload in a specific location, wherever they are in the cycle, whatever partner they’re working with, they know they can work with us. So that’s why a lot of the announcements you’ve seen from us, with NVIDIA, with VMware, and more to come over time, are really going to be focused on helping customers up and down the stack with the ecosystem.

Patrick Moorhead: The industry has gone through some vacillations, like an accordion where hey, centralization, decentralization. But I think one thing stands true is the big growth comes from ecosystems and where everybody has a part and everybody is making some money for their hard work and their innovations, but it has to be simple.

It can’t be too many parts that need to be integrated. And this is one thing I really like about APEX and as-a-service and simplifying it for folks. Some people might want OpEx versus CapEx, but this choice model is super important all along. I mean, 75 to 90% of enterprise data is still on-prem, okay?

Varun Chhabra: Yes.

Patrick Moorhead: And there’s a reason. And the public cloud is 14 years old, and it’s still 75 to 90%. So I think that says a lot, and I think it’s lazy analysis to say, “Oh, that’s just the old stuff that nobody wants to move over.” But I do see a lot of enterprises that I talk to, CIOs who are like, “Hey, I’m using generative AI as a litmus test: do I move more to the public cloud? Do I expand this in the private cloud, or go full-on hybrid?”

And I think what we are seeing is the dawn of hybrid AI, where quite frankly, you can take a public cloud LLM, take advantage of it, customize it on-prem, and run it on-prem. And it’s starting to show data gravity. Typically, the most efficient way to process is to bring the compute closer to the data.

And for the most part, most of that data is not being generated in a big data center. It’s being generated at the edge, being processed and aggregated at a certain point. But I think it’s a great opportunity for Dell to come in and shine with its ecosystem approach. And oh, by the way, it helps that you have a ton of data under management as well.

Varun Chhabra: Definitely.

Patrick Moorhead: I’d like to shift to what you announced on October 4th and today, and I’m curious, how does this overall strategy come to bear in what you announced a while back and today?

Varun Chhabra: Absolutely.

Patrick Moorhead: Varun, if you can answer that one.

Varun Chhabra: Yeah, sure. Sure Pat. I think just taking a step back before we talk about 10/4 and 10/19 today, just to contextualize it. Our journey has really been about creating a big easy button for our customers, as Travis said. We want to simplify what is inherently, at least at this stage, pretty complex and requires a lot of factors and vectors to think through.

We got out of the gate pretty early on with NVIDIA with what we were calling Project Helix at Dell Technologies World, talking about how we’re going to help simplify that stack for customers across compute, storage, GPUs, and then going into frameworks, et cetera. And normally when you hear the word project, you might think, oh, six, seven months later something will come through. Well, we delivered back in July the first tranche of solutions related to Project Helix, probably the fastest I’ve ever seen us go from project to live product.

And that has happened not because we were able to take advantage of a flash in the pan, but because we’ve actually been working with NVIDIA and other ecosystem partners for a while. Travis mentioned the XE9680. As he said, it’s an eight-way server, with up to eight GPUs in it.

Travis Vigil: Beast.

Varun Chhabra: It is a beast, and it is what is driving so many conversations, as Travis said, because it is able to handle and scale the kind of workloads that are being talked about. But if you’d allow me a slight detour, you guys know how this works. You can’t create a server that is able to run workloads at scale, with the power efficiency and cooling that an eight-way GPU server demands, by starting to work on it in November when ChatGPT is announced and have it ready. This has been in development for the last three years. And when it was being conceived three years ago, we had a lot of debates internally about, well, who’s going to use a server with eight GPUs?

And we’re not going to pretend that we knew that this moment would come, but we knew looking at the trends and the telemetry that we had, that there was going to be a demand for more and more AI intensive workloads. So fast-forward to today, what we announced on 10/4 was a continuation of our journey with NVIDIA, with our generative AI solutions with NVIDIA. Back in July, we had announced a validated design that was geared towards customers running inferencing workloads because that’s where we saw the most demand at that point in time and where customers would take pre-existing models and deploy workloads. So inferencing was a big ask.

We’re moving along with our customers on a journey. What we announced two weeks ago on 10/4 was really about helping customers take preexisting models and tune those models with their proprietary data. So we’re extending what our generative AI solutions with NVIDIA are doing: compute, storage, networking, NVIDIA’s GPUs, NVIDIA’s AI frameworks, all tested and validated together, with a reference architecture for customers that want to run model tuning, because we think that’s where a lot of our customers are going now, or are today. That’s one thing we announced.

As we mentioned, data is a big part of this journey. What we’re increasingly hearing from customers is they’re not only looking for the technology from us, they’re actually looking for the know-how and the expertise. So we’ve been investing heavily in our services capabilities in this space. We’ve got brand new services that accompany the solution we just talked about, to help customers prepare their data to be able to start using it for gen AI workflows. We have consultative engagements with customers that help them understand where they are on this journey.

You guys all know technology is only part of this. Companies have to be ready from a process perspective as well. So there’s consulting engagements to help them with where are they in the maturity curve, what process changes do they need to make to be able to take advantage of some of what the technology is bringing? We’ve got new managed capabilities as customers are now asking us to, “Hey, can you also help manage this infrastructure at scale?” So that’s new.

And then what we are announcing today is on the client side, which really symbolizes, Dan, going back to your question about what we’re doing differently, uniquely providing not just the infrastructure side, but the client devices side as well, an all-in-one shop for customers. We’ve got updates to our Precision 7875 tower with new capabilities with our partners at AMD to help customers take advantage of gen AI, so data scientists and other personas can run these workloads on their devices as well, whether it’s for testing and tuning models or, in some cases, deploying them.

So that’s really the journey we’re on. We’ve got so much more in the tank that we’re working on, but in terms of what we just announced this last month, that covers the gamut of what we’re doing. One thing I actually did forget: we’re also thinking about things up the stack. So another announcement we made was around this concept of a data lakehouse. We talked about how data is increasingly distributed, and one of the things we often hear from customers is, well, help us figure out where those data silos exist, because sometimes customers themselves don’t know.

So this notion of actually creating logical architectures on top of the distributed data to create this virtual data lakehouse is super important. So we announced an extension of our partnership with Starburst to create an end-to-end appliance to help customers with understanding what their data state is and create a virtual data lakehouse that they can tap into as they think about all of their AI workloads and putting their data to work wherever it’s needed.

Daniel Newman: So Varun, if I wanted to do the TLDR of what you’re doing and how you’re differentiating, the tweet, or the X, would be: a lot. You’re doing a lot. And I want to double-click on something before, Travis, I come to you with the last thoughts and remarks here. When you mentioned the solutions, and we think about things like the foundational models that are industry-specific, putting all this stuff together is really hard. And so everything from not only what is the best box in terms of the server itself, but which is the right software to run, what is the right specific programming?

Varun Chhabra: The Storage-

Daniel Newman: Best of breed. And our Futurum Labs team is working closely with Brian Payne on your team, and we’re actually doing a bunch of proofs of concept right now: testing things for financial services, testing workloads for healthcare, so that you can see for diagnostics, testing manufacturing for automation.

And that, I think, is where Dell really has an opportunity to uniquely stand out: not in any individual part, but in the ability to put together a solution edge to cloud. And God, that feels cliche, but it really is complex and it’s difficult. Customers are looking for companies that are willing to help not just vertically, by pieces, but horizontally, to cut across the noise and implement solutions that are going to meaningfully enable them to implement this generative AI and get to the outcomes, which is the whole reason we’re doing this in the first place.

Varun Chhabra: Our approach is… Sorry, go ahead Dan.

Daniel Newman: No, no, no. I’ll give you a word, but then we got to let Travis hop in because we got to take this baby home.

Varun Chhabra: Yeah, absolutely. What I was going to say, just to your point about verticals, you are absolutely right. There’s a lot of innovation happening in those vertical stacks. We think it’s very important in addition to working on those vertical stacks that we create a flexible and horizontal platform that can then have ISV innovation, our own innovation built on top of it. That’s what you’re seeing us do in these most recent launches.

Daniel Newman: So Travis?

Travis Vigil: Yes, sir.

Daniel Newman: There’s a lot here. By the way I can feel your passion. You both, you are feeling this. This is a big moment for Dell. Is the hundred-

Patrick Moorhead: This is fun.

Daniel Newman: Is this the hundred to 125 billion dollar moment, is this the inflection? Now, if only you could get a few more of those H100s, you could probably double in a quarter. What’s coming next, Travis? What’s the next wave? I think we’ve teased where you’re at, what you’re doing, a little bit of the announcement. Where do we go from here? How does Dell drive this message forward?

Travis Vigil: Yeah, if the TLDR for Varun’s response was a lot, mine is more. So I think you said it really, really nicely. It is difficult. And so what you can expect from Dell going forward is ecosystem enablement and a focus on making it easier at the solution level. And that’ll be with testing and validation, that’ll be with integrations, that will be with partnerships. I don’t want to give them all away now, but you can expect that we will be doing more internally and more with partners.

Daniel Newman: This is the place to break it. This is the place to break it.

Travis Vigil: No, no, no, no. I want to get invited back, so I can’t let all the good news out yet, but it’s super exciting. We were talking before this podcast how it’s, I think, Pat, you called it tornado, I’ll call it a tidal wave. It’s just starting. And so I look forward to speaking to you all in the future. And it’s going to be fun.

Daniel Newman: Varun, I’ll call it an earthquake. You want to call it a typhoon?

Patrick Moorhead: Come on man. Let’s come up with something positive. I’m obviously not a marketing person anymore.

Daniel Newman: Well, Varun and Travis, I’d love to say thank you for joining. A lot to digest here. As I was listening, all I was thinking is this doesn’t slow down. It just gets faster. And this is going to require a couple more trips around the show. So strap in. Travis, Varun, let’s have you back really soon. Congratulations on all the progress, Pat and I can’t wait to monitor, comment, provide our insights to the market on it, but we’ve got to say goodbye for this one. So thanks for joining us.

Travis Vigil: Thanks for having us.

Patrick Moorhead: Thanks guys.

Varun Chhabra: Thanks for having us. We’ll be back, guys. We were with you when we announced Project Helix at DTW, we’re here at the next milestone, so we’d love to be back. So thanks for having us.

Daniel Newman: Oh, yeah.

Travis Vigil: Absolutely.

Daniel Newman: We’ll see you soon. So everybody out there, hit that subscribe button, send all your positive feedback right to me and any other concerns Patrick’s way. For this episode of The Six Five, time to say goodbye. Thanks for joining us at this exciting Dell moment. We’ll see you all soon. Bye now.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
