Driving Value from Your AI Investments: Introducing Dell Generative AI Solutions with Intel – Six Five On the Road

How are Dell and Intel partnering to support rapid AI adoption and development? Get the latest from Chad Dunn, VP of Product Management, AI and Data Management at Dell Technologies and Bill Pearson, VP, Data Center/AI at Intel in this episode of Six Five On the Road with David Nicholson and Keith Townsend. Find out how Dell and Intel are collaborating to streamline the process for enterprises to leverage AI for real business value.

Their discussion covers:

  • The collaborative efforts of Intel and Dell to simplify AI value realization for enterprises
  • Key enterprise use cases targeted by Dell and Intel’s AI solutions
  • The integration of Xeon, Gaudi, Dell infrastructure (PowerEdge XE9680), and professional services to support customer use cases
  • The significance of open-source standards in Dell and Intel’s joint solutions and the benefits these bring for customers
  • Future directions for the Dell and Intel partnership as Generative AI matures

Learn more from Dell Technologies about the Intel® Gaudi® 3 AI Accelerator.

Watch the video below at Six Five Media, and be sure to subscribe to our YouTube channel so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On the Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Keith Townsend: All right. I’m really excited for this Six Five On the Road. I am joined with my co-host, Dave Nicholson. Dave, we get to geek out today.

David Nicholson: I’m looking forward to it.

Keith Townsend: We get to talk server model numbers. We get to talk storage, compute, networking, all in the service of AI. The last time the Dell and Intel team joined us, we talked about the high level services around AI. Now we’re going to talk about the work of getting AI done. I’m joined today by our guest, Chad Dunn, VP of product management, AI, and data management at Dell Technologies. Chad, welcome.

Chad Dunn: Thank you very much. Great to be with you guys.

Keith Townsend: And Bill Pearson, VP of Data Center and AI at Intel. Bill, welcome to the show.

Bill Pearson: Thanks. Great to be here.

Keith Townsend: So Dave, kick us off. What’s your first question for Dell and Intel over their long partnership, not just in AI, but in general?

David Nicholson: In general? I kind of want to get straight to AI, to be honest with you. I want to know-

Keith Townsend: No banter, Dave?

David Nicholson: No. No banter. I want to know: what are Intel and Dell doing to help folks as they navigate the world of enterprise AI? It's one thing when we all use ChatGPT and play around with it, but people want positive ROI, right? What are Intel and Dell doing together to make that happen? Chad, maybe start with you.

Chad Dunn: Sure. I think what we try to do at Dell, working with Intel, in this thing that we call the AI Factory, is give customers something that's been pre-designed, pre-tested, and arrives fully racked with servers, with Gaudi 3 accelerators, with networking, with storage. So it's really a turnkey solution. We want this to be as simple as possible for them to deploy and get into service, so they start realizing that customer value out of the solution as quickly as possible.

David Nicholson: Yeah. Bill, what’s your perspective?

Bill Pearson: Yeah, I mean, Chad said it well, but one of the things that we’re super excited about as we do that is the new Dell PowerEdge XE9680 server. So this is an awesome setup. And of course it uses Gaudi 3 and Xeons, but it’s an awesome infrastructure option, and it fits right in there with the ability to deliver on all the promises of AI that enterprises are expecting.

David Nicholson: Keith is getting excited because you’re starting to use actual model numbers. And of course, we’re both thinking, “Hey, can you guys hook us up? Can you maybe send us one of those? Just one.”

Keith Townsend: As I think about a whole rack of AI equipment, the obvious question comes, what’s the use case for this type of rollout or setup? It’s intriguing.

Chad Dunn: Yeah, there are a lot of use cases that we see in the enterprise today around generative AI. Those can range from fine-tuning your models to, at the extreme, full model training. But most of the time it's workloads that touch the individual lines of business: things like content generation, code generation, and retrieval-augmented generative AI, which is basically enhanced search. You turn search into find, so you get much better results when you go to look for things. And those use cases will repeat themselves across a number of different verticals. And so we see many customers innovating the way that they do business today and augmenting it with that AI capability to generate code, to generate data, to write copy, to make videos. The list just goes on and on. It's amazing.

Keith Townsend: Bill?

Bill Pearson: Yeah. As we work on this with these enterprises, what we're helping them do is take the treasure trove of data that they have on their premises or in their data warehouses and data lakes, and marry that with the power of generative AI. And so when Chad says turn searching into finding, what they're able to do is find with their own data. It brings these two things together in a pretty amazing way. And so it means that the insights they're getting are more relevant, they're richer, they're based on the data that the enterprise has and knows, whether it's their domain or their specific business. And there's a lot of power in that for the use of AI inside of these enterprises.
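The "searching into finding" pattern described here is retrieval-augmented generation (RAG): fetch the most relevant piece of an enterprise's own data, then ground the model's answer in it. A minimal sketch, assuming a toy keyword-overlap retriever in place of a real vector index and an illustrative stub in place of an actual model call:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant enterprise document for a query, then ground the answer in it.
# retrieve() is a toy keyword-overlap scorer standing in for a real vector
# index; generate() is a stub standing in for an actual LLM call.

def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query, context):
    """Stub: a real system would prompt a served model with the retrieved context."""
    return f"Q: {query}\nGrounded in: {context}"

docs = [
    "Q3 revenue grew 12 percent on strong data center sales.",
    "The cafeteria menu rotates weekly.",
]
query = "What drove revenue growth in Q3?"
context = retrieve(query, docs)[0]
print(generate(query, context))
```

In a production system the retriever would be an embedding search over the enterprise's data stores, and the generation step would call a model served by an inference stack such as the vLLM mentioned later in the conversation; the point of the sketch is only the retrieve-then-ground flow.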

Keith Townsend: So we’ve talked a little bit about the use case. We’ve talked about conceptually how this is packaged. Shapes and sizes, Dell has a massive portfolio. Chad, can you talk to me specifically about the options? You ship a rack. Is it just one rack? You mentioned training at the extreme. What are my options for receiving this?

Chad Dunn: Well, in the AI Factory, which we really targeted at enterprise customers, you can go from a single rack all the way up to a multi-rack, eight-rack system with 32 XE9680s. So you get some pretty amazing compute density and GPU density in these systems. So you can start with a single workload, a single application, and then potentially scale that to multiple use cases running on that same infrastructure. A nice thing about that ability to scale up is that an IT organization can support a well-known hardware and software stack, and then their AI practitioners and their data scientists can install new use cases on that same infrastructure. But still, it’s the same infrastructure from a hardware and software perspective that IT is used to dealing with. We think that this really helps them get that adoption and get things from the pilot stage into the production stage.

David Nicholson: Yeah. Is this primarily a Gaudi discussion, or is there some good old-fashioned Xeon in the mix when you guys are building out infrastructure?

Chad Dunn: Absolutely Xeon in the mix. Every one of these servers is powered by Xeon CPUs, and then we augment that with the Gaudi 3 GPUs.

Keith Townsend: All right. Chad, I promise geek love. Talk to me about what’s inside of the rack. When it arrives, what’s going to make my eyes pop?

Chad Dunn: I think the first thing that’s going to make your eyes pop is going to be the PowerEdge XE9680 servers; this is the industry-leading GPU-optimized server out there. But you’re also going to see a couple of other industry-leading products. You’re going to see PowerSwitch for networking. Again, industry-leading. You’re going to see PowerScale, our unstructured storage solution, as a part of every one of these solutions. It’s a mouthful to talk about all those components at once. I know that you love to talk about model numbers, but collectively we just refer to this as Dell Generative AI Solutions with Intel.

Keith Townsend: I love that. But the Intel part of it, we talked about Xeon and accelerators. What we haven’t talked about when it comes to the actual solution is gluing that together, which is the software. Bill, can you talk to us about what Intel is doing in this software partnership?

Bill Pearson: Absolutely. Working with Dell, our solutions really are a full stack of open software. It’s been tested, validated, and benchmarked with the Dell hardware and the Intel Gaudi 3 accelerators. And we include things like models from Hugging Face, PyTorch, Jupyter, Kubeflow, vLLM, all the things that you need as an enterprise to bring this solution to market.

Keith Townsend: All right, so we can’t have a conversation about AI without having a conversation about open source. The two seem to go hand in hand. There’s a little controversy around open source models, et cetera, but the tools around doing AI are very much open source. Talk to me about the importance of open source with your customers and what Dell and Intel are doing together to make AI more consumable with open source.

Chad Dunn: Well, look, no two organizations are the same. So they’re going to want flexibility in terms of the software that they choose to run on the infrastructure to get the results they need. So that may be a host of different open source tools at the ML layer or at the model layer. It also may be commercial distributions of other full stacks that they want to implement on top of it. Because the needs and the outcomes of your customer are different, they’re going to need that sort of flexibility to choose the model, to choose the tools that they use. And sometimes they have pre-existing skills with a set of tools that they want to bring to the party, if you will, and we want to be able to support that wherever we can.

Bill Pearson: You said two magic words, Chad: choice and open. And we constantly hear from customers about choice. They do have their favorites. They do have their unique business needs, and so being able to offer them the choice of technologies and tools and models and capabilities allows them to get the most out of the 9680 and the setup that they’re deploying in their enterprise. The open nature of it also means that customers are free to bring in different capabilities, but also free to adjust those as they need. And one of the things that we’re doing to help make it easier for some of these enterprises to take these open tools and to choose which one’s going to be right for them is building pre-configured use cases.

So we’ve taken components that are based on open source from our Open Platform for Enterprise AI, put those components together to solve common use cases, whether it’s building a chatbot, content generation, or code translation like we talked about, and we’ve built an opinionated solution. We call it Intel AI for Enterprise RAG, but really think about it as a toolkit that an enterprise can go put together and say, “Okay, this is how someone has done it. Does that fit my needs, or do I need to modify it to deliver what my enterprise needs?”

Keith Townsend: You just brought up something that has been one of the things I most admire about the Dell and Intel relationship, and that is this ability to expand the ecosystem. You talked about the partnership and the open source part of it. Talk to me about the Dell hardware portfolio and that engineering partnership.

Bill Pearson: Yeah, I mean, really what our goal is collectively, I’d argue, is to delight our customers, make sure they have the right technology for the business and use cases that they’re trying to drive. To do that, Dell makes an awesome portfolio of hardware capabilities, from the devices themselves to storage and networking. And as we pull all this together for AI, the way that we’re doing it is taking common use cases that customers are trying to go and deploy, building software capabilities that help deliver that, and pairing that with a scalable, flexible set of hardware so that the hardware can grow as the enterprise needs grow. We can start off with a single unit doing prototyping and POCs and making sure that we’ve got it really dialed in for the business, and then scale that at rack scale, again, network, storage, software, everything, so that it grows with the business in terms of performance, in terms of users and capabilities. All of this, making sure that it’s easy for a business to go and deploy at a TCO and a price for performance that they can afford.

David Nicholson: So it’s interesting, the conversation that we’re having here. Keep in mind that Keith, the CTO Advisor as he’s known, is constantly talking to folks who are struggling with these sorts of decisions right now. I teach some courses at Wharton on the very same subject, and we’re constantly getting this feedback that folks don’t necessarily know where to start. And they definitely don’t want to get locked in by some standard, but they also need help figuring out where to start. How do your professional services help companies that are struggling with this question about where to start?

Chad Dunn: Professional services and consulting are absolutely essential to your AI strategy. I mean, you’re starting with data and you want to get to an outcome. You’ve got to identify that data. Where is it? What is it? What kind of data is it? What are the restrictions around how you can use it? How do you prepare that data to get into your AI infrastructure? These are all things that we can work through from a consulting perspective, and identify those use cases and identify the roadmap to achieve those use cases and outcomes. So from the very early consulting engagements to understand the use case, all the way through to deployment, to residency, or even consuming it as a fully managed service, we’ve got a very rich services portfolio around the AI Factory.

Keith Townsend: Well, I really appreciate that the two of you came together and shared with us Dell Generative AI Solutions with Intel. If you want to find out more, look in the notes below. For me and my co-host, Dave Nicholson, thank you for joining us on this special Six Five On the Road.

Author Information

David Nicholson is Chief Research Officer at The Futurum Group, a host and contributor for Six Five Media, and an Instructor and Success Coach at Wharton’s CTO and Digital Transformation academies, out of the University of Pennsylvania’s Wharton School of Business’s Aresty Institute for Executive Education.

David interprets the world of Information Technology from the perspective of a Chief Technology Officer mindset, answering the question, “How is the latest technology best leveraged in service of an organization’s mission?” This is the subject of much of his advisory work with clients, as well as his academic focus.

Prior to joining The Futurum Group, David held technical leadership positions at EMC, Oracle, and Dell. He is also the founder of DNA Consulting, providing actionable insights to a wide variety of clients seeking to better understand the intersection of technology and business.

Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
