AI in Context: Reflections on Dell Technologies World and AI Factory

The News: From May 20 through 23, 2024, the Dell Technologies World conference in Las Vegas highlighted the breadth of accelerated infrastructure, software, and services offerings for AI that Dell can bring to bear for enterprise customers. The Dell AI Factory is an integrated program that starts with user and business data and ultimately deploys solutions across various use cases. The Dell AI Factory with NVIDIA integrates the latter’s accelerator hardware and AI Enterprise software. See the Dell press release and the posting on the NVIDIA website for more information.

Analyst Take: In the last year, Dell has moved quickly to pull together its AI-related software, hardware, and services. While AI use cases do not require an entirely different infrastructure, providers must optimize the components, both individually and together, for AI performance and sustainability. Dell has reshaped its organization and product offerings with admirable urgency to meet its users’ expectations for solutions based on AI, especially Generative AI. Dell works with many hardware vendors to broaden its users’ AI choices and to let them preserve their existing investments and infrastructure relationships. Dell Services provides the glue that combines all the elements into a coherent, complete solution.

Dell Technologies World 2024 in Las Vegas

Dell hosted its Dell Technologies World conference this year at the Venetian resort complex in Las Vegas. Thousands of customers, partners, industry analysts, members of the media, and Dell employees came together to learn about and discuss the company’s updated products and services. AI permeated almost all talks and discussions.

The Dell AI Factory

Image Source: Dell Technologies

Of particular note, the Dell AI Factory concept pulls together the raw materials and supplies (data) from various sources, along with the “manufacturing” processes, tools, and human elements needed to create value, and delivers solutions for well-defined, prescriptive use cases.

I think the factory metaphor is apt: businesses often reach commercial success by moving beyond experiments with one-off products to delivering high-quality, repeatable, and reliable solutions within timeframes that meet or beat their customers’ expectations.

Dell stressed the openness of the infrastructure layered above its hardware, which, of course, it would prefer you purchase. There is no single correct Large Language Model (LLM) to use today in every situation. Models are evolving and becoming better tuned. Customers have existing infrastructure investments they want to continue using, and they want Dell to help them with a roadmap to optimize performance and ROI for AI. Software like Linux and open-source Generative AI models have proven that “open” is a valuable component of a flexible and evolving infrastructure, whether on-premises, in the cloud, or both.

Dell and NVIDIA have partnered to create a version of the AI Factory built on their respective software and hardware technologies. If the Dell AI Factory is the general solution, the Dell AI Factory with NVIDIA is a specialization. Might we see more of these in the future, particularly around hardware from Intel or AMD?

Image Source: Dell Technologies

As part of the Dell AI Factory with NVIDIA, Dell introduced the Direct Liquid Cooled (DLC) Dell PowerEdge XE9680L with NVIDIA Blackwell Tensor Core GPUs. NVIDIA introduced the Blackwell GPUs in March 2024. The XE9680L is a statement of Dell’s strong commitment to state-of-the-art AI infrastructure. It demonstrates that the AI Factory is not simply a brand; it includes Dell’s best products, developed and optimized for AI.

In the Solutions Expo at the conference, Dell demonstrated a digital assistant built using its and NVIDIA’s technology. Dell has implemented it for Amarillo, Texas, where it answers common questions in 62 languages. I did not have a municipal query, but I can attest that the assistant gave a very good answer when I asked it to define quantum superposition. This might not be a use case Dell envisaged, but it does show the depth of the LLM behind the solution.

The Rise of AI PCs

At the conference, Dell introduced five new AI PC laptops, each featuring a “custom-integrated Qualcomm Oryon™ CPU, premium GPU, and neural processing unit (NPU).” Several other vendors have recently introduced AI PCs, which my colleague Olivier Blanchard at The Futurum Group has surveyed. Several Android phones and Apple iPhones already have NPUs.

AI PCs are in a bit of a chicken-and-egg situation: to get much better AI-enhanced apps on PCs, we need AI hardware like GPUs and NPUs, but hardware vendors will only include that hardware if the apps deliver real user value. At least two hardware vendors I spoke with at the conference said they were in discussions with Microsoft about better support in the Windows operating system.

We need a better description of how these AI PCs will enhance our lives beyond what can be done today with CPUs and the GPUs in graphics cards. I recommend that all AI PC vendors, including Dell, build such a list and improve their marketing with it. For example: “In the future, your AI PC will allow you to do XYZ better and faster while preserving your privacy.” Here, XYZ might be better contextual help, automation of everyday tasks or business processes, or video and image editing. What would be on your list?

Data and Processing in the Right Place at the Right Time

We will use AI PCs to implement federated or distributed AI. In this scheme, data is collected and processed at several locations, including PCs and other devices at the Edge, and then used locally or synchronized with central cloud-based models. For Generative AI, the PC could tokenize text and other data, and then a background process could selectively augment a foundation model in a hybrid configuration.
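To make the hybrid pattern concrete, below is a minimal, illustrative sketch of federated learning in plain Python: each simulated “AI PC” refines a tiny model on data that never leaves the device, and a central aggregator only ever sees the resulting weights. The names (EdgeDevice, federated_average) and the toy linear model are my own assumptions for illustration and are not part of any Dell or NVIDIA product.

```python
# Illustrative federated-learning sketch (hypothetical names, toy model).
# Each "edge device" keeps its raw data local and shares only model weights.
import random
from statistics import mean

class EdgeDevice:
    """Simulates an AI PC that trains on private local data."""

    def __init__(self, seed: int, n_samples: int = 100):
        rng = random.Random(seed)
        # Private data following y = 3x + noise; it never leaves the device.
        self.data = [(x, 3.0 * x + rng.gauss(0.0, 0.1))
                     for x in (rng.random() for _ in range(n_samples))]

    def local_update(self, w: float, lr: float = 0.1, epochs: int = 20) -> float:
        """Refine the shared weight on local data with gradient descent."""
        for _ in range(epochs):
            grad = mean(2.0 * (w * x - y) * x for x, y in self.data)
            w -= lr * grad
        return w

def federated_average(weights):
    """Central 'cloud' step: average the per-device weights (FedAvg-style)."""
    return mean(weights)

# One training round: devices update locally, the cloud aggregates.
devices = [EdgeDevice(seed) for seed in range(5)]
global_w = 0.0
for round_num in range(3):
    local_weights = [d.local_update(global_w) for d in devices]
    global_w = federated_average(local_weights)
    print(f"round {round_num}: global weight {global_w:.3f}")  # approaches 3.0
```

In a real hybrid deployment, the local step might instead tokenize documents or train small adapters on-device, with only those updates synchronized to a central, cloud-based foundation model.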

The history of computing is a repeating cycle of deciding what is done locally versus remotely and then optimizing the division of labor to minimize data movement. This is the client-server pattern, a computing architecture that goes back to the 1960s. We are seeing the cycle again for AI; neglect history and its lessons at your cost and peril!

Key Takeaway

Dell demonstrated at Dell Technologies World that it has quickly transformed itself to deliver on a comprehensive, evolving strategy to provide the foundation for customers to build their AI solutions.

The openness of Dell’s approach particularly appeals to me. NVIDIA is a partner of choice for many in the industry, but others are aggressively introducing general processors, GPUs, and NPUs to power AI. There is much more to AI than Generative AI, but it is a good benchmark to measure the performance, agility, and sustainability of solutions.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author holds a small equity position in Dell. The author does not hold an equity position in any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

AI for Human Progress – Six Five On the Road at Dell Technologies World

Making Dell AI Factory Real – Six Five On The Road at Dell Technologies World

How Dell Supports Customers with AI – Six Five On the Road at Dell Technologies World

Author Information

Dr. Bob Sutor

Dr. Bob Sutor is a Consulting Analyst for Futurum and an expert in quantum technologies with 40+ years of experience. He is an accomplished author of the quantum computing book Dancing with Qubits, Second Edition. Bob is dedicated to evolving quantum to help solve society's critical computational problems. For Futurum, he helps clients understand sophisticated technologies and how to make the best use of them for success in their organizations and industries.

He’s the author of a book about quantum computing called Dancing with Qubits, which was published in 2019, with the Second Edition released in March 2024. He is also the author of the 2021 book Dancing with Python, an introduction to Python coding for classical and quantum computing. Areas in which he’s worked: quantum computing, AI, blockchain, mathematics and mathematical software, Linux, open source, standards management, product management and marketing, computer algebra, and web standards.
