
AI in Context: Reflections on Dell Technologies World and AI Factory

The News: From May 20 through 23, 2024, the Dell Technologies World conference in Las Vegas highlighted the breadth of AI offerings, spanning accelerated infrastructure, software, and services, that Dell can bring to bear for enterprise customers. The Dell AI Factory is an integrated program that starts with user and business data and ultimately deploys solutions across various use cases. The Dell AI Factory with NVIDIA integrates the latter’s accelerator hardware and AI Enterprise software. See the Dell press release and the posting on the NVIDIA website for more information.


Analyst Take: In the last year, Dell has moved quickly to pull together its AI-related software, hardware, and services. While AI use cases do not require an entirely different infrastructure, providers must optimize the individual components, both separately and together, for AI performance and sustainability. Dell has reshaped its organization and product offerings with admirable urgency to meet its users’ expectations for solutions based on AI, especially Generative AI. Dell works with many hardware vendors to broaden customers’ AI choices and to let them preserve their existing investments and infrastructure relationships. Dell Services provides the glue that combines all the elements into a coherent, complete solution.

Dell Technologies World 2024 in Las Vegas

Dell hosted its Dell Technologies World conference this year at the Venetian resort complex in Las Vegas. Thousands of customers, partners, industry analysts, members of the media, and Dell employees came together to learn about and discuss the company’s updated products and services. AI permeated almost all the talks and discussions.

The Dell AI Factory

Image Source: Dell Technologies

Of particular note, the Dell AI Factory concept pulls together the raw materials and supplies (data) from various sources along with the “manufacturing” processes, tools, and human elements needed to create value, and delivers solutions for well-defined, prescriptive use cases.

I think the factory metaphor is apt because businesses often reach commercial success only after moving beyond experiments with one-off products to delivering high-quality, repeatable, and reliable solutions within timeframes that meet or beat their customers’ expectations.

Dell stressed the openness of the infrastructure layers above its hardware (which, of course, it would prefer you purchase). There is no single correct Large Language Model (LLM) to use today in every situation. Models are evolving and becoming better tuned. Customers have existing infrastructure investments they want to continue using, and they want Dell to help them with a roadmap to optimize performance and ROI for AI. Software like Linux and open-source Generative AI models have proven that “open” is a valuable component of a flexible and evolving infrastructure, whether on-premises, in the cloud, or both.

Dell and NVIDIA have partnered to create a version of the AI Factory built on their respective software and hardware technologies. If the Dell AI Factory is the general solution, the Dell AI Factory with NVIDIA is a specialization. Might we see more of these in the future, particularly around hardware from Intel or AMD?

Image Source: Dell Technologies

As part of the Dell AI Factory with NVIDIA, Dell introduced the Direct Liquid Cooled (DLC) Dell PowerEdge XE9680L with NVIDIA Blackwell Tensor Core GPUs. NVIDIA introduced the Blackwell GPUs in March. The XE9680L is a statement of Dell’s strong commitment to state-of-the-art AI infrastructure, and it shows that the AI Factory is not simply a brand: it includes Dell’s best products, developed and optimized for AI.

In the Solutions Expo at the conference, Dell demonstrated a digital assistant built using its own and NVIDIA’s technology. Dell has implemented it for Amarillo, Texas, where it answers common questions in 62 languages. I did not have a municipal query, but I can attest that the assistant gave a very good answer when I asked it to define quantum superposition. This might not be a use case Dell envisaged, but it does show the depth of the LLM behind the solution.

The Rise of AI PCs

At the conference, Dell introduced five new AI PC laptops, each featuring a “custom-integrated Qualcomm Oryon™ CPU, premium GPU, and neural processing unit (NPU).” Several other vendors have also introduced AI PCs recently, and my colleague Olivier Blanchard at The Futurum Group has surveyed them. Several Android phones and Apple iPhones already have NPUs.

AI PCs are in a bit of a chicken-and-egg situation: to get much better AI-enhanced apps on PCs, we need AI hardware such as GPUs and NPUs, but hardware vendors will only include that hardware if the apps deliver clear user value. At least two hardware vendors I spoke with at the conference said they were in discussions with Microsoft to provide better Windows operating system support.

We need a better description of how these AI PCs will enhance our lives beyond what can be done today with CPUs and the GPUs in graphics cards. I recommend that all AI PC vendors, including Dell, develop such a list and build their marketing around it. For example: “In the future, your AI PC will allow you to do XYZ better and faster while preserving your privacy.” Here, XYZ might be better contextual help, automation of everyday tasks or business processes, or video and image editing. What would be on your list?

Data and Processing in the Right Place at the Right Time

We will use AI PCs to implement federated or distributed AI. In this scheme, data is collected and processed at several locations, including PCs and other devices at the Edge, and then used locally or synchronized with central cloud-based models. For Generative AI, the PC could tokenize text and other data, and then a background process could selectively augment a foundation model in a hybrid configuration.
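To make the hybrid pattern more concrete, here is a minimal, illustrative Python sketch of the federated idea: each edge device (such as an AI PC) computes an update on its own local data, and only those updates, never the raw data, are aggregated centrally. The names LocalUpdate, train_locally, and federated_average are hypothetical, and the “training” step is a toy calculation; this is a sketch of the general technique, not Dell’s or NVIDIA’s implementation.

# Toy federated-averaging sketch: devices keep their data local and
# only share model updates with a central aggregator. Hypothetical names.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class LocalUpdate:
    """Model update computed on one device's private data."""
    weights: List[float]
    num_samples: int

def train_locally(global_weights: List[float], local_data: List[float]) -> LocalUpdate:
    """Stand-in for on-device training: nudge weights toward the local data mean."""
    local_mean = sum(local_data) / len(local_data)
    new_weights = [w + 0.1 * (local_mean - w) for w in global_weights]
    return LocalUpdate(weights=new_weights, num_samples=len(local_data))

def federated_average(updates: List[LocalUpdate]) -> List[float]:
    """Aggregate device updates, weighting each by how much data it saw."""
    total = sum(u.num_samples for u in updates)
    dims = len(updates[0].weights)
    return [
        sum(u.weights[d] * u.num_samples for u in updates) / total
        for d in range(dims)
    ]

if __name__ == "__main__":
    global_weights = [0.0, 0.0]
    # Three hypothetical edge devices, each holding private local data.
    devices = [[random.gauss(1.0, 0.2) for _ in range(50)] for _ in range(3)]
    for _ in range(5):
        updates = [train_locally(global_weights, data) for data in devices]
        global_weights = federated_average(updates)
    print("Aggregated weights after 5 rounds:", global_weights)

The design point is simply that the raw data never leaves the device; only small, weighted updates travel to the center, which is what makes the edge-plus-cloud division of labor attractive for privacy and for minimizing data movement.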

The history of computing is a repeating cycle of deciding what is done locally or remotely and then optimizing the division of labor to minimize data movement. This is an example of client-server, a computer architecture that goes back to the 1960s. We are seeing it now for AI; neglect history and its lessons at your cost and peril!

Key Takeaway

Dell demonstrated at Dell Technologies World that it has quickly transformed itself to deliver on a comprehensive, evolving strategy that provides the foundation for customers to build their AI solutions.

The openness of Dell’s approach particularly appeals to me. NVIDIA is a partner of choice for many in the industry, but others are aggressively introducing general processors, GPUs, and NPUs to power AI. There is much more to AI than Generative AI, but it is a good benchmark to measure the performance, agility, and sustainability of solutions.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author holds a small equity position in Dell. The author does not hold an equity position in any other company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

AI for Human Progress – Six Five On the Road at Dell Technologies World

Making Dell AI Factory Real – Six Five On The Road at Dell Technologies World

How Dell Supports Customers with AI – Six Five On the Road at Dell Technologies World

Author Information

Dr. Bob Sutor

Dr. Bob Sutor has been a technical leader and executive in the IT industry for over 40 years. Bob’s industry role is to advance quantum and AI technologies by building strong business, partner, technical, and educational ecosystems. The singular goal is to evolve quantum and AI to help solve some of the critical computational problems facing society today. Bob is widely quoted in the press, delivers conference keynotes, and works with industry analysts and investors to accelerate understanding and adoption of quantum technologies. Bob is the Vice President and Practice Lead for Emerging Technologies at The Futurum Group. He helps clients understand sophisticated technologies in order to make the best use of them for success in their organizations and industries. He is also an Adjunct Professor in the Department of Computer Science and Engineering at the University at Buffalo, New York, USA. More than two decades of Bob’s career were spent in IBM Research in New York. During his time there, he worked on or led efforts in symbolic mathematical computation, optimization, AI, blockchain, and quantum computing. He was also an executive on the software side of the IBM business in areas including middleware, software on Linux, mobile, open source, and emerging industry standards. He was the Vice President of Corporate Development and, later, Chief Quantum Advocate, at Infleqtion, a quantum computing and quantum sensing company based in Boulder, Colorado USA. Bob is a theoretical mathematician by training, has a Ph.D. from Princeton University, and an undergraduate degree from Harvard College.

He’s the author of a book about quantum computing called Dancing with Qubits, which was published in 2019, with the Second Edition released in March 2024. He is also the author of the 2021 book Dancing with Python, an introduction to Python coding for classical and quantum computing. Areas in which he’s worked: quantum computing, AI, blockchain, mathematics and mathematical software, Linux, open source, standards management, product management and marketing, computer algebra, and web standards.

