Can Dell and NVIDIA’s AI Factory 2.0 Solve Enterprise-Scale AI Infrastructure Gaps?

Analyst(s): Olivier Blanchard
Publication Date: May 29, 2025

Dell Technologies and NVIDIA have launched the latest iteration of the Dell AI Factory at Dell Technologies World 2025, introducing updated PowerEdge servers powered by NVIDIA Blackwell Ultra GPUs, enhanced AI data management tools, and new managed services. The release supports AI lifecycle execution from training to deployment with high-performance compute, storage, and networking integrated into a single enterprise-grade solution.

What is Covered in this Article:

  • Dell launches new PowerEdge servers with support for up to 256 NVIDIA Blackwell Ultra GPUs per rack
  • Dell expands AI Factory with NVIDIA to include updated infrastructure, software stack, and managed services
  • New data platform upgrades include ObjectScale S3 over RDMA for 230% higher throughput
  • Agentic AI workloads supported through semantic storage and end-to-end integration with NVIDIA’s AI tools
  • Dell Managed Services introduced to help businesses scale AI operations with 24/7 support

The News: At Dell Technologies World 2025, Dell Technologies rolled out its latest wave of enterprise AI innovations, developed hand-in-hand with NVIDIA, as part of an expanded Dell AI Factory lineup. Among the highlights are new PowerEdge XE9785L and XE9712 servers, built from the ground up to handle accelerated AI training and inference workloads powered by NVIDIA’s new Blackwell GPUs. These systems are designed to give enterprises more horsepower across the AI lifecycle, with better performance, faster data access, and greater efficiency at the infrastructure level.

Beyond the hardware, Dell also announced upgrades to its AI data platform, deeper ties with NVIDIA AI Enterprise software, and a suite of new managed services. The re-engineered Dell AI Factory, now bolstered by NVIDIA’s stack, brings together compute, storage, networking, and software into a single, unified framework – geared to meet the rising demand for scalable, full-stack AI deployments.

Analyst Take: Dell’s latest update to its AI Factory platform, in collaboration with NVIDIA, signals a move toward a more fully integrated enterprise AI ecosystem. By pairing NVIDIA’s state-of-the-art GPU systems with Dell’s long-standing strengths in servers, storage, and networking, this second-generation AI Factory aims to resolve the common headaches that stem from the complexity of AI integration and deployment. The goal is to reduce friction, lower adoption hurdles, and give enterprises a pre-assembled ecosystem that is ready to deploy and scale. Especially for organizations that lack AI-specific infrastructure or teams, this kind of vertical integration could be the difference between expensive stop-and-go deployments and the ability to deploy and scale quickly.

Focused Hardware Upgrades Increase AI Workload Capacity

The refreshed PowerEdge XE9780L and XE9785L servers support up to 256 NVIDIA Blackwell Ultra GPUs per rack, delivering the high-density capacity that large language model (LLM) training demands. Dell notes that 8-way NVIDIA HGX B300 configurations can speed up LLM training by up to 4x, directly shortening time-to-value. Meanwhile, the XE9712, built on NVIDIA GB300 NVL72, pushes inference to new levels, offering up to 50x more output and a 5x improvement in throughput. The systems also feature direct-to-chip liquid cooling and Dell’s PowerCool technology for thermal control at scale. All told, this upgraded server lineup is tuned for both physical-world AI (such as robotics) and virtual agentic AI (such as digital twin models), aiming squarely at enterprise needs for dense, scalable compute. We note that with these types of efficiency improvements also comes the potential for real-world ROI and TCO discussions and case studies, which we expect to start seeing over the next 12 months.

Enhancements to Data Platforms Reduce Bottlenecks

Dell’s ObjectScale now includes support for S3 over RDMA, which roughly triples throughput, slashes latency by 80%, and cuts CPU use by nearly 98% when compared to standard S3 approaches. That kind of bandwidth opens the door to much higher GPU utilization, letting models access data more fluidly. In tandem, Dell’s alignment with NVIDIA’s AI Data Platform and NIXL Libraries – plus Project Lightning and PowerScale – offers robust infrastructure for large-scale, distributed inference. This toolkit ensures rapid, always-on access to data, a must-have for AI workflows where consistency and availability aren’t optional. At the same time, the updates help reduce data center sprawl while driving throughput, making the data layer faster and smarter.

Software and Semantic Tools Target Agentic AI

With the new Dell AI Factory, users gain built-in access to NVIDIA AI Enterprise software, including NeMo microservices, NIM inference frameworks, Llama Nemotron reasoning models, and more. The platform now supports Red Hat OpenShift as well, giving IT teams the flexibility to deploy AI containers however they see fit. These tools, embedded directly in Dell’s infrastructure, are intended to streamline the rollout of agentic AI applications that can reason, retrieve, and respond in complex ways. By embedding these tools into its AI Factory stack, Dell reduces the integration effort typically required to operationalize agentic AI.

Managed Services Fill Enterprise Capability Gaps

To close the loop, Dell is introducing a managed services layer to help enterprises stand up and maintain their full NVIDIA AI stack. These services cover everything from patching and version control to 24/7 system monitoring and infrastructure upkeep, tailored for AI-heavy operations. It’s a smart hedge against the usual enterprise pain points: talent shortages, system complexity, the steep climb of AI lifecycle management, and the risks associated with system failures and downtime. Dell’s managed offerings can act as a bridge for companies with slim AI teams, getting them from initial deployment to real-world outcomes faster and with less risk.

What to Watch:

  • Migrating Blackwell-based Dell infrastructure into existing enterprise data centers will require significant planning to avoid integration and thermal management issues.
  • Enterprises adopting Dell’s AI Factory must align distributed inference workloads and GPU capacity with their AI training and deployment needs.
  • Dell must ensure its managed services consistently support round-the-clock monitoring, patching, and updates across diverse customer environments.
  • The value proposition and ROI of on-prem AI infrastructure will be tested against public cloud alternatives in terms of cost, scalability, and deployment speed.
  • Successful deployments may prompt competing vendors to deepen their integrations with NVIDIA software and hardware to retain enterprise relevance.

See the complete press release on Dell’s next-generation AI Factory innovations with NVIDIA on the Dell Technologies website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Dell Q4 FY 2025 Earnings Show Strong AI Momentum, ISG Revenue Up 22% YoY

Powering the Future of Work: Dell Tech & Glean’s AI Collaboration – Six Five On The Road

Dell Expands Virtualized Networking Capabilities Through 6WIND Partnership

Author Information

Olivier Blanchard

Research Director Olivier Blanchard covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating their potentially negative effects. Follow his extended analysis on X and LinkedIn.
