CoreWeave Selects Dell Servers to Power GPU Cloud

The News: Dell Technologies announced that CoreWeave—a specialized cloud provider for graphics processing unit (GPU)-focused workloads—has selected Dell PowerEdge XE9680 servers with NVIDIA H100 Tensor Core GPUs to power its cloud offerings. More about the announcement can be found in the press release on the Dell website.

Analyst Take: Dell recently announced that CoreWeave “has purchased thousands of servers” to expand the compute power of the CoreWeave cloud. The servers referenced are Dell PowerEdge XE9680 servers with NVIDIA H100 Tensor Core GPUs.

CoreWeave is a cloud provider that specializes in hosting compute-intensive workloads, many of which rely on GPU acceleration. While CoreWeave offers CPU-only solutions, increased demand for GPU acceleration has led CoreWeave to focus on building what it refers to as “The GPU Cloud.”

CoreWeave’s GPU Cloud is designed to handle the most compute-intensive workloads, such as AI and machine learning (ML), and to scale as needed without sacrificing performance. These applications are GPU-intensive, and as organizations further scale their AI applications, the infrastructure demands become significant.

With the announcement from Dell, it appears that Dell PowerEdge XE9680 servers are a key component in meeting this infrastructure challenge. The selection of Dell servers to power CoreWeave’s cloud is a significant demonstration of Dell’s ability to provide hardware for AI workloads. Although the infrastructure demands of any single AI application are non-trivial, CoreWeave (which also recently partnered with VAST Data) must meet those demands at cloud scale.

CoreWeave’s overall mission is to provide cloud-based compute for the most intensive workloads, so the selection of Dell PowerEdge servers reflects well on Dell’s capabilities. As AI and other similarly intensive applications continue to develop—both in and outside of the cloud—Dell appears well equipped to provide the necessary infrastructure.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Storj Releases Storj Select Feature to Meet Customer Compliance Needs

VAST Receives $118 Million in Series E Funding

Quantum Announces General Availability of Myriad

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.
