The News: Hewlett Packard Enterprise and NVIDIA announce private cloud deployment solutions designed to accelerate the generative AI industrial revolution. Read the full press release on the HPE website.
HPE Unveils NVIDIA AI Computing by HPE: Enterprise AI Ascends
Analyst Take: As we move beyond the pilot stage of artificial intelligence (AI) deployments and into the phase of the cycle where enterprises look to deploy AI at scale in production environments, many are struggling to move forward. Deploying multiple use cases in production presents obstacles such as implementing governance models that ensure secure scalability and aligning change management processes to support generative AI (GenAI) training and inferencing.
To address these major challenges, HPE and NVIDIA have co-developed a turnkey private cloud for AI that can enable enterprises to focus their resources on building AI use cases to increase productivity and improve business outcomes. Through the strengthened NVIDIA alliance, HPE is preparing enterprises to architect an AI advantage that provides enduring business success by overcoming obstacles to AI optimization.
In the rapidly evolving landscape of AI, NVIDIA has emerged as a pivotal force, significantly shaping the industry’s trajectory. NVIDIA’s dominance is not just limited to its powerful GPUs but extends to a comprehensive software stack that underpins much of the AI development and deployment today. This software ecosystem, including the NVIDIA AI Enterprise suite and NIM inference microservices, provides robust tools for developing, optimizing, and deploying AI models at scale. By integrating hardware and software, NVIDIA has created a seamless environment that accelerates AI innovation across various industries.
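To illustrate what this software layer looks like in practice, below is a minimal, hypothetical sketch of how an application might query a locally hosted NIM inference microservice through its OpenAI-compatible API. The endpoint address, model name, and prompt are illustrative assumptions, not details drawn from the announcement.

```python
# Hypothetical sketch: calling a locally hosted NVIDIA NIM inference microservice
# via its OpenAI-compatible endpoint. The endpoint, port, and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically do not require a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # example model identifier served by the NIM container
    messages=[
        {"role": "user", "content": "Summarize last quarter's support tickets by product line."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```

The point of the sketch is that the inference endpoint behaves like any other OpenAI-compatible service, which is why enterprises can reuse existing application code while keeping models and data on-premises.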
Simultaneously, there is a notable shift in how enterprises manage their AI workloads. Increasingly, companies are opting to run AI workloads on-premises rather than relying primarily on public cloud providers. This trend is driven by concerns over data sovereignty and security. As data becomes more critical and sensitive, organizations are wary of potential vulnerabilities and regulatory implications associated with storing and processing data in public clouds. By keeping AI operations on-premises, enterprises can exert greater control over their data, ensuring compliance with local regulations and mitigating risks related to data breaches and unauthorized access.
What Was Announced?
HPE and NVIDIA unveiled a significant expansion of their partnership, marked by the introduction of NVIDIA AI Computing by HPE. This comprehensive portfolio of AI solutions and joint go-to-market strategies aims to accelerate the adoption of generative AI within enterprises. The collaboration between these two tech giants signifies a deep integration of NVIDIA’s advanced AI computing capabilities with HPE’s robust infrastructure and cloud solutions.
From our view, the centerpiece of this announcement is HPE Private Cloud AI, a breakthrough solution that integrates NVIDIA AI computing, networking, and software with HPE’s AI storage, compute, and the HPE GreenLake cloud platform. This offering is designed to provide enterprises with an energy-efficient, fast, and flexible pathway for developing and deploying generative AI applications. It features the new OpsRamp AI copilot, which enhances IT operations by improving workload and IT efficiency. HPE Private Cloud AI is available in four configurations, catering to a wide range of AI workloads and use cases, from small-scale inferencing to large-scale machine learning and retrieval-augmented generation (RAG).
The hardware component of HPE Private Cloud AI includes new server models tailored for various needs. These range from the HPE ProLiant Compute DL384 Gen12, equipped with an NVIDIA GH200 NVL2, to the high-end HPE Cray XD670, which boasts up to eight NVIDIA H200 Tensor Core GPUs. The mid-tier model can also support up to eight NVIDIA H200 NVL GPUs, with select versions featuring direct liquid cooling (DLC) for enhanced performance and efficiency.
Additionally, HPE announced that HPE GreenLake for File Storage has achieved NVIDIA DGX BasePOD certification and NVIDIA OVX storage validation. New OpsRamp capabilities for AI infrastructure observability were also introduced on the HPE GreenLake cloud platform, ensuring that enterprises can effectively monitor and manage their AI workloads.
We find that the offering benefits from HPE’s decades of experience in DLC, backed by a portfolio of more than 300 DLC patents. HPE brings DLC options, including liquid-to-air cooling, 70% DLC, and 100% DLC, that can be matched to the specific needs of customers. HPE GreenLake for File Storage provides certified AI storage, including support for optimized GPU utilization through NVIDIA Quantum-2 InfiniBand, RDMA, and NVIDIA GPUDirect Storage.
This comprehensive suite of offerings underscores HPE’s commitment to simplifying AI adoption for enterprises. With a focus on ease of use, HPE claims that its Private Cloud AI can be set up with just a few clicks, providing a seamless, self-service cloud experience. The solution supports both standalone on-premises deployment and hybrid models, offering flexibility to meet diverse business needs.
HPE President and CEO Antonio Neri and NVIDIA founder and CEO Jensen Huang highlighted the unprecedented level of integration between their technologies, emphasizing how this collaboration aims to reduce the risks and barriers associated with large-scale AI adoption. The goal is to empower enterprises to focus their resources on developing innovative AI use cases that enhance productivity and unlock new revenue streams.
Looking Ahead
HPE’s strategic alignment with NVIDIA signifies a clear commitment to leveraging NVIDIA’s superior AI software and hardware ecosystem. This partnership highlights HPE’s prioritization of the NVIDIA relationship, even as HPE continues to collaborate with AI silicon and solution partners such as Intel and AMD.
We discern that the rationale behind this choice appears to be driven more by AI software capabilities than by silicon price-performance factors. NVIDIA’s AI software stack, particularly the NVIDIA AI Enterprise suite and NIM inference microservices, provides a compelling value proposition that goes beyond hardware specifications.
NVIDIA’s software ecosystem is designed to optimize the entire AI lifecycle, from model development and training to deployment and inferencing. This comprehensive approach ensures that enterprises can achieve high performance and efficiency while maintaining control over their data. By integrating NVIDIA’s software stack with its own infrastructure and cloud solutions, HPE is positioning itself as a leader in the enterprise AI market, offering solutions that are not only powerful but also easy to deploy and manage.
As enterprises continue to navigate the complexities of AI adoption, HPE’s focus on providing turnkey solutions that address data sovereignty and security concerns will likely resonate with many organizations. By prioritizing and strengthening its partnership with NVIDIA, HPE can deliver a cohesive and integrated AI ecosystem that meets the needs of modern enterprises, helping them harness the full potential of AI while mitigating associated risks. This strategic move underscores HPE’s commitment to innovation and its determination to stay at the forefront of the AI revolution.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
HPE Infuses GenAI LLMs to Uplift HPE Aruba Networking Central AIOps
HPE’s Game-Changing $14 Billion Acquisition of Juniper
HPE Aruba Networking Ready to Turbocharge Private 5G
Image Credit: HPE
Author Information
Ron is a customer-focused research expert and analyst with more than 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority on tracking the evolution of, and identifying key disruptive trends within, the service enablement ecosystem, covering a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William & Mary.
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.