
HPE Invests in LLM Aleph Alpha to Fuel On-Premises AI Strategy

The News: In November, Hewlett Packard Enterprise (HPE) announced it joined a consortium of investors in large language model (LLM) player Aleph Alpha’s Series B funding round of more than $500 million. The investment follows HPE’s selection of Aleph Alpha’s LLM Luminous as the foundation for the company’s first AI private cloud service, HPE GreenLake for Large Language Models. Here are the key details of the growing relationship:

  • HPE sees the relationship as a strategic partnership that allows both companies to “further integrate our complementary technologies” – HPE’s leadership in supercomputer technology and Aleph Alpha’s focus on the development of generative AI for data-sensitive industries such as healthcare, finance, law, government, and security.
  • Luminous is trained on an HPE supercomputer and leverages the HPE Machine Learning Development Environment, which is designed to efficiently scale AI model training.
  • The partners have several joint generative AI projects in progress, including a project for a US federal government agency “which includes the analysis, summary and generation of documents critical for national security.”
  • The federal agency project is an example of a “private on-premises LLM environment based on HPE supercomputing and AI technology, which the agency uses for training, tuning and inferencing based on its own documents and databases.” A minimal sketch of this inference pattern follows below.
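
To make the on-premises pattern described in that bullet concrete, below is a minimal Python sketch of what inference against a privately hosted model can look like: a document is sent to an endpoint running inside the organization’s own network rather than to a public cloud service. The endpoint URL, model name, and payload fields are illustrative assumptions for the sketch, not HPE’s or Aleph Alpha’s documented API.

```python
import requests

# Hypothetical on-premises inference endpoint. In a private-cloud deployment,
# the model is hosted inside the organization's own data center, so documents
# never leave the internal network. The path, model name, and payload shape
# below are illustrative assumptions, not a documented API.
INTERNAL_ENDPOINT = "https://llm.internal.example.gov/v1/completions"

def summarize_document(document_text: str) -> str:
    """Ask the locally hosted model to summarize a sensitive document."""
    payload = {
        "model": "luminous",  # placeholder name for the locally hosted model
        "prompt": f"Summarize the following document:\n\n{document_text}\n\nSummary:",
        "maximum_tokens": 256,
    }
    # The request stays on the internal network; nothing is sent to a public cloud.
    response = requests.post(INTERNAL_ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["completion"]

if __name__ == "__main__":
    print(summarize_document("(contents of an internal report would go here)"))
```

The point of the pattern is data locality: prompts, documents, and completions never traverse the public internet, which is what makes generative AI workable for the data-sensitive industries named above.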

Read the press release from HPE on the Aleph Alpha partnership here.

Read the Aleph Alpha blog post on the Series B funding here.

Analyst Take: For many enterprises, running operations in the public cloud carries too much risk because of the sensitive nature of their data or of their work overall. The foundation models that fuel generative AI, particularly LLMs, typically train on massive amounts of publicly available data, so using most LLMs introduces a level of risk these enterprises are not willing to take. HPE is working to solve that dilemma with a vision for an on-premises AI stack, and its partnership with Aleph Alpha is part of that equation. Here is why the partnership matters to HPE’s on-premises AI approach.

Partners Are Philosophically Aligned

HPE has been a long-time vendor to enterprises that deploy private IT stacks, so it understands well the drivers and requirements of these enterprises. Aleph Alpha was founded on a similar premise. From the company’s landing page: “A new generation of AI is re-shaping knowledge work. In the most complex and critical environments there are no simple answers. Taking responsibility requires a human-machine paradigm beyond chatbots designed around data security, technology transparency and result explainability.” Note that HPE, in its announcement of the Aleph Alpha investment, chose to specifically highlight a joint project for a US federal agency working on national security.

Synergy to Drive Efficiencies

Compute costs are a concern for all enterprises seeking to leverage generative AI. In Aleph Alpha, HPE chose an LLM that is by design more efficient than many other available models. Luminous, the model HPE is deploying for its customers, is a 70-billion-parameter model, less than half the size of OpenAI’s GPT-3. Aleph Alpha claims Luminous is “twice as efficient which translates to a better scaling and lower resource consumption when in use.” You can read more about the Luminous LLM in Aleph Alpha’s blog post.
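
As a rough illustration of why parameter count matters for on-premises deployments, the sketch below estimates the memory needed just to hold model weights at 16-bit precision (2 bytes per parameter). It is a back-of-envelope calculation under that single assumption; it ignores activations, KV cache, and optimizer state, so real deployments need headroom beyond these figures.

```python
# Back-of-envelope sizing: weight memory scales linearly with parameter count.
# Assumes 16-bit (2-byte) weights and ignores activations, KV cache, and
# optimizer state, so actual deployments require more memory than shown here.
BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

for name, params in [("Luminous (70B)", 70e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weight memory")

# Output:
# Luminous (70B): ~140 GB of weight memory
# GPT-3 (175B): ~350 GB of weight memory
```

Roughly halving the parameter count roughly halves the accelerator memory needed just to serve the weights, which is where a smaller, more efficient model becomes material for customers running their own infrastructure.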

Conclusion

It will be interesting to see how the HPE-Aleph Alpha relationship develops. Other players vying for on-prem AI business have taken a broader approach to partnerships, focusing on open-source options. However, it has been argued that open-source software in general is not secure enough for many enterprise customers and applications, which would bode well for HPE’s current approach with Aleph Alpha. A focused partnership could drive the further refinement and innovation required to make an on-prem AI stack a reality for security-minded enterprises.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Supercomputing 2023

Empowering AI Innovation with HPE’s Advanced Supercomputing Solution

Powering Your Future Business with AI Inference – Futurum Tech Webcast

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.
