HPE Invests in LLM Aleph Alpha to Fuel On-Premises AI Strategy

The News: In November, Hewlett Packard Enterprise (HPE) announced it joined a consortium of investors in large language model (LLM) player Aleph Alpha’s Series B funding round of more than $500 million. The investment follows HPE’s selection of Aleph Alpha’s LLM Luminous as the foundation for the company’s first AI private cloud service, HPE GreenLake for Large Language Models. Here are the key details of the growing relationship:

  • HPE sees the relationship as a strategic partnership that allows both companies to “further integrate our complementary technologies” – HPE’s leadership in supercomputer technology and Aleph Alpha’s focus on the development of generative AI for data-sensitive industries such as healthcare, finance, law, government, and security.
  • Luminous is trained on an HPE supercomputer and leverages the HPE Machine Learning Development Environment, which is designed to efficiently scale AI model training.
  • The partners have several joint generative AI projects in process, including a project for a US federal government agency “which includes the analysis, summary and generation of documents critical for national security.”
  • The federal agency project is an example of a “private on-premises LLM environment based on HPE supercomputing and AI technology, which the agency uses for training, tuning and inferencing based on its own documents and databases.”

Read the press release from HPE on the Aleph Alpha partnership here.

Read the Aleph Alpha blog post on the Series B funding here.

Analyst Take: For many enterprises, the risk of running operations in the public cloud is too high because of the sensitive nature of their data or of their work overall. The foundation models that fuel generative AI, particularly LLMs, typically train on massive amounts of publicly available data, so the use of most LLMs can introduce a level of risk these enterprises are not willing to take. HPE is thinking about how to solve that dilemma with a vision for an on-premises AI stack. The company’s partnership with Aleph Alpha is part of that equation. Here is why the partnership is important to HPE’s on-premises AI approach.

Partners Are Philosophically Aligned

HPE has long been a vendor to enterprises that deploy private IT stacks, so it well understands the drivers and requirements of these enterprises. Aleph Alpha was founded on a similar premise. From the company’s landing page: “A new generation of AI is re-shaping knowledge work. In the most complex and critical environments there are no simple answers. Taking responsibility requires a human-machine paradigm beyond chatbots designed around data security, technology transparency and result explainability.” Note that HPE, in its announcement of the Aleph Alpha investment, chose to specifically detail a joint project for a US federal agency working on national security.

Synergy to Drive Efficiencies

Compute costs are a concern for all enterprises seeking to leverage generative AI. In Aleph Alpha, HPE chose an LLM that is by design more efficient than many other available models. Luminous, the model HPE is deploying for its customers, has 70 billion parameters, less than half the size of OpenAI’s GPT-3 (175 billion parameters). Aleph Alpha claims Luminous is “twice as efficient which translates to a better scaling and lower resource consumption when in use.” You can read more about the Luminous LLM in Aleph Alpha’s blog post on the Aleph Alpha website.

Conclusion

It will be interesting to see how the HPE-Aleph Alpha relationship develops. Other players vying for on-prem AI business have taken a broader approach to partnerships, focusing on open-source options. However, it has been argued that open-source software in general is not secure enough for many enterprise customers and applications, which would bode well for HPE’s current arc with Aleph Alpha. A focused partnership could lead to the further refinement and innovation required to make an on-prem AI stack a reality for security-minded enterprises.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Supercomputing 2023

Empowering AI Innovation with HPE’s Advanced Supercomputing Solution

Powering Your Future Business with AI Inference – Futurum Tech Webcast

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
