Generative AI Investment Accelerating: $1.3 Billion for LLM Inflection

The News: Inflection announced on June 29 that it has raised a new funding round of $1.3 billion. Inflection is a large language model (LLM) company founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman. The company, launched approximately 1 year ago, is now valued at $4 billion. Investors include Microsoft, NVIDIA, Bill Gates, and former Google CEO Eric Schmidt.

Forbes reports that Inflection will install the world's largest GPU cluster for AI applications – 23,000 NVIDIA H100s. Inflection's chatbot is called "Pi," and the company's ambition is to create a more personal, emotional interface and LLM. Read the full press release about the investment round on Inflection's website.

Analyst Take: Microsoft's $10 billion investment in OpenAI aside (note: that figure is a bit misleading, since it is not all cash to OpenAI, and a big chunk of the "investment" is provided as Azure compute), the Inflection funding marks a significant milestone in the generative AI landscape. What will the Inflection investment mean for the generative AI market?

Short Term: More Investment in LLMs, GenAI Model Management, LLM Management

At this early stage of the generative AI market, investments are flowing to the more foundational elements of the generative AI stack – AI compute and AI tools/platforms. This makes sense, since generative AI use cases are still highly formative; we are in the innovation and experimentation stage of the market. As enterprises contemplate the generative AI opportunity, many are thinking about how to create a competitive edge, and how to build their own IP, with generative AI.

AI cloud compute is established with cloud providers AWS, Microsoft Azure, Google Cloud, and to an extent, Oracle. The investment opportunity lies in the next layer – LLMs, generative AI model management, and LLM management. This group of companies provides the foundational tools enterprises need to experiment with and build generative AI applications. LLM options are exploding, and their value propositions are being refined so rapidly it is difficult to keep up with the market. The primary driver in this competitive space will be how to present an LLM that meets enterprise-grade requirements for scalable applications – security, accuracy, privacy, etc.

LLMs and other foundation models (diffusion models, etc.) are proving to be a bit messy and not necessarily standalone, plug-and-play platforms. Consequently, a range of ancillary services has emerged for generative AI model management and LLM management, from companies such as Trustwise, WhyLabs, Galileo, OctoML, and Anyscale. These companies help enterprises scale AI compute, tune models, and tackle hallucination, among other functions. Investment in these companies will grow significantly in the short term.

Short Term: More Investment in Data Management

Generative AI has sparked renewed interest among enterprises in leveraging proprietary data. As such, data management is even more critical (see Databricks' MosaicML Acquisition, LakehouseIQ Launch, Data + AI Summit Show Gen AI Savvy). Investments in companies like Databricks, Snowflake, MongoDB, SingleStore, and narrower data management specialists like LlamaIndex and Datasaur will grow significantly in the short term.

Longer Term: 2024 Investment in GenAI Applications and Specific Use Cases Accelerates

Generative AI use cases are formative, and with the exception of automated code development, there are few proven use cases and, consequently, few proven applications. While generative AI application startups are proliferating, significant investment in such companies will lag the more foundational layers of the generative AI stack, with acceleration in this top layer coming sometime in 2024.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

New Generative AI-Powered Capabilities in Oracle Fusion Cloud HCM Announced

Improving Contact Center Experiences via NLP, NLU, and Analytics-Focused AI

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
