CoreWeave Secures $2.3 Billion in Debt Financing, Challenges for AI Compute

The News: On August 3, startup cloud provider CoreWeave announced it secured $2.3 billion in debt financing. The round was led by Magnetar Capital and Blackstone Tactical Opportunities, with participation from Coatue, DigitalBridge Credit, PIMCO, and Carlyle. The new financing will be used to add high-performance compute capacity, hire staff, and open new data centers.

In July, the company announced a new $1.6 billion data center in Plano, Texas, and it aims to have a total of 14 data centers in place by the end of 2023.

Read the full press release on CoreWeave’s debt financing here.

In related news, on June 27, CoreWeave revealed it delivered a record-breaking performance on MLPerf workloads with the new GPT-3 LLM benchmark test, which trained in under 11 minutes on more than 3,500 NVIDIA H100 Tensor Core GPUs on a CoreWeave H100 Cloud Supercomputer. The performance was 29 times faster than any previous run. What does that mean for AI? According to the CoreWeave blog post: “CoreWeave allows ML Research teams to train large models at unprecedented speed and efficiency by enabling parallel workloads to run across more NVIDIA GPUs. We deliver this infrastructure at scale, faster than anyone thought possible.”

In a blog post describing the MLPerf performance, CoreWeave CTO Brian Venturo went on to describe how CoreWeave might differentiate itself from established cloud providers: “Unlike generalized cloud providers, CoreWeave’s specialized infrastructure provides blazing fast bare-metal performance and the supporting storage, networking, and software solutions to match. Teams that use CoreWeave Cloud access a wider variety of NVIDIA GPUs and have the flexibility to ‘right-size’ their workloads to best match their demands and business needs. Importantly, CoreWeave’s compute solutions are optimized for highly parallelized workloads.”

Read the full blog post, MLPerf Results: CoreWeave and NVIDIA Showcase Record-Breaking, Cloud Native AI Supercomputer, here.

Analyst Take: Thanks to NVIDIA’s disinclination to feed potential GPU-making competitors and a made-for-AI compute approach, CoreWeave is positioned to potentially disrupt the cloud provider hierarchy, a shift that would have wide-ranging repercussions. Will it? The answers to these three key questions will determine the outcome.

How urgently do enterprises want to spin up generative AI?

NVIDIA favors working with CoreWeave and may limit GPU supplies to the hyperscalers because those providers are building their own AI chips to compete with NVIDIA. CoreWeave’s success could hinge on how much pent-up demand there is for AI-GPU compute right now that is not being addressed. There does not appear to be massive pent-up demand yet, but that could quickly change before the end of 2023. As a measure of demand, CoreWeave Chief Strategy Officer Brannin McBee told VentureBeat in July that the company had $30 million in revenue in 2022, should reach $500 million in 2023, and has contracted nearly $2 billion for 2024.

Will stickiness and switching costs keep enterprises from moving AI workloads away from AWS, Microsoft Azure, and Google Cloud?

A key question will be whether enterprises have the bandwidth and willingness to move AI compute to another cloud provider. Other considerations are at stake in such a decision: implementation cycles and costs, security measures and procedures, broader application integrations, and value-added services such as monitoring, data management, and AI tools and platforms. Contractual concerns might also slow a significant shift. However, many enterprises prefer multiple cloud vendors and are used to dealing with these issues. A viable path is for enterprises to place new generative AI projects with CoreWeave, letting CoreWeave build its business on that approach.

Can the legacy cloud providers match CoreWeave’s claimed speed, efficiency, and made-for-AI approach?

The X factor in CoreWeave’s opportunity to be a cloud provider disruptor is its potential advantage in speed and scale for running AI workloads. If the hyperscalers cannot match CoreWeave’s built-from-scratch approach to AI workloads and the lab-proven MLPerf efficiencies, the appeal of lower costs and speed to market could swing enterprises away from the hyperscalers. That advantage might only be temporary; it is hard to imagine AWS, Google, and Microsoft will not counter with strategies to keep AI compute workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

NVIDIA Q1 Earnings

Google, NVIDIA, Qualcomm Spar on AI Domination

The Cost of The Next Big Thing – Artificial Intelligence

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis, with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.
