CoreWeave Secures $2.3 Billion in Debt Financing, Challenges for AI Compute
The News: On August 3, startup cloud provider CoreWeave announced it had secured $2.3 billion in debt financing. The funding was led by Magnetar Capital and Blackstone Tactical Opportunities, with participation from Coatue, DigitalBridge Credit, PIMCO, and Carlyle. CoreWeave will use the new financing to expand its high-performance compute capacity, hire staff, and open new data centers.

In July, the company announced a new $1.6 billion data center in Plano, Texas. The company is aiming to have a total of 14 data centers in place by the end of 2023.

Read the full Press Release on CoreWeave’s debt financing here

In related news, on June 27, CoreWeave revealed it had delivered a record-breaking performance on MLPerf workloads with the new GPT-3 LLM benchmark test, which trained in under 11 minutes on more than 3,500 NVIDIA H100 Tensor Core GPUs on a CoreWeave H100 Cloud Supercomputer. The performance was 29 times faster than any previous run. What does that mean for AI? According to the CoreWeave blog post: “CoreWeave allows ML Research teams to train large models at unprecedented speed and efficiency by enabling parallel workloads to run across more NVIDIA GPUs. We deliver this infrastructure at scale, faster than anyone thought possible.”

In the blog post describing the MLPerf performance, CoreWeave CTO Brian Venturo went on to describe how CoreWeave might differentiate itself from established cloud providers: “Unlike generalized cloud providers, CoreWeave’s specialized infrastructure provides blazing fast bare-metal performance and the supporting storage, networking, and software solutions to match. Teams that use CoreWeave Cloud access a wider variety of NVIDIA GPUs and have the flexibility to ‘right-size’ their workloads to best match their demands and business needs. Importantly, CoreWeave’s compute solutions are optimized for highly parallelized workloads.”

Read the full blog post, MLPerf Results: CoreWeave and NVIDIA Showcase Record-Breaking, Cloud Native AI Supercomputer, here.

Analyst Take: Thanks to NVIDIA’s disinclination to feed potential GPU-making competitors and a made-for-AI compute approach, CoreWeave is poised to potentially disrupt the cloud provider hierarchy, a shift that would carry significant repercussions. Will it happen? The answers to these three key questions will determine the outcome.

How urgently do enterprises want to spin up generative AI?

NVIDIA favors working with CoreWeave and might limit GPU supplies to the hyperscalers because those hyperscalers are building their own GPUs to compete with NVIDIA. CoreWeave’s success could hinge on how much pent-up demand for AI-GPU compute is currently going unaddressed. There does not yet appear to be massive pent-up demand, but that could quickly change before the end of 2023. As a demand measuring stick, CoreWeave Chief Strategy Officer Brannin McBee told VentureBeat in July that the company had $30 million in revenue in 2022, expects $500 million in 2023, and has contracted nearly $2 billion for 2024.

Will stickiness and switching costs keep enterprises from moving AI workloads away from AWS, Microsoft Azure, and Google Cloud?

A key question will be whether enterprises have the bandwidth and willingness to divert AI compute to another cloud provider. Other considerations are at stake in such a decision: implementation cycles and costs, security measures and procedures, broader application integrations, and value-added services such as monitoring, data management, and AI tools and platforms. Contractual concerns might slow a significant shift. However, many enterprises prefer multiple cloud vendors and are used to dealing with these issues. A viable path is for enterprises to shift new generative AI projects to CoreWeave, letting the company build its business on that approach.

Can the legacy cloud providers match CoreWeave’s claimed speed, efficiency, and made-for-AI approach?

The X factor in CoreWeave’s opportunity to be a cloud provider disruptor is its potential advantage in speed and scale for running AI workloads. If the hyperscalers cannot match CoreWeave’s built-from-scratch approach to AI workloads and its lab-proven MLPerf efficiencies, the appeal of lower costs and speed to market will swing enterprises away from the hyperscalers. That might be only a temporary advantage: it is hard to imagine AWS, Google, and Microsoft will not counter with strategies to keep AI compute workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

NVIDIA Q1 Earnings

Google, NVIDIA, Qualcomm Spar on AI Domination

The Cost of The Next Big Thing – Artificial Intelligence

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.