Analyst(s): Nick Patience
Publication Date: February 12, 2026
The five largest US cloud and AI infrastructure providers – Microsoft, Alphabet, Amazon, Meta, and Oracle – have collectively committed to spending between $660 billion and $690 billion on capital expenditure in 2026, nearly doubling 2025 levels. At the same time, pure-play AI vendors led by OpenAI and Anthropic are posting rapid revenue growth, though their combined revenues remain a fraction of the infrastructure investment being deployed on their behalf.
What is Covered in this Article:
- US hyperscaler AI capex plans for 2025-2026, including Microsoft, Alphabet, Amazon, Meta, and Oracle, with aggregate spending approaching $700 billion
- The Stargate project and its $500 billion infrastructure ambition involving OpenAI, SoftBank, and Oracle
- Pure-play AI vendor revenue trajectories for OpenAI, Anthropic, Cohere, Mistral, xAI, Perplexity, and others
- Chinese and regional AI infrastructure investment from Alibaba, ByteDance, Tencent, and Middle Eastern sovereign funds
- The sustainability question: whether AI revenues can justify the scale of infrastructure investment underway
The News: The first weeks of 2026 have brought a cascade of earnings reports and guidance from the largest technology companies, and the consistent theme is an acceleration of AI-related capital expenditure. This includes Amazon with a projected $200 billion in capex for 2026 (most, but not all, for data centers), Alphabet at $175-185 billion, Meta at $115-135 billion, Microsoft tracking toward $120 billion or more, and Oracle targeting $50 billion. Combined, these five companies alone plan to spend roughly $660-690 billion on infrastructure in 2026, the vast majority directed at AI compute, data centers, and networking. All the hyperscalers report that their markets are supply-constrained, rather than demand-constrained.
Simultaneously, the pure-play AI model vendors are reporting strong revenue growth. OpenAI ended 2025 with approximately $20 billion in annual recurring revenue, a threefold increase from the prior year. Anthropic’s revenue run rate surpassed $9 billion in January 2026, up from roughly $1 billion at the end of 2024. Smaller vendors, including Cohere, Mistral, and Perplexity, are also scaling, though from considerably lower bases.
AI Capex 2026: The $690B Infrastructure Sprint
Analyst Take: The scale of spending is substantial. Over roughly 18 months of successive guidance revisions, the aggregate annual AI infrastructure commitment from the five largest US cloud and technology companies has increased from approximately $380 billion in 2025 to a projected $660-690 billion in 2026 – a near-doubling of annual spending, driven by a shared conviction that AI workloads will consume every available unit of compute capacity. The question facing the industry is whether the revenue and demand trajectory can justify that level of investment.
The Capex Landscape: Who is Spending What
Amazon leads the field with its $200 billion capex plan for 2026 (most of which is data centers, the rest being logistics and other elements of Amazon’s business), a figure that caught even bullish projections off guard – consensus expectations had been closer to $147 billion. CEO Andy Jassy defended the plan by noting that AI capacity is being monetized as quickly as it is installed and that AWS reached a $142 billion annualized revenue run rate with growth accelerating to 24% year-over-year, a three-year high. Still, Amazon’s stock dropped roughly 8-10% on the announcement, reflecting investor nervousness about the payback period.
Alphabet’s planned $175-185 billion is notable partly because it has already been revised three times upward from an initial $71-73 billion range for 2025. CEO Sundar Pichai acknowledged the scale is significant enough to cause concern internally, but pointed to a cloud backlog that surged 55% sequentially to over $240 billion. Alphabet also reported reducing Gemini serving costs by 78% over 2025 through model optimization, an important signal that efficiency gains are occurring alongside the spending increases.
Microsoft is tracking toward $120 billion or more in fiscal 2026, having already spent $37.5 billion in its most recent quarter alone. The company disclosed an $80 billion backlog of Azure orders that cannot be fulfilled due to power constraints, suggesting demand is outpacing even its aggressive build-out pace. Meta – not a cloud provider like the hyperscalers, but still a massive capex investor – plans capex in the $115-135 billion range, including a 1GW data center in Ohio and a facility in Louisiana that could eventually scale to 5GW. Oracle's projected $50 billion represents a 136% increase over 2025, supported by $523 billion in remaining performance obligations.
Table 1: US Hyperscaler AI Capex Summary

| Company | Projected 2026 Capex | Notes |
| --- | --- | --- |
| Amazon | ~$200B | Majority for data centers; remainder logistics and other businesses |
| Alphabet | $175-185B | Revised upward three times from an initial $71-73B range for 2025 |
| Meta | $115-135B | Includes 1GW Ohio data center; Louisiana site could scale to 5GW |
| Microsoft | $120B+ (FY2026) | $37.5B spent in most recent quarter; $80B unfulfilled Azure backlog |
| Oracle | ~$50B | Up 136% over 2025; $523B in remaining performance obligations |
| Total | ~$660-690B | Near-doubling of 2025's ~$380B |
The Stargate Factor
Layered on top of individual company plans is the Stargate project, a joint venture between OpenAI, SoftBank, Oracle, and MGX, announced in January 2025 and backed by the Trump administration. The project targets $500 billion in AI infrastructure investment by 2029, with an initial $100 billion deployment. As of September 2025, roughly 7 GW of capacity had been planned across five sites in Texas, New Mexico, and Ohio, with more than $400 billion in commitments within the first three years.
The Revenue Gap: Capex vs. AI Vendor Returns
The scale of investment raises an obvious question about returns. The pure-play AI vendors – the primary consumers of this infrastructure – are growing rapidly but from modest bases relative to the capital being deployed. OpenAI’s $20 billion ARR is impressive for a company that barely had consumer products three years ago, but it represents roughly 3% of the projected 2026 hyperscaler capex total. Anthropic’s $9 billion run rate, while showing 9x year-over-year growth, occupies a similar position. The entire cohort of pure-play AI vendors – including Cohere ($150 million ARR), Mistral (~$400 million), Perplexity ($148 million annualized), and others – likely accounts for less than $35 billion in projected combined 2026 revenue.
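The gap can be illustrated with quick arithmetic over the figures cited in this article. A minimal sketch (the midpoints used for Alphabet's and Meta's guidance ranges are my assumption; all other numbers come from the text above):

```python
# Back-of-the-envelope check of the revenue-to-capex gap,
# using the figures cited in this article (all in $ billions).
capex_2026 = {             # projected 2026 hyperscaler capex
    "Amazon": 200,
    "Alphabet": 180,       # assumed midpoint of $175-185B guidance
    "Meta": 125,           # assumed midpoint of $115-135B guidance
    "Microsoft": 120,
    "Oracle": 50,
}
total_capex = sum(capex_2026.values())

ai_vendor_revenue = {      # latest reported run rates
    "OpenAI": 20.0,
    "Anthropic": 9.0,
    "Mistral": 0.4,
    "Cohere": 0.15,
    "Perplexity": 0.148,
}
total_revenue = sum(ai_vendor_revenue.values())

print(f"Total projected 2026 capex: ${total_capex}B")
print(f"OpenAI ARR as share of capex: {20 / total_capex:.1%}")   # ~3%
print(f"Pure-play vendor revenue as share of capex: "
      f"{total_revenue / total_capex:.1%}")
```

Even on these generous run-rate figures, the listed pure-play vendors together cover only a mid-single-digit percentage of the year's planned infrastructure spend, which is the core of the sustainability question.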
This is not to say the investment is misplaced. The hyperscalers are not building exclusively for third-party AI vendors; they are building for their own AI services, enterprise customers running AI workloads on their clouds, and the anticipated growth in AI inference demand as adoption matures. AWS alone reached $142 billion in annualized revenue, and a growing share of that is AI-driven. Microsoft reports that its AI business is already larger than some of its more established franchises. The revenue is coming – but the infrastructure is being built well ahead of it, which introduces execution risk.
The US-China Infrastructure Race
While US companies dominate the raw spending figures, China's AI infrastructure investment is accelerating on a different model. Alibaba has committed RMB 380 billion (~$53 billion) over three years for AI and cloud, with CEO Eddie Wu indicating a new, larger plan is forthcoming. ByteDance is targeting RMB 160 billion (~$23 billion) in 2026 capex, with roughly $13 billion earmarked for AI processors. Tencent has been more measured, with quarterly capex actually declining in late 2025 as it prioritizes profitability alongside AI buildout.
China’s total AI investment reached an estimated $125 billion in 2025, a figure that, while substantial, remains well below the US hyperscaler total. However, Chinese AI model makers will doubtless point to DeepSeek’s R1 release in January 2025, which demonstrated that Chinese companies can achieve competitive model performance, even if some of the low-cost claims were misleading.
US chip export controls continue to shape the landscape, though their impact is evolving. As of January 2026, the Trump administration has allowed conditional sales of NVIDIA's H20 and H200 chips to approved Chinese customers with revenue-sharing arrangements. This has provided some relief to Chinese firms while maintaining restrictions on the most advanced hardware. Huawei's domestic chip production remains limited (congressional testimony cited only 200,000 AI chips produced in 2025), and the H200 is roughly 60% more powerful in real-world training than Huawei's Ascend 910C, suggesting Chinese companies still face meaningful constraints on scaling domestic compute.
Regional Investment: Middle East, Europe, and Asia-Pacific
The AI infrastructure buildout extends beyond the US-China axis. Saudi Arabia announced more than $15 billion in new AI investments at LEAP 2025, held in Riyadh one year ago, including a $10 billion partnership between PIF and Google Cloud and plans to deploy 500 MW each of AMD and Nvidia chips through its HUMAIN initiative. The UAE is developing what it describes as the largest AI campus outside the US – a 26 square kilometer facility in Abu Dhabi with 5 GW of planned capacity. These investments reflect a strategic bet by Gulf states to diversify beyond energy economies.
The EU has unveiled a €200 billion AI Continent Action Plan, split between €50 billion in public funding and €150 billion from private sources. Thirteen AI Factories have been established, involving 17 member states, and European AI server spending is projected to reach $47 billion in 2026. Japan's government has allocated ¥1 trillion annually for AI and semiconductor development, while South Korea's 2026 national AI budget stands at 9.9 trillion won (~$6.7 billion), with nearly half directed at infrastructure.
Can the Spending Be Sustained?
The sustainability of current capex levels depends on several factors that remain uncertain. On the demand side, the signals are positive: cloud backlogs are large and growing, enterprise AI adoption is broadening, and inference workloads are scaling as AI moves from experimentation to production. All five major hyperscalers report that AI capacity is being absorbed as quickly as it can be deployed.
On the supply side, constraints are real. Microsoft’s $80 billion unfulfilled Azure backlog is largely a function of power availability, not demand softness. Energy requirements for AI data centers are growing rapidly – global data center electricity consumption is projected to double between 2022 and 2026, according to the IEA. Securing power, permitting sites, and building physical infrastructure at this pace is stretching current infrastructure development capabilities.
The risk lies in the gap between investment timing and revenue realization. Infrastructure built today may take 18-36 months to generate proportional returns. If AI adoption progresses more slowly than anticipated, or if efficiency gains reduce the compute required per workload more quickly than expected, the return on these investments could disappoint. However, Jevons Paradox – where efficiency gains increase rather than decrease total consumption – may also apply, as Satya Nadella argued a year ago: cheaper inference could drive dramatically higher usage volumes, ultimately requiring more infrastructure rather than less.
What to Watch:
- Whether hyperscaler capex guidance for 2027 continues to escalate or begins to plateau as base effects grow and power constraints bite
- The trajectory of OpenAI and Anthropic revenues through 2026, particularly whether enterprise adoption accelerates enough to narrow the revenue-to-capex ratio
- Efficiency-focused approaches – if training and inference costs continue to fall, the relationship between capex and capability will shift
- The US chip export policy under the Trump administration, and whether conditional sales to Chinese firms expand or tighten
- Power infrastructure development as a binding constraint on the pace of AI data center deployment, particularly in the US and Europe
- The Stargate project’s execution against its aggressive timeline and whether it attracts additional participants or faces financing challenges
See the latest earnings releases from Alphabet, Amazon, and Microsoft on their respective websites.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of Futurum as a whole.
Other insights from Futurum:
Alphabet Q4 FY 2025 Highlights Cloud Acceleration and Enterprise AI Momentum
Sovereign AI: What Nations Want (And What They’ll Actually Get) – Report Summary
NVIDIA and CoreWeave Team to Break Through Data Center Real Estate Bottlenecks
Author Information
Nick Patience is VP and Practice Lead for AI Platforms at The Futurum Group. Nick is a thought leader on AI development, deployment, and adoption - an area he has researched for 25 years. Before Futurum, Nick was a Managing Analyst with S&P Global Market Intelligence, responsible for 451 Research’s coverage of Data, AI, Analytics, Information Security, and Risk. Nick became part of S&P Global through its 2019 acquisition of 451 Research, a pioneering analyst firm that Nick co-founded in 1999. He is a sought-after speaker and advisor, known for his expertise in the drivers of AI adoption, industry use cases, and the infrastructure behind its development and deployment. Nick also spent three years as a product marketing lead at Recommind (now part of OpenText), a machine learning-driven eDiscovery software company. Nick is based in London.