Anthropic has expanded its compute partnership with Google and Broadcom to secure dedicated AI compute infrastructure, signaling a strategic move beyond model development into supply chain control. The deal raises a question that matters more than the announcement itself: is Anthropic building a durable compute moat, taking on infrastructure complexity that distracts from its core model differentiation, or both?
What is Covered in this Article
- What the Google-Broadcom compute partnership actually gives Anthropic strategically
- Whether vertical integration strengthens or dilutes Anthropic’s model-first identity
- Competitive implications for OpenAI, Google DeepMind, and Microsoft’s AI supply chain
- Execution risks in custom silicon and the broader semiconductor dependency problem
The News: Anthropic announced an expanded compute partnership with Google and Broadcom to develop and deploy dedicated AI compute infrastructure, deepening its existing relationship with Google Cloud and adding Broadcom’s custom ASIC capabilities to the mix. The arrangement gives Anthropic preferential access to purpose-built silicon and cloud capacity at a time when GPU supply constraints are the single most-cited barrier to AI scaling. The compute partnership follows a pattern of frontier model companies moving upstream into hardware relationships to reduce dependence on NVIDIA’s supply chain and pricing power.
Broadcom is the dominant player. As of Q1 CY2025, Broadcom commands approximately 72% of the data center XPU (custom accelerator) market by revenue, generating roughly $5.1 billion per quarter in XPU-related revenue – far ahead of the next largest player, Marvell, at approximately 13% share [1]. Anthropic is therefore partnering with the company that already builds the majority of custom AI chips deployed at hyperscale.
The timing is notable. According to Futurum Group’s 2H 2025 Semiconductors Decision Maker Survey (n=831), accelerator and GPU supply ranks as the number one constraint in scaling data center compute, cited by 26% of respondents, ahead of power and cooling at 23% [2]. Anthropic is not solving an abstract future problem; it is responding to a supply bottleneck that is actively limiting inference capacity today.
Anthropic’s Google-Broadcom Deal: Model Company or Infrastructure Play?
Analyst Take: This deal is less about Anthropic becoming a chip company and more about who controls the rate-limiting factor in AI deployment. Compute access is now a strategic asset, not a procurement line item. The question is whether Anthropic can absorb infrastructure complexity without losing the model quality focus that differentiates Claude from GPT and Gemini.
Compute Partnership as Competitive Moat or Operational Distraction
Anthropic’s move mirrors what Google, Microsoft, and Amazon have done for years: secure the supply chain rather than rent it. The logic is sound in theory. NVIDIA dominates GPU market share in data centers, which means any company relying entirely on NVIDIA for inference capacity is exposed to both supply constraints and pricing leverage. Custom silicon with Broadcom reduces that exposure. Broadcom’s dominance in the XPU market — 72% share and over $5 billion per quarter in revenue as of Q1 CY2025 [1] — means Anthropic is working with the most proven custom ASIC partner available. However, designing, validating, and deploying custom ASICs at scale requires engineering depth that is categorically different from training and fine-tuning frontier models. Amazon’s Trainium and Google’s TPU programs took years to reach production viability. Anthropic is entering this game later, with a smaller engineering base and against incumbents who have already absorbed the learning curve costs.
What Google Gets From the Compute Partnership That Anthropic May Not Have Priced In
Google is not a neutral infrastructure partner. It is a direct competitor in foundation models through Gemini, and it has a structural interest in keeping Anthropic capable enough to validate Google Cloud as an enterprise AI platform without letting Anthropic become an independent threat. The partnership deepens Anthropic’s dependency on Google Cloud at the exact moment Anthropic needs enterprise customers to see it as a vendor-neutral model provider.
The Compute Partnership and the Inference Scaling Problem Nobody Is Solving Fast Enough
The strategic subtext of this compute partnership is inference, not training. Training runs are large but infrequent; inference at production scale is continuous and cost-sensitive. Futurum Group’s 2H 2025 Semiconductors Decision Maker Survey (n=831) found that 33% of organizations now cite inference at scale as their primary compute purpose [2]. Custom silicon optimized for inference workloads can meaningfully reduce per-token costs, which directly affects Anthropic’s ability to price competitively against OpenAI and Google. If Broadcom’s ASICs deliver the inference efficiency gains Anthropic needs, the deal has real strategic value. The XPU market’s projected growth to $84 billion by CY2029 [1] underscores the scale of the opportunity, and the intensity of competition for Broadcom’s design capacity. If the silicon takes 18 to 24 months to reach production maturity, however, Anthropic will have spent significant engineering and financial capital on infrastructure that does not improve its competitive position in the window that matters most: the next 12 months of enterprise AI contract cycles.
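The per-token economics driving this argument can be sketched with a simple amortization model. Every number below is an illustrative assumption for the sake of the arithmetic, not a figure from this article, Anthropic, Broadcom, or any vendor; the point is only that hardware cost amortized over sustained throughput is what sets the per-token floor that custom inference silicon aims to lower.

```python
# Back-of-envelope model of amortized per-token inference cost.
# All inputs are hypothetical, illustrative assumptions.

def cost_per_million_tokens(hw_cost_usd: float,
                            amortization_years: float,
                            tokens_per_second: float,
                            utilization: float) -> float:
    """Amortized hardware cost (USD) per one million generated tokens."""
    seconds = amortization_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * utilization * seconds
    return hw_cost_usd / total_tokens * 1_000_000

# Hypothetical GPU server: $250k, 3-year amortization,
# 50k tokens/s aggregate throughput, 60% average utilization.
gpu = cost_per_million_tokens(250_000, 3, 50_000, 0.60)

# Hypothetical inference-optimized ASIC server: $150k at the
# same assumed throughput and utilization.
asic = cost_per_million_tokens(150_000, 3, 50_000, 0.60)

print(f"GPU:  ${gpu:.3f} per 1M tokens")
print(f"ASIC: ${asic:.3f} per 1M tokens")
print(f"Savings: {1 - asic / gpu:.0%}")  # prints "Savings: 40%"
```

With these made-up inputs the cheaper box cuts per-million-token cost by 40%, directly proportional to the hardware-cost gap; in practice the interesting variable is usually tokens-per-second-per-dollar rather than sticker price alone, which is why inference-tuned silicon is the lever Anthropic is reaching for.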
What to Watch
- Silicon Timeline: Will Broadcom-designed ASICs reach production inference workloads within 12 months, or does the custom silicon roadmap slip past the 2027 enterprise procurement cycle?
- Google Dependency Ceiling: At what point does Anthropic’s Google Cloud reliance become a liability in enterprise deals where Microsoft Azure is the incumbent, and how does Anthropic’s sales team handle that conflict?
- OpenAI’s Counter-Move: Does OpenAI accelerate its own custom silicon program or deepen its Azure dependency in response, and which strategy proves more durable against a well-capitalized Anthropic with dedicated compute?
- Enterprise Neutrality Test: Can Anthropic credibly position Claude as a multi-cloud, vendor-neutral model when its infrastructure is co-developed with one of the three hyperscalers it needs enterprise buyers to trust it against?
Sources
1. Anthropic expands partnership with Google and Broadcom …
2. AI Platforms Market Forecast – Scenario Analysis
Declaration of generative AI and AI-assisted technologies in the writing process: This content has been generated with the support of artificial intelligence technologies. Due to the fast pace of content creation and the continuous evolution of data and information, The Futurum Group and its analysts strive to ensure the accuracy and factual integrity of the information presented. However, the opinions and interpretations expressed in this content reflect those of the individual author/analyst. The Futurum Group makes no guarantees regarding the completeness, accuracy, or reliability of any information contained herein. Readers are encouraged to verify facts independently and consult relevant sources for further clarification.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Read the full Futurum Group Disclosure.
Other Insights from Futurum:
Is Autonomous IT The Endgame For AI In Operations Or Just The Start Of A Bigger Shift?
OpenAI’s GPT-5.3 Instant Mini: Does Faster AI Mean Smarter Enterprise Decisions?
OpenAI Sora Discontinuation: What the End of a Platform Means for Enterprise AI Strategy
Author Information
Nick Patience is VP and Practice Lead for AI Platforms at The Futurum Group. Nick is a thought leader on AI development, deployment, and adoption - an area he has researched for 25 years. Before Futurum, Nick was a Managing Analyst with S&P Global Market Intelligence, responsible for 451 Research’s coverage of Data, AI, Analytics, Information Security, and Risk. Nick became part of S&P Global through its 2019 acquisition of 451 Research, a pioneering analyst firm that Nick co-founded in 1999. He is a sought-after speaker and advisor, known for his expertise in the drivers of AI adoption, industry use cases, and the infrastructure behind its development and deployment. Nick also spent three years as a product marketing lead at Recommind (now part of OpenText), a machine learning-driven eDiscovery software company. Nick is based in London.