AMD OpenAI Partnership: Scale Win or Execution Risk at 6 GW?

Analyst(s): Ray Wang
Publication Date: October 7, 2025

OpenAI will deploy 6 GW of AMD Instinct GPUs under a multi-year agreement, beginning with 1 GW of MI450 GPUs in H2 FY 2026. Warrants for up to 160 million AMD shares are linked to milestone deployments and share-price targets, aligning both companies to execution.

What is Covered in this Article:

  • Multi-year AMD-OpenAI agreement for 6 GW of Instinct GPUs, beginning with 1 GW MI450 in H2 FY 2026.
  • Warrants of up to 160 million AMD shares tied to deployment milestones and AMD share-price targets.
  • Power infrastructure and deployment pace as key execution factors.
  • Broader capacity expansion context and competitive positioning across AI compute suppliers.
  • Financial and operational implications, including revenue, accretion, and alignment incentives.

The News: AMD and OpenAI have signed a multi-year, multi-generation deal for OpenAI to use 6 gigawatts of AMD Instinct GPUs to power its next wave of AI infrastructure. The first 1 gigawatt rollout with MI450 GPUs is planned for H2 FY 2026, with future growth tied to newer GPU generations and large-scale AI systems.

To align both sides, AMD issued warrants for up to 160 million shares that vest as OpenAI scales from 1 gigawatt (GW) to 6 GW. These warrants are tied to AMD’s stock performance and OpenAI’s technical and commercial milestones. AMD expects the deal to bring in tens of billions in revenue and to be accretive to non-GAAP earnings per share.


Analyst Take: OpenAI’s multi-year plan to deploy 6 GW of AMD Instinct GPUs, starting with 1 GW of MI450 in H2 FY 2026, is among the largest AI compute projects announced to date. The deal is built around shared milestones, financial alignment, and coordinated hardware development. AMD leadership called it a significant step toward enabling “the world’s most ambitious AI buildout.” OpenAI, for its part, sees the agreement as a way to secure a compute-capacity advantage over its competitors in the next few years, potentially on better pricing terms. The partnership is also a vote of confidence in OpenAI’s tremendous business potential.

Some might view this deal as a potential threat to NVIDIA and Broadcom’s future share of OpenAI’s compute demand. However, given the rapid surge in OpenAI’s overall compute needs, evidenced by the sharp rise in processed tokens and user activity (now exceeding 800 million users), we believe the expanding TAM should largely offset such concerns. More importantly, OpenAI is expected to continue relying heavily on NVIDIA’s compute infrastructure over the next few years, supported by NVIDIA’s structural advantages in hardware performance, software ecosystem maturity, and networking integration.

The setup ties equity incentives to deployment goals, spans several GPU generations, and connects both companies’ product roadmaps. Closer collaboration with a leading AI lab could also help AMD mature the software stack for its compute offerings, an area many observers have flagged as a weakness. The deal also lets AMD anchor what is arguably its most important data center customer for the coming years with its upcoming MI450 GPUs, which will be the company’s first rack-scale solutions (comparable to NVIDIA’s GB200 NVL72).

Deal Scale and Incentive Structure

The agreement covers 6 GW of compute, beginning with the 1 GW MI450 rollout in H2 FY 2026 and expanding with later Instinct models. AMD issued up to 160 million share warrants that vest with each deployment stage and stock milestone, ensuring both sides benefit from success. The warrants extend roughly five years, through October 2030, keeping long-term interests aligned. AMD expects tens of billions in revenue and a lift to non-GAAP EPS from the deal, positioning AI infrastructure as a primary growth driver. The AMD OpenAI partnership builds strong incentives for delivery and shared financial upside, with execution milestones shaping ultimate value.
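
The announcement does not disclose how the 160 million warrant shares split across tranches or what the specific share-price hurdles are, so the sketch below uses hypothetical placeholder values purely to illustrate how dual-condition vesting (deployed capacity plus share price) could be modeled.

```python
# Illustrative sketch only: tranche sizes and price targets below are hypothetical
# placeholders, not the actual terms of the AMD-OpenAI warrant. They sum to the
# disclosed 160M-share total to show how milestone-plus-price vesting could work.

from dataclasses import dataclass

@dataclass
class Tranche:
    gw_milestone: float   # cumulative deployed capacity required (GW)
    price_target: float   # AMD share-price hurdle in USD (placeholder)
    shares_m: float       # warrant shares in this tranche, millions (placeholder)

# Hypothetical schedule spanning the 1 GW start through the 6 GW target.
SCHEDULE = [
    Tranche(1.0, 150.0, 30.0),
    Tranche(2.5, 250.0, 40.0),
    Tranche(4.0, 400.0, 40.0),
    Tranche(6.0, 600.0, 50.0),
]

def vested_shares_m(deployed_gw: float, share_price: float) -> float:
    """Millions of warrant shares vested under this illustrative schedule.

    A tranche vests only when both the deployment milestone and the share-price
    hurdle are met, mirroring the operational-plus-market alignment described.
    """
    return sum(t.shares_m for t in SCHEDULE
               if deployed_gw >= t.gw_milestone and share_price >= t.price_target)

if __name__ == "__main__":
    print(vested_shares_m(1.0, 180.0))   # first tranche only -> 30.0
    print(vested_shares_m(6.0, 650.0))   # all tranches -> 160.0
```

The point of the dual condition is that deployment alone is not enough; the equity upside only materializes if the market also rewards AMD’s execution, which is why both sides describe the structure as self-reinforcing.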

Power Infrastructure and Deployment Timeline

The project’s size puts pressure on available power to support massive data centers. AI demand is already testing grid capacity in the US, and adding 6 GW raises that challenge further. AMD’s CEO said deployment timing will depend on access to enough power and noted the goal is to move as soon as possible. Some facilities already use on-site natural gas plants to support scaling. The mix of growing compute needs and power constraints makes the timeline variable. The success of the AMD OpenAI partnership will rely on securing energy capacity and aligning infrastructure with data center expansion.
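
As a rough back-of-envelope illustration of why power is the gating factor, the snippet below converts gigawatts of continuous draw into annual energy consumption. The flat-utilization assumption is ours; real deployments ramp gradually and run below nameplate capacity.

```python
# Back-of-envelope sketch: annual energy implied by a given continuous power draw.
# Assumes steady-state operation at the stated capacity, which overstates early years.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_energy_twh(capacity_gw: float, utilization: float = 1.0) -> float:
    """Annual energy in terawatt-hours for a given capacity (GW) and utilization."""
    return capacity_gw * HOURS_PER_YEAR * utilization / 1_000

if __name__ == "__main__":
    print(f"{annual_energy_twh(1.0):.1f} TWh/yr at 1 GW")  # ~8.8 TWh for the first phase
    print(f"{annual_energy_twh(6.0):.1f} TWh/yr at 6 GW")  # ~52.6 TWh at full build-out
```

At full build-out, 6 GW of continuous draw works out to roughly 52-53 TWh per year, broadly comparable to the annual electricity consumption of a smaller European country, which underlines why grid access and on-site generation shape the timeline.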

Competitive and Strategic Positioning

The 6 GW deal comes alongside OpenAI’s other compute agreements, including a 10 GW partnership with another supplier and additional collaborations across chip and data center companies. AMD’s deal adds to, rather than replaces, those relationships. The Instinct MI450 represents the next step after AMD’s MI300X and MI350X chips, which already support AI workloads. AMD’s CEO described this deployment as the company’s largest yet, saying it could help expand the adoption of AMD technology in future infrastructure projects. The AMD OpenAI partnership adds another supplier to OpenAI’s compute base and strengthens AMD’s standing in AI hardware, though long-term competitiveness depends on execution and performance gains.

Financial and Strategic Alignment

AMD expects the partnership to generate tens of billions in revenue, adding major scale to its data center segment, which was projected at $6.55 billion in FY 2025. The structure links AMD’s growth to OpenAI’s infrastructure expansion, creating shared reliance on large AI investments. Both companies plan to leverage their strengths – AMD in high-performance computing and OpenAI in generative AI – to advance technology and expand AI capabilities. The incentive plan ties outcomes to share price and performance milestones. The AMD OpenAI partnership aligns both companies financially and strategically over several years, though long-term results will depend on steady progress toward GW-scale deployments.

What to Watch:

  • The 1 GW MI450 rollout in H2 FY 2026 is the first measurable deployment milestone.
  • The timing of warrant vesting is linked to deployed capacity and AMD share-price targets.
  • Access to power infrastructure and self-generation initiatives to sustain 6 GW capacity.
  • Execution of future Instinct GPU generations beyond MI450 in large-scale AI systems.
  • AMD’s reported trajectory toward tens of billions in revenue from this agreement.
  • Progress in infrastructure delivery relative to OpenAI’s other disclosed compute deals.

See the complete press release on the AMD–OpenAI strategic partnership on the AMD website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

OpenAI Windsurf Acquisition Sends Shot Heard Around the AI World

AMD Q2 FY 2025 Sales Beat Offset by MI308-Linked EPS Decline

IBM, AMD Team with Zyphra to Build AI Infrastructure on IBM Cloud

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.
