Oracle, Microsoft, and OpenAI Form GenAI Power Trio

The News: Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI. Read the full press release on the Oracle website.

Analyst Take: Oracle Cloud Infrastructure (OCI) has emerged as the fourth hyperscaler, significantly expanding its footprint in the cloud market. The global surge in AI adoption and the growing demand for large language models (LLMs) provide substantial tailwinds for OCI’s growth. Oracle’s deep integration of AI capabilities, such as generative AI and computer vision, into its Gen2 AI infrastructure highlights its readiness to meet the evolving needs of businesses. Moreover, Oracle’s stronghold on enterprise data, coupled with its multi-cloud agility, positions it advantageously to capitalize on the burgeoning AI ecosystem.

OpenAI selected OCI to extend the Microsoft Azure AI platform to OCI, enlarging OpenAI’s overall cloud capacity. OpenAI is the AI research and development company behind ChatGPT, which delivers generative AI (GenAI) to more than 100 million users every month. OpenAI’s high-profile endorsement aligns with the AI ecosystem’s demand for ever-larger LLMs, directly invigorating demand for Oracle Gen2 AI infrastructure and its price-performance advantages.

As such, OpenAI will join thousands of players across industries globally that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.

Oracle Gen2 AI infrastructure includes GenAI, computer vision, and predictive analytics throughout its distributed cloud. Gen2 OCI uses ultra-fast remote direct memory access (RDMA) networking to connect NVIDIA GPUs into huge superclusters that can efficiently train LLMs. As such, OCI Supercluster can scale up to 64K NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage.

RDMA networking lets one computer in the network access another computer’s memory directly, without interrupting the remote machine’s CPU. The result is the ability to move massive amounts of data between machines extremely fast relative to conventional networks. OCI’s speed and cost advantages are why vendors such as Cohere, NVIDIA, xAI, and now OpenAI use it to train their LLMs.
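To make the "access memory without interrupting the other machine" idea concrete, the sketch below is a loose single-machine analogy using Python’s shared memory: one party publishes bytes into a buffer, and the reader pulls them out directly with no send/receive round trip. This is an illustration of the one-sided semantics only, not actual RDMA, which uses NIC hardware and verbs libraries such as libibverbs to reach a remote host’s registered memory over the network.

```python
# Loose single-machine analogy for RDMA's one-sided semantics. Illustration
# only -- real RDMA involves NIC hardware and registered memory regions.
import threading
from multiprocessing import shared_memory

# A "registered" buffer that both parties can address directly.
shm = shared_memory.SharedMemory(create=True, size=16)

def writer(name: str) -> None:
    # The "remote" side publishes data into the buffer and carries on;
    # it is never interrupted to service the reader.
    peer = shared_memory.SharedMemory(name=name)
    peer.buf[:5] = b"hello"
    peer.close()

t = threading.Thread(target=writer, args=(shm.name,))
t.start()
t.join()

# The reader copies the bytes straight out of the buffer -- no send/recv
# round trip through an intermediary, mirroring how a one-sided RDMA read
# bypasses the remote CPU.
data = bytes(shm.buf[:5])
shm.close()
shm.unlink()
print(data)
```

The design point the analogy captures is that the transfer path avoids the remote host’s software stack entirely, which is where conventional networking spends much of its latency budget.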

Oracle has already demonstrated its GenAI acumen with the general availability of its OCI Generative AI offering, as well as OCI Generative AI Agents and OCI Data Science AI Quick Actions. OCI Generative AI is a fully managed service that integrates LLMs from providers such as Cohere and Meta (Llama 2) to address a wide range of business use cases. The service includes multilingual capabilities that support over 100 languages, an improved GPU cluster management experience, and flexible fine-tuning options. Customers can use OCI Generative AI in the Oracle Cloud and on-premises through OCI Dedicated Region.
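As a rough illustration of how a managed GenAI endpoint of this kind is typically invoked, the sketch below assembles a chat-style request body as plain JSON. The field names (`servingMode`, `chatRequest`, the model ID, and the compartment OCID) are hypothetical stand-ins for illustration, not the documented OCI Generative AI API schema; consult Oracle’s API reference for the real request shapes.

```python
import json

# Hypothetical request body for a managed generative-AI chat endpoint.
# All field names below are illustrative stand-ins, not Oracle's schema.
request_body = {
    "compartmentId": "ocid1.compartment.oc1..example",  # placeholder OCID
    "servingMode": {
        "servingType": "ON_DEMAND",
        "modelId": "cohere.command-r",  # e.g., a Cohere model hosted on OCI
    },
    "chatRequest": {
        "message": "Summarize our Q2 sales pipeline in three bullets.",
        "maxTokens": 300,
        "temperature": 0.2,  # low temperature suits factual business tasks
    },
}

# Serialize for an HTTPS POST to the (signed) service endpoint.
payload = json.dumps(request_body)
```

In practice such a request would be sent through the OCI SDK or a signed REST call; the point here is simply that the fully managed service exposes model choice and tuning knobs as request parameters rather than infrastructure the customer operates.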

Oracle and Microsoft Reinforce Multi-cloud and GenAI Credentials

From our view, Oracle’s strategic focus on enabling multi-cloud agility strengthens the company’s competitive position in the unfolding multi-cloud era, especially as enterprise customers increase their demand for streamlined multi-cloud interconnectedness and capabilities. Oracle is committed to driving interconnect latencies down to the point where two data centers behave as one, effectively minimizing latency issues and providing a one-cloud experience to customers.

Oracle’s multi-cloud serves as an interconnect approach that all hyperscalers and most cloud service providers should adopt. Business customers will continue to demand the freedom to use whatever service they want in the cloud best suited to any given workload. As such, AWS, Google Cloud, and Azure should all connect to other clouds the way interstate highways interconnect motor vehicle traffic.

Fully validating Oracle’s multi-cloud philosophy is the coinciding announcement that Oracle and Google Cloud entered a partnership that gives customers the choice to combine OCI and Google Cloud technologies to help accelerate their application migrations and modernization. Specifically, Google Cloud’s Cross-Cloud Interconnect will be initially available for customer onboarding in 11 global regions, allowing customers to deploy general purpose workloads with no cross-cloud data transfer charges.

Both companies will go to market jointly with Oracle Database@Google Cloud, with the prime objective of benefiting enterprises worldwide and across multiple industries, including financial services, healthcare, retail, manufacturing, and more. Oracle’s Real Application Clusters (RAC) and Autonomous Database can be paired with services from Google Cloud, Azure, and AWS, according to customer needs, including interconnect assurances with Azure and now Google Cloud.

Of interest is the question of OCI supporting Google’s Bard/Gemini offering, emulating its new OpenAI/Azure relationship. For now, it’s not a specific component of the Google Cloud alliance, although OCI and Google Cloud indicated the eventual possibility of adding Google’s chatbot and related portfolio capabilities akin to the OpenAI/Azure implementation as well. Ultimately, customer demand will determine such an outcome.

Key Takeaways and Looking Ahead: Oracle, Microsoft, OpenAI Make GenAI Multi-Cloud Friendly

Oracle’s strategic positioning in the AI landscape is reinforced by its robust Gen2 AI infrastructure and key partnerships with leading AI entities such as OpenAI. These collaborations enhance Oracle’s capabilities to support and scale advanced AI workloads, leveraging its ultra-low-latency RDMA networking and extensive GPU superclusters. By integrating multi-cloud interoperability, Oracle ensures that enterprises can seamlessly deploy AI solutions across various cloud environments, maximizing flexibility and performance.

The ability to combine Oracle’s enterprise data management expertise with cutting-edge AI technologies makes it a formidable player in the hybrid multi-cloud world. As businesses increasingly adopt AI-driven solutions, Oracle’s comprehensive offerings and strategic alliances position it to lead in delivering innovative and efficient AI services.

Overall, we believe the collaboration with OpenAI to extend the Microsoft Azure AI platform to OCI validates that Oracle Gen2 AI infrastructure, underscored by RDMA-fueled innovation, can support and scale the most demanding GenAI/LLM workloads with immediacy. OCI’s multi-cloud advances underline how combining cloud services from multiple clouds can enable customers to optimize cost, performance, and functionality in modernizing their databases and applications.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are those of the individual analyst, informed by data and other information that may have been provided for validation, and not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Oracle Database 23ai: Taking Enterprise AI to the Next Level

Oracle Fiscal 2024 Q3: Results Fueled by Cloud and Generative AI Surge

Oracle Generative AI: Advancing the Frontier of Enterprise Innovation

Image Credit: Oracle

Author Information

Ron is a customer-focused research expert and analyst with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.

Steven engages with the world’s largest technology brands to explore new operating models and how they drive innovation and competitive edge.
