Oracle, Microsoft, and OpenAI Form GenAI Power Trio

The News: Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI. Read the full press release on the Oracle website.

Analyst Take: Oracle Cloud Infrastructure (OCI) has emerged as the fourth hyperscaler, significantly expanding its footprint in the cloud market. The global surge in AI adoption and the growing demand for large language models (LLMs) provide substantial tailwinds for OCI’s growth. Oracle’s deep integration of AI capabilities, such as generative AI and computer vision, into its Gen2 AI infrastructure highlights its readiness to meet the evolving needs of businesses. Moreover, Oracle’s stronghold on enterprise data, coupled with its multi-cloud agility, positions it advantageously to capitalize on the burgeoning AI ecosystem.

OpenAI selected OCI to extend the Microsoft Azure AI platform to OCI, enlarging OpenAI’s overall cloud capacity. OpenAI is the AI research and development company behind ChatGPT, which provides generative AI (GenAI) to more than 100 million users every month. OpenAI’s high-profile endorsement aligns with the AI ecosystem’s burgeoning demand for larger LLMs, directly invigorating demand for Oracle Gen2 AI infrastructure and its price-performance advantages.

As such, OpenAI will join thousands of players across industries globally that run their AI workloads on OCI AI infrastructure. Adept, Modal, MosaicML, NVIDIA, Reka, Suno, Together AI, Twelve Labs, xAI, and others use OCI Supercluster to train and run inference on next-generation AI models.

Oracle Gen2 AI infrastructure includes GenAI, computer vision, and predictive analytics throughout its distributed cloud. Gen2 OCI uses ultra-fast remote direct memory access (RDMA) networking and connects NVIDIA GPUs in huge superclusters that can efficiently train LLMs. As such, OCI Supercluster can scale up to 64K NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by ultra-low-latency RDMA cluster networking and a choice of HPC storage.

RDMA networking means that one computer in the network can access the memory of another computer without interrupting that computer’s CPU. As a result, massive amounts of data can move from one computer to another extremely fast, especially relative to conventional networks. OCI’s speed and cost advantages are why vendors such as Cohere, NVIDIA, xAI, and now OpenAI use it to train their LLMs.
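The one-sided access idea behind RDMA can be illustrated with a toy, in-process analogy (this is only a sketch: real RDMA uses NIC-level verbs over InfiniBand or RoCE, not Python shared memory). Here a reader attaches to a registered memory region by name and copies bytes out directly, without the region’s owner actively serving the request:

```python
from multiprocessing import shared_memory

# "Server" side: register a memory region and place data in it
# (loosely analogous to RDMA memory registration).
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:5] = b"hello"

# "Client" side: attach to the region by name and read it directly.
# The owner's CPU takes no part in serving the read, loosely
# mirroring a one-sided RDMA READ operation.
peer = shared_memory.SharedMemory(name=region.name)
data = bytes(peer.buf[:5])
print(data)  # b'hello'

# Cleanup: detach both handles and release the region.
peer.close()
region.close()
region.unlink()
```

The point of the analogy is the data path: the consumer pulls bytes from a pre-registered region rather than asking the producer to package and send them, which is what lets RDMA bypass the remote host’s CPU and OS network stack.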

Oracle has already demonstrated its GenAI acumen with the general availability of its OCI Generative AI offering as well as OCI Generative AI Agents and OCI Data Science AI Quick Actions. The OCI Generative AI service is a fully managed service that integrates LLMs (e.g., from Cohere and Meta’s Llama 2) to address a wide range of business use cases. It includes multilingual capabilities that support over 100 languages, an improved GPU cluster management experience, and flexible fine-tuning options. Customers can use the OCI Generative AI service in the Oracle Cloud and on-premises through OCI Dedicated Region.

Oracle and Microsoft Reinforce Multi-cloud and GenAI Credentials

From our view, Oracle’s strategic focus on enabling multi-cloud agility strengthens the company’s competitive position in the unfolding multi-cloud era, especially as enterprise customers increase their demand for streamlined multi-cloud interconnectedness and capabilities. Oracle is committed to driving interconnect latencies down to the equivalent of two data centers acting as one, effectively minimizing latency issues and providing a one-cloud experience to customers.

Oracle’s multi-cloud interconnect approach is one that all hyperscalers and most cloud service providers should adopt. Business customers will continue to demand the ability to use whatever service they want in the cloud best suited for any workload. As such, AWS, Google Cloud, and Azure should all connect to other clouds in the same manner that interstate highways interconnect motor vehicle traffic.

Fully validating Oracle’s multi-cloud philosophy is the coinciding announcement that Oracle and Google Cloud entered a partnership that gives customers the choice to combine OCI and Google Cloud technologies to help accelerate their application migrations and modernization. Specifically, Google Cloud’s Cross-Cloud Interconnect will be initially available for customer onboarding in 11 global regions, allowing customers to deploy general purpose workloads with no cross-cloud data transfer charges.

Both companies will jointly go to market with Oracle Database@Google Cloud, with the prime objective of benefiting enterprises worldwide and across multiple industries, including financial services, healthcare, retail, manufacturing, and more. Oracle’s Real Application Clusters (RAC) and Autonomous Database can be paired with services from Google Cloud, Azure, and AWS, according to customer needs, including interconnect assurances with Azure and now Google Cloud.

Of interest is the question of OCI supporting Google’s Bard/Gemini offering, emulating its new OpenAI/Azure relationship. For now, it’s not a specific component of the Google Cloud alliance, although OCI and Google Cloud indicated the eventual possibility of adding Google’s chatbot and related portfolio capabilities akin to the OpenAI/Azure implementation as well. Ultimately, customer demand will determine such an outcome.

Key Takeaways and Looking Ahead: Oracle, Microsoft, OpenAI Make GenAI Multi-Cloud Friendly

Oracle’s strategic positioning in the AI landscape is reinforced by its robust Gen2 AI infrastructure and key partnerships with leading AI entities such as OpenAI. These collaborations enhance Oracle’s capabilities to support and scale advanced AI workloads, leveraging its ultra-low-latency RDMA networking and extensive GPU superclusters. By integrating multi-cloud interoperability, Oracle ensures that enterprises can seamlessly deploy AI solutions across various cloud environments, maximizing flexibility and performance.

The ability to combine Oracle’s enterprise data management expertise with cutting-edge AI technologies makes it a formidable player in the hybrid multi-cloud world. As businesses increasingly adopt AI-driven solutions, Oracle’s comprehensive offerings and strategic alliances position it to lead in delivering innovative and efficient AI services.

Overall, we believe the collaboration with OpenAI to extend the Microsoft Azure AI platform to OCI validates that Oracle Gen2 AI infrastructure, underscored by RDMA-fueled innovation, can support and scale the most demanding GenAI/LLM workloads with immediacy. OCI’s multi-cloud advances underline how combining cloud services from multiple clouds can enable customers to optimize cost, performance, and functionality in modernizing their databases and applications.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually, based on data and other information that might have been provided for validation, and are not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Oracle Database 23ai: Taking Enterprise AI to the Next Level

Oracle Fiscal 2024 Q3: Results Fueled by Cloud and Generative AI Surge

Oracle Generative AI: Advancing the Frontier of Enterprise Innovation

Image Credit: Oracle

Author Information

Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.

He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.

Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.

Ron holds a Master of Arts in Public Policy from University of Nevada — Las Vegas and a Bachelor of Arts in political science/government from William and Mary.

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
