The News: At Oracle CloudWorld, the company announced limited availability of the new Oracle Cloud Infrastructure (OCI) Generative AI service, supporting the use of large language models (LLMs), built in collaboration with Cohere. Workloads in the service run on dedicated infrastructure within Oracle’s new NVIDIA Supercluster architecture. With the OCI Generative AI service, users can add AI to their applications via application programming interface (API) and, leveraging new features of the just-released Oracle Database 23c, securely include proprietary data with pre-trained LLMs for greater accuracy. See the complete Press Release covering the OCI Generative AI service at Oracle.com.
Oracle Pioneers the Integration of Generative AI in Cloud Computing
Analyst Take: Oracle has been busy this year infusing generative AI into everything it does, from database software to services, applications, and industry solutions to Oracle’s own operations. Central to every element of this effort is the implementation of LLMs on OCI, the company’s global public cloud service.
The partnership with Cohere, a leader in the LLM space, brought the necessary data science, models, and machine learning (ML) development processes to Oracle. Oracle has the data, infrastructure, and cloud service delivery capabilities needed to implement AI at scale. Oracle customers with access to the OCI Generative AI service now have a much easier on-ramp to LLMs:
- API access to the service enables familiar, straightforward addition of AI to existing applications.
- The AI system is dedicated, ensuring usage and data remain proprietary to the customer.
- Privacy and security features are built into the infrastructure and Oracle Database, alleviating the need for additional security measures or error-prone human processes.
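As an illustration only: the API integration the bullets above describe typically reduces to an authenticated HTTPS call from the application. The endpoint URL, payload fields, and model name below are hypothetical placeholders, not Oracle’s documented OCI Generative AI API; they sketch the general shape of adding text generation to an existing application via a REST interface.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only, not Oracle's documented API.
SERVICE_URL = "https://example-region.generativeai.example.com/generateText"

def build_generation_request(prompt: str, model: str = "example-model",
                             max_tokens: int = 256) -> dict:
    """Assemble a JSON-serializable request body for a text-generation call.

    All field names here are assumptions for illustration.
    """
    return {
        "model": model,
        "prompt": prompt,
        "maxTokens": max_tokens,
        "temperature": 0.7,
    }

def generate_text(prompt: str, auth_token: str) -> str:
    """POST the request to the (hypothetical) service and return the completion."""
    body = json.dumps(build_generation_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVICE_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Real OCI calls use request signing rather than a bare bearer token.
            "Authorization": f"Bearer {auth_token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["generatedText"]
```

Because the call is an ordinary HTTPS request, it can be added to an existing application without restructuring it, which is the low-friction on-ramp the bullets describe.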
Though currently only in limited availability—and we note, also limited in global rollout by the same supply shortages experienced everywhere in AI deployments—this service promises to be a major factor in AI adoption in the enterprise. OCI has a large footprint, with considerable infrastructure deployed in regions worldwide as well as a deep service catalog extending from infrastructure as a service (IaaS) to development and integration platforms to software as a service (SaaS). Thanks to the OCI Generative AI service, all of these are rapidly gaining AI capabilities in both their management and their feature sets, and Oracle customers will be able to connect existing applications to the service, and build new ones around it, as quickly as the company can deliver.
Go Ahead, Light Your Hair on Fire
In his Oracle CloudWorld keynote address, Oracle Founder, Executive Chairman, and CTO Larry Ellison called generative AI the biggest movement in the history of information technology (IT). That is no small statement from a person who has seen, if not himself been part of, almost every such movement. But he is right. As he noted, it has been only 10 months since the release of ChatGPT (built on GPT-3.5). For the past 9 months, AI has been the dominant topic of every IT conversation and strategy session. Cloud computing, the last great movement, took 5 years to become dominant.
Ellison also noted that last year’s Oracle CloudWorld had almost no mention of AI: almost no keynotes, roadmaps, breakout sessions, or booth messages incorporated AI, let alone made it a cornerstone. What Oracle has achieved in the past 9 months in completely reorienting its entire corporate, product, and customer strategy around generative AI is remarkable and singular, and a flare for the industry. One may already be weary of AI, AI, AI everywhere, but we are here to tell you that if a $312 billion company can light its hair on fire and reorient everything, you can, too.
Alongside some comparatively small but critical updates in Oracle Database 23c, OCI is central and foundational to this transformation. It delivers the base infrastructure with the necessary architecture, performance, and efficiency profile; the hosting of the LLMs from Cohere; integration with its services, customer applications, and, most important, masses of customer data; and the inherent and managed security required to maintain privacy and accountability, justifiable customer concerns when it comes to AI. This transformation is a model for enabling AI everywhere, limited only by classic market friction and the escalating challenges of data ingestion and migration.
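The pattern described here—supplementing a pre-trained model’s prompt with proprietary customer data so answers are grounded in the customer’s own records—can be sketched generically. This is a minimal illustration, not Oracle’s implementation: the retrieval step is stubbed with an in-memory list and naive keyword matching, where a real deployment would query the customer’s database (e.g., Oracle Database 23c) and all names below are assumptions.

```python
# Illustrative sketch of grounding an LLM prompt in proprietary data.
# The "retrieval" here is a naive keyword match over an in-memory list;
# a real deployment would query the customer's database instead.

RECORDS = [
    "Order 1001: 40 GPU nodes, delivered to Frankfurt region.",
    "Order 1002: 12 storage racks, pending export approval.",
]

def retrieve(question: str, records: list[str], limit: int = 3) -> list[str]:
    """Return records sharing at least one word with the question."""
    words = set(question.lower().split())
    hits = [r for r in records if words & set(r.lower().split())]
    return hits[:limit]

def build_grounded_prompt(question: str, records: list[str]) -> str:
    """Prepend retrieved proprietary context to the user's question
    before it is sent to the pre-trained model."""
    context = "\n".join(retrieve(question, records))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"
```

The key design point is that the proprietary data never leaves the customer’s environment for model training; it is injected per request, which is why the dedicated infrastructure and database-level security matter.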
Looking Ahead
Oracle’s recent announcement about the limited availability of its new OCI Generative AI service is a significant development that warrants comparative analysis. Oracle’s partnership with Cohere, a recognized player in the LLM space, indicates a well-thought-out strategic alignment, leveraging both Oracle’s robust cloud infrastructure and Cohere’s specialized ML capabilities. The move is indicative of Oracle’s shift toward becoming a more AI-centric organization, which parallels the broader industry’s focus on implementing AI in versatile applications.
Although Google Vertex AI offers end-to-end ML for deploying and scaling models easily, it is typically geared toward data scientists and ML engineers. OpenAI’s ChatGPT, offered to enterprises through Microsoft’s Azure OpenAI Service, is renowned for its language-generation capabilities but is often implemented as a piece of a larger service architecture. Oracle’s OCI Generative AI service aims to create a more seamless on-ramp for enterprises to incorporate AI into existing ecosystems. The dedicated infrastructure and API access offer a lower barrier to entry, while data and application security features are tightly woven into the fabric of Oracle Database 23c. This positions Oracle’s service as a compelling option for businesses looking for turnkey AI solutions that can scale and integrate easily into existing IT architectures.
In summary, Oracle is making a calibrated move to infuse AI across its product and service offerings, a strategy that not only aligns the company with current trends but also places Oracle in direct competition with other tech giants such as Amazon Web Services (AWS), IBM, Google, and Microsoft. The emphasis on easier adoption, dedicated resources, and embedded security could give Oracle a differentiated edge in a marketplace that is becoming increasingly crowded with AI-based solutions.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Oracle Database Analyst Summit: Powering the Multi-Cloud Era and Liberating Developers
Oracle FY 2024 Q1: Solid Results Bolstered by IaaS Gain
Author Information
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
Guy is the CTO at Visible Impact, responsible for positioning, GTM, and sales guidance across technologies and markets. He has decades of field experience describing technologies, their business and community value, and how they are evaluated and acquired. Guy’s specialty areas include cloud, DevOps/cloud-native/12-factor, enterprise applications, Big Data, governance-risk-compliance, containerization, virtualization, HPC, CPUs-GPUs, and systems lifecycle management.
Guy started his technology career as a research director for technology media company Ziff Davis, with stints at PC Magazine, eWeek, and CIO Insight. Prior to joining Visible Impact, he worked at Dell, including postings in marketing, product, and technical marketing groups for a wide range of products, including engineered systems, cloud infrastructure, enterprise software, and mission-critical cloud services. He lives and works in Austin, TX.