The News: Oracle announced the general availability of HeatWave GenAI, which includes in-database large language model (LLM) breakthroughs, an automated in-database vector store, scale-out vector processing, and the ability to have contextual conversations in natural language informed by unstructured content. HeatWave GenAI is available immediately in all OCI regions and in OCI Dedicated Region. Read the full press release on the Oracle website.
Oracle HeatWave GenAI: AI Innovation Without Data Movement or Added Expense
Analyst Take: Oracle HeatWave GenAI enables customers to build generative AI (GenAI) applications without data movement, AI expertise, or extra cost. Through the solution’s new capabilities, such as in-database LLMs, customers can bring GenAI to their enterprise data without moving it to a separate vector database, representing an inflection point in in-memory query engine technology.
The HeatWave GenAI launch builds on the extensive ecosystem progress and inroads Oracle has made across its portfolio and channels. Major moves include the Oracle APEX 24.1 launch, which delivers AI-assisted app development by enlisting Oracle’s data platform and low-code acumen to fuel mobile and web app innovation. Moreover, Oracle’s collaboration with OpenAI to extend the Microsoft Azure AI platform to OCI validates that Oracle’s Gen2 AI infrastructure, underscored by RDMA-fueled innovation, can support and scale the most demanding LLM/GenAI workloads with immediacy.
Oracle HeatWave GenAI: Taking the Developer Experience to the Next NL Level
Through HeatWave GenAI, developers can create a vector store for enterprise unstructured content with a single SQL command, using built-in embedding models. Users can perform natural language (NL) searches in a single step using either in-database or external LLMs. Data does not leave the database, and there is no requirement to provision GPUs due to HeatWave’s scale and performance. As such, developers can reduce application complexity, boost performance, strengthen data security, and reduce costs.
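To make this workflow concrete, here is a minimal, self-contained Python sketch of the idea behind a vector store with natural-language search: documents are embedded into vectors at ingest time, and a query is answered by embedding it the same way and ranking documents by similarity. The toy bag-of-words embedder and in-memory store below are illustrative stand-ins, not HeatWave's actual APIs or embedding models.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy sparse bag-of-words embedding, normalized to unit length
    (a stand-in for the built-in embedding models HeatWave ships)."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {word: c / norm for word, c in counts.items()}

def cosine(a, b):
    """Cosine similarity of two sparse (dict) vectors."""
    return sum(v * b.get(w, 0.0) for w, v in a.items())

class VectorStore:
    """Minimal in-memory vector store: ingest documents once,
    then answer natural-language queries by similarity ranking."""
    def __init__(self):
        self.rows = []  # (document, embedding) pairs

    def ingest(self, documents):
        for doc in documents:
            self.rows.append((doc, embed(doc)))

    def search(self, query, k=1):
        q = embed(query)
        ranked = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

store = VectorStore()
store.ingest([
    "HeatWave GenAI supports in-database large language models.",
    "Quarterly revenue grew in the cloud segment.",
    "The office cafeteria menu changes weekly.",
])
# The query shares no exact sentence with any document, yet the
# similarity ranking surfaces the semantically closest one.
print(store.search("Which database features support language models?"))
```

The point of the single-command claim is that HeatWave collapses the ingest and embed steps above into the database itself, so none of this plumbing lives in the application.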
The key highlights of the new HeatWave GenAI offering encompass:
- In-database LLMs: Streamlines the development of GenAI applications at a lower cost on a turnkey, out-of-the-box basis. Customers can avoid the complications of external LLM selection and integration, as well as uncertainty about the availability of LLMs in various cloud providers’ data centers.
- Automated, in-database Vector Store: Allows customers to use GenAI with their business documents without moving data to a separate vector database and without AI expertise. Using the vector store for retrieval-augmented generation (RAG) helps mitigate the hallucination problem of LLMs.
- HeatWave Chat: A Visual Studio Code plug-in for MySQL Shell that provides a graphical interface for HeatWave GenAI and enables developers to ask questions in NL or SQL. The integrated Lakehouse Navigator enables users to select files from object storage and create a vector store, guiding LLMs to retrieve information from specific data sets across the database, HeatWave, Lakehouse, and HeatWave Vector Store to increase speed and accuracy. This expedites contextual conversations and allows users to verify the source of answers generated by the LLM.
- Scale-out Vector Processing: Delivers rapid semantic search results without loss of accuracy. HeatWave’s support for a new native VECTOR data type, alongside an optimized implementation of the distance functions, enables customers to perform semantic queries with standard SQL. The scale-out architecture enables vector processing to execute at near-memory bandwidth and parallelize across up to 512 HeatWave nodes.
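The scale-out pattern the last bullet describes — partition the vectors across nodes, scan each partition in parallel, then merge the per-partition top-k results — can be sketched in plain Python. Threads stand in for HeatWave nodes here, and the data set and node count are invented for illustration; this is the general scatter-gather technique, not Oracle's implementation.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def squared_distance(a, b):
    # Squared Euclidean distance, one of the standard vector distance functions.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def scan_partition(partition, query, k):
    # Each "node" scans only its own partition and returns a local top-k.
    scored = [(squared_distance(vec, query), vid) for vid, vec in partition]
    return heapq.nsmallest(k, scored)

def scale_out_search(vectors, query, k=2, nodes=4):
    # Scatter: split the vectors across worker "nodes" (threads here).
    items = list(vectors.items())
    shards = [items[i::nodes] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = list(pool.map(scan_partition, shards,
                                 [query] * nodes, [k] * nodes))
    # Gather: merging exact local top-k lists yields the exact global top-k,
    # which is why scale-out need not sacrifice accuracy.
    merged = [pair for part in partials for pair in part]
    return [vid for _, vid in heapq.nsmallest(k, merged)]

vectors = {
    "a": [0.0, 0.0], "b": [1.0, 0.0], "c": [0.0, 1.0],
    "d": [5.0, 5.0], "e": [0.1, 0.1], "f": [9.0, 9.0],
}
print(scale_out_search(vectors, query=[0.0, 0.0], k=2))  # → ['a', 'e']
```

Because every partition is scanned exhaustively, the merge step is exact; the parallelism buys speed without the recall loss that approximate indexes trade away.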
Key to the HeatWave GenAI proposition, based on the briefings we have received from Oracle, is that it provides immediate cost savings: there is no additional cost to use the LLMs, and system resources can be used optimally. HeatWave GenAI can also be used anywhere with consistent results across deployments, and its integration with HeatWave AutoML enables new applications and higher-quality results. Moreover, based on our initial analysis, HeatWave GenAI security entails no compromise on performance: data does not leave the database, assuring data isolation, and because the solution is not a shared service, performance isolation is assured as well.
Oracle Body Slams the Competition with HeatWave GenAI Debut
From our perspective, Oracle HeatWave GenAI delivers the search price performance advantages integral to moving the market needle in Oracle’s direction. For creating a vector store for documents in PDF, PPT, Word, and HTML formats, HeatWave GenAI is 23x faster and one-quarter the cost of using Knowledge Bases for Amazon Bedrock. We would need to validate the claims Oracle is making in our Signal 65 labs, but based on the company’s provided data, we are initially impressed.
The HeatWave GenAI vector processing price performance advantages are clear based on the information the company has shared and are testament to answering the main question: Why Oracle HeatWave GenAI?
Vector Processing Price Performance Comparison
Moreover, a separate Oracle benchmark reveals that vector indexes in Amazon Aurora PostgreSQL with pgvector can have a high degree of inaccuracy and can yield incorrect results. In contrast, HeatWave similarity search processing always provides accurate results, has predictable response time, is performed at near-memory speed, and is up to 10X-80X faster than Aurora using the same number of cores. In sum, we find that Oracle HeatWave GenAI delivers decisive price performance advantages against the competition.
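The accuracy gap this benchmark points at is a general property of approximate vector indexes, which trade recall for speed: a query that falls near a partition boundary can miss its true nearest neighbor. The deliberately contrived Python sketch below illustrates the failure mode (the sign-bucketing scheme is invented for illustration and is not how pgvector or HeatWave work internally):

```python
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def exact_nearest(points, query):
    """Brute-force scan of every vector: always the true nearest neighbor."""
    return min(points, key=lambda pid: squared_distance(points[pid], query))

class SignBucketIndex:
    """Deliberately crude approximate index: vectors are bucketed by the
    sign of their first coordinate, and a query scans only its own bucket.
    Like real approximate indexes, it can miss neighbors near a boundary."""
    def __init__(self, points):
        self.buckets = {True: {}, False: {}}
        for pid, vec in points.items():
            self.buckets[vec[0] >= 0][pid] = vec

    def nearest(self, query):
        bucket = self.buckets[query[0] >= 0]
        return exact_nearest(bucket, query)

points = {"p1": [-0.1], "p2": [3.0]}
query = [0.05]
print(exact_nearest(points, query))            # true neighbor: 'p1'
print(SignBucketIndex(points).nearest(query))  # approximate miss: 'p2'
```

An exact scan, as in HeatWave's approach, cannot return a wrong neighbor; the engineering question is whether it can do so fast enough, which is what the scale-out, near-memory-bandwidth processing addresses.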
Looking Ahead
The market is increasingly moving toward vector processing as a key component of the AI stack. Couple this with the need to move AI toward the data, rather than vice versa, and data-centricity becomes key. Data-centricity is crucial to Oracle’s strategy, and the ability to leverage its extensive database footprint to offer in-database capabilities, reducing the need for data movement and thereby enhancing security and performance, is a major differentiator.
Oracle’s HeatWave GenAI exemplifies this shift by integrating LLMs directly within the database, streamlining the development of generative AI applications without requiring external AI expertise or vector-specific databases. This approach not only simplifies application complexity but also provides cost efficiencies and performance gains, positioning Oracle as a leader in the data-centric AI landscape. As vector processing becomes more critical, Oracle’s innovations in scale-out vector processing and automated vector store creation are setting new benchmarks, driving the market toward more integrated and efficient AI solutions.
Key Takeaways: Oracle HeatWave GenAI Out Runs and Guns the Competition
Overall, there’s vector processing and there’s vector processing done right. We believe Oracle has delivered a vector processing price performance advantage that the company claims is 30X faster than Snowflake, 18X faster than Google BigQuery, and 15X faster than Databricks, at up to 6X lower cost, based on its benchmarks. Clearly, HeatWave GenAI didn’t just turn the heat up on the competition; it has melted them down on fundamental price performance metrics. For any organization serious about high-performance generative AI workloads, spending company resources on any of these three other vector database offerings is the equivalent of burning money and trying to justify it as a good idea.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Oracle Debuts APEX AI Assistant to Unleash Developer-led AI Innovation
Oracle, Microsoft, and OpenAI Form GenAI Power Trio
Oracle Fiscal 2024 Q4 & Full-Year: Cloud and GenAI Uplift Results
Author Information
Ron is an experienced, customer-focused research expert and analyst, with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority at tracking the evolution of and identifying the key disruptive trends within the service enablement ecosystem, including a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas, and a Bachelor of Arts in political science/government from William & Mary.
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.