Oracle has announced the launch of HeatWave GenAI, featuring the industry's first in-database large language models (LLMs), an automated in-database vector store, scale-out vector processing, and the ability to hold contextual conversations in natural language. This allows enterprises to apply generative AI to their existing enterprise data without AI expertise and without moving data to a separate vector database.
HeatWave GenAI enables developers to create vector stores for unstructured enterprise data with a single SQL command using built-in embedding models. It also supports seamless natural language searches using both in-database and external LLMs, ensuring data remains secure within the database.
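As an illustration, that workflow could look roughly like the following SQL sketch. The object storage path, schema and table names, option keys, and the exact shapes of the sys.VECTOR_STORE_LOAD and sys.ML_RAG routines are assumptions for illustration; consult the HeatWave GenAI documentation for the precise interfaces.

```sql
-- Hypothetical sketch: build a vector store from documents in object storage,
-- then answer a natural language question with retrieval-augmented generation.
-- Bucket path, schema/table names, and option keys are illustrative assumptions.

-- 1) Ingest: discover, parse, embed, and store documents with a single call.
CALL sys.VECTOR_STORE_LOAD(
  'oci://my-bucket@my-namespace/contracts/',
  '{"table_name": "demo_db.contracts_embeddings"}'
);

-- 2) Ask a natural language question grounded in the new vector store.
SET @options = JSON_OBJECT("vector_store", JSON_ARRAY("demo_db.contracts_embeddings"));
CALL sys.ML_RAG("What are the termination clauses in our supplier contracts?",
                @response, @options);
SELECT @response;
```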
"HeatWave's stunning pace of innovation continues with the addition of HeatWave GenAI to existing built-in HeatWave capabilities: HeatWave Lakehouse, HeatWave Autopilot, HeatWave AutoML, and HeatWave MySQL. Today's integrated and automated AI enhancements allow developers to build rich generative AI applications faster, without requiring AI expertise or moving data. Users now have an intuitive way to interact with their enterprise data and rapidly get the accurate answers they need for their businesses," said Edward Screven, Chief Corporate Architect, Oracle.
Because HeatWave GenAI requires no GPU provisioning, it simplifies application development, improves performance, strengthens data security, and lowers costs.
HeatWave GenAI is available in all Oracle Cloud regions, in Oracle Cloud Infrastructure (OCI) Dedicated Region, and across clouds, at no extra cost to HeatWave customers.
Key features of HeatWave GenAI
In-Database LLMs: Streamlines the creation of generative AI applications while reducing cost and complexity. The built-in models enable data search, content generation, and summarization, as well as retrieval-augmented generation (RAG) with HeatWave Vector Store. HeatWave GenAI also integrates with the OCI Generative AI service to access pre-trained foundation models from leading LLM providers.
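For example, summarization with an in-database model might look something like the sketch below; the model identifier and option keys are illustrative assumptions rather than the definitive interface.

```sql
-- Hypothetical sketch: load an in-database LLM and ask it to summarize text.
-- The model identifier and option keys are assumptions for illustration.
CALL sys.ML_MODEL_LOAD('mistral-7b-instruct-v1', NULL);

SELECT sys.ML_GENERATE(
  'Summarize the key obligations in the following service agreement ...',
  JSON_OBJECT('task', 'summarization', 'model_id', 'mistral-7b-instruct-v1')
) AS summary;
```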
Automated In-Database Vector Store: Allows businesses to use generative AI with their documents without transferring data to an external vector database. The entire process—discovering documents in object storage, parsing them, generating embeddings, and inserting them into the vector store—is automated within the database, making it efficient and user-friendly.
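The embedding step that this pipeline automates can also be invoked directly; a minimal sketch, assuming an embedding routine along the lines of sys.ML_EMBED_ROW and an example model name:

```sql
-- Hypothetical sketch: generate an embedding for a single piece of text.
-- Routine name, model identifier, and option keys are illustrative assumptions.
SELECT sys.ML_EMBED_ROW(
  'HeatWave GenAI brings LLMs into the database.',
  JSON_OBJECT('model_id', 'all_minilm_l12_v2')
) AS embedding;
```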
Scale-Out Vector Processing: Offers rapid semantic search results with high accuracy. HeatWave's new VECTOR data type and optimized distance function support semantic queries via standard SQL. Users can also perform combined semantic and SQL operations, such as joining multiple tables for comprehensive similarity searches.
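A combined semantic-plus-relational query might look roughly like the following; the table layout, column names, and the exact distance function signature are assumptions for illustration.

```sql
-- Hypothetical sketch: join relational data with a similarity search over a
-- VECTOR column. Schema, columns, and the DISTANCE signature are assumptions.
SELECT p.product_name,
       d.chunk_text,
       DISTANCE(d.embedding, @query_embedding, 'COSINE') AS similarity_score
FROM   demo_db.product_docs d
JOIN   demo_db.products     p ON p.product_id = d.product_id
WHERE  p.region = 'EMEA'
ORDER BY similarity_score
LIMIT 5;
```

Here @query_embedding would hold the embedding of the search phrase, produced for instance by an embedding routine as sketched above.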
HeatWave Chat: A Visual Studio Code plug-in for MySQL Shell that provides a graphical interface for HeatWave GenAI and lets developers ask questions in natural language or SQL. The integrated Lakehouse Navigator lets users select files from object storage and create vector stores. Users can search across the entire database or restrict the search to specific folders, and HeatWave Chat maintains context with the history of questions asked, citations of the source documents, and the LLM prompts.
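From MySQL Shell, a chat turn might be as simple as the call below; the routine name follows the HeatWave Chat interface but should be treated as an assumption here.

```sql
-- Hypothetical sketch: ask HeatWave Chat a question in natural language.
-- Conversation context (history, citations, prompts) is maintained by the service.
CALL sys.HEATWAVE_CHAT("Which of our support tickets mention battery failures?");
```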