The News: On August 29, as part of Google Cloud Next ‘23, Google Cloud made several announcements about updates to Vertex AI, Google Cloud’s comprehensive AI developer platform.
Here are the key details:
- New third-party AI models have been added to the platform’s suite of available AI models, the collection of which Google Cloud calls the Model Garden. The latest additions are open-source models – Meta’s Llama 2 and Code Llama and Technology Innovation Institute’s Falcon LLM. Google Cloud will soon add Claude 2 from Anthropic.
- Upgrades to Google AI models: PaLM now handles a 32,000-token context window, which means it can analyze larger documents faster and more accurately. Google Cloud says Codey (code generation) quality has improved by 25%. Imagen (image generation) introduces a new digital watermarking feature.
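To make the context-window upgrade concrete, here is a rough back-of-the-envelope sketch. The 4-characters-per-token ratio is a common heuristic for English text, not a Google figure, and the 8,192-token baseline is used only for comparison:

```python
CHARS_PER_TOKEN = 4  # rough heuristic; varies by tokenizer and language

def approx_chars(context_tokens: int) -> int:
    """Approximate characters of text that fit in a context window."""
    return context_tokens * CHARS_PER_TOKEN

def passes_needed(doc_chars: int, context_tokens: int) -> int:
    """How many chunks a document must be split into to fit the window."""
    window = approx_chars(context_tokens)
    return -(-doc_chars // window)  # ceiling division

doc = 250_000  # a ~250,000-character report, roughly 60-70 pages
print(passes_needed(doc, 8_192))   # smaller window: 8 chunks
print(passes_needed(doc, 32_000))  # 32k-token window: 2 chunks
```

Fewer chunks means fewer model calls per document and less risk of losing cross-section context, which is the practical payoff of a larger window.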
- Introduction of Vertex AI Extensions. “While foundational models are powerful, they are frozen after training, meaning they are not updated as new information becomes available, and thus may deliver stale results. Vertex AI Extensions is a set of fully-managed developer tools for extensions, which connect models to APIs for real-time data and real-world actions.”
Read the full post about the Vertex AI announcements on the Google Cloud blog.
Google Cloud Next: Vertex AI Heats Up Developer Platform Competition
Analyst Take: In the current AI gold rush, enterprises are most interested in being miners that find their own AI gold. As such, the picks and shovels of AI – developer platforms and tools – are in big demand. Google Cloud has long provided a premier AI developer platform, but the stakes have gotten higher with generative AI. The technology is evolving at lightning speed, and enterprises are looking to the major AI developer platform providers who also happen to be the major cloud providers – Google Cloud, AWS, and Microsoft – for solutions. The Google Next Vertex AI announcements put Google Cloud in a good position to help enterprises in that regard. Here is some insight into the impact of the platform updates.
Sophisticated Range of AI Model Options
With Vertex AI, Google Cloud’s platform philosophy is to give enterprise customers as many options as possible, including third-party private and open-source options as well as Google’s own models. It is clear that enterprises are asking for AI model options and they are getting them. Google Cloud is no slouch when it comes to AI models, and the announced improvements for PaLM and Imagen reflect an understanding of market drivers in generative AI – a larger context window for PaLM-driven apps and potential copyright protection via watermarking for Imagen. Google Cloud is comfortable offering this sophisticated range of internal and external AI model options for a couple of reasons: 1) It believes its proprietary AI models will hold their own against external options and 2) The business model for Vertex AI is not primarily driven by the specific AI models themselves but by how enterprises use them. Vertex AI charges for AutoML model work (image, video, tabular data, text data training, deployment, and prediction) and for custom model training as well as other capabilities, regardless of which AI model is used.
Building Tooling to Maximize AI Models
Two key updates to Vertex AI are not AI models themselves, but rather tooling to help enterprises maximize AI model use – model tuning & customization and Vertex AI Extensions.
Model tuning & customization – “Beyond model updates, we are helping organizations customize models with their own data and instructions through adapter tuning, now generally available for PaLM 2 for text, and Reinforcement Learning with Human Feedback (or RLHF), which is now in public preview. In addition, we’re introducing Style Tuning for Imagen, a new capability to help our customers further align their images to their brand guidelines with 10 images or less.” The prevailing trend in enterprise-grade generative AI is moving away from AI models trained only on massive amounts of public data and toward models that are fine-tuned or customized with private data to achieve better, more accurate results. Google Cloud is listening and providing the tools to make that happen.
Vertex AI Extensions – As detailed in the news section above, another challenge for LLMs is that they are not evergreen in their training. Extensions change that dynamic by connecting models to real-time data at inference time, including sources like internal codebases. There are prebuilt extensions to Google Cloud’s BigQuery and AlloyDB, but also to third-party partners DataStax, MongoDB, and Redis. Essentially, Extensions make a broad range of AI models more useful to enterprises.
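A simplified sketch helps show the pattern Extensions manages for developers: fetch fresh data from an API at request time and hand it to the model alongside the user’s question. All names here (`fetch_inventory`, the inline prompt format) are hypothetical placeholders for illustration, not the actual Vertex AI Extensions interface:

```python
def fetch_inventory(sku: str) -> dict:
    """Stand-in for a real-time API call, e.g., an internal database."""
    return {"sku": sku, "in_stock": 42, "updated": "2023-08-29T10:00:00Z"}

def build_prompt(question: str, live_data: dict) -> str:
    """Ground the model's answer in data fetched at request time."""
    return (
        "Using the following live data, answer the question.\n"
        f"Data: {live_data}\n"
        f"Question: {question}"
    )

def answer(question: str, sku: str) -> str:
    live = fetch_inventory(sku)  # real-time lookup, not frozen training data
    prompt = build_prompt(question, live)
    # In a real deployment, the prompt would be sent to the LLM here.
    # We return the prompt itself to show what the model actually sees.
    return prompt

print(answer("How many units are in stock?", "SKU-123"))
```

The value of a managed offering is that this fetch-and-ground plumbing, plus authentication and API schemas, is handled by the platform rather than hand-rolled per application.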
Developer Platform War?
Google Cloud’s Vertex AI competes head-to-head with nearly full-stack AI options from Microsoft (partnered with OpenAI) and AWS, as well as others such as NVIDIA, IBM, and Salesforce. Of these offerings, the cloud providers have similar histories with AI developers in that they have offered AI development tools for some time. Each is working out how best to serve enterprise AI developers’ needs, and it is way too early to declare any winner. The good news is, the competition is fierce because of the wider revenue opportunities each of the providers sees – in AI, there are AI compute and AI applications, but there are also a lot of other non-AI cloud services at stake. AI developer platforms are getting better very rapidly, and that is good for enterprises implementing AI.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
NVIDIA AI Workbench Could Simplify Generative AI Builds
Next-Generation Compute: Agents for Amazon Bedrock Complete Tasks
Duet AI for Google Workspaces Enhances Google Meet and Google Chat
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business and holds a Bachelor of Science from the University of Florida.