Google Cloud Next: My Four Key Takeaways

Google made a plethora of new product and feature announcements at Google Cloud Next, its annual conference for developers, partners, and customers. The announcements focused on improving collaboration, worker productivity, and creativity, and on enhancing the user experience across its product suite. The company also highlighted the release of Gemini 1.5, its next-generation generative AI model, along with new security enhancements and several chip and infrastructure announcements. From my enterprise applications, CX, and workplace collaboration perspective, here are four key takeaways from the event.

AI Has Emerged as an Enabler of Friction-Free, Collaborative Workflows

Google made several announcements about how it is incorporating AI to reduce worker effort, improve productivity and accuracy, and support better collaboration within and between teams. The most obvious example is the launch of Google Vids, a video creation app that leverages Gemini to let users create videos from prompts. It sits alongside Docs, Sheets, and Slides. The app can incorporate a wide range of source material, including documents, spreadsheets, and presentations, and users can select from a variety of style and format options to create a first-draft storyboard.

I received a demo on the show floor and came away with several key takeaways. First, the interface is extremely intuitive; even someone with my limited video and audio experience could easily create a professional-looking video, largely because natural language prompts eliminate the friction involved. While these prompts can only incorporate content held within Workspace (the in-booth demo rep said Google listens to feedback and may add the ability to use prompts to access content held in third-party apps in the future), it is still easy to upload content manually and then complete the process of authoring, collaborating on, and sharing videos.

Notably, the feature that impressed me most was the teleprompter, which scrolls as the user reads and speaks and is positioned at an angle that keeps the speaker looking straight at the camera. Another interesting feature is the ability to have an AI-generated voice read a voiceover script in an accent and style of the user's choosing, ensuring an error-free performance.

Because Vids is a Workspace app, it is easy to collaborate with coworkers, who can also use the conversational prompt interface to make edits or additions. Finally, users can upload and incorporate corporate-approved templates that serve as guardrails for adhering to style guidelines, ensuring a consistent look and feel.

These features appear to remove the small yet tedious points of friction that make working with video challenging. Here, AI truly serves as a force multiplier: it lets more workers be more productive, without the technical or workflow hurdles that often arise from lacking domain-specific skill sets.

GenAI + Grounding Opens Massive Opportunities for Search Use Cases

Google made several announcements around enhancements to its AI models and how they are being used. The company announced Vertex AI Agent Builder, a no-code service that enables clients to develop and deploy AI agents and that combines multiple large language models (LLMs), developer tools, and Google Search. This lets organizations use the most appropriate model for the task at hand.

Similarly, the company has implemented retrieval-augmented generation (RAG) and vector search for grounded results, ensuring that any results retrieved are taken only from vetted, reliable sources that the company can specify. This approach reduces the likelihood of hallucinations and enables the creation of specific models, agents, and tools that access information based on roles, departments, user types, or any other criteria, without fear of data leakage.
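The grounding pattern described above can be sketched in a few lines: retrieve candidate passages via vector similarity, but only from an allow-listed corpus, then constrain the generation prompt to that retrieved context. This is a minimal illustration, not Google's implementation; the corpus, the bag-of-words "embedding," and all function names are assumptions for the sketch, standing in for a real embedding model and data store.

```python
from collections import Counter
import math

# Toy "vetted corpus" -- in a real grounded-RAG system this would be
# an allow-listed enterprise data store, not an in-memory list.
VETTED_DOCS = [
    "Employees accrue 15 days of paid vacation per year.",
    "Expense reports must be filed within 30 days of travel.",
    "The VPN portal requires two-factor authentication.",
]

def embed(text: str) -> Counter:
    # Stand-in embedding: a term-frequency vector over lowercased tokens.
    # A production system would use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Vector search restricted to the vetted corpus: results can only
    # ever come from sources the company has approved.
    q = embed(query)
    ranked = sorted(VETTED_DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str) -> str:
    # The model is told to answer only from the retrieved context,
    # which is what reduces the likelihood of hallucinations.
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below; say 'unknown' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(retrieve("how many vacation days do employees get"))
```

Because retrieval is scoped to the vetted corpus, swapping in per-role or per-department corpora gives the access-based segmentation described above without changing the generation step.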

Given Google's genesis as a search company, it is surprising that the company did not spend more time highlighting the application of these powerful new AI models, along with RAG and other techniques, to improving the relevance and accuracy of generative responses. I spoke with a Google product manager and a Google engineer about these developments, and both agreed with my suggestion that the potential use cases across B2C and B2B represent a huge opportunity for Google and its clients.

Google Unlocking Opportunity with Public Sector Customers

I have expanded on this opportunity in my Enterprising Insights podcast, but I want to call out an interesting session focused on Google's work with the public sector. The public sector often gets (rightly) called out for failing to meet its constituents' and partners' needs, with inefficient processes, outdated software, and the inevitable friction between workers and systems, workers and constituents, and constituents and systems being the primary culprits.

In a session with analysts, Google discussed its offerings targeted at federal, state, and local governments and other public sector entities. It highlighted the efficiency and productivity gains of a platform that supports natural language prompts, enabling stakeholders to interact with data more easily, and that can pull both structured and unstructured data from wherever it resides. Advanced functionality, such as on-the-fly language translation, opens much greater utility for underserved constituents, thereby improving utilization rates of key government services.

Most notably, Google mentioned that its platform is designed to retain most of its key functions and features while remaining safe for government, security, and military customers. Further, the company has taken a land-and-expand approach to these organizations: tackling a specific problem or challenge in one department, proving itself via economic or productivity ROI, and then approaching other departments and functions.

Google: Lots of Excitement and Value, But Maybe Too Much News

My final takeaway from Google Cloud Next is that Google is clearly doing a lot: introducing a massive number of product and technology enhancements and trying to reach into several market segments. But one challenge of putting out so much news at once is that it can inadvertently limit the impact of each announcement and, in some cases, bury some news beneath other news. While my colleagues and I at The Futurum Group, along with analysts and reporters at other organizations, are working to take in and analyze these announcements, I've heard from some Google folks who thought that some messaging was not getting the desired play.

It will be interesting to see how Google approaches its announcement strategy over the coming months, given that the company made so many announcements at once, just eight months after the previous Google Cloud Next event.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Google Cloud Announces Generative AI Advances at HIMSS

Gemma and Building Your Own LLM AI – Google Cloud AI at AI Field Day 4

Google Cloud Widens Gemini Model Access for Vertex AI Users

Image Credit: Google Cloud

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
