Introduction
Google announced that it is working to fix its Gemini AI service in response to significant user and media backlash after the tool generated embarrassing, inaccurate, and offensive responses to user prompts. According to an internal memo that was first reported by Semafor and has since been confirmed as accurate by Google, the company is working “around the clock” to address these issues.
The headlines around Google’s Gemini AI service were hard to miss, as the media quickly picked up the most shocking examples of generative AI prompts gone wrong. I will not repeat the examples here, but suffice it to say, the responses were offensive, inaccurate, and appeared to be the result of tuning designed to reflect a certain worldview. However, Google should be commended for taking swift action to suspend the offending functionality once the problems were identified and for taking responsibility for the content that was generated.
Google CEO Sundar Pichai’s acknowledgment that “I know that some of its responses have offended our users and shown bias—to be clear, that’s completely unacceptable and we got it wrong” is refreshing in its candor. And while there may be some continued short-term reputational impact, it is clear that Pichai is focused on addressing this issue as quickly as possible and restoring trust with Google’s customers and users.
Deploying Effective AI Guardrails Is Challenging
Google, like many of its competitors, is racing to implement generative AI advances across its products. In fact, it was barely 12 months ago that the general public became aware of generative AI technology, and here we are, with most major vendors having released generative AI assistants into their enterprise-focused product lines.
It is not surprising that a generative AI tool might return offensive or inaccurate responses to user prompts; without properly implemented guardrails, models will do exactly that. Implementing effective guardrails is challenging, however, particularly given the current focus on ensuring that models are tuned and trained to eliminate historical biases. As I see it, Google implemented guardrails against incorporating bias but did not effectively test them to ensure that the model did not overcorrect, thereby introducing inaccuracies.
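To make the overcorrection risk concrete, consider a minimal, hypothetical sketch of a post-generation guardrail. Nothing here reflects Google’s actual implementation; the function names, watchlist, and threshold are illustrative assumptions. The point is that a single tuning decision governs the trade-off between letting biased output through and suppressing accurate content.

```python
import re

# Hypothetical sketch of a post-generation guardrail layer. The names
# (check_bias, apply_guardrail), the watchlist, and the threshold are
# all illustrative assumptions, not any vendor's real implementation.

def check_bias(text: str, flagged_terms: set[str]) -> float:
    """Toy bias score: the fraction of words found on a watchlist."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return sum(w in flagged_terms for w in words) / len(words)

def apply_guardrail(response: str, flagged_terms: set[str],
                    threshold: float = 0.05) -> str:
    """Withhold any response whose bias score exceeds the threshold.

    This single number encodes the trade-off: set it too low and
    accurate, historically grounded responses get suppressed
    (overcorrection); set it too high and biased output slips through.
    """
    if check_bias(response, flagged_terms) > threshold:
        return "[response withheld pending review]"
    return response

if __name__ == "__main__":
    watchlist = {"stereotype", "slur"}  # placeholder terms
    print(apply_guardrail("A neutral summary of the historical record.", watchlist))
    print(apply_guardrail("That claim repeats a stereotype.", watchlist))
```

Production guardrails rely on classifier models, red-teaming, and human review rather than simple watchlists, but the same tuning trade-off applies: testing only for under-blocking leaves overcorrection undetected.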
Google Likely Will Rebound from This Misstep
Google has taken a beating in the press over the issues with Gemini but should be commended for acting quickly to suspend the feature and for redoubling its efforts to fix the problems. The company would be wise to ensure that the teams implementing guardrails not only focus on eliminating historical biases but also establish internal controls, processes, and review teams so that a wide spectrum of inputs and viewpoints is considered, particularly around sensitive topics and prompts. This approach will help ensure the models are fine-tuned to reflect a neutral assessment of historical people, events, and actions.
As enterprise-focused tools such as Gemini for Google Workspace continue to be rolled out, enterprise buyers can be confident that Google is prioritizing model tuning and training, which is key to reducing or eliminating hallucinations. I expect the company has learned quickly from this situation, and hopefully it will serve as a cautionary tale for other generative AI-focused companies as they roll out more advanced features and functions.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other Insights from The Futurum Group:
Google Cloud Widens Gemini Model Access for Vertex AI Users
Google Gemini Advanced: Google’s Counter to Copilot? | The AI Moment – Episode 15
Google Gemini Aims to Redefine Digital Frontiers
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.