Adults in the Generative AI Rumpus Room: Google, Tidalflow, Lakera

Adults in the Generative AI Rumpus Room- Google, Tidalflow, Lakera

Introduction: Generative AI is widely considered the fastest moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning incredible innovation and along with it a mutating market ecosystem. Generative AI has also caused a copious amount of FOMO, missteps, and false starts. These are the classic signals of technology disruption – lots of innovation, but also lots of mistakes. It is a rumpus room with a lot of “kids” going wild. The rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations who do not panic, who understand the fundamentals of AI, and who manage risk.

Our picks for this week’s Adults In The Generative AI Rumpus Room are Google, Tidalflow, and Lakera.

Google: Generative AI Indemnification

The News: On October 12, Google Cloud announced in a blog post that it will provide its customers with intellectual property indemnity as it pertains to generative AI. The protections cover any allegations that Google’s use of training data to create its generative models used by a generative AI service infringes a third party’s intellectual property (IP) right. The company also provides indemnity for allegations that generated output infringe a third party’s IP rights. The generated output indemnity applies only if a customer did not try to intentionally create or use generated output to infringe the rights of others.

Products covered for indemnity include Duet AI in Workspace, including generated text in Google Docs and Gmail and generated images in Google Slides and Google Meet; Duet AI in Google Cloud including Duet AI for assisted application development; and Vertex AI Search, Vertex AI Conversation, Vertex AI Text Embedding application programming interface (API), Visual Captioning/Visual Q&A on Vertex AI, and Vertex AI Codey APIs.

You can read the full Google indemnity blog post here.

Adults because… Generative AI is a new market and the technology and use cases are largely untested. Assurances from generative AI vendors builds trust and confidence in the marketplace. Google is not alone in backing its work – Adobe and IBM are also doing so. IP and copyright will be a big issue for generative AI, so look for other players to offer indemnification, particularly in lieu of any specific AI regulations.

Google: Sensitive Data Protection Service and Generative AI

The News: On October 4, the security and identity team at Google Cloud published a post describing how its Sensitive Data Protection service can be used to help secure generative AI workloads. According to the post:

…generative AI requires data in order to tune or extend it for specific business needs…. However, one concern that organizations have is how to reduce the risk of customizing and training models with their own data that may include sensitive elements such as personal information (PI) or personally identifiable information (PII). Often, this personal data is surrounded by context that the model needs so it can function properly.

…Organizations can use Google Cloud’s Sensitive Data Protection to add additional layers of data protection throughout the lifecycle of a generative AI model, from training to tuning to inference. Early adoption of these protection techniques can help ensure that your model workloads are safer, more compliant, and can reduce risk of wasted cost on having to retrain or re-tune later.

Customers’ frequently use their own data to create datasets to train custom AI models, such as when deploying an AI model on prediction endpoints. Additionally, to fine-tune a model like language and code using data particular to the customer in order to increase the relevance and the business goals of LLM response.

Some customers use generative AI features to tune a foundation model and deploy foundation models for specific tasks and business needs. This tuning process uses customer-specific datasets and creates parameters that are then used at inference time; the parameters reside in front of the “frozen” foundation model, inside the user’s project. To ensure that these datasets do not include sensitive data, use the Sensitive Data Protection Service to scan the data that was used to create the datasets. Similarly, this method can be used for Vertex AI Search to ensure uploaded data does not include sensitive information.

You can read the full Google Sensitive Data Protection Service and generative AI blog post here.

Adults because… To truly unlock proprietary power with a large language model (LLM), enterprises must leverage their own data. Most have hesitated to date because of the fear that proprietary data will be used to train models or PII will be exposed. In other words, enterprises have a hard time trusting they can use LLMs and their own data due to security issues. Data security is a major issue for generative AI. Google’s Sensitive Data Protection service is a way for enterprises to feel more confident they can leverage their own data against AI foundation models.

Tidalflow: Test How an LLM Will Perform Before It Goes Live

The News: On October 10, Tidalflow emerged out of stealth mode and announced a $1.7 million round of funding. The funding round includes Google’s Gradient Ventures. A TechCrunch article described the company’s proposition:

Tidalflow can perhaps best be described as an application lifecycle management (ALM) platform that companies plug their OpenAPI specification / documentation into. And out the other end Tidalflow spits out a “battle-tested LLM-instance” of that product, with the front-end serving up monitoring and observability of how that LLM-instance will perform in the wild…. “The big problem is, if you launch on something like ChatGPT, you actually don’t know how the users are interacting with it,” Tidalflow CEO Sebastian Jorna told TechCrunch. “This lack of confidence in the reliability of their software is a major roadblock to rolling out software tooling into LLM ecosystems. Tidalflow’s testing and simulation module builds that confidence…. With normal software testing, you have a specific number of cases that you run through — and if it works, well, the software works,” Jorna said. “Now, because we’re in this stochastic environment, you actually need to throw a lot of volume at it to get some statistical significance. And so that is basically what we do in our testing and simulation module, where we simulate out as if the product is already live, and how potential users might use it.”

Read the Tidalflow TechCrunch story here.

Adults because… Today, LLMs have inherent challenges. I have already mentioned data security; other challenges include accuracy, bias, and hallucination. Joining a growing number of ancillary LLM products and services, Tidalflow is offering a way for enterprises to build confidence in LLM outputs without going live.

Lakera Offering LLM Security From Prompt Injections and Data Leakage

The News: On October 12, Lakera officially launched with a product called Guard. As reported in TechCrunch:

Swiss startup Lakera is officially launching to the world today, with the promise of protecting enterprises from various LLM security weaknesses such as prompt injections and data leakage.

…prompts can also be manipulated by bad actors to achieve far more dubious outcomes, using so-called “prompt injection” techniques whereby an individual inputs carefully crafted text prompts into an LLM-powered chatbot with the purpose of tricking it into giving unauthorized access to systems, for example, or otherwise enabling the user to bypass strict security measures.

…the company has recorded some 30 million interactions from 1 million users over the past six months, allowing it to develop what co-founder David Haber calls a “prompt injection taxonomy” that divides the types of attacks into 10 different categories. These are: direct attacks; jailbreaks; sidestepping attacks; multi-prompt attacks; role-playing; model duping; obfuscation (token smuggling); multi-language attacks; and accidental context leakage.

From this, Lakera’s customers can compare their inputs against these structures at scale.

“We are turning prompt injections into statistical structures — that’s ultimately what we’re doing,” Haber said.

Prompt injections are just one cyber risk vertical Lakera is focused on though, as it’s also working to protect companies from private or confidential data inadvertently leaking into the public domain, as well as moderating content to ensure that LLMs don’t serve up anything unsuitable for kids.

Read the TechCrunch story on Lakera here.

Adults because… Let us face it, there are a lot of adults in the generative AI rumpus room who are solely focused on reigning in the challenges LLMs inherently present. To use these powerful models, enterprises will have to deploy a range of defense and security mechanisms. By the nature of their design, LLMs are instructed by whatever words you choose to use. As a result, guardrails for AI safety must be put in place. Lakera Guard offers another tool in the expanding toolbox required for enterprises to leverage LLMs.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Adults in the Generative AI Rumpus Room: Anthropic, Kolena, IBM

Adults in the Generative AI Rumpus Room: Salesforce, DeepLearning.ai, Microsoft

Adults in the Generative AI Rumpus Room: Gleen, IBM

Adults in The Generative AI Rumpus Room: Arthur, YouTube, and AI2

Adults in the Generative AI Rumpus Room Cohere, IBM, Frontier Model Forum

Adults in the Generative AI Rumpus Room: Google, DynamoFL, and AWS

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

SHARE:

Latest Insights:

New Tools Streamline ERP Tasks, Add Carbon Tracking, and Enhance Predictive Business Insights
Keith Kirkpatrick, Research Director at Futurum, provides his perspective on the news from Epicor Insights 2025, including agentic AI to streamline ERP workflows, carbon tracking in Kinetic, and expansion of predictive insights with Grow AI.
Transformation Initiatives Drive Profitability as Company Posts Revenue Growth
Fernando Montenegro, VP and Practice Lead at Futurum, reviews Kyndryl's Q4 FY2025 earnings. Key highlights: Constant-currency growth, notable rise in pretax income, how 'three-A' initiatives drive results, and strategic tailwinds.
Q1 FY 2025 Results Reflect Resilience in Gross Margin and Record Design Wins in AI, Robotics, and Automotive as New Products Scale
Olivier Blanchard, Research Director at Futurum, examines Lattice’s Q1 FY 2025 earnings, highlighting record design wins across AI, robotics, and automotive, and how new products are paving the way for growth in FY 2026.
OpenAI Is Positioned as a Major AI-Powered Software Development Company, Competing With Microsoft, GitHub, Anthropic, and Startup Cursor
Analysts Mitch Ashley, VP of DevOps and Application Development, and Nick Patience, VP of AI Software and Tools, at Futurum, share their insights on the implications of OpenAI’s agreement to acquire AI coding tool company Windsurf. The acquisition propels OpenAI forward in its quest for leadership in the AI coding and agent development market.

Book a Demo

Thank you, we received your request, a member of our team will be in contact with you.