Introduction: Generative AI is widely considered the fastest-moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning incredible innovation and, along with it, a rapidly mutating market ecosystem. Generative AI has also caused a copious amount of FOMO, missteps, and false starts. These are the classic signals of technology disruption: lots of innovation, but also lots of mistakes. It is a rumpus room with a lot of “kids” going wild, and the rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations that do not panic, that understand the fundamentals of AI, and that manage risk.
Our picks for this week’s Adults In The Generative AI Rumpus Room are Google Cloud, Mayfield, and Context.ai.
Google Cloud Launches SynthID AI Watermarking Tool
The News: On August 29 at Google Next ‘23, Google Cloud announced the beta launch of SynthID as part of the Vertex AI platform. SynthID is a tool for watermarking and identifying AI-generated images that embeds a digital watermark directly into the pixels of an image. The watermark is imperceptible to the human eye. The innovation comes from Google’s DeepMind lab.
Some of the key elements of SynthID include:
- Available only for images created with Imagen, Google Cloud’s text-to-image model.
- A step forward for digital image protection: the watermark is invisible to the human eye and harder to manipulate or strip than visible watermarks or metadata tags (see the illustrative sketch after this list).
- From the post: “While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally. Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”
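Google has not published SynthID’s embedding algorithm, so any code can only gesture at the general technique. As a rough intuition for what “a watermark embedded directly into the pixels” means, the toy sketch below hides a bit string in the least-significant bits of an image array, a classic steganography trick that is invisible to the eye. Unlike SynthID, it is trivially destroyed by cropping or re-encoding, which is exactly the robustness problem DeepMind’s learned, two-model approach is designed to solve.

```python
# Toy illustration of pixel-level watermarking via least-significant-bit (LSB)
# embedding. This is NOT SynthID: Google has not published its algorithm, and
# SynthID's learned watermark survives edits such as cropping, filtering, and
# recompression, which LSB does not. The sketch only shows the core idea of
# hiding an imperceptible payload directly in pixel values.
import numpy as np

def embed_watermark(image: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Write payload_bits into the least-significant bits of the first pixel
    values; a +/-1 change per 8-bit channel is invisible to the human eye."""
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden payload back out of the least-significant bits."""
    return image.flatten()[:n_bits] & 1

# Usage: hide and recover a 64-bit identifier in a random "image".
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
payload = rng.integers(0, 2, size=64, dtype=np.uint8)
marked = embed_watermark(image, payload)
assert np.array_equal(extract_watermark(marked, 64), payload)
assert np.abs(marked.astype(int) - image.astype(int)).max() <= 1  # imperceptible
```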
Read the full announcement about SynthID on the Google DeepMind blog.
Adults because… Generative AI-fueled image generation can be used for unethical purposes such as misinformation, disinformation, and deepfakes, and there are not yet many good ways to combat such use. Digital watermarking shows promise as a countermeasure, but to date there are no standards for creating or detecting watermarks, only proprietary efforts such as this tool from Google Cloud and others from Microsoft/OpenAI, Steg.ai, and Imatag. Google Cloud’s clout should move the industry a bit closer toward global standards for digital watermarking.
Mayfield Declares People-First Framework for Investing in AI Startups
The News: On August 30, venture capital firm Mayfield announced it had adapted its people-first framework to AI companies and will use it to guide its investment decisions. The framework is built on five key pillars, including:
- Mission and values count. Do AI-first founders have a human-centric mission and values?
- A fundamental belief that AI will augment humans, not replace them.
- Asking founders to evaluate the trustworthiness of the models driving their innovation and encouraging them to look at holistic model evaluation, such as Stanford’s Holistic Evaluation of Language Models (HELM).
- Privacy governance areas that must be addressed, including discovery and inventory of all data, detection and classification of sensitive data, understanding of model access and entitlements by users, consent, legal basis, retention, and more.
Read the full article written by Mayfield’s Navin Chaddah on TechCrunch.
Adults because… The nascent generative AI market is a modern gold rush that has already produced several unicorns ($1 billion+ valuations); some will succeed and some will fail. The gold rush will include AI startups with half-baked value propositions, unfounded ideas, and a complete disregard for responsible AI. It is difficult to believe a VC will pass on a tempting startup just because it does not have all its responsible AI ducks in a row, but Mayfield’s declaration that it will not invest in irresponsible AI founders qualifies the firm as an adult in this case.
Context.ai Bows Another Tool to Tame Errant LLMs
The News: On August 30, AI startup Context.ai announced it has raised $3.5 million from Google Ventures and Theory Ventures to continue developing its analytics tool for products built on large language models (LLMs). Context.ai lets businesses track frequently discussed conversation topics, identify where their products are performing well versus poorly, debug bad conversations, monitor brand risks, understand user retention, and measure the impact of new releases.
“The current ecosystem of analytics products are built to count clicks. But as businesses add features powered by LLMs, text now becomes a primary interaction method for their users. Making sense of this mountain of unstructured words poses an entirely new technical challenge for businesses keen to understand user behavior. Context.ai offers a solution,” said Context.ai Co-Founder and CTO Alex Gamble.
Read the full announcement about the funding on Context.ai’s website.
Adults because… Many LLMs are trained on datasets that users cannot trace answers back to, and without knowing what sources or context an LLM’s answers draw on, it is difficult to gauge whether those answers are accurate. Context.ai analyzes the content generated by an LLM and the conversation with the end user to figure out whether the user was satisfied with the response (a hypothetical sketch of that kind of satisfaction scoring follows below). A better option would be to trace answers back into the training datasets themselves, but in the new world of generative AI, Context.ai’s workaround at least helps improve LLM accuracy.
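Context.ai has not disclosed how it infers user satisfaction, so the following is a hypothetical sketch of the general idea rather than the company’s method: scan each user turn in a conversation log for frustration signals and aggregate a per-conversation score. A production system would more plausibly use a classifier or an LLM judge than keyword matching, but the shape of the pipeline is the same.

```python
# Hypothetical sketch of conversation-level satisfaction scoring, in the spirit
# of what Context.ai describes. Context.ai's real pipeline is not public; this
# toy version flags frustration keywords in user turns and aggregates a score.
from dataclasses import dataclass

FRUSTRATION_MARKERS = ("wrong", "doesn't work", "not what i asked", "useless", "try again")

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

def satisfaction_score(conversation: list[Turn]) -> float:
    """Return a 0.0-1.0 score: the share of user turns with no frustration markers."""
    user_turns = [t for t in conversation if t.role == "user"]
    if not user_turns:
        return 1.0
    frustrated = sum(
        any(marker in t.text.lower() for marker in FRUSTRATION_MARKERS)
        for t in user_turns
    )
    return 1.0 - frustrated / len(user_turns)

# Usage: a short conversation where the model's first answer missed the mark.
convo = [
    Turn("user", "How do I reset my API key?"),
    Turn("assistant", "Go to Settings > Billing."),
    Turn("user", "That's wrong, billing has nothing to do with keys. Try again."),
    Turn("assistant", "Apologies, use Settings > API Keys > Regenerate."),
    Turn("user", "Got it, thanks!"),
]
print(satisfaction_score(convo))  # 0.67: one of three user turns shows frustration
```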
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Google Cloud Next: Vertex AI Heats Up Developer Platform Competition
Adults in The Generative AI Rumpus Room: Arthur, YouTube, and AI2
Adults in the Generative AI Rumpus Room: Cohere, IBM, Frontier Model Forum
Adults in the Generative AI Rumpus Room: Google, DynamoFL, and AWS
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis, with an emphasis on mobile technology and identifying trends and opportunities, for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.