Introduction
Generative AI is widely considered the fastest-moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning incredible innovation and, along with it, a mutating market ecosystem. Generative AI has also caused copious amounts of FOMO, missteps, and false starts. These are the classic signals of technology disruption: lots of innovation, but also lots of mistakes. It is like a rumpus room with a lot of "kids" going wild. The rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations that do not panic, that understand the fundamentals of AI, and that manage risk.
Our picks for part 2 of this week’s Adults in the Generative AI Rumpus Room are Cohere, IBM, and the founding members of the Frontier Model Forum – Google, Microsoft, OpenAI and Anthropic.
Cohere Launches Coral, a Safer AI Assistant
The News: On July 25, large language model (LLM) startup Cohere announced Coral, described as a "knowledge assistant," now in private preview with select customers. Coral, like Google's Bard and Anthropic's Claude, is an alternative to OpenAI's ChatGPT.
In the announcement, Cohere is intentional about how Coral will stand out from "existing consumer chatbots." In particular, Coral:
- Addresses hallucinations with explainability: According to the post, "To help verify generations, Coral can produce responses with citations from relevant data sources. Behind the scenes, our models are trained to seek relevant data based on a user's need, even from multiple sources. This grounding mechanism is essential in a world where workers need to understand where information is coming from in a consumable way." (A sketch of this grounding pattern follows the list.)
- Secures data in a private environment: Data used for prompting and the chatbot's outputs will not leave the company's data perimeter.
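To make the citation-grounding idea concrete, here is a minimal, purely illustrative sketch of retrieval-grounded answering with citations. Cohere has not published Coral's implementation details, so the function names, the naive keyword-overlap retriever, and the sample sources below are all assumptions for illustration, not Cohere's actual design.

```python
# Illustrative sketch of citation-grounded generation (hypothetical, not
# Cohere's implementation). The assistant answers only from the supplied
# sources and returns the IDs of the snippets it relied on.

def retrieve(query: str, sources: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank source IDs by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        sources,
        key=lambda sid: len(query_terms & set(sources[sid].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def grounded_answer(query: str, sources: dict[str, str]) -> dict:
    """Build a prompt that restricts the model to cited snippets."""
    cited = retrieve(query, sources)
    context = "\n".join(f"[{sid}] {sources[sid]}" for sid in cited)
    prompt = (
        "Answer using ONLY the sources below and cite them by ID.\n"
        f"{context}\n\nQuestion: {query}"
    )
    # In a real system, `prompt` would be sent to the LLM here; the cited
    # IDs travel back with the answer so users can verify the response.
    return {"prompt": prompt, "citations": cited}

sources = {
    "hr-001": "Employees accrue 1.5 vacation days per month of service.",
    "hr-002": "Remote work requires written manager approval.",
}
print(grounded_answer("How many vacation days do I accrue?", sources))
```

The key design point is that citations are not decoration: the model is constrained to the retrieved snippets, so every claim in the answer can be traced back to a source ID.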
Read the full announcement on the Cohere website.
Adults because… Coral is an evolved, better, safer AI assistant. OpenAI introduced ChatGPT a mere nine months ago. In that short time, the concept of an AI assistant has evolved rapidly to the point where the known challenges for LLM-based AI assistants are being addressed and enterprise requirements are part of the design. For example, Coral's approach to hallucinations is to provide citations. Some enterprises limit informational inaccuracies, bias, and security risks by pointing models not at public domain data but strictly at private data (as exemplified by Salesforce and Adobe). Coral's approach to security is to let the assistant access public domain data while limiting where and when it uses private domain data.
IBM watsonx.governance Tackles AI Risk Management
The News: On July 11, IBM announced that watsonx.governance, one of the three products in the company's enterprise-ready AI and data platform, is slated to be generally available before the end of 2023. watsonx.governance is designed to enable enterprises to direct, monitor, and manage their organization's AI activities.
AI is typically handled by experts in silos, which is not scalable. Organizations build too many models without clarity, monitoring, or cataloging. watsonx.governance automates and consolidates tools, applications, and platforms. If something changes in a model, that information is automatically collected into an audit trail through testing and diagnostics, as sketched below.
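Here is a minimal sketch of the kind of automated audit trail such a platform maintains. The event names, record schema, and model IDs are hypothetical, assumed for illustration rather than drawn from IBM's product.

```python
# Hypothetical audit trail for model lifecycle events (not IBM's schema):
# every change is recorded as an immutable, timestamped event so auditors
# can reconstruct what happened to a model and when.

import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_model_event(model_id: str, event: str, detail: dict) -> None:
    """Append an immutable audit record; entries are never updated or deleted."""
    AUDIT_LOG.append({
        "model_id": model_id,
        "event": event,  # e.g., "retrained", "deployed", "threshold_changed"
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_model_event("churn-clf-v3", "retrained", {"dataset": "2023-q3", "auc": 0.87})
record_model_event("churn-clf-v3", "deployed", {"environment": "production"})
print(json.dumps(AUDIT_LOG, indent=2))
```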
Enterprises must understand the risks of AI in every application, case by case. An emerging best practice is for organizations to build an AI ethics governance framework, including establishing an ethics committee or board to oversee it. Today this is a highly manual process. watsonx.governance addresses this need by automating workflows to better detect fairness, bias, and drift issues, and automated testing ensures compliance with an enterprise's standards and policies throughout the AI lifecycle. A sketch of such checks follows.
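To show what "automated workflows to detect fairness, bias, and drift" can mean in practice, here is an illustrative sketch. The metrics (a demographic parity gap and a crude mean-shift drift signal) and the policy thresholds are assumptions for illustration, not watsonx.governance internals.

```python
# Hedged sketch of automated fairness and drift checks a governance
# workflow might run against a deployed model (illustrative only).

def demographic_parity_gap(outcomes: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate across groups."""
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

def mean_shift(train: list[float], live: list[float]) -> float:
    """Crude drift signal: relative shift of the live mean vs. training mean."""
    train_mean = sum(train) / len(train)
    return abs(sum(live) / len(live) - train_mean) / (abs(train_mean) or 1.0)

# Policy thresholds an enterprise might set (assumed values).
FAIRNESS_LIMIT, DRIFT_LIMIT = 0.10, 0.20

outcomes = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
if demographic_parity_gap(outcomes) > FAIRNESS_LIMIT:
    print("fairness check failed: flag model for review")
if mean_shift([0.4, 0.5, 0.6], [0.7, 0.8, 0.9]) > DRIFT_LIMIT:
    print("drift check failed: trigger retraining workflow")
```

The value of automating checks like these is that they run continuously against every model in the catalog, rather than relying on an ethics committee to remember to review each one.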
More than 700 AI regulations have been proposed globally. Most are umbrella regulations that are not overly specific, and most companies do not understand how to comply. watsonx.governance addresses this by translating the growing body of regulations into enforceable policies within the company: the solution breaks down the requirements in each regulation and builds corresponding controls, as the sketch below illustrates.
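As a rough illustration of translating a regulatory requirement into machine-checkable controls, consider the sketch below. The regulation ID, requirement names, and control logic are hypothetical, assumed for illustration rather than taken from IBM's product.

```python
# Illustrative mapping of regulatory requirements to enforceable controls
# (hypothetical; not watsonx.governance logic). Each control is a predicate
# evaluated against a model's metadata.

CONTROLS = {
    "transparency": lambda m: bool(m.get("model_card")),
    "human_oversight": lambda m: m.get("review_required", False),
    "data_provenance": lambda m: bool(m.get("training_data_lineage")),
}

REGULATION = {
    "id": "example-ai-act-article",  # hypothetical regulation reference
    "requires": ["transparency", "human_oversight"],
}

def check_compliance(model_meta: dict, regulation: dict) -> dict[str, bool]:
    """Evaluate each control the regulation requires against model metadata."""
    return {req: CONTROLS[req](model_meta) for req in regulation["requires"]}

model_meta = {"model_card": "s3://models/churn-clf-v3/card.md", "review_required": True}
print(check_compliance(model_meta, REGULATION))  # {'transparency': True, 'human_oversight': True}
```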
Read the full watsonx GA announcement on the IBM website.
Adults because… AI risk management is foundational and critical to operationalizing AI. Enterprises will learn this either the hard way, by ignoring it, or the easier way, by embracing it. IBM is in a great position to help enterprises navigate AI risk management. The product's strengths are its vision and blueprint for automating model management and audit trails, workflows that detect fairness, bias, and drift, and testing throughout the AI lifecycle. Its challenge is that some outcomes still depend on people and processes: will an organization construct a competent, effective AI ethics governance framework and committee? If AI regulations are vague, will the product be able to build enforceable company policies that are good enough? As the product moves through further testing and general availability, these questions might be answered in the affirmative, or we might see a shift in product features. Either way, an automated approach to AI risk management is a very good thing.
Industry Group Forms to Build AI Model Guardrails
The News: On July 26, Google, Microsoft, OpenAI, and Anthropic announced they have formed a new industry group, the Frontier Model Forum, with the intent of building and promoting best practices for the development of generative AI models. The group calls these models "frontier models," though this is not yet a widely accepted term; other names for this class of models are foundation models or generative AI models.
The Forum’s objectives are to 1) advance AI safety research; 2) identify and promote best practices; and 3) collaborate with policymakers, academics, and other stakeholders to share knowledge about frontier model trust and safety risks.
Essentially, the Forum will work on developing standards for frontier models with the hope of informing regulatory bodies and influencing regulation.
Read the full Frontier Model Forum announcement on the Google blog.
Adults because… AI advocates have struggled for several years to coalesce into a definitive AI industry group, and for good reason: AI is a broadly diffused technology that applies across a massive range of industry sectors and interests. It has taken a disruptive moment, in the form of generative AI, to unify advocacy. The Frontier Model Forum has the potential to solidify AI advocacy.
The Frontier Model Forum did not make any specific promises or suggestions about what it defines as responsible development of frontier models, or what those risks, limitations, and impacts to the public are. There is a good reason for that: it is very hard to build standards or laws on speculative outcomes. In these very early days of generative AI, there are few solidly commercially viable use cases, so lawmakers have been forced to address the impacts of generative AI without the benefit of commercial outputs. Regulations must be broad enough to encompass unknown developments. With this in mind, there is a growing sentiment to incorporate flexibility into AI regulations and standards. In a recent position paper on the EU's draft AI Act, the American Chamber of Commerce to the European Union suggested as much (quoting from the executive summary): "ensure that requirements on providers of general purpose AI and foundation models are targeted, realistic and flexible" and "ensure a flexible and clear approach to standardisation."
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
- Adults in the Generative AI Rumpus Room: Google, DynamoFL, and AWS
- Cohere Launches Coral, a New AI-Powered Knowledge Assistant
- IBM watsonx.governance Tackles AI Risk Management
- Google, Microsoft, OpenAI, and Anthropic Form AI Industry Group
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology, identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.