Cohere Launches Coral, a New AI-Powered Knowledge Assistant

The News: On July 25, large language model (LLM) startup Cohere announced Coral, described as a “knowledge assistant,” now in private preview with select customers. Coral, like Google’s Bard and Anthropic’s Claude, is an alternative to OpenAI’s ChatGPT.

In the announcement, Cohere is explicit about how Coral will stand out from “existing consumer chatbots” (presumably a reference to ChatGPT):

  • Built on Command: Coral is powered by Cohere’s Command model, which is trained for chat, reasoning, and writing.
  • Customized: According to the post, “Customers can augment Coral’s knowledge base through data connections. Coral has 100+ integrations ready to connect to data sources important to your business across CRMs, collaboration tools, databases, search engines, support systems, and more.”
  • Addresses hallucinations with explainability: According to the post, “To help verify generations, Coral can produce responses with citations from relevant data sources. Behind the scenes, our models are trained to seek relevant data based on a user’s need, even from multiple sources. This grounding mechanism is essential in a world where workers need to understand where information is coming from in a consumable way.”
  • Secured data in private environment: Data used for prompting and the chatbot’s outputs will not leave the company’s data perimeter.
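The “grounding mechanism” described above is essentially retrieval-augmented generation with citations: retrieve passages relevant to the user’s query from connected data sources, then have the model answer while attributing claims back to those sources. The following is an illustrative, dependency-free sketch of that pattern, not Cohere’s actual implementation; the retriever, corpus, and source IDs are all hypothetical stand-ins.

```python
# Toy sketch of grounded generation with citations (hypothetical, not
# Cohere's implementation): rank passages by keyword overlap with the
# query, then attach numbered source citations to the answer.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g., a CRM record ID or support ticket ID
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by word overlap with the query (stand-in for a real retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query: str, corpus: list[Passage]) -> str:
    """Compose a grounded answer in which each retrieved passage is cited [n]."""
    hits = retrieve(query, corpus)
    body = " ".join(f"{p.text} [{i + 1}]" for i, p in enumerate(hits))
    refs = "\n".join(f"[{i + 1}] {p.source}" for i, p in enumerate(hits))
    return f"{body}\nSources:\n{refs}"

corpus = [
    Passage("crm:acct-42", "The renewal date for Acme Corp is March 1."),
    Passage("ticket:981", "Acme Corp reported a login outage last week."),
    Passage("wiki:pricing", "Enterprise pricing is negotiated per seat."),
]

print(answer_with_citations("When is the Acme Corp renewal date?", corpus))
```

In a production system, the keyword retriever would be replaced by vector search over the 100+ connected data sources, and the model itself would generate the answer text with citation spans; the structure, though (retrieve, ground, cite), is the same.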

Read the full announcement on the Cohere website.

Analyst Take: OpenAI introduced ChatGPT just nine months ago. In that short time, the concept of an AI assistant has evolved rapidly, to the point where the challenges facing LLM-based AI assistants are being addressed and enterprise requirements are part of their design. What does Coral’s debut signal about what happens next with AI assistants? Are we on the cusp of next-generation computing? Here are the key takeaways:

Evolution of Consumer Assistant to Enterprise-Grade Assistant

AI assistants are getting better. As ChatGPT has been adopted, enthusiasm for its potential has been quickly tempered by challenges that would prevent enterprises from adopting it: hallucinations and other inaccuracies, bias, security, explainability, assistants’ lack of sufficient memory, and the need for specific methodologies, such as prompt engineering, to obtain the best results. Newer LLM-based AI assistants, and the organizations that use them, are layering in ways to address these issues. For example, Coral’s approach to hallucinations is to provide citations. Some enterprises limit inaccuracies, bias, and security risks by pointing the models not at public domain data but strictly at private data (as exemplified by Salesforce and Adobe). Coral’s approach to security is to let the assistant access public domain data while limiting where and when it uses private domain data. Anthropic’s Claude and Meta’s Llama 2 have both increased the length of inputs and outputs users can include in each prompt, which in theory expands the assistants’ memory and improves their performance.

The Next-Generation Interface

Coral and the evolving class of enterprise AI assistants are bringing the world closer and closer to the next-generation interface, one where most software can be operated by telling it what to do in a user’s own words, and where operations can even be linked across applications.

Since 2015, AI visionaries have dreamed of a day when natural language processing (NLP)-based AI would usher in a new era in computing, when a conversational interface would replace today’s text/mouse/point-and-click interfaces. In this vision, users tell (or type) software what they want in their own words, and the interface understands the request and executes the corresponding operations, potentially any function computer software performs today. The vision also includes this interface performing the complex tasks humans typically handle in workflows, such as working across disparate applications to complete an operation.

Challenges Remain: How Good Will AI Assistants Be?

While AI assistants are making massive improvements in their capabilities, they are still more about potential and promise than effective, reliable tools. The barriers and challenges mentioned earlier loom large, and solving them is not a sure thing at this point. Further, there is the ongoing challenge of natural language understanding (NLU): AI systems have long struggled to understand humans effectively. Human communication is one of the most complex tasks that exists, if not the most complex. So much of how humans communicate, including tone, conversational history, reference linking, sarcasm, spatial elements, volume, and inflection, is extremely difficult for machines to interpret. Generative AI does not necessarily solve the NLU problem, and many experts believe AI will never be able to duplicate human ability in NLU. The effectiveness of AI assistants will depend greatly on their NLU capabilities.

Competition

Enterprise-grade AI assistants will proliferate over the next 2 to 3 years. Competition will be fierce, because winners in this space will have opportunities to sell other services and applications into the enterprise market. Some will be designed as general-purpose assistants, some will be very narrowly focused, some will come from open source, while others will be based on closed systems. Vendors in this space will come from AI compute players, data management and governance players, enterprise software players, independent LLM players, and possibly other sectors we have not yet imagined.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein, including any data and other information that might have been provided for validation, are specific to the analyst individually and not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Qualcomm-Meta Llama 2 Could Unleash LLM Apps at the Edge

Generative AI Investment Accelerating: $1.3 Billion for LLM Inflection

Oracle Launches Enterprise-Grade Oracle Generative AI Services with Cohere

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business. He holds a Bachelor of Science from the University of Florida.

