Adults in the Generative AI Rumpus Room: Gleen, IBM

Introduction: Generative AI is widely considered the fastest-moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning remarkable innovation and, along with it, a rapidly mutating market ecosystem. Generative AI has also caused copious amounts of FOMO, missteps, and false starts. These are the classic signals of technology disruption: lots of innovation, but also lots of mistakes. It is a rumpus room with a lot of “kids” going wild. The rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations that do not panic, that understand the fundamentals of AI, and that manage risk.

Our picks for this week’s Adults in the Generative AI Rumpus Room are Gleen and IBM.

Gleen: Solving LLM Hallucinations

The News: On September 5, Gleen announced it has raised $4.9 million to accelerate its work on solving a major issue with large language models (LLMs): hallucination. Gleen AI, which is now publicly available, focuses on improving LLM-based chatbots for customer support and customer service.

LLM-based chatbots tend to hallucinate, responding to queries with completely fabricated information. To address this problem, Gleen created a proprietary AI layer, independent of the LLM, that ingests enterprise knowledge from multiple sources, manages it, selectively feeds that knowledge to the LLM, and cross-checks the quality of the LLM’s response, eliminating hallucination. Gleen AI is LLM-agnostic: it currently works with GPT-3.5 and GPT-4, Anthropic, and Llama, and it integrates with Slack, Discord, and leading help desk solutions. Gleen also provides software development kits (SDKs) and REST APIs for customers that want to integrate directly.
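What Gleen describes resembles retrieval-grounded generation with an added verification pass: retrieve relevant enterprise knowledge, constrain the LLM to it, then cross-check the answer before returning it. The sketch below is a minimal illustration of that general pattern under our own assumptions, not Gleen’s SDK or internals; the class names, the naive keyword retriever, and the prompts are all placeholders.

```python
# Minimal, illustrative sketch of retrieval-grounded generation with a
# verification pass. All names, the keyword retriever, and the prompts are
# placeholders -- this is not Gleen's actual SDK or architecture.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Snippet:
    source: str   # e.g., a help center article URL
    text: str     # the knowledge content itself

def retrieve(query: str, knowledge: List[Snippet], top_k: int = 3) -> List[Snippet]:
    """Naive keyword-overlap retrieval standing in for a real knowledge layer."""
    terms = set(query.lower().split())
    ranked = sorted(
        knowledge,
        key=lambda s: len(terms & set(s.text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer_with_guardrail(
    query: str, knowledge: List[Snippet], llm: Callable[[str], str]
) -> str:
    """Feed only retrieved knowledge to the LLM, then cross-check its answer."""
    context = retrieve(query, knowledge)
    context_block = "\n".join(f"[{s.source}] {s.text}" for s in context)

    draft = llm(
        "Answer using ONLY the context below. If the context is insufficient, "
        f"say you don't know.\n\nContext:\n{context_block}\n\nQuestion: {query}"
    )

    # Verification pass: ask the model whether the draft is fully supported.
    verdict = llm(
        f"Context:\n{context_block}\n\nClaim: {draft}\n\n"
        "Is every statement in the claim supported by the context? Answer YES or NO."
    )
    if verdict.strip().upper().startswith("YES"):
        return draft
    return "I'm not confident in an answer; escalating to a human agent."
```

Because the `llm` argument is just a callable that maps a prompt to a response, the same guardrail can sit in front of GPT, Claude, or Llama, which is consistent with the LLM-agnostic positioning described above.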

Read the full blog post on the public availability of Gleen AI on the Gleen website.

Adults because… It is striking that, given the potential impact of LLMs, so much work is involved in making them behave properly. Hallucination is a massive issue for LLMs, and if Gleen can solve it, that could translate into real productivity gains for generative AI applications.

If Gleen’s concept works, other players will scramble to build similar solutions, particularly the larger AI development platform and tool vendors, including the LLM providers themselves. Gleen’s focus is on hallucinations in customer service chatbots, but LLMs do not discriminate in what they hallucinate about, which means savvy players will likely develop hallucination fighters for any and all LLM applications.

IBM Rolls New Granite AI Models, Continues watsonx Platform Momentum

The News: On September 7, IBM announced several AI-focused rollouts and enhancements to watsonx, including the introduction of IBM’s own AI models in its Granite series. The Granite models are IBM-built AI foundation models (FMs) that, according to the press release, are designed to support enterprise natural language processing (NLP) tasks such as summarization, content generation, and insight extraction. It is important to note that IBM plans to “provide a list of the sources of data as well as a description of the data processing and filtering steps that were performed to produce the training data for the Granite series of models.” Granite models will be available later in September. Read the full press release on the Granite AI models and watsonx enhancements on the IBM website.
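IBM has not yet published Granite model details or API specifics, so purely as a rough illustration, a hosted-model summarization call of the kind the release describes might look like the sketch below. The endpoint URL, credentials, model ID, payload fields, and response shape are hypothetical placeholders, not IBM’s documented watsonx API; consult IBM’s documentation once Granite ships.

```python
# Hypothetical sketch of an enterprise summarization call to a hosted foundation
# model. The URL, credentials, model ID, payload fields, and response shape are
# placeholders for illustration only -- not IBM's published watsonx API.

import requests

GENERATION_URL = "https://example.cloud.ibm.com/v1/text/generation"  # placeholder
API_TOKEN = "YOUR_TOKEN"                                              # placeholder

def summarize(document: str) -> str:
    """Ask a hosted model for a two-sentence summary of a support ticket."""
    payload = {
        "model_id": "granite-13b-instruct",  # illustrative model name
        "input": (
            "Summarize the following support ticket in two sentences:\n" + document
        ),
        "parameters": {"max_new_tokens": 120, "temperature": 0.2},
    }
    resp = requests.post(
        GENERATION_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes a simple {"results": [{"generated_text": ...}]} response shape.
    return resp.json()["results"][0]["generated_text"]
```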

Adults because… Not many details about the Granite series have been revealed yet, but IBM made a point of mentioning its plans to provide a list of data sources and a description of the steps performed to produce the training data for the Granite models. There is growing momentum for this approach to transparency in AI models. Many proprietary LLM developers refuse to divulge their training data sources for various reasons, most commonly because the vendor sees those sources as competitive IP. IBM is positioning its AI models less as “secret sauce”; the proprietary value in the IBM stack lies in watsonx and the value of the complete chain. This approach frees IBM to offer AI models that can meet transparency best practices.

It will be interesting to see the details of the Granite models and how they perform when launched. In principle, leading with transparency is a savvy strategy and fits well with the responsible AI best practices IBM has committed to.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

IBM watsonx.governance Tackles AI Risk Management

Adults in the Generative AI Rumpus Room: Arthur, YouTube, and AI2

Adults in the Generative AI Rumpus Room: Cohere, IBM, Frontier Model Forum

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business. He holds a Bachelor of Science from the University of Florida.
