
Gleen: Solving LLM Hallucinations

The News: On September 5, Gleen announced it has raised $4.9 million to accelerate its work on a major issue with large language models (LLMs): hallucination. Gleen AI focuses on improving LLM-based chatbots for customer support and customer service, and it is now publicly available.

LLM-based chatbots tend to hallucinate, responding to queries with completely fabricated information. To address this problem, Gleen created a proprietary AI layer, independent of the LLM, that ingests enterprise knowledge from multiple sources, manages it, selectively feeds that knowledge to the LLM, and cross-checks the quality of the LLM's response, eliminating hallucination. Gleen AI is LLM-agnostic: it currently works with GPT-3.5 and GPT-4, Anthropic, and Llama, and it integrates with Slack, Discord, and leading help desk solutions. Gleen provides software development kits (SDKs) and REST APIs for customers to integrate directly. Read the full blog post on the public availability of Gleen AI on the Gleen website.
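
The workflow Gleen describes (ingest knowledge, retrieve selectively, then cross-check the model's output) resembles retrieval-augmented generation with a post-generation verification step. The sketch below is a minimal, generic illustration of that pattern in Python; it is not Gleen's code or API, and the OpenAI client, model name, and overlap-based grounding check are illustrative assumptions.

```python
# Minimal sketch of a retrieve-constrain-generate-verify pattern similar in
# spirit to what Gleen describes. NOT Gleen's code or API; the OpenAI client,
# model name, and keyword-overlap grounding check are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(query: str, knowledge_base: list[str], k: int = 3) -> list[str]:
    """Naive keyword retrieval over an in-memory knowledge base, standing in
    for a real vector store or enterprise knowledge connector."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in knowledge_base]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

def grounded_answer(query: str, knowledge_base: list[str]) -> str:
    context = retrieve(query, knowledge_base)
    if not context:
        return "I don't have enough information to answer that."
    prompt = (
        "Answer ONLY from the context below. If the context does not contain "
        "the answer, say you don't know.\n\nContext:\n" + "\n".join(context) +
        f"\n\nQuestion: {query}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content
    # Crude cross-check: refuse answers whose words barely overlap with the
    # retrieved context (a real layer would use entailment or citation checks).
    answer_terms = set(answer.lower().split())
    context_terms = set(" ".join(context).lower().split())
    if len(answer_terms & context_terms) / max(len(answer_terms), 1) < 0.3:
        return "I don't have enough information to answer that."
    return answer
```

A production layer would use a vector store, entailment models, or citation verification rather than keyword overlap, but the control flow, retrieve, constrain the prompt, generate, verify, and refuse if ungrounded, is the relevant idea.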

Analyst Take: It is interesting that, given the potential impact of LLMs, so much work is required to make them behave properly. Hallucination is a big challenge. Is Gleen the type of solution that can solve it? What about other LLM challenges? Here are some of the key takeaways from Gleen's debut.

Gleen Is on To Something

Hallucination is a massive issue for LLMs, and if Gleen can solve this problem, it could translate into real productivity gains for generative AI applications. On paper, Gleen AI is interesting and makes sense as an approach that could mitigate LLM hallucination. The fact that Gleen AI is an abstraction layer, independent of the LLM, makes the solution compelling and enables it to be LLM-agnostic. It is unclear whether routing data through that layer will add data processing costs.

Hallucinations Are Not the Only Accuracy-Focused LLM Challenge

Hallucination is only one of several challenges enterprises face in deploying LLMs. To be fair, Gleen is also addressing false confidence and some accuracy issues with Gleen AI. Another issue Gleen AI might be able to address is explainability; however, Gleen AI probably does not provide the sources to back up the conclusions of its corrected answers. An issue it likely does not solve is mitigating bias: LLMs tend to require a heavy dose of pre-production monitoring to weed out biased language and answers.
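
If a hallucination-mitigation layer already retrieves the passages used to ground an answer, surfacing those passages as citations is one way to address explainability. The snippet below is a hypothetical illustration of that idea, not a description of Gleen AI; the data structures and matching logic are assumptions.

```python
# Hypothetical illustration: return supporting sources alongside the answer so
# users can verify where a response came from. Not Gleen's API.
from dataclasses import dataclass, field

@dataclass
class CitedAnswer:
    answer: str
    sources: list[str] = field(default_factory=list)  # e.g., document titles or URLs

def cite_sources(answer: str, retrieved: dict[str, str]) -> CitedAnswer:
    """retrieved maps a source label (title or URL) to the passage text that
    was fed to the LLM; keep only sources whose text overlaps with the answer."""
    answer_terms = set(answer.lower().split())
    sources = [label for label, text in retrieved.items()
               if answer_terms & set(text.lower().split())]
    return CitedAnswer(answer=answer, sources=sources)
```

For example, calling cite_sources("Refunds are issued within 14 days.", {"refund-policy.md": "Refunds are issued within 14 days of purchase."}) would return the answer with ["refund-policy.md"] as its supporting source.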

Cost Issues for Running LLMs

A cottage industry has sprung up to address the cost of leveraging LLMs. Current compute costs for LLMs can be expensive. As a result, several chip manufacturers are making massive efforts to build more efficient, purpose-built AI chips; a range of development tools has been designed to help LLMs run more efficiently; and some LLMs are trained on smaller data sets.

Conclusion

Hallucination is not the only challenge enterprises using LLMs face, but it is a significant one worth solving. If Gleen's concept works, players will scramble to build similar solutions, particularly larger AI development platform/tool vendors, including the LLM players themselves. Gleen's focus is on hallucinations in customer service chatbots, but LLMs do not discriminate in their hallucinations, which means it is likely that savvy players will develop hallucination fighters for all LLM applications.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Qualcomm-Meta Llama 2 Could Unleash LLM Apps at the Edge

Top Trends in AI This Week: August 25, 2023

OpenAI ChatGPT Enterprise: A Tall Order

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business. He holds a Bachelor of Science from the University of Florida.
