Gleen: Solving LLM Hallucinations

The News: On September 5, Gleen announced it has raised $4.9 million to accelerate its work on a major issue with large language models (LLMs): hallucination. Gleen AI focuses on improving LLM-based chatbots for customer support and customer service, and it is now publicly available.

LLM-based chatbots tend to hallucinate, responding to queries with completely fabricated information. To address this problem, Gleen created a proprietary AI layer, independent of the LLM, that ingests enterprise knowledge from multiple sources, manages it, selectively feeds it to the LLM, and cross-checks the quality of the LLM's response, eliminating hallucination. Gleen AI is LLM-agnostic: it currently works with GPT-3.5 and GPT-4, Anthropic models, and Llama, and it integrates with Slack, Discord, and leading help desk solutions. Gleen also provides software development kits (SDKs) and REST APIs for customers who want to integrate directly. Read the full blog post on the public availability of Gleen AI on the Gleen website.

Analyst Take: It is striking that, given the potential impact of LLMs, so much work is required to make them behave properly. Hallucination is a big challenge. Is Gleen the type of solution that can solve it? What about other LLM challenges? Here are some of the key takeaways from Gleen's debut.

Gleen Is on To Something

Hallucination is a massive issue for LLMs, and if Gleen can solve this problem, it could translate into real productivity gains for generative AI applications. On paper, the Gleen AI solution is interesting and makes sense as an approach that could mitigate LLM hallucination. Because Gleen AI is an abstraction layer independent of the LLM, the solution is compelling and can remain LLM-agnostic. It is unclear, however, whether routing data through that layer will mean additional data processing costs.

Hallucinations Are Not the Only Accuracy-Focused LLM Challenge

Hallucination is only one of several challenges enterprises face in deploying LLMs. To be fair, Gleen is also addressing false confidence and some accuracy issues with Gleen AI. Another issue Gleen AI might be able to address is explainability; however, Gleen AI probably does not provide the sources that back up its corrected answers. An issue it likely does not solve is mitigating bias: LLMs tend to require a heavy dose of pre-production monitoring to weed out biased language and answers.

Cost Issues for Running LLMs

A cottage industry has sprung up to address the cost of leveraging LLMs, because current compute costs for LLMs can be steep. As a result, a number of chip manufacturers are making massive efforts to build more efficient, purpose-built AI chips; a range of development tools has been designed to help LLM models run more efficiently; and some LLMs are trained on smaller data sets.

Conclusion

Hallucination is not the only challenge facing enterprises that use LLMs, but it is a significant one worth solving. If Gleen's concept works, other players will scramble to build similar solutions, particularly larger AI development platform and tools vendors, including the LLM providers themselves. Gleen's focus is on hallucinations in customer service chatbots, but LLMs do not discriminate in where they hallucinate, which means savvy players will likely develop hallucination fighters for all LLM applications.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Qualcomm-Meta Llama 2 Could Unleash LLM Apps at the Edge

Top Trends in AI This Week: August 25, 2023

OpenAI ChatGPT Enterprise: A Tall Order

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.

