Retrieval-Augmented Generation With Access Control: A Technique for Managing LLMs That Can Be Applied Enterprise-Wide


The News: With the rollout of LLM-driven generative AI tools, many organizations are concerned about the ability of these tools to return correct, complete, and context-appropriate information in response to queries or inputs. Retrieval-augmented generation (RAG) allows LLMs to serve as an enterprise-wide front end to a data source whose information evolves or changes over time.

AI software platform Coveo announced Relevance Generative Answering, a new capability within the Coveo Relevance Cloud AI platform that enhances this technique for a range of use cases, including self-service, customer service, commerce, website, and workplace search. It combines the power of generative AI to surface information efficiently with the assurance that content is restricted by enterprise-governed levels of access control.

You can read the press release describing the capability here.

Using Retrieval-Augmented Generation With Access Control

Analyst Take: Off-the-shelf, pre-trained large language models (LLMs), such as ChatGPT, BLOOM, and LLaMA, are emerging as potentially powerful tools for surfacing insights, generating responses, and improving access to information across various enterprise-wide applications. That said, most existing LLMs are trained on a static set of data, which can limit their usefulness in terms of relevance and timeliness. However, retraining LLMs every time a piece of data changes can be too time-consuming and expensive.

For example, an organization may wish to provide a customer-facing LLM query tool to let users find information about their products or services. However, a regular LLM likely has not been specifically trained on the most appropriate data sources to reliably provide the correct and complete answers to queries, which could include product documentation, the company knowledge base, and the company-sponsored user community site. Many of these sources will change over time, and open-source LLMs would not be able to capture any changes to the content without specifically being retrained using this data.

Using RAG to Provide Utility for LLMs

One way to handle this issue is retrieval-augmented generation, a technique that lets LLMs draw on specific datasets, even ones whose content is evolving or changing, without retraining. This allows LLMs to use the most recent and relevant information when generating outputs. With RAG, it is possible to control an LLM's source of information simply by swapping out the documents it uses for knowledge retrieval, thereby limiting the possibility of the model hallucinating or returning information that is out of date. This way, an LLM front end could be deployed across enterprise-wide applications, but with different source documents for each department, use case, or other segment.
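The retrieve-then-generate pattern described above can be sketched in a few lines. The sketch below is illustrative, not Coveo's implementation: a toy word-overlap scorer stands in for a real embedding model, and the "generation" step simply builds the grounded prompt an LLM would receive. All function and document names are hypothetical.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# A toy word-overlap score stands in for a real embedding model.

def score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document (toy relevance)."""
    q_words = set(query.lower().split())
    return len(q_words & set(doc.lower().split())) / len(q_words)

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Return the ids of the k most relevant documents."""
    ranked = sorted(docs, key=lambda doc_id: score(query, docs[doc_id]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: dict[str, str], doc_ids: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(f"[{doc_id}] {docs[doc_id]}" for doc_id in doc_ids)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Swapping the `docs` dict swaps the model's knowledge source -- no retraining.
docs = {
    "kb-1": "Resetting your password requires the mobile app version 3.2 or later",
    "kb-2": "Our loyalty program has three tiers with different benefits",
}
top = retrieve("how do I reset my password", docs, k=1)
print(build_prompt("how do I reset my password", docs, top))
```

Because the knowledge source is just data handed to the retriever, updating it is a content operation rather than a model-training operation, which is the core economic argument for RAG.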

However, RAG on its own does not address the need to segment information based on the user's needs, nor the company's desire to restrict information by type or level of user.

Incorporating RAG Across Enterprise Use Cases

Coveo takes a use-case approach to utilizing RAG, which incorporates not only relevance but also the ability to control the information returned by user access level. The Coveo platform already creates a unified index of all enterprise-wide customer service and knowledge documents across all data stores, and allows them to be updated as needed. When a user searches for an answer, the query will search this unified index while adhering to the user's specific level of access to data, and will then return the most relevant content for the context of this user. Coveo Relevance Generative Answering will then use an LLM to summarize that relevant content and create a personalized answer. Combining data access control with generative summarization can be used to ensure that the content returned fits the context that is appropriate for each use case and user.

As an example, consider an employee seeking information on compensation bands at their company. RAG could be used to make it easier to find compensation band data at the employee's current level, and up to two levels above, while restricting access to and summarization of any other information. This ensures that workers can get relevant information while preventing access to information that does not directly pertain to them.
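The key mechanism here is filtering the index by the user's entitlements before retrieval ever happens, so the LLM never sees restricted content. A minimal sketch of the compensation-band scenario follows; the field names, the "level + 2" rule, and the document structure are all assumptions for illustration, not Coveo's API.

```python
# Sketch of access-control filtering applied before retrieval
# (illustrative field names and rules, not a vendor API).

def visible_docs(user_level: int, index: list[dict]) -> list[dict]:
    """Keep only documents at or below user_level + 2, per the example's
    rule: an employee may see bands up to two levels above their own."""
    return [doc for doc in index if doc["level"] <= user_level + 2]

index = [
    {"id": "band-L3", "level": 3, "text": "Level 3 compensation band: ..."},
    {"id": "band-L5", "level": 5, "text": "Level 5 compensation band: ..."},
    {"id": "band-L8", "level": 8, "text": "Level 8 compensation band: ..."},
]

# A level-3 employee sees bands for levels 3 and 5, but never level 8.
# Retrieval -- and therefore LLM summarization -- only sees this subset.
allowed = visible_docs(3, index)
print([doc["id"] for doc in allowed])
```

Filtering before retrieval, rather than redacting the generated answer afterward, is the safer design: content the model never receives cannot leak into a summary.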

Similarly, enterprises could use RAG and access control techniques to ensure that information pertinent only to a specific group of customers is restricted to that group, yet easy to surface. This is especially important for organizations with tiered offerings, such as loyalty programs, tiered service contracts, or different classes of products, such as multi-tiered automobile packages.

Driving Confidence by Linking to Source Data

Perhaps most importantly, Coveo’s generative AI capability will provide citations to the original documents to provide the source of truth. This helps to increase the confidence of both enterprises and their users that the information being retrieved is accurate, while also allowing any hallucinations to be quickly checked and addressed.
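In practice, grounding an answer in citations means returning the retrieved source identifiers alongside the generated text, so that any claim can be traced back and verified. A minimal sketch of that response shape follows; the structure and field names are hypothetical, and the LLM call itself is out of scope (the generated text is passed in).

```python
# Sketch: bundling a generated answer with citations to its source
# documents so users can verify it and spot hallucinations quickly.
# (Illustrative response shape, not a vendor API.)

def answer_with_citations(answer_text: str, sources: list[dict]) -> dict:
    """Pair the generated text with the ids and titles of its source docs."""
    return {
        "answer": answer_text,
        "citations": [{"id": s["id"], "title": s["title"]} for s in sources],
    }

sources = [{"id": "kb-42", "title": "Password reset guide"}]
result = answer_with_citations("Use app version 3.2 or later to reset.", sources)
print(result["citations"])
```

Because the citation list is built from the retrieval step rather than generated by the model, it cannot itself be hallucinated, which is what makes it a reliable audit trail.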

The enterprise-wide adoption of AI, both generative and general, requires the right controls to be implemented to ensure only the right users have easy access to data. Coveo’s approach of combining generative AI with access control is a key step in driving real-world utility for AI, which is based on making specific processes easier or more efficient, while limiting the potential for misuse.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Generative AI Capabilities Coming to Pega Infinity in Q3 

IBM watsonx.data is Now Generally Available as Part of Major watsonx Announcement

IBM’s Opportunity for Generative AI in the Enterprise

Image Credit: Coveo

Author Information

Keith Kirkpatrick is VP & Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
