
Balancing Average Resolution Time With CSAT in the Generative AI Era


The News: Contact center, CRM, and other CX-focused software vendors are rolling out generative AI features, many of which are specifically designed to aid contact center workers by surfacing quick insights and recommendations, summarizing a customer’s previous interactions with other representatives, and handling the tedious post-call wrap-up tasks that not only reduce the number of interactions an agent can handle in a day, but also contribute to burnout and fatigue.

These generative AI use cases should have a positive impact on an often-used contact center performance metric known as average resolution time (ART), which captures the average amount of time it takes a customer support or success team to resolve a support request. The metric is calculated by dividing the total time spent resolving support requests by the number of tickets resolved within the same period. A lower ART indicates a responsive and effective customer service team, which should lead to happier and more satisfied customers. That said, agent training is a key catalyst that will allow an organization to fully reap the benefits of generative AI in the contact center.
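As a quick illustration of the ART calculation described above, the following sketch uses hypothetical per-ticket resolution times (the figures are illustrative, not sourced from any real contact center):

```python
# Hypothetical data: minutes spent resolving each ticket closed in the period.
resolution_minutes = [12.5, 45.0, 8.0, 30.0, 22.5]

# ART = total resolution time / number of tickets resolved in the same period.
art = sum(resolution_minutes) / len(resolution_minutes)

print(f"Average resolution time: {art:.1f} minutes")  # 23.6 minutes
```

A lower result here would indicate faster resolutions on average, though, as discussed below, speed alone does not guarantee customer satisfaction.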


Analyst Take: Generative AI functionality increasingly is being incorporated into contact center, CRM, and other CX-focused software, with many use cases directly intended to help contact center workers more quickly handle customer inquiries and support issues. As generative AI makes it easier and more efficient for agents to handle customer inquiries, several issues may become more prominent as it becomes commonplace in the contact center.

Will Gen AI Require Additional Training for Agents to Use It Properly?

A key concern about the use of generative AI technology is ensuring that agent training programs are developed that cover how to use human-in-the-loop generative AI tools most effectively, how to vet automatically generated responses, and, perhaps most importantly, when it is more appropriate to ignore a suggestion or modify the result to ensure a positive customer interaction.

Indeed, even generative AI models that have been trained to draw only on a specific corpus of data may still hallucinate and provide incorrect or incomplete answers. More importantly, comprehensive agent training is needed to ensure agents can identify when to use an AI-generated response as-is, versus modifying or augmenting it with their own voice and insights.

While some vendors, such as Zendesk, have been rolling out features that allow AI-generated responses to be rewritten or modified to incorporate a more informal or lifelike tone, the technology is not yet perfect and may not be able to match each individual agent’s style. For some customers, it will be readily apparent when an agent is personally interacting with them, versus what they perceive to be a “canned” or generated response. This can be off-putting, particularly if the customer is dealing with a contentious or sensitive issue, as a shift in tone could be viewed as the agent disengaging from a personalized response in favor of a more generic or programmed one, simply to hurry the interaction along.

Will Gen AI Make It Easier for Agents to “Pass Along” Answers Without Truly Considering the Situation?

Another potential pitfall of generative AI in the contact center is that agents may simply “pass along” AI-generated suggestions to complete interactions quickly, instead of taking the time to ensure that the responses are accurate, relevant, and aligned with the interaction’s tone and the customer’s emotional state.

This may be the result of two factors. First, simply passing along a pre-vetted response may be seen as the fastest way to handle a customer’s issue. But if the agent misunderstands the customer’s intent and sends an AI-generated response too quickly, it can appear that the agent is more focused on wrapping up the interaction than on truly listening and trying to understand the customer’s issue.

Second, agents know they are evaluated based on metrics such as average resolution time and customer satisfaction. Simply sending off an AI-generated response without considering whether the response is accurate, complete, and matches the tone and emotional state of the interaction may result in customer dissatisfaction.

Agent training on assessing the inquiry, the limitations of generative AI, and the relative importance of ART, versus overall customer satisfaction, will be key to unlocking the value of generative AI tools.

Best Practices for Improving Average Resolution Time and Ensuring Customer Satisfaction

Generative AI is and should be deployed in contact centers to assist agents. But to ensure that the organization and its customers can benefit from the technology, several best practices and guidelines should be followed:

  • Ensure that agents are trained on the proper vetting procedure for AI-generated responses, including assessing the tone of the message, the suitability of using an AI-generated response based on the customer’s emotional state, and the style of the response, to ensure it is the most appropriate way to respond to the customer.
  • Train agents on how to modify/rewrite/append AI-generated information so that it blends more clearly with the conversational style they use, and so that the response does not appear machine-generated.
  • Train agents on how to read interactions for displays of emotion, and provide them with best practices for determining when it is acceptable to send out a pre-written response versus taking the extra time to modify or rewrite an AI-generated response.
  • Train agents on how to properly probe customers so that their intent is clear, which allows automated AI-based response engines to provide more targeted and accurate responses to customer issues.
  • Train agents to understand that while metrics such as ART are important, customer satisfaction and success take precedence, and may require agents to take more time and use manual processes to ensure the customer is served properly, even at the cost of speed.

In all, the effective use of generative AI technology in the contact center depends on both managing the technical approaches to deploying generative AI and developing proper agent training that allows agents, the organization, and the customer to reap the technology’s myriad benefits.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

AI Contact Center Training Tools Provide Efficiency, But Need Guardrails

SurveyMonkey’s State of CX Report Reveals Customer Feedback Is a Priority for CX Teams in 2023

Salesforce Leverages AI, CRM, and Real-Time Data to Improve Financial Services CX

Author Information

Keith Kirkpatrick is VP & Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
