Balancing Average Resolution Time With CSAT in the Generative AI Era

The News: Contact center, CRM, and other CX-focused software vendors are rolling out generative AI features, many of which are specifically designed to aid contact center workers by surfacing quick insights and recommendations, summarizing a customer’s previous interactions with other representatives, and handling the tedious post-call wrap-up tasks that not only reduce the number of interactions an agent can handle in a day, but also contribute to burnout and fatigue.

These generative AI use cases should have a positive impact on a widely used contact center performance metric known as average resolution time (ART), which captures the average amount of time it takes a customer support or success team to resolve a support request. The metric is calculated by dividing the total time spent resolving support requests by the number of tickets resolved within the same period, with a lower ART indicating a responsive and effective customer service team, which should lead to happier and more satisfied customers. That said, agent training is a key catalyst that will allow an organization to fully reap the benefits of generative AI in the contact center.
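The ART calculation described above can be sketched in a few lines of Python; the ticket durations here are hypothetical sample data, purely for illustration:

```python
# Hypothetical resolution times (in minutes) for each ticket
# resolved during the reporting period.
resolution_minutes = [12.0, 45.5, 8.0, 30.0, 22.5]

def average_resolution_time(times):
    """ART = total time spent resolving tickets / number of tickets resolved."""
    if not times:
        raise ValueError("no resolved tickets in this period")
    return sum(times) / len(times)

art = average_resolution_time(resolution_minutes)
print(f"ART: {art:.1f} minutes")  # prints "ART: 23.6 minutes"
```

In practice the durations would come from the contact center platform’s ticketing data, and the period (daily, weekly, monthly) would match the team’s existing reporting cadence.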


Analyst Take: Generative AI functionality increasingly is being incorporated into contact center, CRM, and other CX-focused software, with many use cases directly intended to help contact center workers more quickly handle customer inquiries and support issues. As generative AI makes it easier and more efficient for agents to handle customer inquiries, several issues may become more prominent as it becomes commonplace in the contact center.

Will Gen AI Require Additional Training for Agents to Use It Properly?

A key concern about the use of generative AI technology is ensuring that agent training programs are developed covering how to use human-in-the-loop generative AI tools most effectively, how to vet automatically generated responses, and, perhaps most importantly, when it is more appropriate to ignore a suggestion or modify the result to ensure a positive customer interaction.

Indeed, even generative AI models that have been trained to draw only on a specific corpus of data may still hallucinate and provide incorrect or incomplete answers. More importantly, comprehensive agent training is needed to ensure agents can identify when to use an AI-generated response as-is, versus modifying or augmenting it with their own voice and insights.

While some vendors, such as Zendesk, have been rolling out features that allow AI-generated responses to be rewritten or modified to incorporate a more informal or lifelike tone, the technology is not yet perfect and may not match each individual agent’s style. For some customers, it will be readily apparent when an agent is interacting with them directly, versus what they perceive as a “canned” or generated response. This can be off-putting, particularly if the customer is dealing with a contentious or sensitive issue, as a shift in tone could be viewed as the agent disengaging from a personalized response in favor of a more generic or programmed one, simply to hurry the interaction along.

Will Gen AI Make It Easier for Agents to “Pass Along” Answers Without Truly Considering the Situation?

Another potential pitfall with the use of generative AI in the contact center is the temptation for agents to simply “pass along” AI-generated suggestions to complete interactions quickly, instead of taking the time to ensure that the responses are accurate, relevant, and aligned with the interaction’s tone and the customer’s emotional state.

This may be the result of two factors. First, agents may see simply passing along a pre-vetted response as the best way to handle a customer’s issue. But if the agent misunderstands the customer’s intent and sends an AI-generated response too quickly, it can appear that the agent is more focused on wrapping up the interaction than on truly listening and trying to understand the customer’s issue.

Second, agents know they are evaluated based on metrics such as average resolution time and customer satisfaction. Simply sending off an AI-generated response without considering whether the response is accurate, complete, and matches the tone and emotional state of the interaction may result in customer dissatisfaction.

Agent training on assessing the inquiry, the limitations of generative AI, and the relative importance of ART versus overall customer satisfaction will be key to unlocking the value of generative AI tools.

Best Practices for Improving Average Resolution Time and Ensuring Customer Satisfaction

Generative AI is, and should be, deployed in contact centers to assist agents. But to ensure that the organization and its customers benefit from the technology, several best practices and guidelines should be followed:

  • Ensure that agents are properly trained on the vetting procedure for AI-generated responses, including assessing the tone of the message, the suitability of an AI-generated response given the customer’s emotional state, and the style of the response, to ensure it is the most appropriate way to respond to the customer.
  • Train agents on how to modify, rewrite, or append AI-generated information so that it blends naturally with their own conversational style and does not appear machine-generated.
  • Train agents on how to read interactions for displays of emotion, and provide them with best practices for determining when it is acceptable to send out a pre-written response versus taking the extra time to modify or rewrite an AI-generated response.
  • Train agents on how to properly probe customers so that their intent is clear, which allows automated AI-based response engines to provide more targeted and accurate responses to customer issues.
  • Train agents to understand that while metrics such as ART are important, customer satisfaction and success rank higher, and may require agents to take more time and use manual processes to serve the customer properly, even at the cost of speed.
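The vetting workflow these practices describe can be sketched as a simple decision rule; this is purely an illustrative sketch, and the fields, labels, and threshold below are hypothetical, not part of any vendor’s product:

```python
from dataclasses import dataclass

@dataclass
class AIDraft:
    text: str
    confidence: float        # model's self-reported confidence, 0..1 (hypothetical)
    customer_sentiment: str  # e.g. "positive", "neutral", "negative" (hypothetical)

def vetting_action(draft: AIDraft, min_confidence: float = 0.8) -> str:
    """Return the recommended agent action for an AI-drafted reply."""
    # Upset customers on sensitive issues get a personally rewritten reply,
    # regardless of how confident the model is.
    if draft.customer_sentiment == "negative":
        return "rewrite"
    # Low-confidence drafts should be edited before sending.
    if draft.confidence < min_confidence:
        return "edit"
    # Otherwise the draft may be sent after a quick human review.
    return "review-and-send"
```

The point of a rule like this is not to automate the judgment call but to make the human-in-the-loop step explicit: every branch still ends with an agent reviewing, editing, or rewriting the draft.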

In all, the effective use of generative AI technology in the contact center combines sound technical approaches to deploying generative AI with the development and deployment of proper agent training, allowing agents, the organization, and the customer to reap the technology’s many benefits.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.


Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

