Despite a Rise in AI in CX, People Still Prefer to Interact with Humans

The News:
Although consumers express confidence in AI as a tool for transforming their customer experiences, research conducted by Savanta and unveiled at PegaWorld iNspire found that most people still prefer to interact with a human being rather than with AI-based solutions.
The research, which surveyed 5,000 consumers worldwide on their views around AI, its continued evolution, and the ways in which they interact with the technology, found that two thirds (67%) of respondents agree AI has the potential to improve the customer service of businesses they interact with, while more than half (54%) say companies using AI will be more likely to offer better benefits to customers than businesses that do not.
Meanwhile, nearly half of respondents (47%) indicated they are comfortable interacting with well-tested AI services from businesses, and nearly two thirds (64%) said they expect most major departments within organizations to be run using AI and automation within the next 10 years.
However, the survey also uncovered behavioral insights that are particularly relevant to CX software vendors and end users. Despite the use of AI in customer engagement, 71% of respondents said they still prefer to interact with a human being rather than with AI. Nearly 7 in 10 people (68%) also said they would trust a human bank employee more than an AI solution to make an objective, unbiased decision about whether to grant a bank loan. Further, nearly three-quarters (74%) of respondents said they would trust a medical diagnosis from a human doctor over an AI system that had a better track record of being correct but could not demonstrate or explain how it arrived at its decision.
Analyst Take:
Research conducted by Savanta and unveiled by Pegasystems at PegaWorld iNspire found that while people are generally accepting of the use of AI within a customer experience context, there are still significant gaps in trust around whether the information or advice served up by the technology can be relied upon. People still prefer to interact with a human being instead of AI-based solutions, even when AI systems have a better track record of being correct. Indeed, the “black box” nature of some AI systems that cannot explain or demonstrate how a decision was made is a major reason for distrust, and as such, customers still prefer to interact with other humans.
Education and Experience
As with any relatively new technology, customers need to be educated about the type of AI being used, what its capabilities and limitations may be, and how it will be deployed. Human nature dictates a healthy skepticism and fear of technology that is deployed without any underlying explanation. Particularly when the technology is being used to execute decisions involving finances, medical or health choices, or other major activities or situations, it is incumbent upon the organization utilizing the AI to be as clear and transparent as possible.
Furthermore, from a CX perspective, customers must be afforded the time to gain more experience with the technology; organizations cannot expect customers who prefer to interact with humans to automatically embrace AI-only interactions simply because doing so is more efficient and financially advantageous for the company. Organizations need to demonstrate customer-centric reasons why using an AI system will provide better outcomes for customers, and then provide the right tools to ensure those outcomes can be delivered.
Demonstrate Best Practices and Deploy Guardrails
Organizations must be clear about the type of AI that is being used and which data is being acted upon, and must clearly describe the guardrails in place to ensure AI is not being applied carelessly. Full disclosure about the types of AI being used should be included in the terms and conditions of any customer-facing system, and customers should be notified of their right to opt in to any systems that use customer data. Perhaps most importantly, disclosures on the guardrails that have been put in place, such as the presence of a human-in-the-loop to address anomalies in AI results or to review output before it is sent to a customer, should be posted and reviewed frequently, as should the underlying guardrail processes themselves.
Identify Key Use Cases for Human Interactions
Myths or exaggerations about AI, such as those captured in the study (e.g., 86% of respondents said they feel AI is capable of evolving itself to behave amorally, with more than a quarter (27%) saying they think this has already happened, and 30% expressing concerns about AI enslaving humanity), may seem ridiculous to those who work with the technology frequently, but they reflect real concerns expressed by customers.
As most consumers still prefer to interact with humans in high-value or high-stakes situations, organizations need to take a steady and measured approach to implementing AI in these interactions to help dispel these myths and demonstrate that they are taking a prudent and safe approach to further incorporating AI across a wide range of use cases and situations.
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.