The hype surrounding generative AI and large language models (LLMs) has been hard to avoid, with many CX platform vendors and conversational technology providers quickly issuing press releases highlighting their products’ use of these new tools. However, few vendors have taken as aggressive an approach at the product level as LivePerson, which announced on May 2 the availability of its Conversational Cloud platform with new generative AI and LLM-driven capabilities, as well as a roadmap of future capabilities.
Representatives from LivePerson, a provider of conversational experiences to B2B companies, said at the launch event that the deployment of generative AI and LLM tools will focus on an extensive range of CX- and EX-focused use cases: empowering agents with guided workflows; providing automated summaries or delivering fully automated voice conversations; streamlining employee engagement by automating HR and other business workflows; providing customer insights and business intelligence from conversational data; and allowing individuals to create their own personal AI bots via LivePerson’s Bella AI platform.
“One of our most significant new features will be conversational insights,” explains Alex Kroman, EVP Product and Technology at LivePerson. “This is an improved conversational intelligence experience that will enable you to create more effective automations and better understand your customers. We are also working on enhanced integrations, which will enable conversations to trigger thousands of business actions by integrating with commonly used business platforms.”
LivePerson is fully leaning into these new AI tools, reflecting the company’s confidence in the guardrails incorporated into its platform to ensure their safe and fair use.
“The primary determinant of what your AI wants to and is able to talk about is the data you expose to it,” says Joe Bradley, Chief Data Scientist at LivePerson, noting that LLM responses are restricted to a curated collection of knowledge content that is managed within LivePerson’s controlled environment.
Bradley adds that customers will be able to choose between more permissive and stricter governors on the generative AI, in the form of prompts and other controls tested against hundreds of use cases. Brands can also use these tools purely behind the scenes, assisting a human agent so that the agent can work faster and more efficiently.
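As an illustration of the general pattern Bradley describes, the sketch below shows how a response can be grounded in a curated knowledge collection and constrained by an adjustable strictness setting. This is a minimal, hypothetical Python example; the function names, the strictness parameter, and the overall structure are assumptions made for illustration and do not represent LivePerson’s actual product or APIs.

```python
# Hypothetical sketch of knowledge-grounded generation with an adjustable
# "governor": the model only sees curated content, and a strictness setting
# decides how closely answers must stick to that content.
# None of these names reflect LivePerson's actual product APIs.

from dataclasses import dataclass


@dataclass
class KnowledgeArticle:
    title: str
    body: str


def retrieve(question: str, knowledge_base: list[KnowledgeArticle], top_k: int = 3):
    """Naive keyword retrieval over the curated knowledge base."""
    scored = [
        (sum(word in art.body.lower() for word in question.lower().split()), art)
        for art in knowledge_base
    ]
    return [art for score, art in sorted(scored, key=lambda s: -s[0])[:top_k] if score > 0]


def build_prompt(question: str, articles: list[KnowledgeArticle], strictness: str = "strict") -> str:
    """Assemble an LLM prompt that exposes only curated content to the model."""
    context = "\n\n".join(f"{a.title}:\n{a.body}" for a in articles)
    guardrail = (
        "Answer ONLY with facts found in the articles below. "
        "If the answer is not present, say you don't know."
        if strictness == "strict"
        else "Prefer the articles below, but you may paraphrase and generalize."
    )
    return f"{guardrail}\n\n{context}\n\nCustomer question: {question}\nAnswer:"
```

In this sketch, a brand choosing the stricter governor keeps the model confined to the curated content and forces it to decline rather than improvise, while a more permissive setting trades some of that control for broader coverage.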
LivePerson also noted that the platform allows customers to test the technology extensively before exposing it to their own customers, and will include tools to measure and handle conversation and answer quality regressions, ensuring that new versions of a bot are safer than older ones. The platform also includes monitoring and interruption capabilities, which can be configured to ensure that the platform does not go off the rails.
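A simple version of that regression-testing idea, replaying a fixed evaluation set against two bot versions and blocking the release if quality drops, might look like the following Python sketch. The scorer, the bots, and the tolerance threshold are stand-ins assumed for illustration, not LivePerson tooling.

```python
# Hypothetical answer-quality regression check: replay a fixed evaluation set
# against the old and new bot versions and fail the release if quality drops.
# The bot callables and the scoring function are placeholders, not LivePerson tools.

from typing import Callable


def regression_check(
    eval_questions: list[str],
    old_bot: Callable[[str], str],
    new_bot: Callable[[str], str],
    score: Callable[[str, str], float],  # score(question, answer) -> 0.0 .. 1.0
    tolerance: float = 0.0,
) -> bool:
    """Return True only if the new bot scores at least as well as the old one."""
    old_avg = sum(score(q, old_bot(q)) for q in eval_questions) / len(eval_questions)
    new_avg = sum(score(q, new_bot(q)) for q in eval_questions) / len(eval_questions)
    print(f"old={old_avg:.3f} new={new_avg:.3f}")
    return new_avg + tolerance >= old_avg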
“You’ll also have access to real-time sensors that can identify new types of problems inherent with generative AI like hallucination, prompt abuse, and critically, you’ll have the ability to train and refine your own sensors like these with your data and your human agent feedback,” Bradley says. “You’ll even have a separate generative AI bot that can test your AI with thousands of conversations using your conversational data to simulate your own customer’s behavior with your AI.”
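The “sensors” Bradley mentions amount to lightweight checks run on each model response in real time, with flagged conversations interrupted or routed to a human. The sketch below is a hypothetical illustration of that pattern; the heuristic detectors are placeholders, whereas the trained, feedback-refined sensors he describes would be far more sophisticated.

```python
# Hypothetical real-time response "sensors": cheap checks run on every generated
# answer before it reaches the customer. A production deployment would use trained
# classifiers refined with conversational data and human-agent feedback.

def looks_ungrounded(answer: str, source_context: str) -> bool:
    """Crude hallucination heuristic: flag answers with little overlap with the sources."""
    answer_terms = set(answer.lower().split())
    context_terms = set(source_context.lower().split())
    return len(answer_terms & context_terms) / max(len(answer_terms), 1) < 0.2


def looks_like_prompt_abuse(user_message: str) -> bool:
    """Crude prompt-injection heuristic based on known attack phrasings."""
    red_flags = ("ignore previous instructions", "reveal your system prompt")
    return any(flag in user_message.lower() for flag in red_flags)


def guard(user_message: str, answer: str, source_context: str) -> str:
    """Interrupt the conversation if any sensor fires; otherwise pass the answer through."""
    if looks_like_prompt_abuse(user_message) or looks_ungrounded(answer, source_context):
        return "Let me connect you with a human agent who can help with that."
    return answer
```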
Perhaps most importantly for customers, LivePerson clearly laid out its roadmap for incorporating generative AI and LLMs within its platform. This approach stands in contrast to that of many other CX platform and technology providers, which have been far less transparent about their plans for incorporating generative AI and LLMs into their products.
With the initial platform launch, generative AI capabilities powered by LLMs will be available to help agents deliver a better omnichannel experience (providing recommended answers, content summarization, and the creation of no-code virtual assistants that can interact via voice or digital channels). Also available at launch are LLM-powered voice bots that can handle phone conversations and direct customers to the channel best suited to help them based on intent, sentiment, and specific needs, along with VoiceBase analytics for training and improvement. The self-service Bella AI service is also available now, allowing the automated creation of conversational experiences without the need for complex setup processes or programming expertise.
Later in the summer, LivePerson plans to launch conversational insights, enhanced employee engagement templates for IT and HR use cases, and 1,500+ integrations connecting LLMs with automated content curation, enabling the resolution of any action across Voice or Messaging AI. Additionally, LLM functionality across the three initial launch categories (Generative AI, Voice AI, and Bella AI) will be expanded, including enhanced safety features, self-service options, and insights capabilities.
Author Information
Keith has over 25 years of experience in research, marketing, and consulting.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.