
LivePerson Launches Generative AI Products and Tools

Deployment of New AI Tools Includes a Focus on Customer Safety and Control

The hype surrounding the use of generative AI and large language models (LLMs) has been hard to avoid, with many CX platform vendors and conversational technology providers quickly issuing press releases highlighting their products’ use of these new tools. However, few vendors have been as aggressive at the product level as LivePerson, which announced on May 2 the availability of its Conversational Cloud platform with new generative AI and LLM-driven capabilities, as well as a roadmap of future capabilities.

Representatives from LivePerson, a provider of conversational experiences to B2B companies, said at the launch event that the deployment of generative AI and LLM tools will focus on an extensive range of CX- and EX-focused use cases: empowering agents with guided workflows; providing automated summaries or delivering fully automated voice conversations; streamlining employee engagement by automating HR and other business workflows; providing customer insights and business intelligence from conversational data; and allowing individuals to create their own personal AI bots via LivePerson’s Bella AI platform.

“One of our most significant new features will be conversational insights,” explains Alex Kroman, EVP Product and Technology at LivePerson. “This is an improved conversational intelligence experience that will enable you to create more effective automations and better understand your customers. We are also working on enhanced integrations, which will enable conversations to trigger thousands of business actions by integrating with commonly used business platforms.”

LivePerson is fully leaning into these new AI tools, reflecting the company’s confidence in the guardrails incorporated into its platform to ensure their safe and fair use.

“The primary determinant of what your AI wants to and is able to talk about is the data you expose to it,” says Joe Bradley, Chief Data Scientist at LivePerson, noting that LLM responses are restricted to a curated collection of knowledge content that is managed within LivePerson’s controlled environment.
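As a rough illustration of the approach Bradley describes, an LLM's answers can be grounded by building its prompt exclusively from a curated, brand-managed knowledge base. The sketch below uses hypothetical names and is not LivePerson's actual API; it shows only the general pattern of selecting approved snippets and instructing the model to answer strictly from them.

```python
# Illustrative sketch (not LivePerson's API): restrict the model's context
# to curated knowledge content, so answers can only draw on approved text.

CURATED_KNOWLEDGE = {  # hypothetical brand-managed knowledge base
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def build_grounded_prompt(question: str) -> str:
    """Select only curated snippets relevant to the question and
    instruct the model to answer strictly from them."""
    relevant = [
        text for topic, text in CURATED_KNOWLEDGE.items()
        if topic in question.lower()
    ]
    context = "\n".join(relevant) if relevant else "No approved content found."
    return (
        "Answer ONLY using the approved content below. "
        "If the answer is not covered, say you don't know.\n\n"
        f"Approved content:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("What is your returns policy?")
```

Because the prompt is assembled only from the curated store, the data exposed to the model is exactly the data the brand has chosen to manage, which is the control Bradley is pointing to.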

Bradley adds that customers will be able to choose between more permissive and stricter governors of the generative AI, in the form of prompts and other controls tested on hundreds of use cases. Brands can also use these tools purely behind the scenes in service of a human agent, letting that agent work faster and more efficiently.

LivePerson also noted that the platform allows customers to test the technology extensively before exposing it to their own customers, and will include tools to measure and manage conversation- and answer-quality regressions, to ensure that new versions of a bot are safer than older versions. The platform also includes monitoring and interruption capabilities, which can be set up to ensure that the platform does not go off the rails.
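The regression-gating idea can be sketched in a few lines: score each bot version against a fixed evaluation set, and only promote a new version if it does at least as well as the current one. This is a hypothetical illustration of the concept, not LivePerson's implementation.

```python
# Hypothetical sketch of answer-quality regression gating: a new bot
# version is promoted only if its pass rate on a fixed evaluation set
# is at least as high as the current version's.

def pass_rate(answers: dict, expected: dict) -> float:
    """Fraction of evaluation questions answered with the expected text."""
    hits = sum(1 for q, a in expected.items() if answers.get(q) == a)
    return hits / len(expected)

def safe_to_promote(old: dict, new: dict, expected: dict) -> bool:
    """Gate the release: the new version must not regress."""
    return pass_rate(new, expected) >= pass_rate(old, expected)

expected = {"hours?": "9-5", "returns?": "30 days"}   # gold answers
old_bot = {"hours?": "9-5", "returns?": "30 days"}    # current version
new_bot = {"hours?": "9-5", "returns?": "14 days"}    # regressed answer
```

Real systems would score answers with fuzzier matching than exact equality, but the gating logic is the same.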

“You’ll also have access to real-time sensors that can identify new types of problems inherent with generative AI like hallucination, prompt abuse, and critically, you’ll have the ability to train and refine your own sensors like these with your data and your human agent feedback,” Bradley says. “You’ll even have a separate generative AI bot that can test your AI with thousands of conversations using your conversational data to simulate your own customer’s behavior with your AI.”
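One simple form of the hallucination "sensor" Bradley mentions can be illustrated with a grounding check: flag a response when its content overlaps too little with the approved knowledge it should be drawn from. Production sensors are trained classifiers refined with conversation data and agent feedback, as Bradley describes; this token-overlap heuristic is purely a sketch.

```python
# Purely illustrative hallucination check: flag answers whose tokens
# overlap too little with the approved knowledge content.

def overlap_score(answer: str, approved: str) -> float:
    """Fraction of the answer's tokens that also appear in approved text."""
    a = set(answer.lower().split())
    k = set(approved.lower().split())
    return len(a & k) / len(a) if a else 0.0

def flag_hallucination(answer: str, approved: str,
                       threshold: float = 0.5) -> bool:
    """Flag the answer if its grounding score falls below the threshold."""
    return overlap_score(answer, approved) < threshold

approved = "items may be returned within 30 days with a receipt"
grounded = "Items may be returned within 30 days."
ungrounded = "We offer lifetime free returns on all purchases worldwide."
```

A trained sensor would replace the overlap heuristic with a model, but the shape is the same: score each response in real time and interrupt or escalate when the score crosses a threshold.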

Perhaps most importantly for customers, LivePerson clearly laid out its roadmap for the incorporation of generative AI and LLMs within its platform. This approach stands in contrast to many other CX platforms and technology providers, which have been far less transparent in their plans for incorporating generative AI and LLMs into their products.

With the initial platform launch, generative AI capabilities powered by LLMs will be available to help agents deliver a better omnichannel experience (providing recommended answers, content summarization, and the creation of no-code virtual assistants that can interact via voice or digital channels). Also available at launch are LLM-powered voice bots that can handle phone conversations and direct customers to the channel best suited to help them based on intent, sentiment, and specific needs, along with VoiceBase analytics for training and improvement. The self-service Bella AI service is also available now, allowing the automated creation of conversational experiences without complex setup processes or programming expertise.

Later in the summer, LivePerson plans to launch conversational insights, enhanced employee engagement templates for IT and HR use cases, and 1,500+ integrations connecting LLMs with automated content curation, enabling the resolution of any action across Voice or Messaging AI. Additionally, LLM functionality across the three initial launch categories (Generative AI, Voice AI, and Bella AI) will be expanded, including enhanced safety features, self-service options, and insights capabilities.

Author Information

Keith Kirkpatrick is VP & Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
