Webex Contact Center Incorporates Generative AI Conversation Summaries

Low-Latency Summarization Capabilities to Improve Efficiency and Collaboration

The News:

Cisco announced at its Cisco Live customer event on June 7 that it is incorporating generative AI summarization technology into Webex Contact Center, with the goal of improving worker efficiency and collaboration. The announcement was made in parallel with the preview of generative AI capabilities to simplify policy management and threat response within Cisco Security Cloud.

Cisco announced several summarization use cases in which generative AI can be deployed to help drive efficiency and collaboration between customers and employees within Webex.

  • Catch Me Up will allow users to quickly catch up on missed interactions, including meetings, calls, chats, and more.
  • Intelligent meeting summaries will allow users to opt in to automatically generate the most important elements of a Webex meeting, extract the key points, and capture action items with owners.
  • Summaries in Vidcast, the company’s video messaging tool, will produce highlights and chapters so viewers can quickly navigate to the most important parts of a video.

Meanwhile, generative AI-driven conversation summaries within Webex Contact Center will give agents a fast, automated way to consume long-form text from digital chats with customers, as well as facilitate post-call wrap-up and resolution tasks. These low-latency chat summaries are designed to give the agent a clear summary of the issues and resolutions already explored via self-service and previous interactions, as well as to provide both the agent and the customer with a summary of the call once it ends.

The Catch Me Up summaries, Intelligent meeting summaries, summaries in Vidcast, and conversation summaries in Webex Contact Center are not yet generally available, but will be by the end of 2023.

You can read the full press release here.

Analyst Take:

Cisco announced on June 7 at its Cisco Live customer event several use cases for its generative AI-powered summarization capabilities, which are designed to improve agent efficiency and collaboration. The key features include Catch Me Up, which uses summaries to let users quickly catch up on missed interactions, including meetings, calls, and chats; intelligent meeting summaries that automatically generate the most important elements of a Webex meeting, extract the key points, and capture action items with owners; and summaries in Vidcast, the company’s video messaging tool, which will produce highlights and chapters so viewers can quickly navigate to the most important parts of a video.

Large Language Model Selection

According to Cisco, Webex is using a range of state-of-the-art open-source and proprietary large language models (LLMs) across currently released and upcoming Webex products. The company explained in an email interview that it conducts prompt engineering for each use case and each model it integrates and deploys.
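
As a rough illustration of what per-use-case prompt engineering can look like, the sketch below pairs each summarization use case with its own prompt template. The use-case names, template text, and structure are hypothetical assumptions for illustration only, not Cisco's actual prompts or implementation.

    # Hypothetical prompt templates, one per summarization use case.
    # None of these strings or names reflect Cisco's actual prompts.
    SUMMARY_PROMPTS = {
        "contact_center_chat": (
            "Summarize this customer chat for the next agent. List the "
            "customer's issue, the resolutions already attempted, and any "
            "open action items.\n\nTranscript:\n{transcript}"
        ),
        "meeting": (
            "Summarize this meeting. Extract the key points and capture "
            "action items with owners.\n\nTranscript:\n{transcript}"
        ),
        "vidcast": (
            "Produce chapter titles and a short highlight for each section "
            "of this video transcript.\n\nTranscript:\n{transcript}"
        ),
    }

    def build_prompt(use_case: str, transcript: str) -> str:
        """Fill the template for the chosen use case; the result would then be
        sent to whichever LLM backs that particular feature."""
        return SUMMARY_PROMPTS[use_case].format(transcript=transcript)

    print(build_prompt("contact_center_chat", "Customer: My order never arrived..."))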

A key question remains around user expectations for generative AI functionality within CX platforms. Smaller generative AI models are not terribly expensive to run, but larger, more accurate, and more powerful models can quickly escalate the cost of deploying generative AI within a platform used to manage hundreds or thousands of interactions per day, every day. As such, vendors will be faced with the challenge of assessing which generative AI models to use for which use cases, and how to appropriately cover the costs.

For its part, Cisco is still working out pricing for its generative AI summarization; the company indicated that it is still developing the specific features and will release further details later in the year. This approach mirrors that of many other vendors that have recently announced generative AI features, including Salesforce, Twilio, and others.

Training and Grounding Generative AI Models with Customer Data

Cisco says that it does not use customer data or agent interactions to train general-purpose AI models. Essentially, this means that customer data is not used to train larger generative AI models, such as those from OpenAI and others, because that data would then be available to any other entity using those models.

However, Cisco says it is experimenting with fine-tuning models for each individual customer, using that customer’s own data, essentially creating a mini model for each customer that is not fed back into the underlying LLM. This also ensures that no customer data is shared across models. The company also says it is working with a select group of customers who explicitly “opt in” to the use of their data for broader feature development.
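
To make that isolation pattern concrete, here is a minimal sketch of how per-customer fine-tuning data and model artifacts could be kept segregated so that one tenant’s data never reaches another tenant’s model. The class, field names, and artifact paths are hypothetical and are not drawn from Cisco’s implementation.

    # Minimal, hypothetical sketch of per-customer model isolation: each
    # customer's data feeds only that customer's own fine-tuned artifact,
    # and nothing flows back into the shared base LLM.
    from dataclasses import dataclass, field

    @dataclass
    class CustomerModelStore:
        datasets: dict = field(default_factory=dict)   # customer_id -> training examples
        artifacts: dict = field(default_factory=dict)  # customer_id -> fine-tuned model path

        def add_examples(self, customer_id: str, examples: list) -> None:
            # Examples land only in the owning customer's bucket.
            self.datasets.setdefault(customer_id, []).extend(examples)

        def fine_tune(self, customer_id: str) -> str:
            # A real pipeline would train a per-customer adapter here, starting
            # from the shared base model but never mixing tenants' data.
            examples = self.datasets.get(customer_id, [])
            path = f"models/{customer_id}/summarizer-adapter-v1"
            self.artifacts[customer_id] = path
            print(f"Trained on {len(examples)} examples for {customer_id} -> {path}")
            return path

    store = CustomerModelStore()
    store.add_examples("acme", [("chat transcript", "agent-approved summary")])
    store.fine_tune("acme")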

Training these company-specific, internal models using opt-in customer data will likely help the models become more effective and accurate. My sense is that customers—particularly large enterprises with a range of use cases—will largely be willing to share this data with platforms such as Cisco, due to the desire to “push the envelope” in terms of what can be accomplished with generative AI, while understanding that there still need to be guardrails around the technology.

Low-Latency Summary Generation Permits Seamless Handoffs During Interactions

One of the other interesting features in Webex Contact Center is conversation summaries. Cisco says its approach to summarization is designed to improve efficiency and collaboration during engagements. The conversation summaries are created using generative AI models that quickly summarize interactions with “sufficiently low latency,” ensuring that neither the customer nor another agent needs to wait to see a summary of the previous interactions. This facilitates smooth handoffs between self-service, previous human agents, and dropped and reconnected calls.
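
A simplified way to picture this is to run the summarization in parallel with the handoff itself, so the receiving agent sees the summary without added wait time. The sketch below uses placeholder summarize() and transfer_call() functions with simulated latencies; it illustrates the general pattern under those assumptions and is not a description of Cisco’s actual pipeline.

    # Hypothetical sketch: generate the handoff summary concurrently with the
    # transfer so the next agent is not left waiting on the model.
    import asyncio

    async def summarize(transcript: str) -> str:
        # Stand-in for a low-latency LLM summarization call.
        await asyncio.sleep(0.2)  # simulated model latency
        return "Issue: billing error. Tried: self-service reset. Status: unresolved."

    async def transfer_call(target_agent: str) -> None:
        # Stand-in for routing the interaction to the next agent.
        await asyncio.sleep(0.5)

    async def handoff(transcript: str, target_agent: str) -> str:
        summary_task = asyncio.create_task(summarize(transcript))
        await transfer_call(target_agent)  # transfer proceeds in parallel
        return await summary_task          # summary is typically ready by now

    print(asyncio.run(handoff("Customer: ... Agent: ...", "agent-42")))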

This low-latency approach is a requirement for contact centers. In the past, handoffs between agents were accompanied by a delay, as one agent relayed the details of the interaction to another. This created additional wait time for the customer, and additional friction in the agent-to-agent process, particularly if the issue was significantly complex. The ability to minimize the elapsed time between handoffs, while improving the accuracy and completeness of interaction notes, will be a key factor in ensuring a good contact center experience.

Author Information

Keith Kirkpatrick is Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
