5 Questions with Ellen Loeshelle, Qualtrics Director of Product Management, Intelligence Platform

Using AI For Efficiency and Scale, but with a Focus on Usability and Building Trust

I recently spoke with Ellen Loeshelle, Director of Product Management, Intelligence Platform, at Qualtrics to hear more about how Qualtrics has already been using AI, its plans for the future, and the need to stay focused on usability.

Can you provide some introductory background on your professional experiences that have led you to Qualtrics and that shape your approach to your role? 

Ellen Loeshelle, Director of Product Management, Intelligence Platform, Qualtrics

My background, both from an academic and personal interest standpoint, is all around languages and technology. I came to Qualtrics via the Clarabridge acquisition, where I started my career almost 11 years ago. I did two years in services and delivery sort of roles, and then switched over to product, and I’ve been in product management for about the past eight years. 

At Clarabridge I spent a good chunk of my time specifically managing the natural language processing (NLP) product, which included managing our text analytics stack, and then I had the opportunity to directly oversee all the Clarabridge products. Now I manage the intelligence platform at Qualtrics, meaning all the former Clarabridge assets as well as the machine learning (ML) and R&D assets.

I think about our offerings as being a really broad toolbox that includes a lot of different intelligent functionalities. Some of them are using more exciting algorithms than others, but everything has a focus on being purpose-built for Experience Management (XM) rather than hitting some academic standard, for example. While it is important to stay relevant and use all the latest, cool technologies, Qualtrics likes to stay hyper-focused on the problems we’re trying to solve. Pragmatism is important.

How has the path of AI evolved, and what has been Qualtrics’ focus?

The way I like to think about AI historically is that at first, it was very buzzy, and people were using AI for SEO hits for many years. I personally struggled with this because it didn’t feel like it meant much. So, I had a personal reckoning of knowing that AI is really the set of capabilities that we use to try to mimic the way the brain works. The brain works in a lot of different ways, right? In some ways it works by using rules, but also based on probabilities and statistics and our learned experiences. I thought the real benefit would come when you can blend all these things together to traverse our day-to-day life.

I would say that the momentum has shifted over the last five years towards more machine learning-based components, because they’re more accessible, less expensive, and there are more people trained on how to use them and build them. But in the spirit of building technology to help humans do human things at scale, I would say that’s been many years in the making. Large language models (LLMs) have been around and useful even before all this generative AI hype.

When I think about the investments that Qualtrics and all of its acquired companies, including Clarabridge, Usermind, and Autumn, have made, it has always been in that spirit of making the technology useful for customers rather than having it be the new bright, shiny object.

How is Qualtrics currently using LLMs, and what are plans for the future?

There are a variety of ways that we have been using LLMs and for the most part they’ve been under the surface, and customers and users are not aware of what is happening behind the scenes. 

LLMs have been very important for voice transcription, for example, taking audio recordings and transcribing them into text. The modern way of doing that is based not just on phonetics, but on language structures and language patterns. So LLMs have been super critical there. And we’re seeing that market become commoditized.

We also use them for deep linguistic processing of sentences that might come from customer feedback or employee feedback. It could be solicited surveys, reviews, social media, calls, and chats – really, anywhere that customers are talking to you, directly or indirectly. I personally am such a huge language nut, and being able to use LLMs to break down sentences and understand structures is a really interesting functionality.

And all those things help downstream as we start looking at topics, sentiment, emotion, effort, and intensity – all the dimensions, or enrichments, that Qualtrics has been known for many years for offering out of the box.

We have also used LLMs historically to help cross the multilingual divide – for example, when looking at sentiment. We can build a sentiment model in one language, but the cost to build from scratch in the other 10, 15, or 30 languages that you might support as a technology becomes really, really expensive. You can’t start from scratch every time, and it was becoming cost-prohibitive for us to extend all our enrichments into all of the other languages that we wanted to target. So LLMs and similar kinds of technologies have helped us scale globally, faster.

We’re now exploring the question of, what are more customer-facing utilities that we can introduce into our products? We’re looking at summarization capabilities, interactive analysis capabilities, semantic search, and content generation. These are areas that are really good fits for this type of technology right now.

What are customers asking for? 

This question is so interesting, because there’s a big gap that I see now and have seen for a while. There are the leadership folks, or the people who sign the checks; then the internal risk management group; and then what users actually need. A lot of times, these groups are not in alignment. A lot of people are getting plans in place quickly and racing to the finish line because they don’t want to be left behind. But when you get down to what the actual user needs or wants, there is a pretty big gap between the most sophisticated chatbot features and the tools needed to do their job, or the tools that they trust to do their jobs.

The trust factor is a big consideration. As an example, if you are an individual analyst and you need to present content up to the C-level, you need to be able to defend exactly what is being produced. Having a black box solution and saying – well, this is what it is, just believe me – can be scary for a lot of these folks.

The balance that I’m constantly seeking and like to encourage my team on is how do we give the benefits of the efficiency and the scale that come with these technologies, but in a way that doesn’t betray our users’ trust? How do we keep humans in the loop? I’m very, very passionate that at this stage of technology development and for the foreseeable future, humans need to be a part of this. We can make their jobs a lot easier and help them participate in higher order tasks rather than the rote, tedious stuff – but we aren’t in a position to eliminate them.

In more highly regulated industries such as financial services, insurance, and health care, the tolerance for ML, not just generative AI, is pretty low. We often go through model risk management reviews for these organizations for every single component that they want to use that has some flavor of ML or AI. Sometimes we have to approach it in more of a white box way instead of a black box way, so that everyone is more comfortable. Transparency helps users build trust in the technology. I think over time, the tolerance for ambiguity will increase.

What should potential experience management customers be aware of, and wary of, when it comes to using LLMs?

Two things come to mind. The market is moving so fast that we don’t know who the players will be in six months. Attachment to a single vendor, or to a single type of capability, is risky. Obviously OpenAI had the first-mover advantage here, but there have been a lot of fast followers from the big players, and there are a bunch of smaller players doing good work right now that will pop up soon. People should keep an open mind regarding who they want to work with and how attached they are to any specific kind of technology when building a system.

And then the second is about buy vs. build. Internal data science teams are becoming more ubiquitous; we are seeing them in every organization. These people are super talented and motivated to use their skills to build bespoke systems for their organizations – AI expert systems, ML, whatever it is. This is great, but whenever you build something, you must maintain it, and the cost of doing that is often overlooked. You don’t want to task people with building something that you’re not going to be able to sustain and support in six or 12 months, that falls out of date, or whose funding starts to dry up.

I like to think about internal data science teams as complementary to any sort of technology you buy. You should find a technology partner that will be a really good fit to work with your data science team, not in competition with it. They should elevate each other. Technology should help remove a lot of the rote, tedious stuff, and provide that IT and security infrastructure. If you need dashboards, provide dashboards. If you need alerts, provide alerts. But ultimately, your data science team should be able to use that and build on top of it, so they can actually elevate their work instead of recreating things that are already commonplace in the market.

Excerpt: Sherril Hanson, Senior Analyst at The Futurum Group, interviews Ellen Loeshelle, Director of Product Management, Intelligence Platform at Qualtrics, about the use of AI for efficiency and scale, with a focus on usability and building trust.

Author Information

As a detail-oriented researcher, Sherril is expert at discovering, gathering, and compiling industry and market data to create clear, actionable market and competitive intelligence. With deep experience in market analysis and segmentation, she is a consummate collaborator with strong communication skills, adept at supporting and forming relationships with cross-functional teams at all levels of organizations.

She brings more than 20 years of experience in technology research and marketing; prior to her current role, she was a Research Analyst at Omdia, authoring market and ecosystem reports on Artificial Intelligence, Robotics, and User Interface technologies. Sherril was previously Manager of Market Research at Intrado Life and Safety, providing competitive analysis and intelligence, business development support, and analyst relations.

Sherril holds a Master of Business Administration in Marketing from University of Colorado, Boulder and a Bachelor of Arts in Psychology from Rutgers University.
