Deconstructing Generative AI – Six Five On the Road, with Pegasystems

In this episode of Six Five On the Road, host Keith Kirkpatrick welcomes Peter van der Putten, Lead Scientist & Director of Pega’s AI Lab at Pegasystems, for a conversation about generative AI in the enterprise.

Their discussion covers:

  • The current state of generative AI
  • Common use cases for the technology, plus pitfalls and hurdles to avoid when deploying generative AI
  • A preview of the topics he and others will be discussing at PegaWorld iNspire 2024

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.


Keith Kirkpatrick: Hello, and welcome to another episode of Six Five On the Road. I’m your host, Keith Kirkpatrick, Research Director, Enterprise Applications with The Futurum Group. Today we’re going to be addressing a really interesting topic. Generative AI. You really can’t read an article or talk with a vendor, or even talk with an end customer without discussing generative AI. So to help discuss and dive deep into these issues around generative AI, I’m pleased to welcome Peter van der Putten, Director of the AI Lab at Pegasystems. Welcome, Peter.

Peter van der Putten: Thanks. Great to be here.

Keith Kirkpatrick: Well, great, Peter. Maybe we can just start off, and if you could just give a little bit more background about yourself and why you’re here talking to us today, that would be great.

Peter van der Putten: Yeah, sure. Yeah, so I’m the Director of Pega’s AI Lab, so I look at how our clients can innovate their business with AI, put their business on steroids, or optimize their corporate goals and strategies. But likewise, I need to look at how we can apply it to ourselves as well. We’re an enterprise platform provider, so how can we innovate our own platform as well using AI? And next to that on one day a week, also an assistant professor at Leiden University trying to keep up with the cool kids in the field of AI, which is hard enough.

Keith Kirkpatrick: Yeah, absolutely. I’m sure it is. The state of innovation is really kind of off the charts these days.

Peter van der Putten: It is.

Keith Kirkpatrick: So why don’t we start with just getting your assessment on where we are in the market with generative AI. We hear a lot of hype about the technology, but I’m just curious, are we actually seeing deployments with customers, or is it just pilot programs? Maybe you can just shed a little bit of light on what is your assessment of where generative AI is today?

Peter van der Putten: Yeah, absolutely. Like you said, there’s been a lot of noise and talk about enterprise generative AI. But a lot of companies are more on the brink of moving into actual pilots. So some are piloting, others are still investigating. I think that’s also because with enterprise generative AI, whilst the underlying AI is very similar to the AI that we would use as consumers, the experience is actually quite different. So McKinsey looked at the market, and based on their analysis, they said, well, if you only take a few business functions, customer service, operations, marketing, maybe development, that already captures 75% of the benefits you can create with generative AI.

So in enterprise GenAI, it’s not about this freeform experience where you just go into some chat window and paste in proprietary data from customers. It’s really about building very use case specific capabilities that are GenAI powered. Plus, yeah, I was already alluding a little bit to some of the risks around generative AI, and that that also means that companies are taking this step-by-step. But we do see now a lot of companies doing piloting in these areas that I mentioned, yeah, with some very nice examples of use cases.

Keith Kirkpatrick: Yeah, let’s dig into that a little bit, because obviously if you read some of the popular press or even some of the trade press about generative AI, there are just a multitude of use cases potentially for generative AI. But I’m curious to hear what are the ones that are actually entering into pilots now that actually show a lot of promise for enterprises? Obviously we’re talking about ones where we want to make sure that there is not only safety, but also business value.

Peter van der Putten: Yeah, absolutely. So I think it actually aligns nicely with that observation that there are some of these pockets that are ahead, like marketing, customer operations, customer service, development. So in marketing, you can imagine marketers can use it to make more engaging recommendation content. So let’s say I’m a bank and I want to recommend, whatever, a certain credit card to customers. For some reason, maybe the message is not resonating; we can use generative AI to create variants of those treatments with different persuasion styles, for example.

Or a different tone of voice, or a particular way to talk about the product, which resonates better with particular target groups. On the customer service side, it can also help, of course, to assist agents or customers directly with resolving their customer service issues. And then finally, that translates also to operations as a general area, as well as the development of apps. And I don’t mean just coding assistance, but also low-code platforms can be driven and empowered by generative AI.

Keith Kirkpatrick: Yeah. I just want to quickly go back to something you mentioned there about making sure that messaging actually resonates with customers. It sounds like what you’re talking about here is really taking a lot of the customer data that organizations already have, and making it much easier to interact with that data to provide a hyper-personalized experience that is going to resonate more than a generic marketing campaign.

Peter van der Putten: Yeah, absolutely. And I think you can approach it from the marketing angle, but ultimately your employees are your customers too, and you can personalize the employee experience as well. But if we start with that customer experience, the example that I gave is, let’s say, the credit card again. And let’s say, for some reason, these recommendations are not performing well with tech-savvy millennials. Then you could ask generative AI to rewrite those treatments more towards that particular audience, based on the characteristics of that audience, which you can tap into from the data you already have about those customers.

But that also translates to almost a personalization towards the agent, for example. So if we want to make recommendations to coach agents through any kind of case or process work, or frankly to resolve a particular case or process or service issue, we should be leveraging all the information that we have about the particular case at hand that we’re working on right now. So imagine you’re calling in to, whatever, cancel your contract, or you lost your credit card, and we pull in all the information that we have about you as a customer, but also what we have discussed so far in our conversation, or maybe in previous conversations, and feed that into the AI to make more relevant recommendations.
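The pattern described here, pulling the customer profile and the conversation so far into the model’s context before asking for a recommendation, can be sketched in a few lines. Everything below (field names, the `build_context_prompt` helper) is illustrative, not Pega’s actual API:

```python
# Illustrative sketch: assemble customer data and conversation history into
# a single prompt, so the model's recommendation is grounded in context
# rather than generic. Field names and structure are hypothetical.

def build_context_prompt(task, customer, conversation):
    """Combine profile fields and prior conversation turns into one prompt."""
    profile = "; ".join(f"{k}: {v}" for k, v in customer.items())
    history = "\n".join(f"- {turn}" for turn in conversation)
    return (
        f"Customer profile: {profile}\n"
        f"Conversation so far:\n{history}\n"
        f"Task: {task}"
    )
```

Whatever generative model sits behind the assistant would receive this prompt; the point is simply that relevance comes from the context you feed in, not from the model alone.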

Keith Kirkpatrick: You actually touched on another point, which I think sometimes gets a bit distorted in media reports, which is that AI, or generative AI right now, is not necessarily going to replace humans in all cases. It sounds like it’s really sort of an aid or an assistant to help them do their jobs more efficiently, perhaps more accurately, and of course more consistently. Is that something that you’re finding when you’re talking with customers?

Peter van der Putten: Yeah, absolutely. So the marketing use case, of course, was more direct-to-customer, and even there, there’s a marketer in the loop who would sign off on that content. But in a customer service scenario, it could start with actually coaching an agent towards resolving the issue at hand. So in that sense, AI is not just artificial intelligence. Of course, we’ve been taught by Hollywood for 100 years that the robots are coming to take our jobs. But in many cases, AI is also augmented intelligence. How can it work shoulder to shoulder with an employee, or ultimately with a customer, to resolve a particular interaction? It’s also something that we talk about in our AI manifesto.

So we put out a manifesto around the best ways to create value with AI, but also to make a responsible impact, and seeing AI as augmented intelligence that works shoulder to shoulder is a key thing that we highlight in that manifesto. And it’s kind of cool, because in the early days of AI, if you go back to the ’60s, you had Licklider, who was heading up the defense R&D agency, or Engelbart, who was at SRI at that point in time, and they all looked at AI not just as AI replacing humans, but very much as something to augment human intellect. Licklider wrote this paper called Man-Computer Symbiosis, which is all about the idea that AI is not an evil bot that comes to take our jobs or take over the world, but really something more like an assistant or a buddy, let’s say an intern who could help you.

Keith Kirkpatrick: Right. Right. The other thing, Peter, that you mentioned earlier, which is interesting, is looking at how generative AI can really assist on the operations front. Obviously Pega has a low-code platform. It seems that generative AI is a great way to sort of enhance that experience to allow people who are not necessarily super highly skilled developers, to create applications that allow productivity workers or operations workers to do their jobs more quickly and more efficiently. I’m curious to see what the adoption is like for generative AI on the backend as opposed to sort of customer facing use cases.

Peter van der Putten: Yeah, great question. So let me start maybe with, say, the productivity worker, as you say, but more, let’s say, from front to back office. And then I’ll swap around to, let’s say, the low-code developer who would actually kind of imagineer these types of experiences. For a productivity worker, it’s very similar, actually, to, let’s say, a front office customer service employee. You can look at the entire end-to-end process, and then you can find opportunities for generative AI, right brain AI, but also opportunities for left brain AI to actually optimize that experience.

Let’s say a particular, I don’t know, insurance claim comes in. First, maybe a front office agent is dealing with that issue. You immediately get the question of, “Oh, can we deal with this claim automatically, based on the information in the claim?” Maybe we can extract some of that with generative AI, but we can also use more left brain AI to predict the likelihood that this case is actually an instance of fraud or leakage. Now, based on that, maybe it gets routed to a particular claims agent, the claims agent collects some additional material, and maybe the left brain goes off and says, “Well, there’s an underage driver here, or this is the third time in a row that we see a claim. This is going to be a complex claim. We need to have someone have a look at this particular claim more towards the back office.”

Let’s transfer that engagement, but let’s also use generative AI to provide a summary of all the things that have been collected so far, and everything we also know about this particular member, let’s say over the course of the last, whatever, six months. So that when I’m, let’s say, in back office operations and I get this call transferred, I have a good idea of what’s going on.

Again, I can then work on that case. Maybe I’ll get recommendations, either from left brain AI or from right brain AI, about the particular risks to be assessed, or the likely outcome of this case. Can we sign it off and it’s all good? What kind of body shop should we recommend if we want to reduce the cost of dealing with this particular claim? Can we settle this claim, can we recover it? Indeed, is there a question of fraud or leakage here? So that’s more, let’s say, front to back, from front office operations to back office operations. The other angle to it is more like, what is the experience for the people that are building these experiences, and how can GenAI help there?

And for that, you can look, I think, at the entire life cycle. When people talk about AI helping developers, they often think in terms of programmers. There’s some magic belief that if you take an AI coding assistant, then anyone could develop a program. Well, that’s not true, because it will just generate a lot more computer code. AI coding tools are great for developers, but non-developers wouldn’t have a clue what it is they’re producing. Whereas on the flip side, we take more of a low-code approach, where the artifacts that make up the application can also be developed by, let’s say, what we call citizen developers.

But then you want to support that end-to-end journey, not just the development of the application. Of course, there we can use GenAI, I don’t know, to build out a data model or to make test data. But earlier on, you have this whole ideation phase of an app. And we all fear that moment of being summoned into a meeting room where there’s a bunch of consultants and a pile of sticky notes, and where everyone goes, “Let’s design this from scratch,” type of thing.

But GenAI can actually also be used to accelerate that process. We have this capability called Pega Blueprint, where you can just say, “Oh, I want to build,” in this example, let’s say, a motor insurance claims application, and it will generate an entire prototype app from scratch, with workflow, with data model, all these particular elements. And then people can say, “Well, we like these parts; these parts we would like to have different.” But at least you have something concrete to look at. And so that would speed up not just the development cycle, but actually the ideation cycle, the requirements gathering cycle that happens before it.

Keith Kirkpatrick: So it does seem that generative AI really has a lot of power to aid in that iterative process. You can go from sort of a blank slate to a fully fleshed out idea, and generative AI can kind of take you through those steps. Really, really interesting. You know, though, Peter, with all of the great things that generative AI can do, I do wonder if there is sort of this false belief that if you just deploy generative AI, everything’s going to be great. But I think there are some pitfalls that enterprises in particular need to watch out for. I wonder if you could talk to me about that, and this whole issue of ethical AI and the use of guardrails, to make sure that the models don’t kind of go astray.

Peter van der Putten: Yeah, great point. So indeed, it starts with actually identifying the high-value, low-risk use cases. We already spoke about it here, but it is an important point to think about. Then when you think about enterprise concerns, there are concerns around hallucination, will these models just make up stuff? Explainability, fairness, bias, toxicity. But also, are we leaking any privacy-sensitive data, for example? So those are some of these enterprise considerations. But when people talk about these types of things, they may forget about the very basic considerations in terms of quality versus cost. There could be use cases where you want your super-duper, top-quality model, but there could be other use cases, frankly, where simpler models or services are actually good enough at much lower cost.

So those are some of those considerations. And for instance, how we go about it is that we provide a central place in the architecture where you can call out for, let’s say, a generative AI answer or reply. In the low-code platform, we have lots of capabilities we built ourselves, but our clients also build their own applications. And by virtue of having a central place, you have one place where you can control and manage all of this. For instance, you can control, “I don’t want to send out any privacy-sensitive data. Let’s filter that out before we go out to the GenAI system.” Or, “I want to make sure that I use particular services where we opted out of, let’s say, the retention policy, so that no data is retained.” The data is only used at inference time, as AI people love to call it; there’s no use of that data for training. Or also explainability, or addressing these kinds of aspects of accuracy and hallucination.

So one particular hot item at the moment is the so-called retrieval-augmented generation system, or RAG. Apologies for the AI terms here, but the idea there is to ground generative AI in bodies of knowledge. And that sounds a little bit abstract, maybe, but let’s go back to that insurance example. Your customers could have all kinds of service questions, and your agents as well, in general, for providing service. Well, lo and behold, you have a huge section on your website where there’s tons of information, but no one goes there, because if you search, you get a hundred hits and most of them are actually not relevant to you. But you also can’t say, let’s just go to, let’s say, ChatGPT, because you’ll get an answer from a competing insurance company. Right?

Keith Kirkpatrick: Right.

Peter van der Putten: So the idea of a RAG is to say, “Hey, let’s take all that content, in this case about self-service, put that in a particular corpus, and then put GenAI on top.” So if I have a question, the first thing we’ll do internally is search that body of knowledge, collect the top search hits, and then give it to the GenAI and say, “Well, based on this question and this information that we found, which is very specific to insurance self-service for this particular customer, let’s address the question.” In this case, it’s self-service, but based on a particular corpus. We could also say, “Well, it’s a set of internal documents, say, for when we push a particular claims case to the fraud department, and they have all kinds of internal policies around what they need to be looking for.”

Again, you can put, let’s say, one of these knowledge bodies, one of these RAGs, on top of it, and make sure that you really get way more accurate, domain-specific answers, including addressing concerns like transparency. Like, “Can we also get references? What particular documents is this answer based on?” Or you deal with security levels or access levels. The frontline claims agent could also go to that little fraud knowledge body, but by definition, they might only have access to a subset of those documents, right? Because otherwise you have 2,000 claims agents who could potentially… There could be an evil person who would start to leak that information or game the system. So those are some of those considerations.
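The RAG pattern described above can be sketched as follows. This is a minimal illustration, not Pega’s implementation: retrieval here is naive keyword overlap (real systems use embeddings and vector search), and `call_llm` is a hypothetical stand-in for whatever generative service answers the final, grounded prompt:

```python
# Minimal RAG sketch: retrieve the most relevant documents from a domain
# corpus, then instruct the model to answer only from those passages.

def score(query, doc):
    """Naive keyword-overlap relevance score (real systems use embeddings)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def retrieve(query, corpus, top_k=3):
    """Return the top_k most relevant documents for the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def answer_with_rag(query, corpus, call_llm):
    """Ground the model: number each passage so the reply can cite sources."""
    hits = retrieve(query, corpus)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(hits))
    prompt = (
        "Answer using ONLY the passages below; cite them as [n]. "
        "If the answer is not in the passages, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

The citation instruction is what delivers the transparency mentioned above: the answer can point back at the particular documents it was based on.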

Keith Kirkpatrick: Absolutely. And Peter, the other thing that I think is important to acknowledge here, in addition to using RAG, is having guardrails in place so that if the LLM is unable to find a suitable answer, it doesn’t start to hallucinate on its own. There should be something in there that says, “I’m not sure,” or “I’m not able to handle this request,” and then kicks it to a human. I mean, I’m wondering how many organizations are considering that, because again, there’s a lot of very enticing power with generative AI, but we’ve already seen many cases out in the world where an LLM starts to hallucinate, even though it’s been grounded, because it just doesn’t know and the right controls haven’t been put in place.

Peter van der Putten: Yeah. And that risk is way larger also with just GenAI, because yeah, it’s so open that it’ll also always try to come up with a particular answer. But the benefit of something like a RAG or knowledge body, that indeed it is grounded, and you can also look at other things like when we do that internal search, what is the match percentage of the top document? Does it reach a certain minimal threshold, or can we instruct the GenAI to not get creative? For instance, there’s these things like temperature, and you can set it all the way on the not so creative side, because you don’t have to, you have very concrete information that is in the corpus. So there’s no need for the GenAI to get creative. And you can give it particular instructions as well, where you say, “Well, if you’re not sure, if the information is not in the search result, refuse to answer. Just apologize and say, ‘Well, I’m not able to answer this question.'”

And it’s funny, it’s a lesson from AI that we maybe learned the hard way through Tay and Clippy and all those horror stories, all the way back to Eliza in the ’60s: bots going haywire, or overselling the expectations around AI bots. It’s way smarter to actually undersell and make it something more like a helpful assistant, not a superhuman. It might even be better to undersell its ability as something at the level of the intern, like I said. So we called it Knowledge Buddy for a reason. We didn’t call it Knowledge Wizard or Einstein, or a guru or something of that nature.

Keith Kirkpatrick: Right. So we’re not at Singularity yet.

Peter van der Putten: Exactly. Yeah, yeah, exactly. It would get very dull if we were always there. We’d be sitting on the beach drinking margaritas.

Keith Kirkpatrick: That doesn’t sound like a bad plan. But anyway, Peter, I really appreciate your time today. I wanted to just ask you a final question. Obviously we’re coming up on PegaWorld iNspire 2024, and I know that Pegasystems has been working on a lot of really interesting things. I’m wondering if you could just give us a little bit of a preview of what we can expect to hear about at the upcoming event.

Peter van der Putten: Yeah, I mean, we’re going to get loads of great stories about how you can actually combine the smarts of AI with the muscle of workflow automation to really drive tangible benefits, tangible impact. So it will be much less about all the stuff you could possibly do, and more about, hey, here are some real capabilities that we already put out there. We’re an East Coast company, so in that sense we’re very grounded in reality and in capabilities that we have already released, as opposed to just talking about Nirvana. One particular thing: I’ll have a breakout on where enterprise GenAI might be heading, and we have this concept of what a North Star enterprise could look like. Because rather than saying, “Oh, we have AI as a technology, let’s find some good applications,” that’s the wrong way around. You should say, “What kind of enterprise do I aspire to be, and how can AI help me get there?”

So that aspirational company, we call that the autonomous enterprise, a self-driving business that optimizes towards certain goals. And I’ll be talking a little bit more about where I see GenAI going to support that particular goal. And no surprise, it’ll be all about giving GenAI more autonomy, as opposed to us just having some pre-written prompts sitting somewhere and firing them off to GenAI. Maybe you want to give it some tools, like this knowledge body, and give it a corpus, but you can extend it to a much more grandiose vision by giving tons of tools to the GenAI. You give it some vague assignment of what your problem is, or what you’d like to achieve, and the GenAI is smart enough to figure out which tools it should use to resolve this particular issue or to reach this particular goal.

And then, within set guardrails, it’ll use those tools to get information, to take particular actions, to look at the outcomes, and then again observe and see what steps it needs to take to ultimately reach the positive business outcome for this particular interaction, or process, or workflow. So it’s all about, within set guardrails, giving a lot more autonomy to the generative AI, so that it becomes more of an agent, essentially, that can pursue the goals that we gave it and figure out the basics itself, as opposed to us having to instruct it in this strange prompting language.
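The agent loop just described, pick a tool, act, observe the outcome, re-plan, all within set guardrails, can be reduced to a few lines. The tool names, the `plan` callable (which a real system would delegate to the LLM), and the step limit are all illustrative assumptions:

```python
# Sketch of a guarded agent loop: the planner repeatedly chooses a tool,
# the loop executes it, records the observation, and re-plans, bounded by
# a step limit and a whitelist of allowed tools.

def run_agent(goal, tools, plan, max_steps=5):
    """plan(goal, observations) -> (tool_name, arg), or None when done."""
    observations = []
    for _ in range(max_steps):  # guardrail: bounded number of actions
        step = plan(goal, observations)
        if step is None:  # planner decides the goal is reached
            break
        tool_name, arg = step
        if tool_name not in tools:  # guardrail: only whitelisted tools
            observations.append((tool_name, "error: tool not allowed"))
            continue
        result = tools[tool_name](arg)
        observations.append((tool_name, result))  # observe, then re-plan
    return observations
```

The two guardrails here, the step budget and the tool whitelist, are exactly the kind of "set guardrails" the autonomy is granted within: the agent chooses its own steps, but only from actions the enterprise has sanctioned.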

Keith Kirkpatrick: Right. All right, Peter. Well, we’re really looking forward to hearing your talk at PegaWorld, and I want to thank you so much for taking the time to give us that preview and for your thoughts today.

Peter van der Putten: Yeah, it was great to be here.

Keith Kirkpatrick: All right, everybody, hit that subscribe button. Join us here for all of our episodes of Six Five On the Road, and for our other interviews with insightful leaders from across the technology industry. Thanks, and we’ll see you all again very soon.

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

