AI in 2024: Insights, Ethics, and the Generative AI Revolution – Futurum Tech Webcast

On this episode of the Futurum Tech Webcast – Interview Series, host Daniel Newman welcomes Peter van der Putten, Lead Scientist & Director of Pega’s AI Lab at Pegasystems, for a conversation on AI in 2024, including insights on ethics, governance, and the hype around generative AI.

Their discussion covers:

  • The concept of right-brain AI versus left-brain AI and what that means in terms of strategy and use cases
  • The balance between the two that businesses need to achieve success
  • Top AI trends that Peter thinks we’ll see more of in 2024
  • Whether the hype around generative AI is overshadowing the actual business value of the technology
  • How ethics will continue to be a part of the conversation and Pega’s approach to ethical AI

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your streaming platform of choice here:

Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey, everyone. Welcome back to another episode of the Futurum Tech podcast. I’m Daniel Newman, host today, founder and CEO at The Futurum Group, and we’re going to be talking about AI today. Yeah, we’re starting 2024 where we left off in 2023, but the good news is we’re going to be bringing some new insights and perspectives, and we’re going to bring a little science to this. And of course, everyone is going to say it’s all science, but we’re going to bring a little bit of the social science. We’re going to bring a little bit of the thoughtful science about this particular area of interest and how it’s going to really play out over the next 12 months.

We’ll also talk a little bit about an AI manifesto from Pega, which is providing our guest today, as well as a little bit of an overview of their upcoming event, PegaWorld, which will be later this year. So without further ado, I want to welcome Peter van der Putten to the show. Peter, welcome to the Futurum Tech podcast. How are you doing?

Peter van der Putten: I’m great. Thanks for having me.

Daniel Newman: It’s great to kick off the year. You saw me in the preamble there talking a little bit about what I call the science and the social science of what’s going on in AI. We saw over the last, I don’t know, we’ll call it 14 months, 13 months, since November 2022, when a new generative AI tool came out, and it was like AI had just been invented for the first time. Now, anybody, including yourself and myself, that’s been around the industry knows that’s just emphatically not true. But there’s always these inflection points that create what I would call mass adoption, commercialization, where something goes from being used maybe in the labs, in the academic and research centers, maybe by very small subsets of enterprise, maybe like where quantum computing is today, which is where AI was for some period of time, and then it breaks free at scale.

And that was really, I think, to some extent what we saw in 2023, and we’re going to talk about that here in the show. But before we jump in and talk about that, how about you give me the quick background on your role? You’re the lead scientist and director of Pega’s AI Lab. How did you get there? Give me a little bit of the backstory and talk a little bit about what you do every day.

Peter van der Putten: No, absolutely. Yeah. So if I go way back, I studied AI. I actually started studying it in 1989, so it’s quite a while ago, a couple of AI summers ago basically. But I was always very much interested in not just AI as the technology but also, what is the impact on business? What is the impact on society? What is the impact on people? How do you apply it in a responsible manner? So that’s why I really moved into the area of how we can generate value with AI in business, for businesses and customers alike.

At Pega, I’m heading up the AI Lab, so basically, I’m looking at not just how our clients can get better value out of AI but also applying it to our own company, coming up with new improvements and innovations, be it completely on the business end or completely on the technical end, leveraging AI.

Daniel Newman: Yeah, it’s a big role. When did you start? How long have you been here in this role?

Peter van der Putten: So I’ve been in the AI Lab since we started it in April two years ago, but I’ve been focusing on AI at Pega, if I cheat a little bit by counting two M&As in between, since 2002. And before that, I was working at an AI research company.

Daniel Newman: Yeah. So Pega has been talking about AI for a long time. I still remember some of the keynotes, software that writes software, or some of this was said, and I still remember one of the quotes. I can’t give it attribution, but it came from a PegaWorld. Someone basically gave us this breakdown of how Facebook knows you better than you know yourself. This was five years ago, six years ago, because this was pre the COVID stuff, and it was a presentation explaining next-best-action technologies and how this technology could parallel that to basically help people and help companies better deliver enterprise solutions and customer experiences. So this isn’t a new thing for Pega. It’s great to see that there’s investment being made.

I wanted to talk about something, what we call right-brain, left-brain AI. This is something that I’ve heard from Pega and from you guys. Talk a little bit about what that is, because I know right brain and left brain from a standard anthropological standpoint, but what do you mean when you apply that to AI?

Peter van der Putten: Yeah. So I use it as a bit of a metaphor, because it’s easy to get lost with all the hype around ChatGPT and whatnot. I loved your preamble when you said people acted as if AI came out of nowhere a year ago. Maybe for, let’s say, the AI geeks, they were like, “Well, no, we’ve had AI for a long time.” I think you’re really right. You’re spot on in the sense that, well, first, AI, up until the mid ’90s, was primarily in the labs; like I said, I studied it in the ’80s. Then it moved more into the mainstream, in the sense that even if you Google something or search for a particular destination in Google Maps or shop on Amazon, you’re being exposed to AI tens of times or a hundred times per day. But as an average consumer, you didn’t really experience that.

And I think ChatGPT was the first time, indeed from a making-it-accessible point of view, the first time for many people that they essentially became a maker of AI, that they got a little peek under the hood. If you’re doing your clever prompt engineering to create an even cuter Corgi in DALL-E, or if you are a kid doing their homework with ChatGPT, you are fooling around with the prompts to see if you can get the right answer. So I think that explains a little bit why, for many people, ChatGPT was their first exposure, their first conscious exposure, to AI.

And ChatGPT is a form of right-brain AI because it’s what we call generative AI, almost like the creative AI. It’s AI that generates new stuff: text, images, whatever. But next to that right-brain AI, we’ve had this left-brain AI already for a long time, and our left brain is primarily concerned with making smart decisions, or, as humans, at least we think we’re making smart decisions. There are many actions we could take, and which one is the right one based on predicted outcome, predicted success, etc.? I think in that sense, that metaphor of left brain and right brain can help a little bit to navigate this whole AI landscape. We have generative AI, which is more the right-brain AI, the creative AI, but there’s all these other forms of AI, predictive analytics, natural language processing, real-time decisioning, which fit better in this left-brain category.

Daniel Newman: Yeah, that’s actually really interesting, and I think another way to say this is AI BG, before generative, and AI AG, after generative. And the reason I point that out is generative provided a certain set of capabilities that are very obvious, like text creation and image creation, but all that stuff that I would call before generative was really enterprise practical apps, things like workflow optimization, automation, process optimization, and deep analytics that could be used for understanding churn or understanding customer probability.

These were things that, by the way, you’d go to Wall Street, you’d go to Madison Avenue, you’d go across social media platforms, and this has been done for some time. Anybody that’s played with a Netflix recommender engine knows, like, huh, that really is the show I would want to watch, and this is what NVIDIA has done for a long time with Merlin and Jasper and these different tools. And the point was, this has been a thing for some time, but people are just really starting to recognize it.

Peter van der Putten: Yeah, exactly. So it was so well done, so hidden from everything, it was so hidden that we didn’t really experience it as a consumer, not in a-

Daniel Newman: It’s ridiculous.

Peter van der Putten: It’s hidden.

Daniel Newman: Right? It’s seamless.

Peter van der Putten: Exactly. Yeah. I think that’s in a way the beauty of ChatGPT and those types of generative AI tools, that it makes you more of a maker. It’s no longer just the AI illuminati controlling it, but it’s also your mom finding a new recipe for an apple pie or your kid doing their homework. And it’s nice and it’s important because, at risk of sounding a bit cliche, this is a transformative technology, both for business but also for society, and we all need to be part of it.

So we have to make sure that, yes, as a business, we can use it to become more profitable, but also in a way that customers actually like, that it’s actually to the benefit of the customer in terms of getting better service or more relevant recommendations or more optimal, frictionless experiences. And not in a way where we feel spied on, or where we would feel disadvantaged because the decisions that are being made are not fair, for example.

Daniel Newman: Well, privacy is something that will always have a bit of a continuum of control, and obviously the less private we are, the better and more targeted a lot of these things can be. But having said that, we don’t always necessarily want that. But there is a symbiotic relationship between the two. As you said, the deep analytics that maybe were left brain, that helped a company say this customer is likely to leave, provided the right brain an opportunity to say, let’s generate something, a text and an offer, that can now be put in front of them, like, this is a way we’re going to try to handle retention. So we’re going to write a nice letter. We’re going to talk about all the good times we’ve had together, all the great things our current customers are experiencing by being more loyal. Oh, here’s an offer we’re going to provide you to stay as a customer.

What I’m saying is, historically, someone would have to find that in analytics, and then you would have to assign somebody and say, “All right, let’s get on this. Let’s create a custom program or a template,” which, by the way, would often be templated and not specific enough. Now it can very quickly create something generative that is very specific to that person, that can consider both PII data and text generation to say, “We’re going to get the exact offer right.” So there’s a tie between these things. Right- and left-brain AI become very symbiotic.

Peter van der Putten: Yeah, absolutely. Ultimately, you’ll get the best results if you combine the two. If we were humans walking around with only a right brain or only a left brain (you probably sometimes encounter individuals like that), you’ll experience that that’s not preferred. Likewise, in AI, if you’re able to combine your left brain with your right brain, that’s a great combination. The example that you give is more, let’s say, one-to-one marketing: creating all kinds of creatives that better resonate with the particular needs and interests of customers, but using the left brain to really understand what a particular customer actually needs here in the moment. Or in customer service, it could be figuring out what issue the customer likely has so that we can help the customer quicker.

But generative AI could also be used for things like, oh, let’s summarize this entire conversation, so that if halfway through I need to transfer you to another agent, I don’t need to explain myself again as a customer. These are some good examples of left brain and right brain actually working together in symbiosis.
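To make that symbiosis concrete, here is a minimal sketch in Python of the retention example from the exchange above. Everything in it is hypothetical: the `churn_model`, the `llm` client with its `complete` method, and the placeholder offer logic are illustrative stand-ins, not Pega’s actual decisioning or generative components.

```python
def next_best_action(customer: dict, churn_model, llm) -> str:
    """Left brain picks the action; right brain writes the message."""
    # Left brain: a predictive model scores how likely this customer is
    # to leave (hypothetical interface returning a probability in 0..1).
    churn_risk = churn_model.predict_proba(customer["features"])

    if churn_risk < 0.5:
        return ""  # Low risk: no retention action needed.

    # Left brain again: choose the offer with the best predicted outcome.
    # (Placeholder; in practice this is its own decisioning step.)
    offer = "20% loyalty discount for 12 months"

    # Right brain: generative AI phrases a message tailored to this
    # customer, grounded in the decision the left brain already made.
    prompt = (
        f"Write a short, warm retention email for {customer['name']}, "
        f"a customer of {customer['tenure_years']} years. "
        f"Mention this offer exactly once: {offer}. Do not invent other offers."
    )
    return llm.complete(prompt)
```

The design point is the division of labor: the generative model never decides who gets which offer; it only phrases a decision the predictive side has already made.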

Daniel Newman: So Futurum Intelligence recently did a decision-making dashboard where we reached out to over a thousand enterprise buyers of AI to get a better understanding of, directionally, what’s going to happen in 2024. A couple of things that we found were, one, customers are still having a lot of … They’re doing a lot of vendor gymnastics to figure out who the right provider is. They’re trying to find those that they can trust to (a) understand the technology, and (b) know how to implement it. But we’re also seeing that ’23 was actually a bit misleading, because while companies like NVIDIA sold lots of GPUs and cloud providers bought lots of GPUs to set up for AI, you heard companies like Cisco come out and talk about, well, we haven’t gotten most of this stuff installed yet, meaning there’s lots of backlog for hardware, meaning the implementation and the practical application of AI is really early days.

We saw in our data that 300% more companies plan to spend over $2 million on AI implementation this year. So we’re seeing a tripling of companies that are starting to spend on the implementation of AI, not just buying hardware to support it. I’m curious, from your lens, running the AI Lab and talking to customers at Pega, what are some of the trends you see in 2024? And our data showing customers moving from proof of concept to bigger investment, is that what you’re seeing and hearing on your end?

Peter van der Putten: Yeah, no, it’s absolutely something we’re seeing. Like we were discussing just before, pre-generative AI, there had been tons of applications of AI already. But I think in the generative AI space, 2023 was a lot about what kind of generative AI models are coming out, and this Cambrian explosion of large language models, for example, will continue. So in a way, yeah, you see Google coming out with Gemini, for example, and open source trying to make a play for it. But ironically, that Cambrian explosion of large language models will also, I think, lead to that market becoming a bit more commoditized. And then the emphasis will actually shift more towards what you’re saying: how are we going to apply this? Enterprise generative AI, in that sense, at least uses the same technology that sits behind Bard or behind Bing or whatever, but the experience is very different.

The experience is really something where you would build this generative AI into a particular workflow or into a particular interaction or into a particular feature. So that also means the emphasis shifts a bit away from which underlying model or service do I use; that becomes less important with the commoditization of that market, and you want to be able to switch providers with the flip of a switch. The emphasis is much more on how can I build applications on top that actually leverage these underlying foundation models. So that’s where we see a lot of the emphasis shifting now, because that’s also where the real value is, not whether you use model or service A, B, or C. First we had no choice. Now we have more and more choice. So essentially, it becomes to some extent a commoditized market, and it’s really about building these applications and features and workflows on top of the generative AI.
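One way to picture that flip of a switch is a thin abstraction layer between the workflow and the foundation model, as in this minimal Python sketch. The provider classes and the `complete` interface are assumptions for illustration, not any specific vendor’s API.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only interface the workflow layer ever sees."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    """Stand-in for one hosted LLM service (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[provider A] response to: {prompt}"

class ProviderB:
    """Stand-in for a competing service or an open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[provider B] response to: {prompt}"

# The application codes only against TextModel, so moving between
# providers is a configuration change rather than a rewrite.
PROVIDERS: dict[str, type] = {"a": ProviderA, "b": ProviderB}

def get_model(name: str) -> TextModel:
    return PROVIDERS[name]()

print(get_model("a").complete("Summarize this customer call ..."))
```

Because the workflow depends only on the protocol, the choice of model A, B, or C stays a deployment detail, which is exactly the commoditization Peter describes.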

Daniel Newman: Yeah. And I think there’s a lot of consideration about consumption and how it’s consumed. I think SaaS is really going to shine in the generative AI era, because people want the features and capabilities, but they don’t really want to have to mess with all the infrastructure that’s required to do it. Now again, big companies will always have those challenges. If you’re a massive enterprise or government institution, you’ve got lots of data and data sprawl and provenance, and of course, you’ve got technical debt in everything from your mainframes to your data storage infrastructure, to trying to get everything available for utilization in AI.

Having said that, when you’re a newer company built on SaaS, it’s going to be like, hey, how do we apply generative AI? How do we apply … It is going to be like, we can do this very simply on a subscription. We swipe a credit card and we can start using small datasets, and citizen data science becomes a real thing. We’ve talked a lot about this through the conversation, but on gen AI, hype versus reality: I kicked this off talking about how the hype brought AI to the front, which is a good thing, but at the same time, the practical application of AI is not only, and certainly won’t only be, generative. Is it eroding the business value that we’re so focused on generative when there’s so much more AI, and AI capabilities, that are not generative?

Peter van der Putten: Yeah. Well, as an AI guy, I’m happy with any type of interest in AI, of course. But you’re right, I think there’s a lot of both hype and doomerism around AI, around gen AI. It’s a bit counterproductive, because it’s taking attention away from, let’s say, a more pragmatic approach where you really look at what are the business outcomes that I want to optimize here. Do I want to get better … Do I want to provide more relevant experiences to my customers, or optimize my marketing campaigns, or do a better job at handling customer service issues? It should not start with the technology. It should start with what are the business outcomes that I want to optimize here, and then look very pragmatically for practical uses and applications that deliver the highest bang for the buck. And then you will find out that there’s that …

For instance, in the generative AI space, the most important thing is to find those low-risk, high-return type use cases. The example that I gave, whatever, in customer service: summarization of a call. At the end of the call, it will save you, well, at least 10 or 15 seconds, if not more, on every customer service call. That’s an absolute no-brainer in that sense.
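That low-risk, high-return wrap-up flow can be sketched in a few lines. As before, the `llm` client and the prompt wording are hypothetical stand-ins for illustration, not a specific product’s API.

```python
def wrap_up_call(transcript: list[str], llm) -> str:
    """Draft a succinct but complete wrap-up instead of a minimal typed note."""
    prompt = (
        "Summarize this customer service call in at most five sentences, "
        "covering the customer's issue, the resolution, and any follow-up:\n\n"
        + "\n".join(transcript)
    )
    # `llm` is a hypothetical generative client exposing complete(prompt) -> str.
    draft = llm.complete(prompt)
    # The agent stays accountable: they review the draft, change one or two
    # bits if needed, and file it, so the next agent starts fully informed.
    return draft
```

The agent review step matters: the model drafts, but a human still confirms the summary before it lands on the case, which is what keeps this in the low-risk bucket.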

Daniel Newman: And the quality is way better too. I’ve used the tools from some of the different vendors, both the collaboration tools and the CX tools, and those summarizations are really good. Summarization and then action and then assignments and all that stuff, it’s game-changing, because you were also expecting those people to actually be able to figure out what the crux of the call was, what the important stuff was. And the AI just tends to do a better job than your average call center worker in terms of figuring out-

Peter van der Putten: Yeah. If you were ever on the receiving end of it: they have this average handling time target, so in the comments or the wrap-up, they type in the minimum amount of words and hit enter, off to the next customer. And that’s where generative AI can really make a succinct but complete summary of all the things that happened. You change one or two bits and off you go. So the agent is happy, the customer is happy, because on the next call, if I call in again as a customer, the agent I talk to is way better informed.

So yeah, in principle, that’s a no-brainer, but what it boils down to is that you need to move away from this big “AI is going to be perfect” or “AI is going to be a nightmare” towards, now, let’s be pragmatic. Where do we want to apply it? Where can we have the biggest bang for the buck? Have a very pragmatic, outcome-oriented, application-oriented approach to applying AI, really looking not for the artificial intelligence but for the actionable intelligence. Where can you put the intelligence to work responsibly?

Daniel Newman: Well, that’s a great segue to the final topic here today, and that’s going to be ethics. We’ve heard throughout the year that the rapid onset of generative creates some real concern about ethics. We saw the New York Times recently bring a lawsuit against OpenAI, and that’s a combination of ethics and responsibility and the use of copyrighted or potentially protected material. We saw the first iteration of this use go through the search era, but search had a much clearer path to attribution and monetization. Now, when you’re seeing things abstracted and summarized, how do you put preference? And when you’re abstracting, summarizing, and wanting to be accurate, you can’t necessarily pay to have placement. And if you pay to have placement, that’s always going to bring a challenge of the person wanting to pay the most versus the highest quality. It’s no longer driven by an algorithm the way it was historically. So that’s one thing.

And then of course, you have overall privacy of data. You’ve got the security risks that generative AI solutions bring: risk to cyber, risk to sovereignty, risk to elections. So I’ll give you an open opportunity to talk, because we can hit on all of those things, but give me a little bit of your perspective on the ethical and responsible AI requirements and what needs to happen.

Peter van der Putten: Yeah. So indeed, those are many of the aspects: privacy, security, copyright, etc. Typical other ones that people talk about are bias and fairness. If you make automated decisions, so if someone is applying for a loan or you do credit risk decisioning or you’re investigating fraud, are you making fair decisions? So fairness is a big one. Accountability: an organization should never be able to hide behind “computer says no,” right? Ultimately, if you build some system, whether it’s a stupid system or a smart system, you are responsible for the decisions that the system makes. So accountability is a big one as well.

So there’s a range of those principles, but it’s also quite easy to get lost a little bit in all of them. The higher-level principle is in a way a lot easier. Sometimes we call it empathy, but in principle, it means don’t do to others what you don’t want to have done to yourself. So if you have AI systems making automated decisions, yes, as a company, you need to make a profit. That’s fine. Customers are not against it, but you need to make sure that you balance the needs of the various stakeholders. And if you do that well, I think that’s the more high-level thing: what is the objective of my AI system? Am I balancing the various stakeholder needs, including the customer’s? Then I think that’s the most important thing.

And then you can also look at those other aspects, but I think that’s actually the key thing that customers and citizens care about. As long as you use it to my benefit, not just to your benefit as a company or an organization or a government, then I’m fine, actually. I’m demanding that you make better use of my data to provide better service, and if you use it against me, I won’t be happy about it. So I think that’s ultimately the same lesson that was learned in privacy. Customers are not against their data being used. Actually, I want you to use my data if you use it to my benefit.

So I think that’s maybe the higher-level thing to think about. What we do see is that there’s this pendulum: first it was like, oh, we’ll all do ethics and self-regulation, but that’s moving now more towards real regulation. So on this side of the pond (my accent gives a hint, I live in Europe), we have the EU AI Act that’s going to be introduced. It’s about to be finalized, so that’s real regulation around AI, but it’s not unique to Europe. When you look at the US, Biden, of course, signed an executive order around AI. It doesn’t go as far as regulation, but I think it was quite an intelligent move on behalf of the Biden administration to address government procurement and government agencies, to reap the benefits of AI by doing it in a responsible manner.

That’s also a way to influence that AI is being used in a good way. And people sometimes position it as a trade-off between innovation and regulation, but I don’t see it that way. I think this is a good example where both can go hand in hand. If it’s sensible regulation that encourages trustworthy, responsible use of AI, that will only lead to more innovation in that particular area and also, in the longer term, to acceptance of these technologies by customers and consumers. So we see that as a good thing.

Daniel Newman: Well, Peter, there’s a lot there to unpack, and we’re going to have to call it there, but what I will do is make sure that in the show notes we link to the AI manifesto that you mentioned. I think it’s going to be a continuous conversation and debate as AI continues to proliferate, and people are always going to be balancing experiences in exchange for data, and of course want to feel that their privacy is at least being managed by those they are allowing to have their data. But we know that this won’t be solved in a short period, and it probably won’t be solved with any single piece of legislation. But the goal of getting policymakers, enterprises, and consumers working in some type of empathic capacity to make this work well on a societal basis is going to be really important.

I want to thank you so much. We went a little long, and that’s because this was just so darn interesting. Like I said, I’m going to put your manifesto link and a little information about PegaWorld, which I mentioned earlier in the show, in the show notes so that everyone can check it out. But Peter, for this one, I’ve got to say goodbye. Let’s try to catch up in the next year or so and see how things develop.

Peter van der Putten: Yeah, that would be awesome. Thanks for having me.

Daniel Newman: All right, everybody, hit that subscribe button. Join us for all of our episodes here on the Futurum Tech podcast. I’m Daniel Newman, host, CEO and founder of The Futurum Group saying goodbye for now. See you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book, “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
