In this episode of Enterprising Insights, The Futurum Group Enterprise Applications Research Director Keith Kirkpatrick explores Cisco’s 2024 Data Privacy Benchmark Study, focusing on the divergent perspectives on data privacy between organizational privacy respondents and consumers. He looks specifically at the elements that engender trust around consumer data, the steps organizations are taking with their use of AI, areas of concern around generative AI, and the controls being put into place around generative AI.
Finally, he closes out the show with the “Rant or Rave” segment, where he picks one item in the market and either champions or criticizes it.
You can grab the video here and subscribe to our YouTube channel if you’ve not yet done so.
Listen to the audio below:
Or grab the audio on your favorite audio platform below:
Disclaimer: The Enterprising Insights podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Keith Kirkpatrick: Hello, everyone. I’m Keith Kirkpatrick, Research Director with The Futurum Group, and I’d like to welcome you to Enterprising Insights. It’s our weekly podcast that explores the latest developments in the enterprise software market and the technologies that underpin these platforms, applications, and tools. So this week, I want to take a look at an interesting study that just came out this past week. This is Cisco’s 2024 Data Privacy Benchmark Study. It’s the seventh edition of this particular study, and it really looks at privacy perspectives from organizations as well as consumers. Once we’ve gone through that, of course, I’ll close the show with my Rant or Rave segment, where I’ll pick one item in the enterprise software market and either champion it or criticize it. So, why don’t we dig right into it?
So, as I mentioned, Cisco has released the latest version of its Data Privacy Benchmark Study. This is a study that really looks at different privacy trends and some of the challenges and opportunities in the market when it comes to collecting, using, and, of course, protecting consumer data. It draws upon data gathered in the summer of 2023 from an anonymous survey, in which respondents did not know who was conducting the study and the researchers similarly did not know who the respondents were. If we look at the demographics of the survey, they surveyed 2,600 security and privacy professionals in 12 countries: five in Europe, four in Asia, and three in the Americas. They were asked about their organization’s privacy practices and spending, reactions to privacy legislation, AI, and data localization requirements.
Now, the findings from this research demonstrate the continuing importance of privacy to businesses and how they serve their customers. So, top-line findings: it’s pretty obvious, privacy is a critical element for engendering consumer trust. 94% of organizations said their customers wouldn’t buy from them if they didn’t protect data properly, and that’s pretty obvious in this day and age. Another key finding is that organizations strongly support privacy laws around the world. 80% of respondents said that legislation has had a positive impact on them, and this is really because it levels the playing field among all companies out there. It’s not something where one company doesn’t have to comply with the regulation; all of them do, so everyone has to be spending to make sure that data is protected. Now, of course, companies would be pushing back more if the economics weren’t solid. The survey found that the economics of privacy remain attractive: 95% of respondent companies said the benefits exceeded the costs, and the average organization realized a 1.6x return on its privacy investment.
Now, it’s not all great news. The survey also found that there is relatively slow progress in terms of building customer confidence with respect to AI. 91% of organizations say they still need to do more to reassure their customers that they’re doing the right thing with AI. The other thing, of course, is that yes, organizations are getting some value from generative AI applications, but they’re still concerned about the risks to intellectual property, or that the data entered could be shared with competitors or the public. So let’s dig into a couple of interesting parts of this survey. We’ll take a look first at figure 13, which is a question comparing the consumer view versus the organizational view on customer data. What this question was really getting at was trying to understand what organizations can do to build and maintain trust when it comes to customer data. If we look at the consumer view, 37% of respondents said that providing clear information on data use was the most important thing to them in terms of building trust. Contrast that with the organizational view, and we’ll see that providing clear information was really the third choice down, at 21%. And it’s interesting because I think there’s a real opportunity there for organizations to bring that up to the same level as the consumer view, because, really, consumers understand that companies want to collect data.
They want to use data. That’s a given. It’s 2024, we understand that, but they also want disclosure. They want to know, “What information are you actually collecting?,” “How is it being stored?,” “How is it being protected?,” “What is it going to be used for?,” “Who is it going to be shared with?,” and, “How is it going to be secured against hackers or any other kind of intrusion?” And really, it’s an opportunity for organizations to be as clear and transparent as possible, and almost make that a competitive differentiator by being out in front of their competitors in terms of saying, “This is what we collect. Here’s how it’s going to be used,” because if you do that, I think you engender a lot more trust. You don’t leave consumers with the impression of, “Well, I know they’re collecting information, but I’m not sure what’s going on with it.” “Are they selling it to third-party data brokers? Are they doing something else with it that they shouldn’t be doing?” That does not engender trust; that just makes people suspicious. Now, I think if you look at it, data breaches are a big question here as well. Customers want to know that whatever information they put in is being held as securely as possible.
This is an issue that, unfortunately, customers continue to worry about as they hand over their information, because every week they see that another company has had a data breach, and they wind up having to change their credentials and worry that their information is out there. So I think it’s going to continue to be something that organizations really need to work at: establishing that trust, putting in systems to protect customer data, and then just saying, “Hey, here’s all that we’re doing to try to protect your information.” Now, we’ll move on to another interesting question in the survey. If we take a look at figure 14, this question revolves around the use of AI. And when they were asking about AI here, I believe that, essentially, what they’re really talking about is the use of generative AI, because this is the most visible form of AI right now in terms of what customers might be exposed to. I’d have to dig a little deeper, but I suppose they’re also including more traditional, predictive, or analytics-based AI in terms of the algorithms being used. But anyway, let’s take a look at the actual survey result here. 50% of organizations say that they explain how the AI application works.
They also say that a human is involved in the process, and that they’ve instituted an AI ethics management program; in each case, roughly half of the respondents said that their organization does this. About a third said they actually audit the application for bias. This is really interesting, because if we think about it, what do people really care about when we’re talking about machines and artificial intelligence? They want to know that it is not just some black box where data goes in and who knows what’s being fed to the algorithm to generate an answer. And really, if you look at it in terms of certain applications, let’s just take retail, for example: yes, it’s important that if a customer uses some sort of generative AI tool to look for clothing or something like that, it works appropriately and captures all of the right elements to provide a relevant response or a relevant suggestion. But I think the bigger question here, and this is where I think the survey points out a real deficiency, is the lack of auditing applications or algorithms for bias. And in some product categories, it’s more important than others. If you think of a consumer example, things like loan scoring, or applying for housing, or anything that impacts a major factor in someone’s life, people really want to make sure that that application or algorithm has been audited to confirm there aren’t data points in there skewing the way the algorithm works in such a way that its results are biased.
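To make that periodic-audit idea concrete, here is a minimal sketch, not drawn from the survey, of two checks an organization might run: a population stability index as a rough drift signal, and an approval-rate gap across groups as a rough bias signal. The metric choices, thresholds, and simulated data are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of a periodic model audit (illustrative only): compare the model's
# current score distribution to its training-time distribution, and check whether
# approval rates differ sharply across groups.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Rough drift signal: how far the score distribution has shifted since training."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) / divide-by-zero
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

def approval_rate_gap(approved, group):
    """Rough bias signal: largest difference in approval rates between groups."""
    approved, group = np.asarray(approved, dtype=bool), np.asarray(group)
    rates = {g: float(approved[group == g].mean()) for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref_scores = rng.beta(2, 5, 5000)        # scores captured at training time
    cur_scores = rng.beta(2.5, 4.5, 5000)    # scores observed this review period
    psi = population_stability_index(ref_scores, cur_scores)
    gap, rates = approval_rate_gap(cur_scores > 0.5, rng.choice(["A", "B"], 5000))
    print(f"PSI: {psi:.3f} (values above ~0.2 are often treated as meaningful drift)")
    print(f"Approval-rate gap across groups: {gap:.3f} {rates}")
```

In practice, a check like this would run on a schedule against the organization’s own scoring data, and anything that trips a threshold would go to whoever owns the model for review.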
So I think there’s a real opportunity there, again, for organizations to go through and make sure that they periodically audit the applications and the algorithms to make sure they’re doing what they’re supposed to be doing: obviously, first, making sure that there isn’t bias there, but also making sure that there isn’t model drift, where an algorithm that was initially trained on certain data starts ingesting other information and no longer behaves the way it’s supposed to. Now, let’s move on to figure 17, and this is interesting because we’re starting to get into the question of concerns about generative AI. Clearly, it’s fundamentally different from really anything else that has been introduced, both as a consumer-facing application and as something being used within organizations. Now, users have a number of different concerns here. If we take a look at this chart, we can see that end users who have generative AI within their organization are concerned about the company’s legal IP rights. They’re concerned that information could be shared with the public, or that it could be shared with competitors.
Of course, we’re always worried about hallucination. There’s also this concern about being detrimental to humanity; I assume they’re worried about things like a lack of human involvement in influencing results, or not being able to influence results at all. There’s, of course, that age-old fear about how AI or generative AI could replace other employees, and then, of course, fears about their own jobs. But I think the most interesting ones here, really, are the top two, because this gets into a really interesting issue, particularly for enterprises, which have probably, over the last year or so, started to let their employees experiment a little bit with generative AI, or just didn’t know what was going on while employees were hopping onto the ChatGPT website, or Bard, or whatever the case may be, and might have been screwing around with it and inadvertently putting information into those services without realizing what the ramifications were. And if we look at figure 18, this is getting right to that question. This question asked, “What information have you put into a ChatGPT-like interface?,” and we can see here information about internal processes, non-public information about your company, employee names, other information about those employees, or even customer names and their information.
This is all pretty scary stuff, because if you think about ChatGPT or Bard or something like that, once you put that information in, it’s as good as public, like slapping it up on one of those old-fashioned electronic bulletin boards. And you can see there, for information about internal processes, 62%, almost two-thirds of respondents, admitted to doing that. That is pretty scary, and I think one of the reasons this is so high is that generative AI was introduced so quickly, and it just became one of those gee-whiz, shiny metal objects that everyone can’t stop looking at, playing with, and screwing around with, that enterprises really weren’t prepared. They weren’t prepared for people who might just go in there and type something in without even understanding what the ramifications were, or thinking that it was a closed system when obviously it was not. So it was a real concern, and I think the results of the survey demonstrate that it wasn’t just a small minority of people doing it. It was a lot of people doing it. So what are companies doing about it? If we pull up figure 19, we can see the survey asked about generative AI controls.
And you can see here that 63% of organizations had data limitations, 61% had restrictions on tools, 36% had data verification requirements, and 27% said that their organizations did not allow generative AI at all. Now, I think there are a few things here that we need to dig a little deeper into. In terms of things like tool restrictions, data verification, and data limitations, all of that kind of stuff might be well and good on a company’s own systems. If you’re using the company’s systems, they can clearly say you can’t go to these sites. It’s pretty easy to put blockers in and say, “If you type in bard.com, you will get blocked, and you won’t be able to use that”; I’ll show a quick sketch of what that kind of check looks like in a moment. The challenge, of course, is that if an employee is working from home on their own device, or just using a different device at work that is not company owned, there’s a chance that they could still be doing these things and bypassing all of the controls that may be put in place. I think, really, what it comes down to is it’s almost like talking to a child about not smoking.
Yes, you can do everything you can to hide the cigarettes, put them up on a high shelf, put them under lock and key, but really, the most effective way to keep someone from smoking is educating them about the harm it can cause, not just in the short term but in the long term, and I think that’s the strategy that needs to be deployed within organizations when it comes to generative AI. It is about making sure employees understand that if they type something in there, that information is going back into that model, the model is going to be trained on that data, and it’s going to expose that data, and that is a real concern for organizations that are trying to make sure their private company data stays private. I think there was a real lack of understanding at the beginning about what the good use cases for generative AI were, and people thought, “Okay. Well, hey, I really don’t feel like writing this report, so I’ll just pop in all the stats and say, ‘ChatGPT, write a report based on these figures,’” not realizing that all of that information is being captured by that model. Ultimately, it is incumbent upon organizations to not just say, “No, you shouldn’t use it,” but to lay out exactly what happens when employees put information into those tools, and what really is at risk.
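For what it’s worth, here is the quick sketch of the kind of simple domain blocking mentioned above, as it might sit in a proxy or browser extension. The domain list and function names are illustrative assumptions on my part, not any particular vendor’s actual blocklist or API.

```python
# Minimal sketch of a deny-list check for generative AI sites (illustrative only).
from urllib.parse import urlparse

# Example entries only; a real deployment would manage this list centrally.
BLOCKED_DOMAINS = {"chat.openai.com", "gemini.google.com", "bard.google.com"}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked generative AI service."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    for url in ("https://bard.google.com/chat", "https://intranet.example.com/wiki"):
        print(url, "->", "BLOCKED" if is_blocked(url) else "allowed")
```

Of course, as noted above, a check like this only covers managed devices and networks, which is exactly why the education piece matters more than the blocklist itself.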
So I think there are certainly some tools out there in the market that are designed to help that education process; instead of just saying, “Okay, you can’t go to that site,” they’ll explain why, and that’s certainly helpful. But ultimately, a better strategy is also to have an authorized tool where responses are grounded: basically, you’re allowed to use this particular tool, but it is restricted to your own company’s systems, and any information employees are looking for or want to use is grounded within the company’s own data source, so you’re not going out to the regular old internet and exposing all that information, because ultimately, that’s going to be a real problem moving forward. I’ll include a small sketch of what that grounding looks like at the end of this discussion. So I think, overall, this is a really interesting study, and it’s worthwhile to go to Cisco’s website, download it, and take a look. It is available for free, and you can really dig into the results. I’m also going to be writing a research note where I go a little further into detail on some of the information within it, but ultimately, I think there are a few recommendations that come out of this. Obviously, it’s all about transparency: “How is data collected?,” “What data is collected?,” “How is it stored?,” “With whom is it shared?,” “How is it protected?”
You need to make that clear to consumers who are working with your company, and then, of course, make sure that information is also available to your own employees, because ultimately, there is a real concern as well about the information that is collected internally. In terms of personal information, you obviously have a lot of your own personnel records, and you want to make sure there is an appropriate mechanism for protecting that data. In terms of managing algorithms, it comes down to making sure there’s a periodic review: going through and confirming that algorithms are still doing what they’re supposed to be doing and that no bias has crept in, either because of a data-weighting issue or because the initial data sets were somewhat biased. It’s just important to go back and check, make sure that the models are doing what they’re supposed to be doing, remaining in compliance with all laws, and returning results that make sense, and, again, to make sure that you communicate this to any stakeholders who might be involved with those algorithms.
Then, of course, when it comes to generative AI, make sure you have the appropriate control mechanisms in place for using generative AI applications, but more importantly, educate employees so that they understand what it is they’re using, why it’s important to protect data, and why it’s important to use only approved tools, particularly when you’re working with company information. This is going to be an ongoing challenge, particularly as we start to see even more advanced tools coming to market. I’m thinking specifically of things like generative AI-powered image generation or sound manipulation, that sort of stuff, because you can get into some fairly questionable or legally murky situations if those tools are not used properly and appropriately, and you’re not only putting yourself at risk, but also the company at large. So with that, I’m going to wrap up that section. And again, be sure to take a look at my research note, which I will be publishing in the next week or so, going into more detail on this survey.
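As a footnote to the authorized, grounded-tool idea above, here is the small sketch I promised of what grounding a prompt in a company’s own data source might look like. The document store, the toy keyword-overlap retrieval, and the generate_answer() call are all hypothetical placeholders, not a real product or API.

```python
# Minimal sketch of grounding a generative AI prompt in internal documents (illustrative only).
INTERNAL_DOCS = [
    {"id": "hr-001", "text": "Expense reports must be filed within 30 days of travel."},
    {"id": "it-014", "text": "Company data may not be entered into unapproved external AI tools."},
]

def retrieve(question: str, docs, k: int = 2):
    """Rank internal documents by naive keyword overlap with the question (toy retrieval)."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d["text"].lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to company sources so answers stay inside the firewall."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(question, INTERNAL_DOCS))
    return (
        "Answer using only the internal documents below. "
        "If they do not contain the answer, say so.\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt("When do expense reports have to be filed?")
    print(prompt)
    # A real deployment would send this prompt to an internally hosted or contractually
    # protected model, e.g. answer = generate_answer(prompt)  # hypothetical call
```

The point of the design is simply that employees get a sanctioned assistant whose answers come from company-controlled sources, rather than typing company data into the open internet.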
Now, I’d like to close out this week’s episode by coming to the Rant or Rave section. And today, I’ve got another rave, and I’m so pleased to be talking about this one. So this past week, Pegasystems released a new generative AI-based assistant technology into the market. It’s designed to make it easy to handle really any kind of administrative-type use case out there: marketing, sales, anything like that. Basically, it’s a generative AI-based assistant designed to reduce busywork. It handles things like content generation, summarization, all of those good things that we’ve seen out in the market. But the thing that I’m really raving about is that they’re calling it the Pega GenAI Buddy instead of copilot. It is so refreshing to see an organization use a different moniker than copilot, and call their generative AI assistant something different.
So definitely kudos to Pegasystems for calling it the GenAI Buddy. I believe the research note I just wrote on this is also up on The Futurum Group website, so you can see some more details about what they have to offer, but it’s a great name because it is different, and it will help them differentiate from the dozens of competitors out there who seem to be using copilot or something like that, which is not to say anything is wrong with any of the other technologies; it’s just a little bit hard to differentiate when everyone is using the same sort of name for their own tool. So, again, kudos to Pega, and that is my rave for the week. Well, that’s all the time I have for today. So I want to thank everyone for joining me here on Enterprising Insights. I’ll be back again next week with another episode focused on the happenings within the enterprise application market. So thanks to everyone for tuning in. Be sure to subscribe, rate, and review the podcast on your preferred platform. Thanks, and we’ll see you next time.
Author Information
Keith has over 25 years of experience in research, marketing, and consulting-based fields.
He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.
In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.
He is a member of the Association of Independent Information Professionals (AIIP).
Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.