On this episode of The Six Five – Insider, hosts Daniel Newman and Patrick Moorhead welcome Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM, for a conversation on AI and the importance of her role as Chief Privacy and Trust Officer.
Their discussion covers:
- An introduction from Christina Montgomery about her role and what it entails at IBM
- The role she played, as Chief Privacy Officer, in the development of IBM’s AI platform, watsonx
- The concerns clients have about AI and how IBM is addressing them
- Her recommendations on where organizations can begin when looking to adopt AI
Learn more about IBM’s AI platform, watsonx, on the company’s website.
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: And The Six Five is back with another conversation on AI. Dan, in 2023 we talked a lot about AI, and here we are in 2024 talking about it even more. And that’s because AI can do some incredible things for the enterprise.
Daniel Newman: Yeah, it’s a continuation, Pat. For 2023, I’m starting to hear these quippy characterizations like, “It was the year of the GPU. It was the year of the infrastructure. It was the year of Gen AI,” and now I’m saying 2024 is the year of implementation. 2024 is the year when we start to see all this investment shift, we’ll always be training, but from training to inference. And as you move from training to inference, people are tapping into that data and using it for the enterprise, using it for consumer reasons, using it on their mobile devices and on their PCs, wherever they’re using it. They’re figuring out how to use it not only for good, but to use it safely: having the data protected, being in compliance, working closely with government and regulators. Because this is the most fundamental shift we have seen since, what, social, since the internet? I don’t know if there’s been a transformation trend yet, Pat, that’s bigger than this one.
Patrick Moorhead: Yeah. In these big transitions we’re seeing, sometimes the whole concept of privacy and trust is not discussed a lot. I think we saw that with the social media revolution, mobile, local, social, right? But when it came to generative AI, companies like IBM were all in on talking first about the privacy and trust that’s required to pull this off, particularly for the enterprise. And when you have something like LLMs and generative AI, that’s even more important.
And with that, I’d love to introduce IBM’s head of privacy and trust, Christina Montgomery. Welcome to The Six Five, first time. We’ve been talking a lot about IBM AI on here, and I feel like this closes the circle: the AI portion, the data portion, and now the governance, which is all about privacy and trust. Welcome.
Christina Montgomery: Thank you. Thanks so much for having me.
Daniel Newman: Yeah, Christina, so let’s start off talking a little bit about the role. Look, I think you heard Patrick say it, and I’ve said it. We’ve been on the record many times, both in broadcast and on our show here, saying, “Hey, look, IBM got to market fast with the first generally available enterprise Gen AI solution, but with governance and compliance in mind,” and it has been with a strong bent towards privacy, trust, compliance, and governance. Your role is Chief Privacy and Trust Officer, and IBM was early with the Chief Privacy Officer title you previously held. Talk a little bit about what this Chief Privacy and Trust Officer role entails for you at IBM.
Christina Montgomery: Yeah, so when I took over the role, it was essentially to build out an operational infrastructure and an AI ethics board around the AI principles for trust and transparency that we put into place. And those were that AI should augment, not replace, human intelligence, and that it should be transparent, fair, and explainable. And then from a business model perspective, IBM is an enterprise provider. Our clients are some of the world’s biggest businesses. We also take the position, and have a principle, that we’re not going to use clients’ data. It belongs to the client. So unless a client asks us, we’re not training our AI on that.
But we knew we needed a governance framework, essentially, to hold us accountable to those principles, and then to work on building them into practices across the company. And that’s essentially what I’m doing in this job. My team is responsible for that cross-company collaboration, for providing our business units with the information they need to comply with global regulation, but also for holding ourselves accountable to the principles we articulated on this topic, importantly, a few years ago.
Patrick Moorhead: We’ve done a lot of interviews on watsonx over the last couple of months. It really kicked off at IBM’s big customer event in Florida. And then recently, Dan and I chatted with Rob Thomas and Dario Gil about watsonx. When we think about any type of product like this related to AI, we talk a lot about, “Okay, how does research connect with development?” And IBM actually threaded those two groups together to get to this time to market with something that is very technically challenging. But in fact you, your role, played a huge part in this development process. Can you talk about some specifics there?
Christina Montgomery: Yeah, I know you spoke to Dario and Rob about how far the company’s come in terms of collaborating and bringing research technologies into practice really quickly. And that’s so important here, because we have such a rapidly evolving technology landscape. We also have a rapidly evolving regulatory landscape in this space, and it’s happening at the same time. So what my team does is contribute back to the business in a number of ways. We contribute by informing and supporting the clearance of products and the like, and then by adopting and using that technology internally ourselves.
So, what do I mean by that? First, we have an AI ethics board that is one of the first in the industry. It’s fairly mature in its practices, and we have helped contribute some of those practices to the technology itself: things like workflows built into the product capabilities, things like fact sheet capabilities, working with the research team and the like. The AI risk atlas, which is part of the product documentation for watsonx, is a work effort that basically started from the ethics board. And the way our board works, I should also mention, is very cross-disciplinary. It has representation from every business unit. So for something like the risk atlas, we can drive the work to identify the new risks associated with generative AI, for example, and then that can be contributed back to the product.
The second way is that we supported the review. And I know you spoke to Dario in particular, and Rob as well, about the data clearance efforts, data provenance, and how important it is to understand the techniques that go into developing something like our foundation models, and the data being used to train them, so we can stand behind it for our customers and provide IP indemnities, copyright indemnities, and the like. My team played a significant part in that data clearance process as well. And then finally, we’re a living lab for the company. We’ve been building our privacy compliance program at scale on IBM technology, and now we’re doing the same thing with the AI capabilities, by using and helping to inform those product capabilities, testing them, and being client zero within the company for watsonx.
Patrick Moorhead: Is there the ability to throw a flag, or something like that, in the process? I’m just curious, since we have this trifecta here between research, product development, and the trust and privacy group. Is there a vote, or the ability to throw a red flag like, “Hey folks, let’s take a pause”?
Christina Montgomery: Yeah, it’s not the first time a company has had to implement governance around new products. Think of something like privacy and security by design. You guys are well familiar with that, right?
Patrick Moorhead: Any type of PII information, has-
Christina Montgomery: Correct.
Patrick Moorhead: Yeah.
Christina Montgomery: So yeah, absolutely. And what we try to do is iterate, and that’s part of the value of adopting the technology. I always say my job as a privacy professional and an AI governance professional is so much more interesting working within a technology company, because we can be the early adopters and test: is this really working? What are the new techniques with respect to anonymization or pseudonymization, or, now, with respect to filtering and prompting? There are so many new issues and so many new techniques, and we can help with that directly by using them within our own practices here.
Patrick Moorhead: Thank you.
Daniel Newman: So this whole AI thing has a lot of complexity, Christina. Of course, we all enjoy the new capabilities it’s given us, anybody that’s played with any of these large language models. It’s an amazingly useful tool. Of course, we’ve got a lot of questions about accuracy. We have a lot of questions about the rights around created, proprietary, licensed information. There are going to be new advertising models for how things get prioritized, just like in the search era. There are going to be new means, and that’s on the more consumer-facing side.
In the enterprise, there’s always going to be this mix, like we talked about with PII, so you’ve got a lot of challenges. That’s what I’m seeing as an analyst, but what are you hearing from clients? What are they sharing with you about their concerns around AI and generative AI, and how are you and IBM addressing those concerns?
Christina Montgomery: So first and foremost, our clients need AI that is trusted. They need to know that with the output of the AI, particularly if they’re using it in the context of customer service and the like, they’re going to get accurate answers. So, it has to be trusted. And a lot of that does come back to traditional issues associated with AI. In the generative world, there are issues around privacy, issues around misinformation, issues around bias and discrimination, and those are all concerns. But first and foremost, it comes back, fundamentally and foundationally, to what data is going into the AI, and how do I know that the outputs coming out of it are going to be accurate, and the like. So lots of concerns there.
I think also with generative AI, there are new issues around, “How do I make sure that my confidential information isn’t going to be used to train AI chatbots or other models and be exposed that way? How am I protecting it? How am I protecting my intellectual property in the context of these new AI models?” And then the third thing I hear a lot, particularly from my peers at other companies who are responsible for regulatory compliance, is that there is a lot happening from a regulatory perspective in this space right now. And so I think it’s, how do I make sure that I am ready for the regulations that are coming, and how do I make sure that the AI I am adopting is going to be compliant with those regulations?
Patrick Moorhead: Christina, you’ve testified in front of Congress, which I guess, fortunately, I have not had to do in my 30-some years. I’ve testified in front of a lot of government agencies, but not the full-up Congress. So privacy, ethics, and trust around generative AI are getting the biggest attention you can get, on not only the national but also the world stage. One of the things I’ve noticed about IBM, being in and around the company for over 30 years, is that you go big with things, and you are a collaborator with other companies and institutions to drive big change.
And one of these that you recently launched was the AI Alliance. Sure, it was about technology, it was about models, it was about the hardware and the innovation, but it was also about getting on the same page as it relates to safety and trust, the conversation that we’re having right now. What do governments need to consider here, maybe being part of this? How do they intersect with this? Basically, how do they make it happen?
Christina Montgomery: I think when it comes to responsible technology adoption, and basically supporting what’s next for the world, you talked about what a moment we’re in. I think governments and companies both have roles to play. We articulated our first point of view around AI regulation in 2020. We’ve since refreshed that, but it’s essentially the same points, which are, from a government perspective, that governments should enact precision, risk-based regulation.
This is a technology that needs to be regulated in context. We don’t want to over-regulate, because you want that balance that supports innovation. And there are many, many low-risk uses of AI that are hugely impactful, and we talk about the possibilities and the potential for AI to solve global challenges. So, smart precision regulation for governments. And then companies should be held accountable for the AI that they are adopting and deploying, and that means adopting governance internally. So holistically, it’s smart regulation at the point of risk, and responsibility on the part of companies. And then, where the AI Alliance comes in is, we want the future of AI development to be open and inclusive.
And so, that’s where our alliance comes in. We don’t believe that the AI of the future should be developed by four companies. This is a technology that needs to lift all boats. The AI Alliance is focused squarely on that point: supporting open innovation, and identifying and mitigating risks. How do we do that? Because there have been concerns articulated, and the AI Alliance is tasked with helping to address those concerns.
Patrick Moorhead: I really love that balanced approach, because quite frankly, what happens is, if there are a ton of regulations, the bigger companies are naturally the only ones that will be able to comply. What that means is, only the larger companies will be able to provide innovation. And last time I checked, smaller companies bring a lot of the innovation. Small companies become big companies. Big companies like IBM innovate, but small companies play a role in this as well. And I’m really glad to hear you say that, because to really raise the water for society, and for what generative AI and AI can do, it’s going to take a whole lot of companies doing this responsibly, and not being overregulated. Like Goldilocks and the three bears, right? Take that just-right approach in the middle.
Daniel Newman: Pat, if I can add, I think about the importance of balancing. We’ve seen companies that will run over regulation because they realize that the cost of running it over is less than the benefit to their business growth. With the EC, there are a number of companies that have repeatedly been fined for breaking privacy and regulatory rules. But I think there’s a question mark of, one, how fast you’re pushing innovation, and two, how the rule is interpreted. Of course, lawyers would love to play in that gray space. But for companies, it’s a balance. It’s always a balance of, “Hey, we’re going to go fast.” I think that’s where a lot of the indemnification came in: companies like IBM want to help their customers go fast, but then have to give them the release valve of saying, “Well, you might go really fast and something could go wrong.”
We’re also so confident in our grounding, so confident in our technology, that we’ll support you in the event that that happens. And I think that’s the public/private trust assessment that’s being done, public-private, and private-private between companies, saying, “Hey, we all want to do this. We want to do it ethically. It won’t be perfect. There will be mistakes. There’s no way it’s going… But we’re willing to bet and put our money where our mouth is.”
So as we wrap this up, Christina, I’d love to just get your recommendations. You’re talking to customers, you’re acting on behalf of IBM, one of the leaders in enterprise AI and generative AI for enterprise solutions. What are you recommending to companies that are looking to start adopting and implementing, as I suggested at the beginning of the show, generative AI and AI at scale in 2024?
Christina Montgomery: I think to bridge from the conversations we’ve had, which are around the product capabilities, the need for trust, and also this whole need for compliance, this is an area where the incentives are one and the same. Because you can’t… I really do believe that putting guardrails in place in your company, and doing things like AI governance holistically at scale, using technology and the like, is not only going to help address regulatory compliance as regulations come on board; immediately, it’s going to help build trust and accuracy and make the AI more valuable.
You’re not going to have valuable AI if the data’s garbage and you’re not getting value out of what you’re putting into it. So, what I always say is, first come up with a business strategy as a company. Where and how do you want to use AI? What does your data look like with basic data governance in place? And that’s where IBM comes in, with longstanding expertise around global risk and compliance, and around knowing data and helping manage data for and on behalf of clients.
And then we’ve done things, too, like a maturity assessment for responsible AI. That’s something we worked with our consulting team to help put together. This sort of AI maturity assessment asks: do you have a set of principles, from a values-based perspective, around what types of AI you’re going to adopt? Meaning, do you say that you will use it in the context of augmenting humans, or that you’re going to strive for transparency and explainability? And then it’s about holding yourselves accountable through a governance framework internally. You can put that in place by first starting with the strategy, taking that maturity assessment, and building a governance framework. And that will ultimately accomplish regulatory compliance, as well as helping you build AI that contributes value to your business.
Daniel Newman: Well, Christina, I want to thank you so much for taking some time here with Patrick and me to talk about this. I would say with some level of confidence that one of the biggest and most important continuing themes, as we see all this investment turn to implementation, is going to be the continued conversation around privacy. I said trust and privacy together: privacy, trust, regulatory, governance, compliance, all words with slightly different meanings. But these are big, big focal points, and that means you’ve got a big job ahead of you. We look forward to continuing to follow what you’re doing. Hopefully you’ll come back and join us again here on the show sometime soon.
Christina Montgomery: I would love to. Thank you for having me today.
Daniel Newman: All right everybody, thanks so much for tuning in today. Hit that subscribe button and join Patrick and me for all of our Six Five Insider conversations. We promise you AI is going to be in focus all year long. But for this episode, for Patrick and myself, it’s time to say goodbye. So, we’ll see you later.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in The Wall Street Journal, and across hundreds of other sites around the world.
A seven-time best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.