Search
Close this search box.

Securing the Next Frontier of AI Innovation – Six Five On the Road

Securing the Next Frontier of AI Innovation - Six Five On the Road

On this episode of the Six Five On the Road, host Shira Rubinoff is joined by IBM‘s Akiba Saeedi and Scott McCarthy, for a conversation on securing AI against emerging threats and integrating cybersecurity into the fabric of AI innovation.

Their discussion covers:

  • Current market trends and concerns from clients regarding AI
  • The importance of Governance in AI and the role of cybersecurity
  • Identifying and mitigating risks associated with AI technologies
  • The essential consideration of cybersecurity as part of AI development
  • Insights into the burgeoning field of Quantum technology and its implications

Learn more at IBM.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Shira Rubinoff: This is Shira Rubinoff, President of Cybersphere, a Futurum Group company. I’m on the road with Six Five here at RSA 2024. I’m very happy to be joined by Akiba Saaedi, VP, IBM Security, Product Management and Data Security, as well as Scott McCarthy, Partner, Product Management and Cyber Security Services. Welcome to the show.

Akiba Saaedi: Thank you.

Scott McCarthy: Thank you.

Akiba Saaedi: And Scott and Akiba, can you please introduce yourself to our audience? Tell them a little about yourselves, so the role that you have here at IBM.

Scott McCarthy: Sure. My name is Scott McCarthy. I’m based in Boston, Massachusetts and have a 25-year career in cyber security. I lead product management for cyber security services. It’s a consulting and managed security services portfolio.

Akiba Saaedi: And hi everyone. My name’s Akiba. I live in New York, my favorite city in the world. I lead our product management for our data security portfolio. We are all about protecting data. I’m a little younger than Scott in terms of the cyber security time, about nine years, but more than 20 in a lot of different enterprise software spaces. So happy to be here and talk about all the cool new stuff going on.

Shira Rubinoff: Excellent. Well welcome. So much buzz about AI, certainly here at RSA, but that’s been the topic on everybody’s mind all the time, whether it’s from a positive standpoint, negative standpoint. So Scott, what are you seeing in the market today with clients related to AI?

Scott McCarthy: So what we’re seeing is incredible pressure to innovate with AI so that clients can improve productivity, drive growth initiatives, et cetera. You can’t follow anyone’s quarterly or annual report without the mention of AI. If you don’t hear about it, the stock goes down. If you do hear about it, the stock goes up. And so these organizations are pursuing this innovation and they’re talking to us about helping them transform on this AI journey. And from my perspective, we’re helping them from cyber security services secure that journey.

Shira Rubinoff: Well, that’s certainly important. And Akiba.

Akiba Saaedi: Yeah, I think just picking up where Scott left off, that’s really the big issue is how do we secure that journey and how do we do this differently than it’s happened in the past? Because we’ve seen other mass movements, mass movement towards cloud, mass movement towards application modernization and security was a bit of an afterthought and it came later. And we’re already seeing evidence that that’s already happening again. Right? There’s a mass rush to get generative AI deployed in organizations. And the people in charge of building the generative AI are not necessarily security experts. The security teams don’t necessarily know what AI life cycles look like, and it’s a whole new threat landscape that they have to figure out.

So there’s an awful lot right now. There’s a lot of education. That’s why it’s such a topic here at RSA, and that’s why we’re talking about all of this all the time is because people are really grappling with what are the top challenges? How do I think about this? And it’s really a shared responsibility. It’s a shared responsibility by those that own the AI models and those that are responsible for security in the organization. So given that it’s a shared responsibility, how are we actually going to come together to actually solve for some of these things? So there’s early days, early in the maturity cycle. What I’m hoping is that we’re going to do enough focus on secure by design from the get-go this time around.

Shira Rubinoff: And those are excellent points. And certainly we talked about it that AI is in its infancy. We certainly don’t understand all of it yet and how it works, and what needs to happen. And as you mentioned, it’s not a one spot in the organization that owns the AI. And we have AI being touched upon by many different facets of the organization, but they’re not talking to each other. And as it’s being deployed or developed, it needs to happen in a symbiotic way that’s good for the organization in a way that it’s a positive outcome.

Akiba Saaedi: Exactly.

Shira Rubinoff: I know. And as you mentioned, the rush towards it because we have to keep up with everyone else. That’s right. Very good points. Thank you for that. And there’s a lot of focus on government and AI. And where does cybersecurity fit in that?

Akiba Saaedi: So I think at the highest level, if you think about what is the risk in generative AI, it’s about trustworthiness. You can’t use something that you don’t trust. So there’s a lot of focus on, well, what makes an AI model trustworthy? Can it be biased? Can it hallucinate? Can it be manipulated? But that can it be manipulated is all about its security and its security components of how that entire pipeline and AI pipeline and lifecycle is secured. So you don’t have trustworthy AI if it’s not secure.

So how do you establish that? And a lot of the AI governance frameworks and a lot of the technologies, the teams, the methods, they really focus on all the other elements. And what we’re really advocating is that security has to be right there at the same position as bias and drift and hallucination. It’s part of the risk around a given model. There’s already a risk framework around AI models. Security has to be an important component of that.

Shira Rubinoff: 100%. Security always has to be built in from the ground up. And we’ve seen with different technologies and different advances in our systems and everything else that if security is not built in from the get-go, it’s just a problem waiting to happen. Exactly. And Scott, what are your thoughts on that?

Scott McCarthy: My thoughts? First of all, in total agreement. So we’re helping from a consulting perspective, our clients establish that framework within the organization and establish a shared responsibility model so that there’s clarity on roles and responsibilities in the organization. We’re also helping them understand the new risks that these technologies introduce. There are traditional security solutions that are required to secure AI applications, things like data security, credential management, protecting the infrastructure. So those traditional methods and controls are required.

But then with the advent of new risks, like things like prompt injection and other new risks that GenAI introduces and AI introduces, we’re having to bring new methodologies. We’re having to bring new partners to the table from our ecosystem partners to help solve those challenges.

Shira Rubinoff: It’s important thing that you also mentioned. Obviously everything’s important to talk about, but our whole frame of looking at security has changed that we are now reliant on partners and bringing in partners as we’re better together. And I think in the past it was very much a siloed effect of they do what they do well, we do what we do well, and let’s just see what the outcome is. And I know that IBM is a leader in looking at that and understanding that with partnerships, they’re able to bring the best of breed to the table and make what they have even that much better and let the others utilize what they have to make their solutions much more effective along the way.

Akiba Saaedi: I think that too, that the framework that you establish, right, most large organizations have some sort of governance framework. It has to be, and there’s a compliance team that’s a part of that as a stakeholder. There’s the AI, the data, and the AI teams that are part of that as a stakeholder. There’s the security teams, there’s legal, there’s risk, there’s a lot of stakeholders within that. That is the place that you have to come together in thinking about how you will employ new policies and how you will think about now putting for the new types of risks that exist around AI as an extension around.

So that there is, and that’s part of what we want to see evolve as well, is that companies are really not taking the siloed approach internally either. I can’t emphasize the shared responsibility enough. And that means you have to be designing this together and you have to be thinking about each other’s perspectives so that you’re actually designing the right thing from the start, like you mentioned, secure by design.

Shira Rubinoff: Certainly. And also you did mention that you have to think about the new security risks around AI. I think a lot of organizations are looking at it as what is AI going to do for me to either better, faster, stronger, more insight, all these positive, wonderful things that we’re looking at from a positive lens. But there’s also the negative things that we have to look at. What are the security risks? What are the issues that we have to contain and secure in order to utilize AI for the good? So that’s also something we have to think about.

So we certainly think about cybersecurity when it comes to really any advancing any new technology or a technology that we’re moving along within our ecosystems, whether it being cloud, whether it be in IoT and certainly in AI. How would you feel or how would you discuss cybersecurity when it comes to AI?

Scott McCarthy: Well, so we look at it from two dimensions. One is securing AI and one is AI in security. And I think both are going to be important because it’s not going to be sufficient to have just humans alone protecting the organization against AI. So we’re helping clients deploy AI technologies from IBM and our ecosystem partners into their security operations teams and programs so that they can better defend the organization, assess the risks, understand where they’re vulnerable, understand that attack surface, but also incorporate these new GenAI applications into their incident response plans and procedures.

And then from a securing AI perspective, it’s leveraging the traditional methods as we’ve talked about, but also looking at threat modeling of the LLMs and of these applications, testing the applications before they’re moved into production and ensuring that new threats like the pumped injection and others can be prevented.

Shira Rubinoff: Can you talk to a use case specifically that you’ve seen in IBM that really talks to what you spoke about? And I think that would be very beneficial for our audience to really understand what you’ve just described.

Scott McCarthy: Well, I’ll share a client example. So I was on a conversation with a client. IBM was working with a client to deploy new customer transformation application that was GenAI powered. So it was going to be interacting with their clients. They were running a POC, but they hadn’t factored security in from the start. So as I was explaining the risks involved in securing GenAI, the client said to me, “This isn’t the POC. The team that’s deploying the application hasn’t been able to answer any of the questions that our security teams are asking. And now we’re concerned we won’t be able to move into production.” So we came to the table with our expertise, our tools, our ecosystem partners, and we ensured that the application was secure so that we could move it into production.

Shira Rubinoff: Oh, that’s certainly very important.

Akiba Saaedi: So we’ve actually developed a framework for how to think about security for AI. And I’ll draw it.

Shira Rubinoff: Draw it for us.

Akiba Saaedi: I’ll draw it here, right? There’s really three major components if you think about the new attack surface, the new threat surface, right? We talk about secure the data, secure the model, secure the usage. Okay? So the data and all of the elements that come into that training data, not only if you take a third party model that’s been trained by external data, but most organizations, if they’re really making it useful in their own context, they have to link their business data to it. So there’s a whole set of issues and series around data poisoning and data exfiltration and all these other things.

So secure the data is one part of the framework that you have to really think about and how you’re going to go about doing that. The model itself can be manipulated. And so how you secure the model, how you really understand all of its elements. And then the usage, the applications that, as Scott was just mentioning, that are connected to that model and the data that’s actually connected to the model and which applications. So you have to understand that entire landscape and its connectivity to be able to identify where you might have risk vulnerabilities across all of that landscape.

So that’s kind of the three major things that are new about generative AI. Underpinned by secure the infrastructure, which Scott’s already talked quite a bit about, underpinned by the entire AI governance around it. So if you take that framework and you really sort of drill into each of those areas, you start to uncover and think about really where you need to have appropriate focus in how you’re thinking about securing AI models.

Shira Rubinoff: That’s true visibility into it, which I think-

Akiba Saaedi: Exactly.

Shira Rubinoff: How do you move forward in the organization without the proper visibility. And I think that beautiful model that you just drew, shadow AI. We’ve had shadow IT, shadow data, shadow everything. Right now we’ve got shadow AI, and that visibility is like step one. You have to have visibility to what’s even operating in the organization because that alone is kind of a starting point and that’s where a lot of organizations are right now because anybody can bring anything in. Even if there’s a policy, there’s nothing that physically stops somebody from downloading a third party model and deploying it somewhere inside the enterprise.

So really even understanding what you have deployed and getting that level of visibility, there’s so much more that’s going to evolve in this space over the coming months and years, but it sort of starts with visibility.

Akiba Saaedi: Certainly. I know a lot of CISOs I spoke to, is they just don’t have the visibility.

Shira Rubinoff: They just don’t know what they own. They don’t know what they have. They don’t know where things sit or catches it. And a lot of different solutions out there are kind of giving the overall scheme of things, but not diving down to the deep nitty-gritty of which is critical.

Akiba Saaedi: And that’s what we’re working on from the technology perspective. So we are actually working on a technology that will look at that end to end pipeline in terms of how we think. We are using that framework I described to think about how we’re even engineering the technology to help solve the problem and automate those issues and visibility and how you prioritize and how you actually go address some of those issues.

Shira Rubinoff: That is something very much needed in our ecosystem. So very much looking forward to hearing about that and seeing that. And AI is a big topic on everyone’s mind right now, but quantum also is rapidly gaining more interest. What are you seeing happening on this space? Because everyone’s thinking it’s five years out till we actually see something happening, but we all know that’s not true. It’s in its infancy yet there’s things happening. What can you tell us about that?

Scott McCarthy: Well, we’ve been working with clients for over five years now on this subject, the more mature clients, especially large financial institutions in Europe, in the United States, et cetera. We’ve been partnered with teams within IBM software, within IBM research to develop tools that will allow clients to assess where they may be vulnerable. And from a methodology perspective, we help clients. It’s not a point solution to fix the problem once. It’s about becoming crypto agile. And we have an approach to allow clients to get on that journey today, understand where the risks are, but also modify their approach to application development and managing keys and other aspects of their application stack and prepare them for the eventual risks that quantum will introduce.

Shira Rubinoff: How far out do you think we are from actually seeing that become provable?

Scott McCarthy: So the estimates vary. I don’t know if you have …

Akiba Saaedi: The estimates vary in terms of when will a quantum computer actually be in a position to break traditional cryptography. But I think that it’s now in the sense that I think when people start to really dig into understanding … Cryptography is so widespread, and just like application modernization and when you get into it right it’s a multi-year journey, so is this going to be. And so what people, I think where a lot of companies are right now and organizations are is just assessing their risk. They need to first understand what is my actual risk and building the roadmap because there’s no one year solving this problem, right? It is going to be a long-term kind of thing.

And you’re going to find much more vulnerability in your cryptography than you’re ever going to be able to humanly address. So you’re going to have to have a way to prioritize that. So that’s part of what we’re working with. The state of now is how do I inventory what my problem is? How do I prioritize when I’m going to go about it and build my roadmap for how I’m going to do that? And then as we go forward in the permanency and monitoring for changes and all that good stuff, and then how do I … Actually, I mean there’s a really significant shift when we talk about crypto agility, because decoupling cryptography from applications and managing something centrally is a fundamental change in the way, it’s going to turn encryption and cryptography on its head in a lot of ways.

So it’s now in terms of the length of time it’s going to take you to address that you need to start thinking about how to assess that risk, there’s a lot of speculation and I don’t know any of us can 100% predict, but it’s within this range of what we’ve been describing, in less than a decade where that happens. And I don’t know. Everything keeps getting sooner. The more data I see, it keeps coming in and in and in. But it’s really about understanding where you are now and the realization that the timeline for what you’re going to do is going to happen over some number of years. So you need to get started at least understanding what your journey is going to look like so that you can prepare your organization, your staffing, your budgets, your everything right for how you’re going to deal with that in the going forward.

Shira Rubinoff: Well, certainly I think we have to look at it with the lens that we’re also looking at AI. It’s not just a positive outcome for us with the maturity of AI, but the maturity of quantum. There’s also the negative aspect that’s going to come at us that much quicker, better, faster, with much more detailed, quick targets at our backs. Is that something that you’re focused on as well in terms of, I know a lot of organizations saying, “I don’t want to deal with this. This is something that is beyond me”?

Akiba Saaedi: We won’t have a choice because cryptography, at the end of the day, it’s all about getting into the data and the systems. And that’s your protection layer. And if that protection layer isn’t there, I mean there’s nothing more catastrophic than that in terms of an existential threat to your business.

Shira Rubinoff: Certainly.

Akiba Saaedi: If your most important systems and data can be accessed. So some people compare it to the Y2K. It’s not a Y2K analogy in the sense that it’s not a date that everything’s going to change. But it is an analogy in the sense that it is fundamentally every system that has to at some level be looked at, assessed and evaluated, if there needs to be a quantum safe cryptography that’s going to be implemented in that infrastructure.

Shira Rubinoff: Almost like a cyber hygiene across your organization on multi-levels moving around AI.

Akiba Saaedi: Yeah. And it is daunting. It’s daunting and it’s overwhelming, but that’s why you have to start to assess what your situation to be able to take that in a more reasonable path.

Shira Rubinoff: Certainly that’s an area of being proactive in your cybersecurity stance and not just reactive. So we have to be both.

Akiba Saaedi: Right.

Shira Rubinoff: But certainly with this coming to be in the mature state that it is, there’s a lot to be excited about. But as you mentioned, be being prepared, understanding the fundamentals, and really preparing the organization in a way that they’re ready and not just hands thrown up and they’re trying to scramble once it actually hits in a very impactful way.

Akiba Saaedi: Exactly.

Shira Rubinoff: And I’d love to ask both of you if you have any final thoughts around this topic for our audience.

Scott McCarthy: Well, my thought would be we’ve used the analogy of the cloud transformations and we’ve learned a lot to that journey. And as we’ve said, we’re seeing repeating patterns from that cloud journey. I think one of the reasons we ran into those troubles as part of the cloud journey was security wasn’t enabling the business to achieve their business objectives at the speed of the business. And security teams need to upskill. They need to become an enabler of the business. And I think business leaders need to be partnering with their security teams to make sure that they’re secure by design at the outset, and they’re not having to go back and retrofit these applications and add security back in after the fact because we know that’s the most costly way to approach this problem.

Shira Rubinoff: 100%. And that also leaves holes in the systems of trying to put a bandaid on a gap where security should have been built in. So very important information. Any final thoughts around this topic?

Akiba Saaedi: Just I think in the spirit of education, I wanted to mention just a couple days ago we released a new study. IBM’s Institute of Business Value does a lot of thought leadership type of work, and we do a lot of surveys of very senior executives, and we just did one on securing generative AI. So very timely to the discussion. And some of the results and the insights from that I think A) will be interesting to the audience to be able to understand, including 24% of generative AI projects right now being secured. So a very, very small percentage of overall projects being secured. But also talking in more detail about some of the things we’ve been talking about, but in a much more lengthy document.

So I would encourage anyone to take a look at that as well. You can find it out on our website if you just google securing generative AI from IBM Business Value Institute, and hopefully that’s useful to some of the audience to read up on that, and yeah, help with the education and getting things moving in the conversations and being the representation. I think that’s another thing I’ve been advocating for, be the representation in your organization for helping to understand what the risks are, why it’s important and why it’s a shared responsibility.

Shira Rubinoff: Oh, very important. And I also like to ask my interviewees for a cybersecurity business tip or helpful hint to our audience, whether it being something personal for the everyday person out there for an organization, but something that you feel passionate about in the cybersecurity world. So let’s start with you Akiba.

Akiba Saaedi: Okay. My tip is don’t talk in technical terms.

Scott McCarthy: That’s a good tip.

Akiba Saaedi: Please. It’s such a thing and security people go there so fast, but when you are trying to convince the senior leadership, when you are trying to convince the AI teams, when you are trying to really communicate with anyone, you have to do it in a way that is in their language, not your language. And so that’s my number one.

Shira Rubinoff: That’s a good one.

Scott McCarthy: So my tip is that this is coming fast and we’re all learning, frankly.

Shira Rubinoff: We are.

Scott McCarthy: I mean, we have together learned over the past 24 months or so, and the industry is learning. So I think we all need to, as cybersecurity professionals, need develop an eminence in this topic. And it’s okay to acknowledge that you might not know what you need to know today, but I think Akiba’s tip is a very good one. But develop the skills necessary to help your business protect the organization as we all go on this journey together.

Shira Rubinoff: That’s excellent. Well, Akiba and Scott, thank you so much for your time today and thank you for the information shares. I know our audience really enjoyed our conversation today, and I look forward to many more conversations with you both. And this is Shira Rubinoff with On the Road at Six Five Media here at RSA 2024. Thank you for joining us today.

Author Information

Shira Rubinoff

Acclaimed cybersecurity researcher and advisor, Shira is a global keynote speaker and presenter, and expert media commentator. She joined The Futurum Group in February 2024 as President, Cybersphere.

SHARE:

Latest Insights:

Paul Nashawaty, Practice Lead, at The Futurum Groups shares his take on top trends and challenges organizations are facing in The Rise of Cloud Native, WebAssembly, and FinOps in Application Development.
The Six Five team discusses Microsoft & BlackRock Sitting in a Tree
AMD Discusses Its AI Acceleration Strategy, Growth in Data Center GPUs, and Future Roadmap at the Goldman Sachs Communacopia Conference
Steven Dickens, Chief Technology Advisor at The Futurum Group, shares insights on AMD's rapid AI-driven growth, strategic acquisitions, and the company's ambitious vision at the Goldman Sachs Communacopia Conference.
Keith Kirkpatrick, Research Director at The Futurum Group, shares his insights on the proliferation of AI agents, and discusses the key elements vendors must consider to drive customer adoption and additional revenue from these new offerings.