On this episode of the Six Five On the Road at BlackHat 2024, host Shira Rubinoff is joined by IBM‘s Guy Shanny, Global Data Security Leader, Co Founder & CEO, Polar Security, and Matthew Shriner, Global Threat Management Partner & Portfolio Leader, for a conversation on securing AI against emerging threats.
Their discussion covers:
- The current landscape of AI security challenges
- Innovative strategies businesses are employing to safeguard AI
- The role of collaboration between tech companies in enhancing AI security
- IBM’s approach to threat management in the context of AI
- Future trends in AI security and threat management
Learn more at IBM.Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Listen to the audio here:
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Shira Rubinoff: Hi, my name is Shira Rubinoff. I’m president of Cybersphere, a Futurum Group company. I’m here with Guy Shanny, global data security leader, co-founder, and CEO of Polar Security. Welcome.
Guy Shanny: Thank you.
Shira Rubinoff: I’m also joined by Matthew Shriner, global threat management partner and portfolio leader at IBM. Welcome.
Matthew Shriner: Thank you.
Shira Rubinoff: Please join me as I welcome them for the latest updates on how clients are securing AI against emerging threats and learn how you can start to secure your own generative AI initiative. So Matt, let’s talk about IBM’s Cybersecurity assistant. First, why did you choose the word assistant and also who is AI assisting?
Matthew Shriner: Well, great question. So we chose the name assistant because that is what is happening from a virtual perspective to assist that level one analyst that’s sitting in a SOC, the eyes on glass on the console. There’s two levels to it. There’s the sophistication that that assistant is bringing to a more advanced analyst, potentially even up to a level two. But that level one, let’s say they’re a junior analyst, for example, maybe they’re new in the role and there’s a lot of burnout associated with trying to sift through the volume of alerts and events that are coming into the console. So any tools that you can provide that give them a little additional guidance, support, additional resources are going to really virtually assist and help to retain and grow that analyst to make them more productive in the SOC.
Shira Rubinoff: Matthew, can you please share with us a real world example around this?
Matthew Shriner: Sure. So there’s really two elements. There’s what we call advanced threat disposition scoring, which is something we’ve had in place on our global managed security services platform for about eight years now. When we launched that in 2018, we were auto-dispositioning about 65% of the alerts as they come into the console. We’ve now increased that up to 85% of the alerts that come in, which is tremendous. This is the main focus of any optimized SOC, you want to reduce that alert fatigue coming into the console. But then from there, since we’ve implemented this cybersecurity assisted, we’re able to auto assist or assist another 45 to 50% of that remediation time. So giving immediate context, recommendations, additional resources for that analyst to drill down on right there for those remaining alerts that still need to be investigated. So you have that initial 85%, and then you have a 45 to 50% decrease in remediation time from there for that level one analyst, which is tremendous.
Shira Rubinoff: Well, that’s super impressive there. And the whole process of uncovering cyber threat isn’t a single step. How does cybersecurity assistant help across that workflow in the SOC?
Matthew Shriner: Yeah, so it’s related to what I just described in terms of, again, that auto-dispositioning or what we call ATDS that’s happening in our platform already, all the way to auto-recommending steps to remediation. The ultimate goal and the outcomes here are that it starts to up-level the quality of the work inside the SOC. And so our goal is to not just add more value to SOC analysts. Our goal is to actually do away once and for all with that level one and eventually that level two tier of analysts so that those analysts which are prone to burnout, that eyes on glass just going through alert after alert hours and hours, day after day, week after week to do away with that, up-level those workers into an L-3 almost triage and remediation role. That leads to a much greater job retention. And that starts to address the overall cybersecurity talent shortage because it’s that L1 and L2 role which is the number one burnout role contributing to that talent shortage.
Shira Rubinoff: And how does the cybersecurity assistant transform the workflow with that type of impact? And you say 48%, that’s quite a number.
Matthew Shriner: Yeah. Again, once we’ve auto-dispositioned, the bulk of alerts from the get-go for the remaining alerts that do come into the console, we’re immediately adding additional context. We’re able to annotate with root cause in the ticket, for example, we’re able to suggest new use cases and we’re able to close that loop so we’re not just aggregating and deciding what needs to happen next. We’re able to take the findings, annotate the root cause, and auto-create new use cases as a result. And so the end-to-end workflow in the SOC starts to become fully optimized. And, again, that also leads to the up-leveling of the work, the quality and nature of the work that those analysts are doing in the SOC.
Shira Rubinoff: Well, I don’t think people have typically thought about all the things that this can actually do, how it can help a company, how it could bring it to a whole new level that transitions companies to have better impact for their organizations. So thank you so much for sharing that.
Matthew Shriner: Thank you.
Shira Rubinoff: So let’s move over to Guy for a moment and talk about generative AI. Certainly so many challenges are faced when dealing with generative AI. What do you believe are some of the biggest challenges we all face when dealing with generative AI?
Guy Shanny: There are a lot of challenges. AI security challenges start and stop with the data. So we always say that AI security challenges are in fact data security challenges, but what does it mean? So the number one challenge that we see in the wild is around shadow AI and shadow data. Companies are utilizing more and more AI in their production environments that are creating AI data pipelines. There are lots of data stages in those pipelines. For instance, data cleaning, for instance, data enrichment, training the data, and so on. And in fact, the companies, they lose control. They don’t know where the data is, what models are utilizing the data, and touching the data. So that’s the number one challenge.
The number two challenge is around data access. Now, AI, this creature now is a gateway to the data. When an employee utilizes an HR chatbot, for instance, this chatbot accesses data, the core points data, and then retrieves it to the user. So now you have this AI model that accesses data you need to enforce some policies on, what data do they access, where, how, and so on. So that’s the number two challenge on data access. The number three challenge is around the posture, the posture of this, but the posture in fact with more specifically on the data pipelines that I just mentioned, because you have plenty of applications touching the data, you have data stores everywhere, in the cloud, in SAS, in the on-prem, and you need to make sure that those data pipelines are well protected and not publicly exposed, for instance.
Shira Rubinoff: Well, certainly I think you highlighted the main piece we talk about in cybersecurity, data is king, who has access to it, where it sits, where everything is pointing to. And certainly with this aspect of generative AI, you certainly highlighted that for them. So given your point that securing AI is a data challenge, how does IBM see organizations begin tackling data security for data AI initiatives?
Guy Shanny: So the first step as always is visibility. So we see a lot of companies are trying to create AI inventory where they’ll see old places in the company, in production, and also in dev staging and so on, where the company lacks AI, generative AI, the models, the data that is connected to these, and also the users and applications accessing to these. So first step is visibility. Second step is around posture. Let’s make sure nobody that is not allowed to access the data, let’s make sure that the data stores are not publicly exposed, well protected. Let’s see what data has been moved from one environment to another. So everything related to the posture of the data. Now, data at rest is not the only thing here. Data is continuously moving between environments. So that’s the number three, let’s say, on their to-do list on where the data is moving between environments and so on. So that’s the number three thing that we see companies are doing.
Shira Rubinoff: Oh, very interesting. I like that you highlighted the fact that data is moving. A lot of focus has been data at rest, where does it sit, what’s going on with it, but data and movement is even more important to focus on and handle in an appropriate way.
Guy Shanny: Agree.
Shira Rubinoff: So when I spoke with the IBM team at Think, I was at the Think conference, you shared your announcement of a preview of your IBM AI security offering. What use cases are you helping clients with today, things that people don’t know about?
Guy Shanny: So let’s try to zoom out for a second. Okay. So when a company wants to protect the AI lifecycle, the company needs to protect three main elements. The first one is, and we talked about it, the data. Okay, great. The second element is the model, the models themselves, because now we have a model that was trained. What about if the model was poisoned, model evasion, hallucination, and so on, on the model’s prompt injection and so on? The second step is let’s protect the model itself from malicious acts. And number three is around the usage, the users and applications utilizing the model governance of those and so on. So if we zoom out for a second, these are the three main elements that need to be secured and protected, the data, model, and the usage. And that’s exactly what our product does. We are first mapping all the data, we are mapping all the AI you have in your environments across the stack, on-prem, cloud, SAS, data warehouses and so on. Then once you have this visibility, we are protecting the models themselves, a firewall against the attacks that I just mentioned, prompt injection and so on. And then the governance let’s see who is using what and how.
Shira Rubinoff: Very interesting. Something I like to do in my interviews is talk to my interviewees and ask them to give their own personal helpful hints or thoughts around cybersecurity and here certainly around generative AI, AI, what you’d like to share with our audience from your own perspective. And Matthew, I’ll start with you if you’d like to share something with us.
Matthew Shriner: Well, thank you. I’d like to share two things actually. So of course we believe that we’re just at the very beginning of the adoption of generative AI capabilities and there’s so much more to come. We’ve talked about some of those capabilities and leap forward in terms of efficiencies that we’re achieving. But I actually wanted to add on to some of Guy’s points here. If you’ve been in technology for a while, all this activity and noise that we see around generative AI, there’s so much happening. It’s very much akin to what we saw in the late 1990s and early 2000s around dot-com, where there was this big rush to get everything online, get your banking apps online.
Shira Rubinoff: As quickly as possible.
Matthew Shriner: As quickly as possible so you could be first to market and get your IPO and garner investments and so forth. And then we started to see the challenges and the problems with that where these banking applications were being hacked because they never built security in, they didn’t test for security vulnerabilities. And there are some significant parallels now with companies that are building in LLMs. They’re leveraging various models that in the rush to adopt, in the rush to be first to market, they’re skipping some of the testing. And so our point of view is you have to test, otherwise you will fall victim. You have to be testing your AI models. And we have a great solution.
Shira Rubinoff: And you, Guy, would you like to please share something with our audience as well?
Guy Shanny: Yeah, sure. So I think that even though it’s a new technology, a cool technology that everyone wants to implement and so on, we need to first make sure that everything is secured, but we want the security to be enablers versus blockers. So that’s why I think that having security programs, security ideas on how to secure the models, data involved and so on is super, super important, because I saw the same in the containers, when R&D wanted to implement containers in production and got blocked by security and then came the container security, a new EDRs for containers to help with it. That’s exactly what we are trying to do with AI. So companies will be able to utilize the great power of AI without risking the security of the data.
Shira Rubinoff: Thank you. Well, this has been a very informative and very important discussion today. Thank you, Matthew, and thank you, Guy, for joining us here today. And we are here at Black Hat 2024. Thank you.Â
Matthew Shriner: Thank you.
Guy Shanny: Thank you.
Author Information
Acclaimed cybersecurity researcher and advisor, Shira is a global keynote speaker and presenter, and expert media commentator. She joined The Futurum Group in February 2024 as President, Cybersphere.