The Intersection of AI and Threat Intelligence – The Six Five On the Road

On this episode of The Six Five – On the Road, hosts Krista Macomber and Will Townsend are joined by Microsoft's Sherrod DeGrippo, Director of Threat Intelligence Strategy at Microsoft Secure, for a conversation on the evolving landscape of AI and cybersecurity and how Microsoft Copilot for Security can be used to enhance threat intelligence strategies.

Their discussion covers:

  • How attackers utilize new AI capabilities and Microsoft’s strategies to assist organizations in combating these increasingly sophisticated threats.
  • The potential for AI to revolutionize cybersecurity with a focus on proactive threat detection.
  • The impact of Microsoft Copilot for Security on the daily routines of threat analysts.
  • The effects of generative AI on attack methods, the current state of AI integration within security tools, and the anticipated adoption rate within the industry.
  • Addressing the cybersecurity skills gap with the help of generative AI.

Learn more at Microsoft.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Will Townsend: So The Six Five Media team continues its discussions with Microsoft here in the Big Apple. We’re talking cybersecurity. We have Sherrod DeGrippo joining us. Hi, Sherrod.

Sherrod DeGrippo: Hi.

Will Townsend: How are you doing?

Sherrod DeGrippo: Very well, thank you.

Will Townsend: So, Sherrod, you have responsibility for threat intelligence at Microsoft. Can you tell us a little bit about what you do on a daily basis?

Sherrod DeGrippo: Sure. I work with a lot of teams at Microsoft that focus on threat intelligence, which means understanding what the threat landscape looks like day-to-day. So they’re watching what threat actors do, and they’re putting that together and reporting on it. So we do things like give attribution to certain countries or crimeware groups, and say, “This group did this at this time,” and we take all that intelligence, and we give it to our customers, and then we take that intelligence, and we put it into the products that Microsoft uses to secure our customers as well. So there’s a constant feedback loop between what threat intelligence analysts are seeing and what our detection engineering teams are putting into the products.

Will Townsend: That makes sense. So AI and machine learning are nothing new, but generative AI is new. It’s this natural language interface. It’s sort of democratizing access. I think, for many years, we thought AI floated in the ether, and now it’s real, but bad actors are able to leverage this to their advantage, right? So I’m wondering, how are attackers leveraging generative AI, and what is Microsoft doing to help sort of flip the polarity and help defenders stay on their feet?

Sherrod DeGrippo: That’s something that a lot of people have been asking is, “How are threat actors using AI?” And because we were asked that so frequently, we partnered with OpenAI, as we do for many things, and we said, “Hey, let’s get together, and let’s do some real research to understand, from a threat intelligence perspective, what are threat actors doing with AI?” And we released that report last month, and it goes through Russia, China, North Korea, and Iran, and breaks down what those countries are doing, from a threat actor perspective, with AI, and what we’re finding is that they’re leveraging it as an early tool just like we are. It’s something they’re exploring, it’s something they’re interested in, and it’s something that they’re using to do research and targeting with, and that’s exactly as we would expect, and what we’re doing from our perspective, Microsoft is fully committed to making sure that we use responsible AI, but what I’m really interested in, and I think is really cool, is that we do a lot of AI red-teaming, which means we have teams of people that are constantly trying to break AI and make it do things it’s not supposed to.

Will Townsend: Hallucinate, maybe, a little bit?

Sherrod DeGrippo: Maybe hallucinate, maybe give you information that you really shouldn’t have access to in ways that aren’t really safe. They’re constantly trying to find those edges, so that they can refine the models, and make these tools as safely and as responsibly as we can. Something else that I really love that we’ve done is that we actually have, I think it’s the first, we announced it back in October, the first AI bug bounty, which means if you, as an individual, as a researcher, find an AI product, a model, an engine, that you can make do things that shouldn’t do, we’ll give you a bug bounty for that. So we’re really focused on making sure the community and researchers are enabled to find problems in our AI products, report those to us, get a little bit of a reward for it, and then we’ll be able to fix those things with a community effort.

Will Townsend: I love that. So how are you facilitating that bug hunt?

Sherrod DeGrippo: The world of bug bounties in security is pretty well-established, but the world of bug bounty for AI is new territory. So it’s actually very, as you said, democratized. There’s a lot of people that can get right into doing AI bug bounties, and when they find something that needs to be fixed in a model, or an LLM, they can send that to us. This is a new frontier for people who haven’t done bug bounties before, and people are pretty excited about the possibilities of finding a problem, and then getting a little bit of a reward for it, and having it fixed.

Will Townsend: Getting that recognition.

Sherrod DeGrippo: Yeah.

Krista Macomber: Absolutely. Yeah, and this is absolutely fascinating, and it’s just one of the reasons, Sherrod, why I’m so excited to talk to you today. It’s really giving color to something that we’ve been really kind of watching unfold, which is, you sort of alluded to this, right? But attackers have always been very innovative, they’ve always been evolving their approaches, of course, and generative AI is this new fuel. So I think it really just underscores why threat intelligence and really kind of becoming more proactive and preventative with the security approach is important. So, across what you’ve seen, can you maybe talk to where are some of the key areas where you think generative AI might make the biggest impact, I would say, for security teams?

Sherrod DeGrippo: I think we’re talking today, the star of today is Microsoft Copilot for Security, and I really think that it has incredible value for the full spectrum of security professionals, not just junior SOC analysts. That’s the example we hear because it’s easiest to relate to, but I think that there’s a lot to love for a senior analyst, for a threat-intel-focused person, for someone who does reverse engineering, and for someone in an executive role. In fact, I think the executive use case is actually one of the most interesting, because if a CISO needs to brief the board, which they’re constantly asked to do, tell us about our security posture, that CISO can so easily leverage the Security Copilot to give them what they need at the depth they want.

So, for example, their SOC director may go to the CISO and say, “This particular actor’s really hammering on our front door. We’re getting a lot of alerts from them. I’m a little concerned about it. I just wanted to let you know,” and a CISO, who may not have deep knowledge of that particular threat actor, can go work with Copilot for Security, learn all about that actor, and that executive can then say, “Okay, I learned these 20 pages that Microsoft provided me about this actor. Can you condense that into two paragraphs so I can make a board deck?” So it gives you the information at the depth and length that you want.

Krista Macomber: That makes a lot of sense. And I think, kind of the flip side to this coin, we’ve been talking quite a bit about how threat actors are using this, you’ve talked a little bit about how security teams might use this, but I think we’re still sort of in early days when it comes to the rubber hitting the road for customers actually using generative AI, not just in the security space, but in general. So wondering if you can talk a little bit to that. Is that what you’re seeing as well? And what should our expectations be when we think about the roadmap to customers really adopting and kind of getting their feet wet with not just Microsoft Copilot for Security, but just generally speaking, generative AI for security?

Sherrod DeGrippo: I think that that’s something that is fascinating to think about, because it is that sort of behavioral economics concept, which is so fascinating. It reminds me of a scene. Did you ever see the TV show Mad Men?

Will Townsend: Oh, one of my favorites.

Sherrod DeGrippo: Sure.

Will Townsend: One of my favorites.

Krista Macomber: Yes.

Sherrod DeGrippo: So me as well, I loved Mad Men, and, at one point, a secretary takes a dust cover off of a typewriter, and says, “Ugh, all this technology.” It was this early time, where this seemed, to us, we look at it, and we’re like, “A typewriter is so primitive,” but, to them, it was this big advent of something new, and I think that we’re in that place with generative AI as well, where we all have to change the way that we think, and think about, “Would this be a place that I could just use AI?” When you’re hitting that sort of frustration point of like, “Ugh, I’m doing all this. Why am I not using the AI tools that are available to me?”

And I can feel my mindset and thinking modes changing over having access to those tools. Now, I immediately go and say, “I need this email polished up a little bit. It’s not super cute. I want it to be nice, and impactful, and communicate in a way that I do,” and I bang out something really quick, and then I ask Copilot for Outlook. I ask Copilot to help me fix that email to be more me, more cool, and I think that you have to think in an almost AI-first mindset of, “Before I bog myself down in the agony, is this something I can just have AI do for me?”

Will Townsend: Right, and then clean it up.

Sherrod DeGrippo: And then clean it up, and then fix it up, but we’ve got to train people in the workforce that getting to the AI point should come earlier. Asking AI needs to be a first resort, not a last resort.

Will Townsend: Yeah, very good point. Now, you’re celebrating the general availability of Copilot for Security. You’ve been in preview for about a year. So I’m wondering, so, from a threat intelligence perspective, or a day in the life for a threat analyst, I mean, what are you seeing? What value are you seeing these threat analysts leverage in the product?

Sherrod DeGrippo: So the things that I’ve seen is analysts really are able to integrate it into a very quick workflow that then becomes second nature, and you can really tell the difference between someone who’s been on Copilot for several months, and someone who’s just kind of feeling their way in the dark with it for the first time, because you can see that those experienced users are doing a wide variety of things, whereas, in the beginning, that user starts with, “Can it do this? Okay, so now that’s what I think of it as.” “Okay, so I asked it to give me a reputation information on an IP address. Well, that’s all I’m going to use it for,” and that’s kind of what they do for the first 10 days, and then they say, “Well, can I use it for more, and more?”

And they start expanding their understanding and scope of the tool. And so then, that more advanced behavior is, “I’m having it analyze a script for me in this window,” “I’m looking over here, and I’m having it give me reputation. So the reputation of the IP and the script that it analyzed for me means that the script did this, and the person clicked on that,” and it becomes much more woven into the way they think, and the way they operate, but it’s an interesting sort of like a reverse long-tail, and then it’s this very, very slow start, and then it’s like a rocket ship up. So once you hit that inflection point, people just interweave it into everything they do.

Will Townsend: So we’ve talked to some of your other colleagues, and the whole notion of the prompt book has come up, and so I’m assuming that that’s helping sort of accelerate the usage that you described there. I mean, can you go into a little more detail about how you’re seeing that be affected?

Sherrod DeGrippo: I think, over the history of human time, a blank slate’s always scary.

Will Townsend: It is.

Sherrod DeGrippo: Right?

Will Townsend: A little intimidating, right?

Krista Macomber: It is.

Sherrod DeGrippo: Like, “Here’s a canvas; make something of it,” “Here’s a blank wall; turn it into something.” I think that’s almost a primordial reality of humanity is a big blank thing is both exciting and, “Ugh.” And so the prompt books give you a little bit of a, “Color within these lines.” It’s a little bit of a coloring book, or a paint-by-numbers that gets you started, and I think, while the prompt books are fantastic, generally, you can move past those quite quickly, and they just sort of become recommendations, which is very important with Copilot, the recommendations, but I love that it’s a sort of choose-your-own-adventure with prompt books, or you can have the blank slate if you feel like that’s where you want to operate, and, usually, those that are more mature start going to blank slate, because they’re ready and they’re used to it.

Will Townsend: You’re reminding me of my childhood with these analogies, paint-by-numbers, and coloring books, and that sort of thing, but, Krista, jump in here. I know you’ve got some questions, for sure.

Krista Macomber: Yeah. I think, I mean, the playbooks, that’s a really great example, because I was going to ask what Microsoft is doing to help sort of nurture customers along this process, and I imagine the playbooks are a great starting point for that. I mean, to your point, they’re certainly not the end-all be-all, but can you maybe talk to anything else that Microsoft is doing to help users along this journey, and maybe help to kind of accelerate that adoption, and really open up their minds based on what their particular needs are?

Sherrod DeGrippo: I think that this is such an interesting time. This is a very Dune moment that’s in the consciousness right now, and one of the most important pieces of Dune is that, “A beginning is a delicate time.” Princess Irulan says that to us, and I think that this is similar to that, is that it is a delicate time, where we’ve got to help people help themselves, I guess, is really how it is. It’s like when a kid’s learning to ride a bike, and you’re just holding the back of it, and then you sort of let go, and I think that’s a bit magical, and I feel like we’re at that place with Copilot, and we see a lot of customers who are interested, they’re intrigued, they’re curious, and then, they’re like, “I don’t need your help anymore, Mom. I can ride the bike on my own,” and so they go from we’re helping you, we’re giving you these prompt books, we’re looking at your data, and seeing how your data makes sense within Copilot, we’re showing you, “Oh, look at all the threat actor information you can access easily,” and then it’s sort of, “Okay, I got this,” and I think that’s the best way for technology to be adopted, is for it to become native, and I think that we’re seeing that with those customers that have been in the preview for a while.

Will Townsend: Yeah.

Krista Macomber: So do you see a tool like this helping to kind of change the landscape for security professionals, maybe in helping to accelerate the transition from a junior to a more senior analyst, or even start to kind of change roles and functions, so that the security analyst can be even that much more strategic for their organizations, perhaps?

Sherrod DeGrippo: I hope that, and I think it’s absolutely possible. I’ve worked in security for 20 years now, and one of the things that I’ve heard the most from my career is, “I wish I was better at…” And Security Copilot can become almost a coach, as well as that copilot sit-next-to-you situation. It can kind of guide you, and one of the things that people have always expressed they want to get better at is reverse engineering. They want to be able to look at code, and intuit, like a psychic, what it does, and that’s very wishful, and it’s very sort of like, “Oh, if I could just pull the secrets out of this code.”

Yes, and many reverse engineers are very talented, and they work at it, and they work at it, and it’s labor, and what I love about script analysis with Copilot is that you just show it that script, and it will give you, “It does this, and then it does this. Here are the functions. This is the way it works,” and so you’re changing the way people are able to learn, and accelerating that, and you’re taking away any of the, “I’m afraid to ask. I don’t want to tell the person that I need help,” and it changes that, so that your confidence goes up, and I think increasing confidence is something that actually makes education faster, and better, and more, is they’re confident to learn.

Will Townsend: Yeah, for sure.

Sherrod DeGrippo: So Copilot does a lot of that.

Will Townsend: Well, and this is a great segue to my wanting to kind of head toward the close of our conversation, and there’s a huge massive skills gap in the security industry. I mean, how many millions of jobs are unfilled, right? And what’s exciting, when I see gen AI being able to do things like program older software languages, like COBOL, and C++, and all of these things, can gen AI, can it help bridge that skills gap? Can it onboard security analysts much more quickly, and can it help sort of level that disparity that’s out there in the market right now?

Sherrod DeGrippo: I think there’s certainly an aspect of that, and I also think that there’s an aspect of providing guidance for a security program overall, and so it looks at resourcing, and it can think about, “Okay, this is a high-priority problem for you, so reassign resources to this high-priority problem,” or, “This particular threat seems to be very prominent in your environment. Let’s get more people focused on that.” So when you think about a skills gap, for example, it might not be properly assessed in addition to priority. So, “Okay, you don’t have these skills, but I actually need these other skills more,” and I think that Copilot allows some guidance for an environment, for an enterprise, to understand, “This is where you’re actually having problems, and these people actually do have this skill,” and it can get a little bit bumped up with some Copilot. So I think that it will maybe not be an easy linear path of, “Well, you were a junior analyst; you have Copilot, now you’re a senior analyst.”

Will Townsend: Now you’re a senior, yeah.

Sherrod DeGrippo: It’s not a magic pill, right?

Will Townsend: Right.

Sherrod DeGrippo: But it is something, I think, that will allow a view of an organization, a view of where we need help, where we need to prioritize within our security posture, and allow you to better guide that. So it’s a decision-enhancer, I guess, which I think is a really good way to use it.

Krista Macomber: So, Sherrod, I’m really glad that you’re making that point. I never really thought about using generative AI to kind of uncover where the real needs are, and where the real priorities are, and then match those to the existing skills that the organization might have, and I think that’s a really great comment to maybe kind of start bringing our conversation to a close, because I think it does bring us back to the discussion we were having earlier about the threat landscape, and how that’s always changing, and so then, in turn, of course, the needs are going to be evolving, from a security perspective. So, before we close, anything else for you that’s top of mind that we haven’t touched on today that you think would be important for the audience to know?

Sherrod DeGrippo: I think I’m just really excited about something that, as a security practitioner, I’ve only been with Microsoft a year, so I’m still very new in the grand scheme of things, but I’ve been in security for 20 years, and much of that time, it has been sort of wished for that people could get access to Microsoft Threat Intelligence, that vast treasure trove of years, and years, and years of investigations, and intelligence briefings, and reputation, and atomic indicators, and all of these things. “Oh, if we could only get our hands on Microsoft Threat Intelligence,” and that’s what Copilot really has made real for a lot of people in threat intelligence roles, and in roles where they need to know who is doing what on the threat landscape; they can just ask Copilot, and it can tell them.

Will Townsend: Great. Well, Sherrod, thanks for the time. It’s been a great conversation.

Sherrod DeGrippo: Thanks for having me.

Will Townsend: Yeah.

Krista Macomber: Thank you so much.

Author Information

Krista Case

With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.

Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.
