On this episode of the Six Five On the Road, host Krista Macomber is joined by Mike Nichols, VP of Product Management, Security, at Elastic for a conversation on the critical role of AI in transforming Security Operations Centers (SOCs). Mike shares insights on how artificial intelligence is not just the future but a present necessity in defending enterprise systems against advancing threats.
Their discussion covers:
- The transition from traditional SIEM to AI-driven security analytics and its impact on SOC workflows.
- Advancements in threat detection with the new Attack Discovery feature using generative AI.
- Enhancements in team productivity through the AI Assistant, offering tailored guidance for analysts and administrators.
- The implementation of the Search AI platform to improve the accuracy and relevance of generative AI responses by integrating public Large Language Models (LLMs) with private contextual data.
- A forward-looking perspective on defending enterprises with AI technologies, emphasizing that the future is already here.
Learn more at Elastic.
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Krista Macomber: Welcome to Six Five Media. We’re on the road here at RSA Conference 2024. I’m Krista Macomber and I’m joined here today with Mike Nichols, who’s a VP of Product for Security with Elastic. Mike, thank you so much for joining us today. How’s the conference going?
Mike Nichols: Well, it’s been great. For the past couple years, we’ve seen the conference kind of decline a bit with the pandemic. And it really feels like it’s back in full swing again. It’s busy, it’s active. There’s a lot of people around.
Krista Macomber: I’m sure, and I know we’ll certainly get into it. You guys, Elastic has a great announcement this week, so I’m sure there’s been a lot of traction on that. But Mike, before we dig into some of those details, I wanted to maybe take a step back and talk about a challenge that we’re hearing within the security operations center. And that’s really around the alert fatigue that we’re seeing created by some of the traditional SIEM tools that your security analysts and your SOC teams are utilizing. And I know that that’s really central to your announcement, using artificial intelligence to streamline and address some of those challenges. So can you talk a little bit about that?
Mike Nichols: Yeah, the job of analysts is hard. I was a defender in a previous life before I became a product manager. We spend a little bit of our time doing the really important stuff, finding threats, remediating, but a lot of our job is doing things that are frankly not that great. And it leads to analyst burnout, it leads to retention challenges. And alert fatigue is one of those problems, right? There are so many threats emerging and new products emerging, look at RSA, but each creates a new set of alerts and then you combine them into a SIEM, which has its own alerts and all of a sudden you’re sort of overwhelmed.
Krista Macomber: Right.
Mike Nichols: And the challenge is, we at Elastic really believe in democratizing security. We’re trying to provide access to enterprise security to everybody because the threats don’t discriminate, right? The threats are hitting every kind of company, even ones that don’t have security teams. SIEMs need to be everywhere. They need to be for everyone, but the problem is they’re not accessible, for one. And we solved that problem at Elastic. We have a completely free and open solution. You can get us on the cloud. But usability is the second part of that problem. How can you use a product? And some people have a little bit of frustration with the term SIEM because it carries with it a weight of challenges and a reputation for being hard to use.
Krista Macomber: Mm-hmm. Exactly. Exactly. And especially if you think about maybe some more junior-level analysts that are looking to really hit the ground running and become impactful to their team, especially as the threat landscape just continues to advance. We see attackers now using generative AI. So I would say what we’re seeing is, in addition to kind of streamlining some of these day-to-day operations, it’s also about upskilling as well.
Mike Nichols: Exactly, yeah, elevating the analysts. And I think cybersecurity got relegated to the world of IT security, but really the best analysts just have analytic skills, an analytic thought process. And when they get into a SOC, they’re unfortunately typically sat down in front of this deluge of alerts, and it can be overwhelming. And we might scare away fantastic talent that is ready to take the job on. And so how can we summarize and simplify the information for them and let them do what they do best, analyze and almost do detective work through the problem, and not do all the stuff that, as we like to say, limits the SOC, all the stuff that isn’t fun to do? Get rid of that and let them focus on the thing that really matters.
Krista Macomber: Absolutely. 100%. So Mike, I wanted to talk about one of the other things that we’re hearing so much about at the show, which is generative AI. My understanding, looking at the announcement from Elastic, is that you guys are using generative AI to support threat detection. Is that right? So kind of going back to these security teams, these SOC teams are inundated with these alerts. It’s really difficult to identify threats as they’re happening and what they really need to be paying attention to.
Mike Nichols: Yeah, we found that last year, obviously, we got hit with this phenomenal evolution in models. All of a sudden everyone was trying to apply generative AI to many different use cases. And so we looked at the typical SOC and the core things that a SOC needs to accomplish, the core workflows that they’re working on, and tried to focus on the big pain points and where we could leverage generative AI to help solve them. A key part of our focus was not creating a new process, because people are already overwhelmed. How can we take the existing process and improve it? So we focused on alert triage, which is, like we’re saying, that first step of: I log in in the morning and it’s overwhelming.
What we found is that Elastic itself is… Our security product is built on the Elastic Search AI platform. Elastic is a search company. We power things like Walmart.com and Uber and Netflix. We are a data company. We can find insights in data super fast. And we also have a built-in vector database. We use what’s called retrieval augmented generation, which provides private context to those models. So we’re able to take all that amazing differentiation and power and pull it into security pretty quickly.
So now what we’re doing with this new announcement, what we call Attack Discovery, is we can take that massive list of alerts where, somewhere in there, is probably a real problem. There’s this false negative challenge of, “There’s a problem in there somewhere, I’m not going to get to it. There’s an infection that happened, and how do I fix that? How do I bring the biggest problem up to the top?” We take all that information, we vectorize the content, bring that context to a model of your choice, and what’s amazing is it comes back and says, “Hey, here’s the thing that matters most.”
We actually leverage the MITRE ATT&CK framework and we look for… People have been doing grouping of alerts and things for a long time, but they typically use atomic indicators: “Here’s the same username, here’s the same hash.” And those are useful, but really attacks spread. They start somewhere, they get credentials, they laterally move to a different system. There’s reconnaissance, there’s port scanning. Those are hard to find because they don’t have the same indicator tying them together. But these models are phenomenal at what I like to call the serendipity moment: being able to discover that these things are uniquely tied, because this is a typical exfil problem that we see tied to this recon problem, so these things must be related.
So what comes back from this big massive list is just: here’s one attack, and it’s mapped across MITRE’s ATT&CK framework. Our goal is that when analysts come in in the morning, the first thing they look at is that page. Take those really important problems at the top of the list out of the big list they have, and then you can go back to the alert list and triage through. Those are probably false positives; you’ve got to tune and things. But you don’t miss the most important thing, because we’ve found the problem and brought it up to the top.
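To make the flow Mike describes a bit more concrete, here is a minimal sketch of a retrieval-plus-LLM triage loop: pull open security alerts from an Elasticsearch index, hand them to a chat model, and ask it to group them into ranked attack chains mapped to MITRE ATT&CK. The index pattern, field names, model, and prompt below are illustrative assumptions, not Elastic’s Attack Discovery implementation.

```python
"""Minimal sketch of LLM-assisted alert triage (illustrative, not Elastic's code)."""
import json

from elasticsearch import Elasticsearch  # pip install elasticsearch
from openai import OpenAI                # pip install openai; any chat model works

# Hypothetical cluster and credentials.
es = Elasticsearch("https://localhost:9200", api_key="...")
llm = OpenAI()  # assumes OPENAI_API_KEY is set

# 1. Pull today's open security alerts (index pattern and field are illustrative).
hits = es.search(
    index=".alerts-security.alerts-default",
    query={"term": {"kibana.alert.workflow_status": "open"}},
    size=200,
)["hits"]["hits"]
alerts = [h["_source"] for h in hits]

# 2. Ask the model to group the alerts into ranked attack chains.
prompt = (
    "You are a SOC analyst. Group these alerts into likely attack chains, "
    "map each chain to MITRE ATT&CK tactics, and rank them by severity:\n\n"
    + json.dumps(alerts, default=str)[:20000]  # naive truncation; real systems chunk and summarize
)
resp = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```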
Krista Macomber: And that’s so critical, because, as we were referencing, attacks are just continuously evolving over time. I know you referenced the lateral movement, especially as attackers are focusing on identity instead of just brute force hacking. So the ability for these analysts to log in and see just the critical potential vulnerabilities, things of that nature that they need to address. I’m sure you’ve been getting some really fantastic feedback so far as we’ve been having conversations at the booth this week.
Mike Nichols: Yeah, we’ve had great feedback from the booth. And we also have some phenomenal design partners and others who have been using this in the field. We had our Assistant, which is sort of that in a chat box, and that’s been generally available since July of last year. So we’ve had a lot of our customers using that already to get that sort of phone-a-friend help. And now this new embedding of it into the workflow, this new Attack Discovery view, is something we just came out with, but we’ve had design partners using it.
And at Elastic, our customers range widely. Of course, we’re in over 50% of the Fortune 500, but we also have a huge amount of commercial customers and even people just using it at their house, because it’s so accessible. And so with this idea of making SIEMs for everyone, bringing this technology’s detection capabilities down, we found companies that typically don’t have traditional SOCs. They’re the IT manager who takes one hat off and puts the security hat on. They’ve been really impressed by this because it allows them to not miss a core problem, but also not feel overwhelmed that their whole day has to be alert triage when they also have to help their executives log into their systems. There’s all the things they have to do every day, right?
Krista Macomber: Absolutely.
Mike Nichols: So it’s been really a boon to the non-traditional security operation centers as well as of course to SOCs to help them elevate their tier one, tier two analysts.
Krista Macomber: Absolutely. I mean, you paint this great picture of logging in at home even, right? It’s kind of that easy to use. And you certainly make a great point about needing to wear multiple hats across security and other areas of the business. I certainly can see how that’s impactful. So Mike, the other area where we’re seeing artificial intelligence become useful in the security space is not quite as a chatbot, but more as an assistant.
Mike Nichols: Right.
Krista Macomber: And I understand that that is also part of the announcement, to kind of further support that productivity that we’ve been talking about. So can you walk us through a little bit of that announcement?
Mike Nichols: Yeah, exactly. Like I said earlier, everything we try to do is trying to stay within your existing workflow. So the Assistant, which was out last year, was in that alert triage workflow. And it did pull that context using that RAG capability based on what you were looking at. So hey, I’m looking at this thing, and you would ask a question, it would then contextualize that, add it to the model and really provide some great feedback. But now that we have attack view, what’s really powerful about the Assistant is now if you ask a question, depending on where you are, it can contextualize with that attack you’re in or maybe numerous attacks, you could say, “Hey, here’s the two or three that matter.”
And that allows it to take that additional context into remediation steps. It can do things like help you map a visualization of the attack, because a lot of times people might be visual learners, right? So reading it is great, but seeing it, it might be more powerful. So you can create visualizations, you can have human-readable, digestible information. It’s really there to just say, “Hey, I need help. What’s next?” And we found it’s a fantastic addition to the workflow of log in, find the attack, and then ask for help.
Krista Macomber: So can you elaborate a little bit more on that help? I imagine it’s sort of guidance on how to remediate. Can you walk us through that a little bit more as well?
Mike Nichols: Yeah, this is a place where we’re so excited by the rapid development within large language models. One thing we chose to do very early on was take an agnostic approach and allow our customers to utilize whatever model they want to, whether that’s large language models from the hyperscalers, Google, Amazon, Microsoft models, or even local models, because we have many government customers that want to be fully disconnected, so they’re able to leverage a fully localized model on their own hardware.
And that seemed to really pay off now because we are getting more and more domain-specific and focused models that are coming out that provide great guidance for different types of verticals, different types of customers, and allows them to really specify for their environment a unique way to remediate. So it isn’t just a generalized, “Maybe do these few things.” But if you’re in a critical infrastructure vertical, you might get specific remediations about how to remediate the SCADA systems that you’re on. So that’s a great point. The remediation steps.
We also found it’s really useful for migrations. One of the hard problems about moving into a SIEM is that no one’s moving into a SIEM for the first time. For the most part, you’re coming off of something, and you have a lot of prior knowledge built out in detection rules and searches and dashboards. Everything Elastic does is in the open: our code, our models, our rules. It’s just the ethos of our business. So these models know everything about us. So when you bring in a rule that you built somewhere else and say, “Hey, I made this in this product, can you help me do it in Elastic?”, it can give you the Elastic rule instantly.
So that barrier of entry is much lower. Either it’s your own stuff you developed, or even community information: “Hey, I need some help,” and someone says, “Hey, I tried this over here,” and you can pull that in and bring it into the system. So migrations have been really powerful. Remediation has been really powerful. And then we’re discovering fun stuff every day. Just on the show floor yesterday, we had a woman from South Korea who was there, and we were like, “Oh, let’s try this.” And we said, “Hey, can you translate this whole attack into Korean?” And it did. On the fly, it’s like, “Here you go.” And so we’re just having all these fun discoveries of what these models are doing, because they’re just innovating so quickly.
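The rule-migration help Mike mentions boils down to prompting a model with an existing detection and asking for an Elastic equivalent. Here is a minimal sketch, assuming an OpenAI-compatible chat model and a made-up Splunk SPL rule; the Assistant’s actual prompting and output handling are not shown here.

```python
from openai import OpenAI  # any chat-capable model would work; OpenAI client used for brevity

llm = OpenAI()

# A legacy detection written for another SIEM (Splunk SPL here, purely illustrative).
legacy_rule = "index=windows EventCode=4625 | stats count by src_ip | where count > 20"

resp = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Translate this Splunk SPL detection into an equivalent Elastic detection "
            "rule query (KQL over ECS fields), and note any field mappings you assumed:\n\n"
            + legacy_rule
        ),
    }],
)
print(resp.choices[0].message.content)
```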
Krista Macomber: That’s wild. And things you wouldn’t even think about, right?
Mike Nichols: Yeah. Honestly, it’s just things that we try out. And then one of the things we keep saying is, “Well, I don’t know. Let’s ask the Assistant and see.”
Krista Macomber: Exactly. Exactly. And that’s how we start to see where it can really be useful and impactful. We’re seeing it’s a little bit of a crawl-walk-run when it comes to adopting artificial intelligence. You start to trust it in certain areas, really see the impact, and then grow your use from there.
Mike Nichols: Oh, trust is huge. Yeah, we fully believe in being very open about both security and privacy. Early on we created an entire framework for anonymization and redaction of data. It’s built on an open schema, the Elastic Common Schema, which we contributed to OpenTelemetry last year as part of the OTel semantic conventions. And that schema allows an administrator to say, “Never send these types of fields. Anonymize these ones so you can still correlate, but they don’t know the details, they just have a hash,” for example. And that was really impactful.
And then what we also just did at the show, here today or yesterday, was focus on the security side as well. We built some integrations to get the invocation data from these models, and we’re running detection logic around potential abuse of these models. We’re trying to really help ease the barrier, which is typically both privacy and security, because AI is an inevitability. Much like cloud transformation, you can’t say, “No, I’m not going to go to the cloud.” AI is going to start getting embedded in everything we do. And I think it’s a massive net positive. And so if we can remove those barriers and help executives like CISOs understand how they can benefit but not take the risk, it’s amazing.
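Here is a minimal sketch of the kind of field-level policy Mike describes: some fields are dropped entirely, others are replaced with stable hashes so correlation still works before anything is sent to a model. The field names and policy format are illustrative assumptions, not Elastic’s actual anonymization framework.

```python
import hashlib

# Illustrative policy: which alert fields may leave the environment, and how.
POLICY = {
    "event.action": "allow",
    "rule.name": "allow",
    "host.name": "anonymize",   # replaced with a stable hash, still correlatable
    "user.name": "anonymize",
    "source.ip": "deny",        # never sent to the model
}

def redact(alert: dict) -> dict:
    """Apply the policy to a flat alert document before it goes to an LLM."""
    out = {}
    for field, value in alert.items():
        action = POLICY.get(field, "deny")  # default-deny anything unlisted
        if action == "allow":
            out[field] = value
        elif action == "anonymize":
            out[field] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
    return out

print(redact({"host.name": "web-01", "source.ip": "10.0.0.5", "rule.name": "Brute force"}))
```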
Krista Macomber: Yeah, that’s a big thing that I’m hearing at the show as well, is how do we harness artificial intelligence in a secure manner.
Mike Nichols: Right. Exactly.
Krista Macomber: And I think this is a great example of getting down to some of the meat of how we do that, really starting to go beyond the buzzword for sure.
Mike Nichols: Yeah. Yeah.
Krista Macomber: So Mike, I’m really glad that you mentioned the large language models in particular and the flexibility that Elastic offers. You referenced RAG, retrieval augmented generation, and the ability for customers either to use a private model or to start to integrate some more specific and contextualized data into the large language model itself to then feed the artificial intelligence. That was a part of this announcement for Elastic as well, correct?
Mike Nichols: Yeah, the bring-your-own-model idea is just really powerful. People can take their own cost-benefit analysis and also their own domain information into account. A great example is when Google acquired Mandiant; they’re developing a phenomenal model. I think they changed the name, I think they now call it Gemini for Security, but it has all of Mandiant’s knowledge in it. And so we found when using that, it can even do things like attribution, if that matters for your organization. So if that matters to you and that cost benefit is there, then you can choose that model. We also have been using the new Claude models from Anthropic that are phenomenal, really fast and inexpensive.
And so maybe if attribution isn’t important, but those things are, you can then choose that model. So the flexibility for our customers to be able to not just pick one but then tomorrow pick another, right? You’re not locked in. It’s very easy to say, “Well, let me try this one out.” And even in the product, you can do one for discovery, one for a question, one for a different question, right?
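For a concrete picture of that flexibility, here is a minimal sketch of per-task model routing: one provider for attack discovery, another for interactive Assistant questions. The routing table, provider clients, and model names are illustrative placeholders rather than Elastic’s connector implementation.

```python
from anthropic import Anthropic  # pip install anthropic
from openai import OpenAI        # pip install openai

# Illustrative routing: a heavier model for discovery, a faster and cheaper one for chat.
# Model names are placeholders; use whatever your accounts provide.
ROUTES = {
    "attack_discovery": ("openai", "gpt-4o"),
    "assistant_chat": ("anthropic", "claude-3-haiku-20240307"),
}

openai_client = OpenAI()
anthropic_client = Anthropic()

def ask(task: str, prompt: str) -> str:
    """Send the prompt to whichever provider/model is configured for this task."""
    provider, model = ROUTES[task]
    if provider == "openai":
        r = openai_client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return r.choices[0].message.content
    r = anthropic_client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return r.content[0].text

# Swapping models later is just a change to ROUTES; nothing else needs to move.
```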
Krista Macomber: That’s fantastic. And it’s such a great point, right? So not only are we going to have different use cases, so we might want a different large language model for each use case, but also as we use artificial intelligence over time, the large language models that we’re going to want to use are also going to evolve. So having that flexibility, and not having to lock in right up front, in addition to being more contextualized, is really valuable.
Mike Nichols: Yeah, it is not one-size-fits-all. Being able to discern what’s in an attack, and then also being able to give response and remediation and attribution, could be different things, and you might use different models for each. And so what the flexibility allows is that customers have the ability to experiment and change. And as you mentioned, a big one is just using their own on-premise models. We found some great local models, and of course customers are experimenting with modifying and building their own. Why not leverage those? They’ve been leveraging machine learning models for years within Elastic; they can leverage these newer AI models within their systems without having to be locked into what we think is best.
Krista Macomber: Exactly. And that’s a great point too that you mentioned. Your customers have already been working with you in this fashion for a long time.
Mike Nichols: For a long time, yeah.
Krista Macomber: Yeah. Yeah, so-
Mike Nichols: Yeah. We’ve been leading on the machine learning side for many, many years, both in the creation of supervised and unsupervised models, and also in the importing and usage of those, whether it’s pulling them in from Hugging Face or you’ve built your own. Our whole ethos at Elastic is that we want to make you more successful. And we do that by allowing you to leverage what you’ve already done in a hopefully more scalable, efficient, faster way.
Krista Macomber: Absolutely. Well, Mike, I think that’s a great point to kind of conclude on today. This has been a fantastic conversation and certainly a lot of exciting developments coming out of Elastic. Really appreciate you sitting down with us today.
Mike Nichols: It’s been a lot of fun. Yeah, thanks for talking to me.
Krista Macomber: Thank you.
Mike Nichols: Appreciate it.
Krista Macomber: And to our audience, thank you so much for joining us. Again, this has been Six Five Media On the Road here at RSA Conference 2024. Please make sure to like and subscribe. And please make sure not to miss our other content coming out of the show here.
Author Information
With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.
Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.