On this episode of the Six Five on the Road, Krista Macomber and Will Townsend are joined by Microsoft’s Herain Oberoi, General Manager, Microsoft Security for a conversation on securing and governing AI.
Their discussion covers:
- The importance of securing AI, in light of Microsoft Security’s recent announcements.
- Unique threats and risks associated with generative AI and Microsoft’s strategies.
- Strategies to address the concern of sensitive data leakage with AI adoption.
- Insights and recommendations for organizations planning to adopt Copilot for M365.
Learn more at Microsoft.
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Krista Macomber: Welcome to Six Five Media On the Road. I’m Krista Macomber, I’m joined here with Will Townsend, and we are continuing our conversations here at RSA Conference 2024. We are very excited to be joined here by Herain Oberoi, who is the GM for data and AI security with Microsoft. Herain, thank you so much for joining us. So we had the pleasure of joining the introduction day yesterday, and there was a lot of great conversations around where Microsoft is taking the portfolio, and certainly a lot of topics around generative AI and the impact on security. So Will, I know you wanted to kick it off with some considerations around large language models in particular, right?
Will Townsend: There have been quite a few announcements from Microsoft on this subject. So from my perspective, it’s important to not only protect the data that is used for training and inference, but also the large language models themselves, because there’s a lot of IP tied into that. And I know at the pre-event you spoke to, I think four announcements, and would love to have you go into a little more depth there and what Microsoft is focused on.
Herain Oberoi: Yeah, and even before, I think, getting to the announcements, maybe take a step back and think about, why are we so focused on this? And Vasu said it yesterday, which was, AI transformation requires security transformation.
Will Townsend: Sure.
Herain Oberoi: And so I believe that as customers build gen AI apps and use gen AI apps, the types of attack surfaces that we have traditionally known are going to evolve. And with that means new types of threats, and with that means we need new technology to address those types of threats. So that’s a little bit of the backdrop of, we have four announcements today, but we’ll have four more in a few months and we’ll have many more.
Will Townsend: It’s an iterative process.
Herain Oberoi: Right, it’s going to be an iterative process. And so part of it is Microsoft being at the forefront of what’s going on with AI transformation, helping our customers with it. We feel this responsibility for security for that transformation as well. And so that’s just the backdrop of why I think it’s important. And then I think, happy to get into the specific announcements and what we’re doing. And what I will say is, this is just a start.
Will Townsend: Sure.
Herain Oberoi: There’s going to be lots more to come and it’s exciting for us as well.
Will Townsend: Yeah, great. Is there any one of the announcements in particular that you find very compelling or most compelling?
Herain Oberoi: Yeah, I find all of them compelling. The way I like to think about this is how we spoke to it yesterday as well, is, you are either using generative AI, so using gen AI that someone else built like ChatGPT or Copilot, or you’re building your own custom generative AI. And when you’re building your own custom generative AI, what many people don’t realize is, it’s not that an organization is going to build one or two apps around it. We think they’re going to build hundreds of task-specific gen AIs around these things. And so you’re going to have all these custom built gen AI apps, and those have their own security and governance requirements as well.
Krista Macomber: Yeah.
Herain Oberoi: So when we think about using generative AI, there’s three areas where customers have talked to us about needs and concerns. The first one is just visibility. Give me visibility into what’s happening in my organization. Who is using which apps? Are those apps sanctioned by me? What’s the risk level of those apps? There’s thousands of generative AI apps that you can go online now and start using, and that means I’m putting information into the prompts and I’m getting responses.
So the second obvious concern that comes up is, okay, data leakage. Someone put some information into the prompt, was that information sensitive? How do I get a handle on what information’s going into these different applications? And then the third concern that comes up repeatedly is, what’s going on with AI regulations? These regulations are rapidly evolving.
Will Townsend: And there’s a new kind of group or meeting that sprouts up, it seems like weekly now, in Europe and other parts of the world.
Herain Oberoi: Right. Yeah, every country is grappling with this. We’re grappling it at a global scale. How do we collaborate as countries around AI regulation? But then each individual country has to also implement its regulations and legislations and all of that.
Will Townsend: Sure.
Herain Oberoi: And so different countries are in different stages of maturity. I’d say the EU historically has always been in the front end of this.
Will Townsend: Very highly focused on regulation.
Herain Oberoi: Yeah, the GDPR and all of that. So the EU AI act is one that’s, I would say, furthest ahead. And so regulations as a whole is one that every customer I talk to. And then the great thing about it is, it’s not specifically coming at it from a perspective of, “How do I avoid the regulatory penalties if I’m not compliant?” It’s coming from a perspective of, “No, this is actually helping me reduce my risk of security and compliance and all of that.” So it’s really beneficial for us to collectively, and I know Vasu was here and talking about this as a team sport. It’s a team sport that includes regulatory authorities and governments and different countries collaborating as well.
Will Townsend: Yeah.
Krista Macomber: It certainly does. And to that point, these regulations are not emerging for the sake of having regulations, right? It’s because I think collectively we are trying to navigate this new landscape and identify where are some of these thoughts and vulnerabilities and how do we best position ourselves to really take advantage of AI in a way that is safe and responsible and things like that. So maybe we could talk a little bit more specifically. So I know in our conversations yesterday we were talking about, there are these general purpose AI applications and then there are these custom applications, and really they each present some unique potential risks or threats. So Herain, maybe you could talk a little bit about that.
Herain Oberoi: Yeah, so as I was saying, on the usage side of it, you’ve got the risk specific ones around who is using it, what data can be leaked, and all of that. And so one of the announcements we made was around the AI hub in Microsoft Purview. And the AI hub is effectively this dashboard that gives you visibility into what apps are being used, who is using it, and what sensitive data is being shared and not being shared. It also starts to give you insights such as, okay, something like Copilot is referencing information in your organization that isn’t classified or labeled. Okay, you might want to look into that to say, do you know if that’s sensitive? If it’s not labeled and classified, should it be labeled and classified?
And so giving customers just getting their arms around the problem is step one. And then the other area that we focused on was, as we were talking about, is regulations. And so we have what we call these compliance assessments in Microsoft Purview. And we just released new assessments specifically for the EU AI Act, for the NIST AI Risk Management Framework. And then there’s two ISO global standards. They have long numbers that I don’t remember, but we’ve released assessments for those as well.
Krista Macomber: And that’s really important. I know at Futurum Group we’ve spent a lot of time studying some of these regulations as well, and we do find that customers, it’s difficult to know where to begin. So I think being able to have this assessment is very helpful for helping them to navigate that.
Herain Oberoi: Yeah, it’s a simple framework and it’s a set of guardrails and step-by-step instruction, so you can at least start to evaluate where you’re at on the journey. And customers just appreciate that prescriptive guidance that they get.
Will Townsend: Yes. And I think the NIST framework provides a nice guide or a blueprint, but one size does not fit all. Based on the industry that you’re in, I heard yesterday at the pre-event Charlie Bell speak to financial services companies that are safeguarding people’s bank accounts and credit cards and that sort of thing, critical infrastructure with utilities and that sort of thing. I think those were some examples that were stated. But I’d like to get back to the data leakage point because you touched on that. This is a huge issue, and we’re seeing almost on a weekly basis companies that are experiencing PII leaks exposing Social Security numbers and that sort of thing.
And I believe generative AI is only going to accelerate the sophistication for bad actors to do that. So I’d love to hear it from your perspective and Microsoft’s perspective, what recommendations would you make to address this whole notion of data leakage? Because it’s not going away.
Herain Oberoi: Yeah, this comes up in a lot of my customer conversations as well. I like to say that strong data governance and security is prerequisite hygiene for deploying generative AI in your organizations. Because one of the unique things that makes generative AI so effective is that it has visibility, at least in the case of, let’s say Copilot, from 365 into your graph, your Microsoft 365 graph. So it knows what information you have. Now, of course if your information has all the right access rights and privileges to it, if it has the right classification and labeling, then you can start to apply rules and policies that say, “Hey, only expose certain types of information to certain types of people.”
Will Townsend: Sure.
Herain Oberoi: But if you haven’t done that hygiene, it’s much harder to do that. And so a lot of organizations that have been earlier on in their journey on this are realizing, “Okay, we’d better accelerate getting our preparedness to adopt gen AI by starting with information governance and data security practices.” And so it’s important and now there’s a lot more interest in doing that as well as a result of that.
Krista Macomber: And maybe on also keeping that updated as the data environment evolves within the organization as the regulatory requirements evolve and threats evolve as well. So can you maybe talk a little bit about that too, in terms of how the Microsoft portfolio might help customers with that?
Herain Oberoi: Sure. Yeah, so historically, and the good thing is we’ve had a head start in this problem, because historically we’ve used Microsoft Purview to protect and govern data that’s in Microsoft 365. So that’s everything in Sharepoint.
Will Townsend: That was before the whole gen AI wave, right?
Herain Oberoi: Right, exactly, right. And including other chat interfaces like Teams. So today, for example, if you are in Teams, we can actually detect a policy violation if there’s harassment or collusion or something like that happening. This has got nothing to do with gen AI. We just do that in Teams today and organizations use this. Well, it turns out that the gen AI interface is very similar to a chat interface. And if I’m now asking a Copilot or some other gen AI app questions that might indicate that I might be violating certain policies, well, that same technology that we’ve used can be quickly applied to this. And so we’re able to take a lot of the IP we had in Purview and pivot that very quickly towards gen AI apps as well.
Will Townsend: Solid foundation for sure.
Krista Macomber: It really is. And we were able to see the technology in some of the demos, and it’s pretty slick. So I think maybe one final question to round things up. So Herain, you referenced Copilot for security for Microsoft 365, and there’s obviously so much that goes into play here when we talk about security and governance, especially for AI. So can you maybe talk a little bit about, what are some best practices for customers that are looking to adopt Copilot?
Herain Oberoi: Yeah, so I think similar to the broader use case of adopting gen AI, Copilot for M365 is a very particular, and we think very broad use case for it.
Krista Macomber: Yeah.
Herain Oberoi: And so like I said before, having strong information governance and data security hygiene is an important part of preparedness for it. As part of that, we’ve built out a practice, what we call Fast Track, a preparedness for Copilot for 365, specifically to help customers start with assessing their environments, understanding where their data lives, how much of it’s classified, not classified, and then quickly get going in their Copilot deployments with the right hygiene in place. And so that’s a program that we’ve been ruling out, and many customers have been adopting Copilot for Microsoft 365.
Will Townsend: Is that a self-service tool that you go online, or do you engage with a Microsoft partner to do that assessment?
Herain Oberoi: Yeah, it’s a couple of things. We have a team within Microsoft that’s the Fast Track team, and that’s sort of the white glove service. We’re doing a ton of work to enable our partners to deliver preparedness programs, and many of them are starting doing that already.
Will Townsend: And that adds to their value add with what they’re doing with Microsoft.
Herain Oberoi: Absolutely. In fact, I was in Australia a couple of months ago and met with a number of partners over there, and this was still early before we had fully GA’d Copilot for 365. And it was a big topic of conversation, which is, them building a preparedness practice for the adoption of Copilot M365 is a huge opportunity for them and for customers to get value from it.
Will Townsend: Yeah.
Krista Macomber: So thank you, Herain, that makes a lot of sense. And why don’t we maybe shift gears for a moment and talk about the custom AI applications that customers are creating and using. Because I know that Microsoft had a couple announcements around that this week as well.
Herain Oberoi: Yeah, absolutely. And I love this topic because I really think what’s happening around application development is really exciting. Having gone through the wave of moving from on-premise to the cloud, we saw this shift in how applications were developed. We went from what was this historic three-tier architectures to these microservice-based architectures. And what that did was it caused this complete reinvention of different aspects of application development, including devops and all of it.
And I believe that with AI, with generative AI, a very similar thing is happening. Because the components of a generative AI app are fundamentally different than your regular apps. Obviously you’ve got the large language model, but you also have this thing called the AI orchestration layer. And then you have this huge dependency on data. Having spent a lot of time in the data world myself, I like to say data is the fuel that powers AI. And so you’ve got training data to train the model. You’ve got fine-tuning data to actually customize your application itself, and then you have what you call grounding data or web data to use in these RAG processes to minimize hallucinations or provide more accurate responses.
So all of that data needs to be secured and all of that data needs to be protected against attacks such as poisoning and things like that. And so just having an inventory and a view of all your different AI assets is a great starting point. And so to address that specific issue, we announced posture management for AI assets in Microsoft Defender for cloud. And so with that, you can now both discover and inventory all your AI assets in your organization.
Krista Macomber: And that’s so important. When we look across the security landscape, this concept of posture management has become a growing trend, and it makes a lot of sense because we do have to do everything we can to try to at least keep up with the attackers, if not hopefully be a step ahead of them. And I think that posture management, really understanding where your vulnerabilities are, especially as these new AI applications are being custom built, I imagine is going to be resonating quite a bit with customers from that perspective.
Herain Oberoi: Yeah, absolutely. And that’s the first piece of it. And then the second piece of it is not just what do you do to protect your assets, but the second piece is, in runtime, how do you protect against these new types of threats?
Krista Macomber: Yeah.
Herain Oberoi: And an example, one of them we spoke about yesterday was prompt injection attacks.
Krista Macomber: Yeah.
Herain Oberoi: This idea that someone can use English language and manipulate your model to do things it wasn’t intended to do.
Will Townsend: Your demo was really effective in driving that home.
Herain Oberoi: Right.
Will Townsend: And how it shut it down and then reported it. And the benefit of what you’re doing from a complete Microsoft stack and being able to flow all of that information through a single pane of glass, I was blown away. It’s really powerful.
Herain Oberoi: And it’s important. It’s important for us to be able to connect those dots, because we are in a position to be able to do it. So to be able to detect the attack that comes and reach that information with the threat signals that we have from everywhere else, and then help a SOC analyst correlate that with other alerts that they’re getting inside Defender for XDR or whatever tool they’re using. And so that way you can see the full attack path, you can see the intent, and it makes it much easier to get ahead of these problems then.
Krista Macomber: Well, Herain, we wanted to thank you so much again for sitting down with Six Five Media On the Road. I’m sure we could talk for many hours on all these topics, and we certainly wish you a great week here at RSA Conference 2024. And we want to thank everyone for watching, and look forward to seeing you on our next video.
Author Information
With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately a decade of experience providing research and advisory services and creating thought leadership content, with a focus on IT infrastructure and data management and protection. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.
Prior to joining The Futurum Group, Krista led the data center practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.
Krista holds a Bachelor of Arts in English Journalism with a minor in Business Administration from the University of New Hampshire.
Six Five Media is a joint venture of two top-ranked analyst firms, The Futurum Group and Moor Insights & Strategy. Six Five provides high-quality, insightful, and credible analyses of the tech landscape in video format. Our team of analysts sit with the world’s most respected leaders and professionals to discuss all things technology with a focus on digital transformation and innovation.