Two Trends in AI Regulations and a Look at Microsoft Copilot – The AI Moment, Episode 2

On this episode of The AI Moment, host Mark Beccue examines two trends in AI regulations and takes a look under the hood of Microsoft Copilot to understand how Microsoft is addressing built-in challenges for LLMs.

The discussion covers:

  • The key Generative AI trends – AI regulations. Analysis of the status and arc of the EU’s AI Act and copyright/IP issues linked to Adobe’s proposed FAIR Act.
  • A company we like doing AI. We take a look under the hood of Microsoft’s Copilot to understand how Microsoft is addressing the built-in challenges for LLMs. How will Copilot navigate LLM issues with accuracy, bias, privacy and hallucinations?

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your favorite podcast platform below:

 

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Mark Beccue: Hello, I am Mark Beccue, Research Director for AI with The Futurum Group. Welcome to The AI Moment, our weekly podcast that explores the latest developments in enterprise AI.

The pace of change and innovation in AI is dizzying and unprecedented. I’ve been covering AI since 2016, and I’ve never seen anything like what we’ve experienced since ChatGPT launched about this time last year and kickstarted the generative AI movement. That’s why we call this The AI Moment. With this podcast, we distill the mountain of information, separate the real from the hype, and provide you with some sure-handed analysis about where the AI market will go.

In each episode, we’ll deep dive into the latest trends, some of the technologies, and generally what’s shaping the AI landscape. That might be discussions about the latest advancements, parsing the event landscape and the big announcements, or things like AI regulation, ethics, and risk management. So we’re going to cover a lot.

Each one of these is about 30 minutes long, and each show typically is made up of three or four segments. This is episode two, so hopefully you’ll get the cadence of this. The segments are generally one of a few different types. There’s a guest spotlight, where I’ll bring in someone, typically from a vendor company, to give us a view of what they’re looking at. We’ll go through the key trends in generative AI, and I like to say that we look at trends and not fads. An important part of our analysis is not to jump at everything, but to give things a moment to breathe and consider whether they’re really a trend.

Then there’s one of my favorite segments, which I call adults in the generative AI rumpus room. Like we said, this moment has jumped up on us since October, and it’s produced a lot of chaos, disruption, and some impatience. But some organizations have been calm, thoughtful leaders in the midst of it. Those are the adults in the generative AI rumpus room, and we like to highlight some of them. And then finally, I have a segment I call a company we like doing AI, and I’m actually going to talk about that some today.

So two segments we’re going to do today, and we’re going to jump right in. This is episode two, and we’re covering two pieces: key trends in generative AI, where we’re going to talk about regulations — the AI Act and some interesting developments around copyright and IP — and then a segment on companies we like doing AI, which is going to be a bit of a focus on Microsoft’s Copilot. I got a chance to talk with them about what’s under the hood and how they’re dealing with some classic LLM issues.

All right, everybody ready? We’re going to go right into it. Let’s talk about regulations. So, the key trends: regulations. We follow the AI Act and watch what’s going on there. I’m not going to go through the background of it; I just want to give you an update. It’s been moving through the process in the EU, with different regulatory and governmental bodies looking through the language. And I follow a writer over there who writes about where the process is and what the status is. I want to read you a little piece real quick about what’s going on. This will just take a second.

It says — this is from the EU AI … that’s hard to say … the EU AI Act Newsletter, and this is Risto Uuk. He says: Foo Yun Chee and Supantha Mukherjee from Reuters reported that Brando Benifei, one of the lawmakers responsible for the AI Act negotiations, called on the EU member states to compromise on crucial issues in order to secure an agreement by the end of the year. With two more rounds of discussion scheduled next month, Benifei stressed the need for greater flexibility among the EU countries.

Some of the most contentious issues revolve around biometric surveillance and the use of copyrighted material by AI, including models like ChatGPT. Lawmakers aim to prohibit AI usage in biometric surveillance, but several EU countries, led by France, seek exceptions for national security and military purposes. Additionally, legislators want AI regulations to encompass copyrighted content used by companies like OpenAI.

In contrast, EU member states argue that existing copyright rules within the bloc provide sufficient protection. An advisor at the European Commission stated that biometric surveillance would go “down to the wire.” Some voices, such as that of Svenja Hahn, advocate for banning biometric facial surveillance within the AI Act and addressing copyright concerns through copyright law, aligning with EU countries on this matter.

So just a little bit of background there, right? Let’s take a look at that for a second. I had a couple of thoughts. One, when you read that analysis, I want to emphasize that laws take time. This is not something that happens overnight, and there really is a good reason for that. I think some of us get a little impatient and we’re thinking: what’s going to happen next? Where’s this going to go? Why is this so important? Let’s back up and talk about that for a second.

The EU AI Act is important because I think it follows a bit in the path that GDPR did. GDPR really forced everybody: if you’re going to do any business in the EU, you have to comply with that law. So the AI Act is going to be first out of the box with some sort of AI legislation, some sort of regulation. And US companies, if they want to do business in the EU, are going to have to act on it. So it becomes a de facto standard.

So let’s set that aside for a second. I do want to talk about why these things take time and why it’s important to get it right. When you hear some of the back and forth we just saw about copyright and biometric surveillance — and we’re going to talk about copyright in just a second in another piece — I think it’s important that the lawmakers are trying to construct regulations that cover things, but that are flexible enough to accommodate what they don’t really know right now. Let’s call it framework law.

Really, if you look at how the US Constitution is written, it’s been remarkably flexible. It’s a framework law. Framework laws are broad enough to cover things without being so narrow that they exclude things.
So it’s going to take some time to get this right. Some people are saying they may get this done by the end of the year, and if they do, it goes into effect in about a year or so.

But I don’t know if it’ll happen that fast. We’ll see. My point here is that we’ve got to be patient. We want frameworks that are flexible. So stay tuned, breathe normally, and let’s just carry on.
The second part of that piece was this interesting fight that’s brewing: copyright law, IP, and basically privacy.

So that leads us to the rumblings that are going on, and what I see as some movement, some positioning, around copyright and IP. This has really come to a head mostly around the more mature use case, which is image generation. We’ve heard about this through different commercial pieces: Adobe has Firefly, and Shutterstock and Getty Images have image generation pieces. Then there’s OpenAI, which has something consumers can get called DALL-E and DALL-E 2, there’s Stability AI, different players. So this is all coming to a head around IP and things like this.

I want to note something here. Adobe went out a few weeks ago and said they’re going to propose what they call the FAIR Act protections. And I have a quote I want to read to you; this was from one of their executives in a blog post about the FAIR Act. It says: such a law would provide a right of action to an artist against those that are intentionally and commercially impersonating their work or likeness through AI tools. This protection would provide a new mechanism for artists to protect their livelihood from people misusing this new technology, without having to rely solely on laws around copyright and fair use. In this law, it’s simple: intentional impersonation using AI tools for commercial gain isn’t fair.

And so the framework of the idea Adobe is proposing says: one, artists can go after the misuser directly; two, they want it to be a federal law, so it supersedes any state laws; and three, they want statutory damages that award a preset fee for each harm, which minimizes the burden on the artist to prove actual economic damages. So let’s set that aside — we were talking about time, these trends, and what’s going on here.

So I talked to Adobe about this and got a little more insight into where it is. I asked, “Well, where is this? Is it a law? Is it in the hands of somebody? Is a legislative body working on this?” And the answer, in talking to Adobe, was that they have presented language to the White House and to Congress, specifically to a Senate committee. So the short of it is that this hasn’t moved very far yet. It’s language, it’s a conversation.

But the interesting thing is — and I’ve talked about this before in many of my writings — when a company that’s at the forefront of a technology pioneers a path, what they do in the public space becomes a de facto standard.
So it’s important and interesting to note that a company like Adobe, which has invested a lot of money in AI for image generation, is moving forward and, in its own interest, writing or trying to write laws that will help it sell its products. But that’s normal. That’s something we will see going forward: the industry helps write the regulation.

So I just wanted to set that aside, but let’s also take a look at what’s underneath it a little bit, because there are some rumblings, right? Adobe says this about image generation and protecting the rights of an artist who may get intentionally ripped off. But there’s also rumbling among artists about getting paid and about their work being used for training. And I wanted to share with you — we’ve put together a matrix of these image generators, something we’re going to publish pretty soon.

What I wanted to do is give you a sense of where we assess these. I mentioned Adobe, Getty Images, Shutterstock, Stable Diffusion from Stability AI, and DALL-E. So we did a little matrix and asked: What are the models trained on? Is there any … I can’t even say it … indemnification available? Is content added to the content library for others to license? Can the content be used for model training? Is there compensation to creators? And is the creator the copyright owner?

Now, when we did this matrix, it’s interesting — the answers are all over the place. I’m just going to give you an example. In terms of indemnification, Adobe, Getty, and Shutterstock, which are the commercial offerings, do offer indemnification, but it might not be free. Stable Diffusion said no, and for DALL-E, I didn’t see an answer. But those are the open source pieces — makes sense, right?

We’ll skip ahead: can the content be used for model training? With Adobe, it’s yes for people who are Adobe Stock contributors, so it depends on the way the contract’s written. And all the others were yes. I’ll skip ahead again: is there compensation to creators? All three of the commercial players are building ways for those people to be compensated, but each is different. When we put this out there, you’ll get to see it. So I thought that was interesting. With Stable Diffusion and DALL-E, it’s not applicable.

And then the last one is copyright, which was funny: no real answers on a couple of these, except for a flat-out no from Shutterstock and a kind of question-mark yes from Stable Diffusion. So as you can see, these things are wide open. Copyright is going to be challenged. And to the point Adobe is making about how these laws come out, the fight is going to be around copyright and IP for a while. I think that’s a long-standing kind of problem. All right, so that’s what’s going on in trends right now.

Now I’m going to move on, and we’re going to talk about a company we like doing AI. I had a chance to go to the Microsoft Copilot launch a couple of weeks ago, and being there and getting to hear their story made me think of a few questions. I’ll preface this by saying — I’ve possibly written about this before — that Copilot from Microsoft is possibly the biggest way massive numbers of people will be introduced to generative AI, because of the reach of Microsoft’s apps.

So here we are: on September 21st, they unveiled all these announcements, and I’m going to give you a couple of bullet points about what they said it does. If you’re not familiar, the quote in the press materials is that Copilot will be your everyday AI companion, uniquely incorporating the context and intelligence of the web, your work data, and what you’re doing in the moment on your PC to provide better assistance, with your privacy and security at the forefront. It will be a simple and seamless experience available in Windows 11, Microsoft 365, and on the web with Edge and Bing.

It’ll work as an app or reveal itself when you need it with a right click. Microsoft says it will continue to add capabilities and connections to Copilot across its most-used applications over time, in service of its vision of one experience that works across your whole life.

General availability of Copilot starts November 1st for enterprise and commercial customers. That’s with Microsoft 365 Chat, which is what kind of launches the whole thing, and that will be in Copilot for Outlook, Excel, Loop, OneNote, OneDrive, and Word. There’s a separate one in Windows 11, but we’re going to skip that right now. So let’s talk about where they’re going with this and what it looks like. Let’s take a look.

I called this research note “Under the Hood: How Microsoft Copilot Tames LLM Issues.” This is the big challenge: LLMs have several built-in challenges, primarily around accuracy, bias, privacy, and hallucinations. These are challenges that can pose huge risks for companies that leverage LLMs. And Microsoft’s investment in and partnership with OpenAI, plus the ubiquitous nature of ChatGPT, had me wondering how Microsoft is going to leverage OpenAI and Microsoft’s own LLM IP — whatever they’re doing under the hood with Copilot.

So I had these questions that I mentioned. And the question I really asked, the one we’re going to look at here, is: how will Microsoft deal with and solve the built-in challenges — primarily accuracy, bias, and hallucinations — in the deployment of Copilot across Windows and 365?
So I sent these questions to them, and they came back with some cool stuff. They provided a look under the hood at how they’re addressing these issues, specifically for Microsoft 365 Copilot.

What’s great is they published an article on Microsoft Learn, their documentation, training, and certification portal, and it addresses many of these issues. You can access it; it’s called “Data, Privacy, and Security for Microsoft 365 Copilot.” They sent me this. So let’s go through accuracy first.

Let’s start with a definition: giving an LLM access to data that isn’t part of its training data is called grounding. Copilot combines LLMs with content from what they call the Microsoft Graph — that’s the emails, chats, and documents you have permission to access — and the Microsoft 365 apps. The Graph gives Copilot access not only to that content, but also to the context of the content.

They use the example of an email exchange the user had on a topic — that’s contextual; it’s not just the data. So when an LLM has access to data like this that isn’t part of its training data, that’s grounding. And Copilot generates responses that are anchored in this organizational data and nothing else.

So in essence, the LLM is compartmentalized to certain tasks for Copilot but not others. Think about it this way: so many LLMs are trained on public data, and that public data is, in a way, containerized here. When Microsoft uses the model for Copilot, they’re really focused on a narrower set of data — the Microsoft Graph data — and they’re using some of the public data to help Copilot talk. Let’s think of it that way.
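To make that grounding idea concrete, here’s a minimal sketch in Python. This is my illustration of the general pattern, not Microsoft’s implementation — every name and data structure here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_users: set  # users with at least view permission

# Hypothetical stand-in for tenant content reachable through the Graph.
TENANT_DOCS = [
    Doc("Email thread: launch moved to November 1 for commercial customers.", {"alice", "bob"}),
    Doc("Draft Q4 budget spreadsheet notes.", {"alice"}),
]

def retrieve(query: str, user_id: str) -> list:
    # Permission trimming: only content the user can already view is eligible.
    # A real retriever would also rank these by relevance to the query.
    return [d for d in TENANT_DOCS if user_id in d.allowed_users]

def grounded_prompt(question: str, user_id: str) -> str:
    # Grounding: anchor the model to retrieved organizational data by injecting
    # it into the prompt, rather than relying on the model's training data.
    context = "\n".join(d.text for d in retrieve(question, user_id))
    return (
        "Answer using only the context below. If it is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("When is the launch?", "bob"))
```

The key design point is that the model’s answer space is fenced in by what retrieval returns, and retrieval is fenced in by what the user is allowed to see.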

All right, and not only that: they also said Copilot only uses the organizational data — and this is a quote — “to which individual users have at least view permissions.” So it only searches within the user’s tenant. That’s pretty cool; I thought it was a nice way to address accuracy in one sense. But there’s a caveat. Microsoft acknowledged that there will be issues with accuracy, and their primary suggestion for dealing with that is this: don’t depend on Copilot to fully automate draft writing and summaries.

And I’ve got a quote for you here from that website: “The responses that generative AI produces aren’t guaranteed to be 100% factual. While we continue to improve responses, users should still use their judgment when reviewing the output before sending them to others. Our Microsoft 365 Copilot capabilities provide useful drafts and summaries to help you achieve more while giving you a chance to review the generated AI, rather than fully automating these tasks.”

In regard to these accuracy issues, there’s another piece: misinformation and disinformation. If you ask about that, they kind of say Copilot is a work in progress. Here’s the quote on disinformation and misinformation: “We continue to improve algorithms to proactively address issues such as misinformation and disinformation, content blocking, data safety and preventing the promotion of harmful or discriminatory content in line with our responsible AI principles.”

So they’re saying, for misinformation and disinformation: we’ve got algorithms working on it.
If you grade out the accuracy piece, there are some improvements, and we see something good in the containerized way Copilot looks at just the user’s tenant. But you still have issues: they say don’t trust it completely, use it as an assistant for drafts and summaries because it might not get things right. And they’re not sure how to deal completely with misinformation and disinformation at this point.

Let me move on real quickly. We had three other areas; let’s take privacy and security.
This was a strength, and it makes sense that it would be a strength of what Copilot can do. They did a really good job of thinking this through and protecting user data from LLMs for all these enterprise customers. It looks like this: Copilot is GDPR and EU Data Boundary compliant.

No user data or activity accessed through the Graph is used to train the LLM. And within that user data, Copilot only surfaces the organizational data to which individual users have at least view permissions. They also noted that users have to make sure they’re using the permission models available in the 365 apps — if you don’t use the permission models, it won’t work.

It only searches for information from the user’s tenant, like we said; it cannot search other tenants the user might have access to. They also mentioned that user prompts, the data Copilot retrieves, and the responses generated all remain within the Microsoft 365 boundary. And it should be noted that Microsoft makes a point that Copilot uses “Azure OpenAI services processing, not OpenAI’s publicly available services.”

So you can see they’ve taken that model and, in a way, fine-tuned it into an Azure piece — possibly their own IP — that they control better. That’s really important. And they mentioned a couple of other things: for the grounding process, they use something called the semantic index, and that ensures the grounding is based only on content the current user is authorized to access. It’s really very thorough.
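Microsoft doesn’t publish the internals of its semantic index, but the general shape of permission-aware semantic search looks something like this sketch — bag-of-words cosine similarity standing in for the learned embeddings a real index would use, with all names hypothetical:

```python
import math
from collections import Counter

# Toy "semantic index": rank tenant documents against a query, after trimming
# to what the user may view. A real semantic index uses learned embeddings;
# bag-of-words cosine similarity here just illustrates the idea.
DOCS = [
    {"text": "Q4 budget meeting notes and planning actions", "allowed_users": {"alice"}},
    {"text": "email thread about the launch timeline", "allowed_users": {"alice", "bob"}},
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def semantic_search(query: str, user_id: str, top_k: int = 3) -> list:
    qv = vectorize(query)
    # Permission trimming happens before ranking, so unauthorized content
    # never even enters the candidate set.
    visible = [d for d in DOCS if user_id in d["allowed_users"]]
    ranked = sorted(visible, key=lambda d: cosine(qv, vectorize(d["text"])), reverse=True)
    return ranked[:top_k]

print(semantic_search("when is the launch", "bob"))
```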

Let’s move on to bias real quick. According to Microsoft, Copilot leverages their safety system, using content filtering, operational tracking, and abuse detection to provide a safe search experience. Okay — we’ll see about that; we don’t know.
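They don’t say how that safety system is built, but as a toy illustration of where a content-filtering stage sits in a response pipeline — real systems use trained classifiers, not the keyword check sketched here — it’s essentially a gate on the model’s output before the user sees it:

```python
# Toy content filter: a gate on model output before it reaches the user.
# Real safety systems use trained classifiers plus operational tracking and
# abuse detection; this keyword check only marks where the stage sits.
FLAGGED_TERMS = ("credit card number", "social security number")

def filter_response(model_output: str) -> str:
    if any(term in model_output.lower() for term in FLAGGED_TERMS):
        return "[Response withheld by content filter.]"
    return model_output

print(filter_response("Here is the customer's credit card number: ..."))
```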

The last piece is hallucination, and this is really an Achilles heel for LLMs. Copilot is going to hallucinate — they admit it; Microsoft says it. The approach they’re going to take to keep this under control involves two initiatives they talked about: prompt design and user rating feedback. For prompt design, the idea is that going forward, all of us will have to adopt new interaction techniques to take advantage of LLM-based systems. That was promoted at the launch. They hinted that we’re all going to have to learn how to be prompt engineers, and they said they were going to offer trainings on how to do that — how users should write prompts to get the best results. So one way they say they’ll take care of some of the hallucination is users being better prompters.
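To give a feel for what better prompting means in practice — my example, not Microsoft’s training material — compare a vague prompt with a structured one:

```python
# A vague prompt leaves scope, audience, and format to the model's imagination,
# which is exactly where hallucinated detail creeps in.
vague_prompt = "Summarize the meeting."

# A structured prompt pins those down and tells the model what to do when
# information is missing, narrowing the room it has to invent things.
structured_prompt = (
    "Summarize yesterday's budget meeting in three bullet points for the "
    "finance team. List action items with owner and due date. If something "
    "was not discussed, write 'not discussed' instead of guessing."
)

print(structured_prompt)
```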

Then they mentioned the user rating piece, and they said Microsoft may use user feedback to improve the model. For example, users can rate each response to indicate whether it was helpful or not, and provide additional detailed feedback along with their rating. So think of it as crowdsourcing: was this helpful, did it work? Okay, maybe that’s something.
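Mechanically, that feedback loop can be as simple as logging a thumbs-up or thumbs-down per response. Here’s a sketch of the pattern, with every name hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Rating:
    response_id: str
    helpful: bool      # thumbs up / thumbs down
    comment: str = ""  # optional detailed feedback
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

FEEDBACK: list = []

def rate(response_id: str, helpful: bool, comment: str = "") -> None:
    # Aggregated ratings can flag prompt patterns or content sources that tend
    # to produce unhelpful or hallucinated answers, feeding model improvement.
    FEEDBACK.append(Rating(response_id, helpful, comment))

rate("resp-0042", helpful=False, comment="Cited a document that doesn't exist.")
print(FEEDBACK[0])
```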

So those are all the pieces; let’s get to a conclusion about what’s under the hood. Copilot is immense, and it’s really going to touch an enormous number of people with LLM-based AI. I can’t think of something that’s going to have a bigger impact — it’s really putting an LLM in front of almost everybody. What’s interesting is the company seems very confident that the inherent challenges of current LLMs are not going to materially impact the outcomes Copilot produces. Of the main challenges — accuracy, privacy, bias, hallucinations — they really have a solid standing in the privacy and security area. For bias, it’s hard to say whether the stated controls will be effective; only time will tell.

That brings us to accuracy; we’ll hold hallucinations aside for a second. In terms of accuracy, they see grounding the model as the main control — bringing it down into the tenant, like I said earlier. This should reduce inaccuracy significantly. But it’s interesting to see the company state that many Copilot outputs should be seen as drafts and not treated as final outputs without users really going through them. And this is where the path forward gets a little tricky to me.

Will users really heed that advice? Will they learn quickly enough to avoid major issues? Are they going to become better prompt engineers, or are they going to be disenchanted when it doesn’t work? What I think would be worse is if users simply go with the automated outputs because they don’t care — they just want it out, and they’re going to push it out. And I don’t think Microsoft has a strong deterrent for dealing with misinformation and disinformation, but time will tell. We’ll see. So that’s accuracy.

For hallucination, this is going to be an issue. The onus, Microsoft is saying, is basically on users to design good prompts and to provide feedback on responses. And I think the same issues apply here as for accuracy: will users participate? Will they actively become educated? Will they provide enough feedback on responses? It’s interesting how this really pushes things onto users. Bottom line: a lot hinges on users taking an active role for this to work well.

And I think Microsoft knows this — they really do know this. They’re willing to bet that most users will adopt these new behaviors, kind of like how we all adapted to web search, or the mouse and the GUI, or texting: we adopt a new way of doing things to take advantage of new applications. They know there will be bumps in the road, and they believe those bumps can be fixed. I heard a great analogy — they didn’t say this, but I think it fits Copilot and these times we’re going through: flying the plane while building it.

All right, so that brings us to the end of our time today. I appreciate you being with us. Thanks for joining me here on The AI Moment, episode two. Be sure to subscribe and rate and review the podcast on your preferred platform and we’ll see you next time. Thanks.

Other insights from The Futurum Group:

Key Trends in Generative AI – The AI Moment, Episode 1

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
