
Microsoft Copilot Forecast, Fairly Trained, Google ASPIRE | The AI Moment – Episode 12

On this episode of The AI Moment, we discuss two emerging Gen AI trends: Microsoft Copilot’s AI revenue potential and LLM research. We also celebrate our latest Adults in the Generative AI Rumpus Room.

The discussion covers:

  • With the most used enterprise software and operating system in the world, Microsoft placed a significant bet on AI with the introduction of Copilot to enterprise users in September 2023. Now Microsoft has unleashed Copilot, making it available to nearly every Microsoft 365 user. What will the impact be? Is Microsoft poised to generate material revenues from AI in 2024?
  • LLMs are evolving at lightning speed, in part due to a copious amount of academic research. We discuss what that means for the market.
  • More Adults in the Generative AI Rumpus Room: Non-profit Fairly Trained, Google Research.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.


Listen to the audio here:

Or grab the audio on your favorite podcast platform below:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Mark Beccue: Hello, I’m Mark Beccue, Research Director for AI with The Futurum Group. Welcome to The AI Moment, our weekly podcast that explores the latest developments in enterprise AI. We are literally having a moment. The pace of change and innovation in AI is unprecedented. I’ve been covering AI since 2016, and I’ve never seen anything like what we’ve experienced since ChatGPT launched in late 2022 and kick-started the generative AI era. With The AI Moment podcast, we are looking to distill the mountain of information, separate the real from the hype, and provide you with sure-handed analysis of where the AI market will go. We dive deep into the latest trends and technologies that are shaping the AI landscape, covering everything from the latest developments and advancements in AI technology and the mutating vendor landscape to AI regulations, ethics, risk management, and a whole lot more. So typically, the show is made up of a few segments.

Today, we have three segments. Two are what I’d call key trends in generative AI, and one is a spotlight on Adults in the Generative AI Rumpus Room. The name of today’s episode is Microsoft Copilot Forecast, Fairly Trained, and Google ASPIRE. What we’re going to do is look at two key trends. For Microsoft Copilot, I have a forecast for what I think they’re going to do as far as revenue goes this year. Then I have a piece I’d like to talk about around LLM research. The second segment is going to be on Adults in the Generative AI Rumpus Room. So let’s get right into it. You might have heard that last week Microsoft had an update for Copilot. On the 16th, they announced that they were expanding Copilot in a couple of different ways.

They were also sharing some numbers, which were interesting. One was that they reported, and this is with enterprise users, that as of the 16th, Copilot users had more than 5 billion chats and had created more than 5 billion images to date. We’re assuming that’s since launch in the last week of September until now. That’s pretty substantial. But what they announced as far as the expansion goes is the introduction of Copilot Pro, which is a new subscription for individual users. It’s $20 per user per month and comes with a bunch of things, like priority access to their latest foundation models, including GPT-4 Turbo, even during peak times, according to Microsoft. There’s enhanced and faster image generation through DALL-E, which is an OpenAI model. There’s Copilot in Word, Excel, PowerPoint, Outlook, and OneNote. And there’s the ability to build what they call Copilot GPTs, which are Copilots customized for a specific domain. So that came out.

And the other big piece of the expansion news was that Copilot for Microsoft 365, the enterprise offering at $30 per user per month, is now available to businesses of all sizes. They dropped the minimum, which I think used to be 300 seats, so it’s generally available for smaller businesses with no minimum number of seats. There are some other things in there as well; they have now put Copilot into Teams and business chat for those offerings. So that’s the launch: basically a massive expansion from where they were, which was originally just the enterprise. So what I thought about was, given the numbers and the pricing they shared, you have $30 per seat per month for enterprise users, for any business user that wants it.

And then you have Copilot Pro for individual users at $20 a seat per month. I got to thinking about that: if we look at Copilot and where it’s sitting, it’s sitting in the most used enterprise software and operating system in the world, right? So Microsoft has placed a huge bet on this, seems to be all in, and is moving forward very quickly. And I thought, “What would that impact be? Are there material AI revenues that might be related to this for 2024?” So I built a little forecast, and I’ll give you some of the background to it and run through the numbers for you. The approach is that Microsoft is charging a premium for these new capabilities; the more money you pay, the more Copilot can do for you. So the $20 and $30 price points are how this goes.

And if you look at some basics from Microsoft’s fiscal 2022 annual report (their fiscal year ends June 30th), they reported that the number of paid Office 365 seats in the third quarter of fiscal 2022 was 345 million. That’s paid seats. Then in September 2023, when Microsoft introduced Copilot, they told Wall Street investors they felt the installed base opportunity for Copilot among that enterprise set of customers was 160 million seats. So you have the total, which is 345 million seats, and 160 million of those seats are enterprise users. Keep those as our data points, along with what it costs per seat. What I did is build a forecast around both the calendar year and Microsoft’s fiscal year. Here’s my methodology and some assumptions. I developed a low, medium, and high estimate, and I assigned a percentage of paid seats per month per category: enterprise seats at $30 per seat per month, all other seats at $20 per seat per month.

And I didn’t assume any growth in total seat numbers. I just used the 345 million and the 160 million, didn’t grow them or shrink them, just used them as static numbers. I’m going to skip the calendar year and go to what I think Microsoft will do for their fiscal year, which runs through June 30th, 2024. So here we go. Here are the notes I would make on how I looked at this. For the low estimate on enterprise seats, I assumed a baseline of 5% of enterprise seats paying the $30 starting in January, growing to 12% of enterprise seats. That gives you an idea of how this worked out. So in the fiscal year that ends June 30th, here’s my estimate for enterprise users: my low estimate is $1.8 billion, my medium estimate is $4.3 billion, and my high estimate is $6.5 billion.

So that’s enterprise users: low, medium, high. The other number is all the other users: the low estimate would be half a billion dollars, the medium estimate $2 billion, and the high estimate just under $3 billion. So the totals look like this: what I’m estimating is that Microsoft will see, at the low end, $2.3 billion in revenue from Copilot in their fiscal year. On the high end, I think they’re going to see more than $9 billion. Somewhere in that range, I think, is going to be material revenue to them based on AI, which is interesting and intriguing.
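To show the arithmetic behind a per-seat estimate like this, here is a minimal back-of-the-envelope sketch in Python. The seat counts and price points come from the discussion above; the month-by-month adoption percentages are hypothetical placeholders rather than the forecast’s actual inputs, so the output will not match the low, medium, and high figures exactly.

```python
# Back-of-the-envelope Copilot revenue sketch (illustrative only).
# Seat counts and prices come from the discussion above; the monthly
# adoption percentages below are hypothetical placeholders, not the
# actual inputs to the forecast model.

TOTAL_PAID_SEATS = 345_000_000        # paid Office 365 seats (Q3 FY2022)
ENTERPRISE_SEATS = 160_000_000        # Copilot enterprise opportunity
OTHER_SEATS = TOTAL_PAID_SEATS - ENTERPRISE_SEATS

ENTERPRISE_PRICE = 30                 # $ per seat per month
OTHER_PRICE = 20                      # $ per seat per month (Copilot Pro)

# Assumed share of seats paying each month, January through June 2024.
enterprise_adoption = [0.05, 0.06, 0.07, 0.09, 0.10, 0.12]    # example low-case ramp
other_adoption      = [0.005, 0.01, 0.01, 0.015, 0.02, 0.02]  # made-up ramp

enterprise_rev = sum(ENTERPRISE_SEATS * p * ENTERPRISE_PRICE
                     for p in enterprise_adoption)
other_rev = sum(OTHER_SEATS * p * OTHER_PRICE for p in other_adoption)

print(f"Enterprise Copilot revenue: ${enterprise_rev / 1e9:.1f}B")
print(f"Copilot Pro / other revenue: ${other_rev / 1e9:.1f}B")
print(f"Total: ${(enterprise_rev + other_rev) / 1e9:.1f}B")
```

Swapping in different adoption ramps for the low, medium, and high cases is what drives the spread between roughly $2.3 billion and just over $9 billion.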

And I’m going to note something I wrote about this in September. I’ll just read it to you; I thought it was kind of cool. They’re going full speed ahead on all of this, and here’s what I said in September: “If Copilot rolls out smoothly, AI will become a mass-market technology within the next 18 months. Microsoft will be tasked with educating the world and training the world how to best use AI tools. If Copilot can seamlessly orchestrate apps as envisioned, work and personal productivity will rise simply based on Microsoft’s users alone and fundamentally change the way we interact with software. Its success or even the promise of success will spur even greater investment by enterprises to leverage the power of AI. For Microsoft specifically, a Copilot success will solidify the company’s stranglehold on market share for Windows OS and Microsoft 365 applications. It could create greater opportunities for Microsoft to gain market share in enterprise applications they don’t currently dominate, such as sales, marketing, and CRM software from the likes of Salesforce or Adobe, and ERP-type software of the kind you’d see from SAP, Oracle, or ServiceNow. And Microsoft Teams becomes more powerful and increases Microsoft’s potential to grab more market share in collaboration tools. Finally, Copilot’s success might give Microsoft a chance to break Google’s dominance in search.” So that’s what I wrote back in September. I think it remains true, and they’re off to the races with what they’re doing. They feel confident, they feel comfortable, and it looks like they feel they have the guardrails in place for the OpenAI API they’re using. So we’ll see what happens, right?

So come back and see me at the end of the year and see if I hit somewhere between $2.3 billion and $9.2 billion in revenue for Copilot, right? So that’s that. Next, we’re going to talk about a trend I mentioned earlier, one that I think is interesting and that happened this week. I was looking around and found a resource I think is worth sharing, along with an interesting statistic. There’s a media company called Marktechpost Media, and they position themselves as an AI news platform. On their website, they track LLM research and write short synopses of the papers, so you can go there and see what’s going on. I read through it, and going through everything that’s there, I was stunned by a statistic I’ll share with you: since January 1st, they have posted, as of yesterday, 94 LLM-related research papers. Ninety-four. What this says to me, and this is really just a short note, is that I keep saying LLMs are evolving and mutating at lightning speed, and it’s absolutely true.

There’s an emphasis from the research community, which is a combination of academics and some huge players we talk about all the time that are very, very involved in research, Microsoft, Google, AWS, and IBM included. They’re all thinking about how to make these LLMs work better. What it points to, for me, is that these are early days for language models. There’s all sorts of work being done to make them work better, and they’re just going to keep changing. So I wouldn’t strap into one or two particular approaches and think that we know absolutely how we’re going to keep using these or how we’re going to get better at it. It was interesting to see all the different types of research there. I would go check it out.

It’s Marktechpost Media, M-A-R-K-T-E-C-H-P-O-S-T. It’s a good resource and I recommend it. So that’s segment number two of our trends: we talked about Microsoft Copilot and LLM research. The second part of our discussion today is around Adults in the Generative AI Rumpus Room. I have two candidates I’d like to talk about today. One is called Fairly Trained, a new non-profit that announced it had formed and launched last week. It’s offering certification for Gen AI companies that get consent for the training data they use. And that’s a really interesting and intriguing idea.

And I want to tell you about this organization; it’s not a company, it’s a non-profit. They are saying that the certification simply confirms that a company has consent for the training data it uses. It’s called the “Licensed Model” certification, and they say it can be awarded to any generative AI model that doesn’t use any copyrighted work without a license. The licenses involved can be of various types, including custom licenses and permissive open licenses, but the certification will not be awarded to models that rely on a fair use copyright exception or similar, which is an indicator that rights holders haven’t given consent for their work.

So, they’re hoping that this will be a way to get a fairer approach to training data acquisition. In the announcement, which I thought was interesting, they mentioned several companies, some of which I’ve never heard of and which I’m going to dig into. These are the companies that are already certified by Fairly Trained: Beatoven.ai, Boomy, BRIA AI, Endel, LifeScore, Rightsify, Somms.ai, Soundful, and Tuney. These sound like music ones. One other interesting thing: they have the support of a bunch of different organizations, one of which I know very well. Those include the Association of American Publishers, the Association of Independent Music Publishers, Concord, Pro Sound Effects, and Universal Music Group. So that’s an interesting development. We’ll see if it gets any momentum and continues. It’s intriguing. They didn’t say how they do the certification, so they may be depending on these companies to tell them what they’re doing, but the idea is something that’s worthwhile and would make them an adult in the Generative AI Rumpus Room. So let’s see where that one goes. It’ll be fun to watch.

So the second and final piece today is another candidate for Adults in the Generative AI Rumpus Room. They’re a regular contributor, and they were my Adult of the Year for 2023: Google. So Google is back, and they came out with a bit of news last week. Google researchers have published a paper that proposes a new framework called ASPIRE, which stands for Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs. It’s a good thing they call it ASPIRE. What it does is help LLMs assess their answers before offering them up. So literally, it’s teaching these LLMs to say, “I’m not sure,” or, “I don’t know.” This is amazing, and it’ll be interesting to see where it goes. It’s a new paper and a new framework, and it works with most models. It doesn’t have to be a large one; it can be a smaller model, and it actually works better with smaller models. If this gets going…

We’ve always had problems with hallucination, the idea that models are very confident in the answers they give even when those answers aren’t right. They essentially think, “Yes, I’m absolutely sure this is the case,” while having no idea what they’re talking about. That’s a big, big issue for really moving forward with LLMs, particularly in critical, higher-priority use cases. This paper gives some hope that we’re finding approaches that will allow more accurate results. And sometimes it’s just something you need to check; nothing is fully automated, right? If the model returns an answer and says, “I’m not sure about this,” or, “I don’t know,” that puts the onus back on the human to look through it and figure out what’s going on. So that’s a really adult move. I love it, and I think Google keeps showing up as an adult here. It’s really helpful and we’re all benefiting from their great work, particularly in research.
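For a sense of what selective prediction looks like in practice, here is a minimal sketch. It is not Google’s ASPIRE implementation; ASPIRE involves fine-tuning a model to produce its own self-evaluation scores. The two helper functions below are hypothetical stand-ins that simply show the abstain-below-a-threshold pattern.

```python
# Minimal sketch of selective prediction / abstention, the idea behind
# frameworks like ASPIRE. This is NOT Google's implementation; the two
# helper functions are hypothetical stand-ins for a real model call and
# a learned self-evaluation score.

def generate_answer(question: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return "Paris is the capital of France."

def self_evaluation_score(question: str, answer: str) -> float:
    # Placeholder: ASPIRE-style systems learn to score their own answers;
    # here we just return a fixed value for illustration.
    return 0.62

def selective_predict(question: str, threshold: float = 0.7) -> str:
    """Return the answer only if the model's self-evaluation clears the
    threshold; otherwise abstain and hand the question back to a human."""
    answer = generate_answer(question)
    score = self_evaluation_score(question, answer)
    if score >= threshold:
        return answer
    return "I'm not sure about this one -- please have a person verify."

print(selective_predict("What is the capital of France?"))
```

The threshold is the key knob: raising it means the model answers less often but is more likely to be right when it does answer, which is the coverage-versus-accuracy trade-off at the heart of selective prediction.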

All right, so that’s it for the week. A few things going on. Always fun. There’s always something in The AI Moment. I want to thank you for joining me here on The AI Moment. Be sure to subscribe, rate, and review the podcast on your preferred platform. Thanks again and we’ll see you next time.

Other Insights from The Futurum Group:

Lawsuits and Probes, How OpenAI & Microsoft Are Impacting the Trajectory of AI | The AI Moment – Episode 11

Watermarking & Other Strategies for Licensing AI Training Data & Combating Malicious AI Generated Content | The AI Moment – Episode 10

2023 AI Product of the Year, AI Company of the Year | The AI Moment, Episode 9

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

