The Six Five team discusses ChatGPT Coming to Azure.
If you are interested in watching the full episode you can check it out here.
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Daniel Newman: So it’s been a big week in the Microsoft world as it relates to ChatGPT. It’s been basically all the rage.
The first and foremost has been the proposed $10 billion investment, which I think would make Microsoft a 49% stakeholder in OpenAI at some monumental valuation. But of course, anybody that’s tried it has realized, “Holy crow! We now have technology that actually does the things that have long been threatened, like writing our essays for us, and doing it in a way that’s actually meaningful.”
By the way, something our friends on the All-In podcast suggested, and I don’t want to take credit for this: the possibility of delivering a technology that could actually disrupt search as we know it in a meaningful way. Anybody that’s followed the Bing-Google evolution knows that Microsoft has never quite figured out how to play the game that Google has in search. Now, there are two themes here, and the main theme is this: if you look at Microsoft’s portfolio, from devices to Azure, to enterprise and apps, to gaming, the ability to take a technology like OpenAI and ChatGPT and embed it across every part of that portfolio, productivity, collaboration, Dynamics, applications, doing some built-in, on-device AI on Surface, what a powerhouse of technology and tools the company could use to absolutely differentiate itself from every other platform, including Apple. Siri, you still suck, mostly. So very interesting.
I mean, look, this isn’t the first move. The partnership between Microsoft and OpenAI goes back, I think, three or four years. The company has been very busy working on things like conversational AI. They bought GitHub, which is obviously a platform for developers and coders, and that gave the company more leverage to implement and utilize code that could add AI. But now, in January of ’23, the company is announcing the Azure OpenAI Service, making it generally available, with ChatGPT coming soon. So effectively, OpenAI and ChatGPT are going to be able to be overlaid on everything that Microsoft builds. What an absolute powerhouse.
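For context on what the Azure OpenAI Service announcement means in practice, here is a minimal sketch of calling the service with the pre-1.0 openai Python SDK, which Azure supported at general availability. The resource name, deployment name, and API key below are placeholders for illustration, not details from the episode.

```python
import openai

# Azure OpenAI Service is reached through the same SDK as OpenAI,
# pointed at your own Azure resource instead of api.openai.com.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # placeholder resource
openai.api_version = "2022-12-01"          # API version available at GA; check current docs
openai.api_key = "YOUR-AZURE-OPENAI-KEY"   # placeholder key

# "engine" is the name you gave your model deployment in the Azure portal (placeholder here).
response = openai.Completion.create(
    engine="text-davinci-003-deployment",
    prompt="Summarize the Microsoft and Google announcements at NRF 2023.",
    max_tokens=200,
    temperature=0.2,
)

print(response.choices[0].text.strip())
```

The point of the design is that any workload already running on Azure can add a completion call like this without leaving the platform, which is what makes the “overlay it on everything Microsoft builds” argument plausible.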
Secondarily, and like I said, this to me, Pat, was the number one thing, and I’m not going to drag this one out too much. We could talk forever. I’m just going to say that the second I heard this idea of Microsoft being able to use OpenAI and ChatGPT to enable Bing to finally offer search that could compete with Google, or some such service, it absolutely blew my mind, because only about one out of a hundred times that I’m searching something on Google am I looking to buy something. Yet the entire architecture of search has been built to basically enable someone to sell you something.
So when I search for more information about a company’s new product, I get ads fed to me, right? I get something that they want me to click on because they want to create revenue. But when you use OpenAI and ChatGPT and you search something, you put in a question like, “Hey, how does Intel’s Gen 3 compare to Gen 2 on server chips?” and you actually get a somewhat sophisticated breakdown of all the internet, all the material that’s been fed to this thing, that could give you a meaningful answer.
I mean, you have questions being asked that could write history papers, doctoral dissertations. You have things it could be answering… I mean, just yesterday, Pat, as a little example here, I did a search asking it to summarize the Google and Microsoft announcements at NRF. So this could, interestingly, feed what we’re going to talk about next. I’ll just read out what it said to me: “At NRF 2023, Microsoft and Google made product announcements that’ll have an impact on the retail industry. Microsoft announced its new Azure AI for retail platform, which helps retailers create personalized shopping experiences for consumers. The platform uses AI to analyze consumer data and predict what shoppers want to buy in order to give them better recommendations. Microsoft’s new retail strategy is based on three pillars,” and I’m going to end after this, “personalization, convenience, and security. The company also introduced a new suite of tools called Microsoft 365 for retail, which includes features like inventory management and analytics software.”
I mean, my gosh, Pat, one question in and we got something back that, generally speaking, an analyst or someone on our team would have to spend some time researching, reviewing, and being briefed on to get our arms around. The implications of this are massive. The fact that Microsoft is getting there first is going to absolutely put the industry on notice: Amazon, Google, Apple. They’re all going to have to find their version of this killer app to try to keep up if Microsoft is effectively able to execute with this technology.
Patrick Moorhead: So I feel like I’ve got a different perspective, not different from yours, but I’m looking at this LLM opportunity through a business lens, and that business perspective is twofold. I’ve brought this up before on here, but I think it’s important. First off, if you’re going to try to do a knockoff of Google search, you’d better be prepared for sites to block the scrapers that come in if you don’t link back to where you found the data, and be prepared to be sued if you just rip off the information publicly.
So we’ve seen this time and time again, and the reason that you allow crawlers to come and search your site and don’t block Google is because you want people to find you and you want them to link back to your place. I’m really interested to see what it does on data sets that aren’t public. So for instance, law books or something like that, something that has a copyright to it. That’s where I think we could see some serious game-changing.
The second question I have is on cost. I’m very interested to see… by the way, the L in LLM is large, and large means expensive. I mean, with hundreds or thousands of GPUs that have to be intelligently working at the same time, the amount of resources it takes depends on how difficult the question you ask it is, but we don’t yet really know the cost of a transaction. I’ll call it a transaction, or a search. Google search is very efficient in the way that they’ve done it. So I’m not certain about the cost and the longevity of it. Meaning, do the costs come down over time, or is this going to be just the Rolls-Royce of capabilities?
I also question whether it’s really a capability that Google doesn’t have. I think we’ve heard some rumblings that they have been working on a project for close to a decade, that-
Daniel Newman: DeepMind.
Patrick Moorhead: Yeah, exactly. It operates a little bit differently than OpenAI’s, but it is an LLM. By the way, nothing I’m saying takes anything away from Microsoft and Azure, but my question is, what is the long-term competitive advantage that Azure has using ChatGPT? Like you said, and I agree, wouldn’t it be interesting if Microsoft connected some things on the operating system and the PC platform and all of the AI that they’re driving to the PC desktop? That’s something that competitively Apple just can’t do, and they’ve sucked at it for eons. Don’t get me wrong, Apple is good at device-level AI, but it’s horrible as it relates to the cloud.
So I think ChatGPT is cool, even though it got the companies that I worked for wrong when I queried it. That doesn’t mean I’m going to throw it out, but I still have questions about its cost and whether it has an unassailable moat. Congratulations to Azure for being first on it. I think when it starts crunching on some private data sources, things will really get interesting.
Daniel Newman: Well, I also think about the ability for companies to bring in proprietary data and then use any of these, but especially one as good as … because obviously, I was talking a lot about the scraping of public information. You brought up a great point about the legal ramifications and copyright issues of people using the data. Obviously, search has had a way of getting around copyright for a long time in terms of sharing and making content available, and that’s largely because people want their data to be found online.
So there are going to be a lot of things to work out, but with large language models, when you have companies that have tons and tons of proprietary customer data, tons and tons of experiential data, of data sets from experiments, that stuff is going to be unique when you can combine it with the public domain to provide differentiation. But you can’t take it away, Pat. This thing can literally write a PhD essay for you, which doesn’t make it not cheating, but the real question that I believe needs to be asked is, what are the ramifications for the future when you don’t need to actually learn these things in order for it to click?
You can’t leave education, the institution of higher ed, in place as it is when someone can just search a question and have it write a comprehensive dissertation. This is the beginning of a fundamental change in human behavior and humanity, in the way we will work, the way we will study, the way we will learn. It’s been happening right under our noses, but this may be one of those moments where it becomes incredibly evident that everybody is going to be affected by it.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top-5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and across hundreds of other outlets around the world.
Daniel is a 7x best-selling author, whose most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.