OpenAI Announcements On GPT-4 and 4o

The Six Five team discusses OpenAI announcements on GPT-4 and 4o.

If you are interested in watching the full episode, you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: It’s literally been an incredible week, and just for a little bit of backdrop here, if it’s not evident, the frothy generative AI discussions, tech enablement, and end-user benefits are just alive and well, and everybody’s doing this amazing judo move. You have upstarts like OpenAI who are backed by Microsoft. You have the stalwarts of AI like Google and Microsoft, and on the consumer side, it’s super frothy. We’re going into Build next week for Microsoft, where I’m sure we’re going to hear their next play, and then we hit WWDC in June, which shows Apple’s play. So OpenAI made some major announcements the day before Google I/O, which we’re going to talk about.

First of all, it was just an amazing array of live demos. Sure, there were some recorded ones, but the demos during the event were literally live, which I super appreciate, and they came in multiple buckets. People were speculating, “Hey, would it be a Google search competitor? Would it be GPT-5?” It was none of those. What it was is GPT-4 making-

Daniel Newman: O, O?

Patrick Moorhead: Well, I’m getting there. GPT-4, we can start with 4o if you’d like.

Daniel Newman: No, I just wanted to do the O. It made me think of the movie Office Space. Anyway, never mind. Get back to that.

Patrick Moorhead: So yeah, the biggest news was GPT-4o, the “o” for omni model, which I like to call multimodal, and that’s the combination of text, voice, image and video all combined, and there was this incredible voice overlay on top of that, which from the demos is absolutely remarkable. Now, I thought I was using the enhanced voice mode when I chose 4o, but in fact I wasn’t. I was using the logic part of that, but not what some people are considering this flirty and funnier version. Some of the notables here: a macOS desktop app came out, which by the way is a lot harder to shut down, actually almost impossible to shut down, versus let’s say an iOS or iPadOS application. It does not currently touch the neural processor on Apple devices, but maybe we’ll see that at WWDC.

Other things came out. They talked about having 100 million paid users, which is pretty awesome. So from a monetization standpoint, GPT-4o is free and so is GPT-4, which is a huge delta. What you do get if it’s paid is a bigger context window. They also made major enhancements to the API. It’s 50% cheaper and 5X the rate limits. So net-net, this company is going for it. It recognizes the large company threat and they’re trying to build as many users as they possibly can. And like I said on Yahoo Finance, and I think I said this on CNBC too, once you can get enough eyeballs on it and people using your product, you can then go and tap into that monetization stream, and as we’ll discuss, Google might be behind, but it’s not a night and day difference.

Daniel Newman: Yeah, you hit a lot of good points there, Pat. I mean, I can’t help but think one is, you know how Satya wrote that thing? We’ve got it up, we’ve got it down, we’ve got it right, we’ve got it left, we’ve got it surrounded. Yeah. Satya is a badass, by the way. Don’t mistake. Anyway, he seems really nice, but he’s a badass. He’s built an amazing company, and of course Microsoft’s going for it too because they’re going to build their own. They’re going to have open, they’re going to let everything else in. It’s why they’re getting market share. The interesting thing, though, is with this stuff we’re hearing about them getting onto Apple devices natively, think about how OpenAI has made itself a complete moat between the two biggest companies in the world, and they’re creating this codependency.

It’s wild how this thing is kind of evolving, and man, Sam is boring. He is so boring. I listened to him on All In. I mean, it was the worst guest visit ever. I was listening to it at like two in the morning driving between San Antonio and Houston last week, excited because it’s like, oh my gosh, Sam Altman is on the pod, and this is All In, one I listen to pretty frequently, and it was terrible, but the setup for the surprise was really, really good. I mean, the 4o demos are mind-blowing. They’re very interesting. The rapid innovation, it’s hard to digest how quickly we are evolving. If you look back at the original GPT, it was actually really terrible. The answers were bad, the data was old, but the concept was good, meaning the summary, the abstract. I tell everybody-

Patrick Moorhead: Image creation through DALL-E 3 was good.

Daniel Newman: Yeah, I mean, they’re doing awesome stuff. Sora was trained on YouTube or whatever it was, but New York Times, there’s all these interesting sort of nuances there, but I mean, where we’re going is really interesting. It’s pretty fun and exciting. But listen, I mean they are disrupting. They’re bringing contrived empathy to the platform, and I call it that because it’s not real. Everyone just understand that she might be flirting with you, but she doesn’t love you. The movie Her is a great movie to go back and watch because that’s what’s going on, but it is wild how quickly this has proliferated from a new way to organize searches into a single answer to now basically doing your kids’ homework. And so congratulations to OpenAI. I mean, look, what they’re building and what I expect the innovators to do around this will be really interesting. Of course, it’s a lot of closed architecture, so there’s not a lot that others will be able to build on top of it, but it does also start to show the possibility, which will create all kinds of energy in the open source communities to figure out how to build some of these capabilities on top of Llama and other open source models, which I expect to grow as well.

Now, nobody’s commenting on the amount of power this thing’s going to use, and so I’m going to keep going back to that. Pat, I have no idea how we are going to power the proliferation of these large language models. I can’t quite figure it out. And it’s like 100 times more energy when you’re doing a search on these things than when you’re doing a Google search. So how do we entangle these two things so we’re not using generative for everything until we have some way to deal with that situation? Can we talk about nuclear fission? No, we’re not going to talk about it here, but we’re probably going to need to pursue that at some point. I don’t see how else we’re going to get the grid to support all this.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, by the Wall Street Journal, and by hundreds of other sites around the world.

A seven-time best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
