Google I/O 2024

The Six Five team discusses Google I/O 2024.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: This is really interesting because clearly OpenAI was playing on the fact that Google I/O was coming. They wanted to get ahead of it. They wanted to create a major buzz, a major wave. Mission accomplished, they took the I off. By the way, the best tweet of Google I/O was from TechCrunch. They put out a 45-second video that showed every executive saying AI, and that was it. It was just AI, AI, AI, AI, AI, AI, for like 45 seconds straight. I think the keynote had like 102 AI mentions in it, and then obviously more across the entire presentation. But Google didn’t just release a couple of things. It was 100 things that were released. So there’s no way we can cover the gamut.

I want to talk about a couple of things that caught my attention, and by the way, Pat, I don’t think GPT-4o got that far out ahead of what Google presented. I think they both presented a lot of the same kinds of capabilities in different wrappers. And so they’ve got the next generation of Astra, or whatever they call it, Project Astra, which is their AI agent for life. It’s basically able to reason and respond to live audio and video. It can take a Pixel recording, it can handle and engage with a live feed. It’s multimodal. I don’t think they showed it as something that could be interrupted, which was really an interesting thing. Did you mention that, by the way?

Patrick Moorhead: No, I didn’t.

Daniel Newman: That was a super trippy thing in my opinion, the ability to multi-turn these things but actually change the direction of the conversation midstream. So that’s a really interesting thing that they rolled out. I don’t think it’s actually available yet, though. I think it’s in demo and beta, but it sounded really interesting. Another thing they did show, Pat, I don’t know if you saw this, but did you see NotebookLM? It’s an interesting note-taking app, but it’s helping kids with homework. So it’s basically taking notes and then being able to take the info, go out to a model, and have the model help the kids further understand, transcribe, and interact. So it’s not exactly like the tutor mode, which was pretty awesome, but it did make me cynically ask a question, Pat: do we even need college anymore? I’m really just about to send my kid to Baylor. I’m not quite sure what I’m paying for. Put in a screenshot of the problem and have a personal tutor walk you through solving it.

Patrick Moorhead: Maybe you’re paying for the frat parties.

Daniel Newman: Fraternity, right? You’re not supposed to say frat. I don’t know. We’ll have to ask Connor. I think he was a fraternity guy, wasn’t he? My fraternity, I was in it for like a minute before I had my kids, so I never learned that, but I remember being yelled at for saying frat once. So another really interesting one, in my opinion, was video search. Being able to basically show an image or a video in a search and then have it answer questions about it. By the way, I don’t know about you, Pat, but yesterday someone actually showed me a demo of this, the GPT one.

So I spoke yesterday at this Kearney Future of Product Summit. They took a picture of me on the stage with about five other people and started asking it questions: where are we, who’s up on stage, what is this event about? And my gosh, it was crazy. Just one image, that was it, and it was able to do all that work. So this stuff’s moving really fast. And then the last one I’ll mention is their Veo video product, which was interesting to me, their take on Sora. I don’t know how quickly that’s going to grow, but Google has the data, and probably less controversy about where their data’s coming from with YouTube. But those are four or five things to me. Overall, I don’t know that they got to parity, but I don’t think they lost their lead in a meaningful way where people were like, holy crap, Google’s out and OpenAI’s in. But the consensus out there is that OpenAI’s a little bit ahead.

Patrick Moorhead: It’s interesting. One thing I pointed out on CNBC is the big difference here between Google and OpenAI: Google actually has to make money. Now, Google cannot hide what they do, but they can cover losses from areas that are super profitable. But like you said, whether it’s a 100X cost differential like Sundar said, or a 10X differential like I’ve heard, between general machine learning search and generative AI, and by the way, I think Sundar’s comparison is generalized indexing and search versus doing the transaction on generative AI. So yeah, they actually have to make money, and if they were going to get rid of search and go all generative AI, their costs would skyrocket. And I think that’s a key here.

And Google did what you would expect: they refined what they have out there, very similar to the way that 4o is more efficient. They brought out Gemini 1.5 Flash, which is optimized and has a longer context window. And then on 1.5 Pro they increased the context window to 2 million tokens… By the way, it’s so confusing because Pro isn’t even lit yet. I’ve been a Google backend customer for 13 years and I keep waiting, and it seems to me that these features hit a year after Google brings them out, and it’s frustrating. I’m using Gemini Advanced Personal, paid on my own personal Gmail, but I can’t even get that capability on my Workspace account. Yes, I know it’s not all about me, but you can imagine-

Daniel Newman: Wait, wait, wait. Wait, what?

Patrick Moorhead: Yeah, well it’s not always about me, but it could be.

Daniel Newman: Well, there’s less of you to be about nowadays.

Patrick Moorhead: Exactly. And I couldn’t help but notice a really interesting turn here, because when Google first started out, they had no advertising and they were giving the best search away for free and investing a ton of money, very similar to what we’re seeing with OpenAI. And at the time, the Google of the industry for things like search and advertising wasn’t Mosaic and the open internet, because that sucked. It was America Online, which you would pay a service fee to go into and get all of your information. That was very much monetized. Steve Case was the founder, by the way. So yeah, I’m seeing some really interesting parallels here. And the way that I look at this is that Google doesn’t have to be first. They just can’t be too late with their capabilities.

I’ve seen some issues with Gemini as well on recency. For instance, I did a search on Google Cloud Next 2024, and it told me it hadn’t even happened yet, which is just weird because, whether it was Bard or Gemini, up to that point it had been plugged into real-time events. AI Overviews in Google Search is their first foray, well, second foray into this, where once you do a Google search, you’ll get a content block called an AI Overview. It might be at the top, it might be at the bottom, it might be in the middle, it might be interspersed with the blue links, TBD. But I do think Google did enough to take a lot of the worry out of it.

So I think as soon as true 4o gets out there and is multimodal, which it’s not yet, and it competes with what Google is calling Project Astra, that’s when I think we’ll see how far behind they are and OpenAI’s ability to siphon off Google usage. By the way, for the record, there are more paid OpenAI users than there are Gemini users out there, which was a real shocker to me. But ChatGPT is like when Facebook first came out, or some new follow-on social media site; it’s gone absolutely viral.

Daniel Newman: All right, we good? We got that one. That was a lot. Hey, these two things, by the way, were so big, honestly, we could have done entire shows on each of them. We’re going to go a little bit quicker here because I mean we don’t have any topics that are quite as cool as those two. Pat, will we ever get to talk about anything but AI again? Is it over for us?

Patrick Moorhead: No, I think we will. These things all come in phases. We were talking about the hybrid multi-cloud forever and ever and ever and ever before generative AI came out and then before that it was machine learning.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances and by CNBC, Bloomberg, the Wall Street Journal, and hundreds of other outlets around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
