The Six Five team discusses Google's updates to Vertex AI.
If you are interested in watching the full episode you can check it out here.
Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.
Transcript:
Patrick Moorhead: Vertex AI is the AI platform for both machine learning and generative AI at Google, and the company rolled out Thomas Kurian for some updates.
Daniel Newman: Yes, we got some updates this week. And look, Pat, I’ve been impressed from sort of day one, day zero on Vertex. They really did set out… You heard my whole diatribe about making it easy, making it simple. Well, look, part of the challenge for the OEMs is that the cloud providers are really ambitiously and rapidly working to make these solutions easy to digest and consume in the cloud. Of course, the cloud providers are also making it connected and accessible to hybrid and on-prem. But where the workloads start and end and where AI starts and ends has really shifted the entire cloud space. I said, “The cloud world order has changed in the era of AI.” I’m still assessing exactly where everything lands, but it’s changed. And that’s because multi-cloud has proliferated really quickly in the era of AI, because different clouds have different capabilities. But we are starting to see companies trying to figure out which tools they want to standardize on and which environments they want to build on.
And Vertex was compelling coming out of the gate; it’s Google’s cloud development platform. And really, what did they focus on this week? They focused a lot on grounding, but they’re also focusing a lot on enterprise-ready experiences, creating higher fidelity and making things easier to connect. And there are two big things I took away from this week, Pat. One is moving beyond that broad internet LLM search, because Google of course has to keep working on grounding and the quality of outputs; they’ve had some stops and starts there. But one of the things they’re doing is bringing in high-fidelity, important data sets that sit outside of Google’s own data, meaning data from Moody’s, a financial services company, along with Reuters data and ZoomInfo data. That can now be part of Vertex and the search experience, so enterprises can get more value, higher fidelity answers, and outputs grounded in sources that can be trusted.
The second thing they’re working on is a high-fidelity moat. And this, Pat, gets at the RAG point you raised: it’s the ability for enterprises to start tapping into not just Google’s broad, publicly available internet search data and the third-party data I just mentioned, but also your own information, your own corporate data sets, tied to what Gemini is doing and to what those third-party data sets are doing, to create the highest quality outputs. Pat, you and I have talked a lot about this, but the winning formula for AI and generative AI has to be a combination of some proprietary data that no one else has, coupled with well-designed, accurate language models, and then, as with what they just did, coupled with other, perhaps for-sale, public or private data sets that complement all of that to create the best outputs, what Google is calling highest fidelity.
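(Editor’s note: as an illustration of the grounding pattern Daniel is describing, here is a minimal sketch of asking Gemini a question grounded in an enterprise data store via the Vertex AI Python SDK. The project ID, data store path, model name, and prompt are placeholders, not anything announced on the show, and the exact SDK surface may vary by version; treat this as a sketch rather than Google’s implementation.)

```python
# Minimal sketch: ground a Gemini answer in an enterprise Vertex AI Search data store.
# All identifiers below (project, data store, prompt) are hypothetical placeholders.
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel, Tool
from vertexai.preview.generative_models import grounding

vertexai.init(project="my-project", location="us-central1")

# Retrieval tool pointing at a data store that holds the company's own documents.
retrieval_tool = Tool.from_retrieval(
    grounding.Retrieval(
        grounding.VertexAISearch(
            datastore=(
                "projects/my-project/locations/global/"
                "collections/default_collection/dataStores/my-corporate-docs"
            )
        )
    )
)

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize our current revenue guidance and cite the source documents.",
    tools=[retrieval_tool],
    generation_config=GenerationConfig(temperature=0.0),
)
print(response.text)  # Answer constrained to the retrieved passages, with grounding metadata.
```

The point of the sketch is the shape of the approach: the model only answers from what the retrieval step surfaces, which is the “grounded, higher fidelity output” idea Daniel is pointing to.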
You and I test this stuff regularly, and you actually publish these tests from time to time. When you’re asking about things like earnings or company product launches, trying to get to the right answer, you’re seeing that these things are still not accurate enough for us to trust. Hopefully nobody’s writing articles with this crap. I’m kidding, it’s not crap, it’s good stuff. But it needs accuracy layered on top, Pat, and these are ways to get us to that accuracy faster. Some good steps for Google, and I appreciated them putting Thomas Kurian forward and sharing some of this with us. Pat, over to you.
Patrick Moorhead: Yes, this was a great follow-up to Google Cloud Next, because a lot of the updates here were, “Hey, we’ve taken it to the next step.” It’s generally available, or it’s in public preview, or if something was teased, it’s in the beta category. But their announcements really were about, again, making the results better by bringing in different data sources. It was also about lowering cost. If you look at Flash as an example, there’s the SLA, which is more like provisioned throughput, and that also hits on capacity and price. And also teasing or reinforcing that, “Hey, Google DeepMind, we are keeping the cool stuff coming.” And that might be, maybe… I hate to say this, maybe reactionary to what we saw from OpenAI, from whom we still haven’t seen all of what 4o can do. And I know we’re not talking about OpenAI here, but I do feel a little bit deceived by the gap between what OpenAI showed on stage and what is reality right now.
One thing that was not a part of this, but that I do think Google should consider a victory lap, is that, as I’ve said very publicly, the front end of my enterprise is Microsoft, like Word, and PowerPoint, and Excel, and even Outlook and OneNote, but the backend is Workspace. And I have multiple modalities. For instance, in Gmail, when I want to get through something very quickly, boom, boom, boom, boom, boom, boom, I use the Gmail front end. By the way, of course my Outlook front end is interfacing with the Gmail backend, but with Workspace, Gemini hit all of my Workspace applications, and I’m very excited to put more thought into this. It’s also in Gmail, by the way. And if there’s something that could… What’s the right word? Kick me off of a Microsoft front end and move me to a Google front end, it could be if Microsoft takes too long to integrate this capability into, let’s say, Outlook. Where you can go into… I can go into Gmail right now and tell it to get… I literally went in and said, “What were all of the announcements that I received this week?”
And boom, it spit out most of our topics, Dan. Because I had received emails… By the way, it was after our conversation of, “What are we going to talk about this week?” It was a slow news week, and they all magically popped in there. I can now go into email and ask, “Hey, summarize any deliveries that I should have received.” And boom, it’s like, du, du, du, du, du. So man, there’s something there. And Google didn’t talk about that, but they should be doing a victory lap on at least getting it out there. I’m a little underwhelmed with the RAG-based capability. And gosh, please just put it in Explorer where all my files are and let me do this on device.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.