Why Customers Choose Google Cloud for Generative AI – Six Five On the Road

On this episode of the Six Five On the Road, hosts Daniel Newman and Patrick Moorhead are joined by Google Cloud’s Oliver Parker, Vice President, Global Generative AI GTM, for a conversation on why Google Cloud is the preferred choice for customers looking into generative AI solutions.

Their discussion covers:

  • The reasons behind Oliver Parker’s decision to rejoin Google Cloud
  • Insights from recent global meetings with customers about their inquiries and needs regarding generative AI
  • The key factors driving customers towards choosing Google Cloud for generative AI projects
  • Additional insights into Google Cloud’s offerings and future directions

Learn more at Google Cloud.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On the Road at Google Cloud Next 2024, here in our favorite city, Las Vegas, or the city that, Dan, we find ourselves in a lot. The amount of interest in AI is incredible. It really kicked off a few years back, but then this big Gen AI thing hit, and we both said in our end-of-year podcasts, if you remember, that 2023 was about putting the building blocks together. Then what did we say about 2024? It’s going to be the year of AI?

Daniel Newman: Implementation.

Patrick Moorhead: There we go. Exactly right.

Daniel Newman: Yeah, it’s starting to come to fruition. We knew that you and I, we like silicon and semiconductors and that was part of what we heard here at Google Cloud Next today, but we knew that all this infrastructure, all this spend that’s going on in the marketplace right now was going to need to start yielding returns, so all these cloud providers that are building these big AI data centers, they need to build applications. They need to build tools and technologies that enterprises and consumers can drive the future off of. We got a little bit of that here today at Google Cloud Next in Las Vegas, which I’m going to go ahead and say, not my favorite place, but it is an amazing place to have a conference and this conference so far has lived up to the expectations.

Patrick Moorhead: It is great. A lot of the enterprise that we talk to, it literally is where do I start? Which workloads do we prioritize here? With us, we just happen to have Oliver, who leads GTM for generative AI with us. Welcome to The Six Five.

Oliver Parker: Thanks for having me. Appreciate it.

Patrick Moorhead: We want to learn all about your multiple times around the globe and what you’re learning about customers. We really appreciate that.

Oliver Parker: Yes.

Daniel Newman: Oliver, you left and now you’ve come back.

Patrick Moorhead: He’s back, baby.

Daniel Newman: Give us a little bit of the background. What drove you back to Google Cloud?

Oliver Parker: I joined Google Cloud from Microsoft in 2018. I came over because I saw the opportunity in what Google was starting to build out as they were really doubling down on cloud. This was just before Thomas turned up. I worked for four years and built out a lot of the stuff on the West Coast with super exciting customers, many of the ones that are here today and that we’ve been talking about. Then I went and worked for an identity and security company called Okta for the last couple of years, and then rejoined in January to lead the AI go-to-market effort, specifically around generative AI. I came back because, for any of us sitting in this world, and I was sitting in it in a security world, you saw the opportunity, but the bigger opportunity is with generative AI across all parts of the stack and across all platforms. It was a great opportunity that I couldn’t pass up, and I’m excited to be here. Good to meet you both.

Patrick Moorhead: You, too. I can’t imagine a better opportunity, too, to come into. I was working during the internet build-out.

Oliver Parker: Me, too.

Patrick Moorhead: This excitement literally just feels so much bigger and more real. We did the internet build-out, did the dark fiber, and then everything went kaput, but what was there literally created the next two or three cycles. You couldn’t have had the smartphone without the web. You couldn’t have had social without the web. It changed everything. As analysts, we have to distinguish between trends and fads, and this is an absolute trend. You obviously talk to a lot of customers. You and your team, you’re flying around, you’re talking to them. What are you hearing? What are they talking to you about? Maybe talk about their pain points. Why Google? Why do they go with you?

Oliver Parker: I think you mentioned it. I just got back from a world tour. I sat with 80-plus customers in, I think, about 40 days. A really, really good experience. Obviously, I came in and my number one priority was to go sit in front of the clients, whether they’re existing customers, potential customers or prospects. There are a few themes that came back, and you mentioned one of them. You talked about the building blocks and production. I’ve got a slightly different phrase, but I think it’s the same thing, which is a huge amount of focus moving from experimentation to production. I’d even add another phrase: there are some great experimentations moving into production, but I think the next phase after that becomes scale production. You’re actually seeing some really interesting stuff start to go into prod, but it’s still in small parts of the organization, or it’s a customer-facing chatbot in one digital channel that hasn’t been propagated across all digital channels.

I think this phase and this year, to your point, is very much about production. For those companies that are more on the leading edge, you’ll start to see more scale production, which I think is when this really lights up in terms of blowing the doors off of where it can go. That’s one bucket. The other bucket is a huge amount of conversation around use cases. Obviously, to your point, the technology is the foundation, but clients are moving out of just experimentation and figuring out, where do I spend my time? There’s lots of engagement with clients around helping them identify use cases, and those typically end up being industry-based conversations. A lot of conversations around business processes, depending on the industry, are really driving how generative AI can make a difference to their business, whether that’s on the productivity side, automation on the back end, or very specifically around customer experience, like we just talked about. Those would be a couple of the big themes from my conversations over the last few weeks.

Patrick Moorhead: It’s interesting, too. What we’re seeing is that data is typically reserved to its own domain, and I don’t mean in the data scientist realm, but domains like ERP or SCM or PLM. The future is going to be all about combining those different types of data to get the best output possible. Again, we’re literally just at launch, but we’ve talked to enterprises that are able to increase productivity 40%, 50% in some areas today. That’s not a theoretical number. That’s a real number that’s being executed to. Again, we’re just at the beginning. We have to connect this huge build-out with enterprises seeing the benefit, or I think we know what happens.

Daniel Newman: I like what he said about going to prod. I think we used the word implementation, but I think you’re talking about the same thing.

Oliver Parker: Yeah, it’s the same.

Daniel Newman: The point is, that is really the chasm companies are crossing right now: they went from hey, we did something very limited in a very small scope, to hey, we’re going to do it a little bit bigger, we’re going to deploy it. Then it’s about speed. We’ve got a lot of intelligence on our side as analysts that’s basically saying companies are spending more, but they’re also concerned about the ability of their vendors and their integrators to do the work, competency, capabilities. We’re seeing a lot of pivot. We’re seeing a lot of high expectations, but we’re also seeing a lot of budget shift that way, which is another reason for that first question I asked you. Why did you come over here? More budget.

Oliver Parker: It’s more budget, but honestly, to your point, I was around in the tech build-out at the end of the ’90s, and you were building something without knowing what was going to sit above it. Yes, we’re building out stuff, but we know what’s going to sit above it this time. That, for me, is the difference. There was this big, to your point, fiber infra build, and we didn’t really know what was going to happen in 2000 and beyond. It’s almost like we’re redoing it now, but we’re redoing the whole stack, not just the lower part of it.

Daniel Newman: From a Gen AI standpoint right now, why are customers picking Google Cloud and how do you see yourself? What’s the thing that’s helping you win against those other very capable cloud providers in this area?

Oliver Parker: I think the first thing is, this is still an early phase. There’s a lot of focus on the model. I’ve said in my time with clients, it’s a lot of a model game, whether it’s us, whether it’s OpenAI, the great work the Anthropic team is doing, and a lot of the stuff, obviously, in open source. There’s a lot of focus on the model, but especially when you get into the enterprise space, there’s as much focus on the platform as there is on the model. In addition to the great work that Demis and the DeepMind folks are doing, there’s a huge focus there. You saw the announcements today around Vertex.

For me, when I start to think about the calls that we might get that maybe one of the other providers doesn’t get, and I’ll be as diplomatic as Thomas is, a big part of it is, we have a first-party model. We have a really strong commitment to the ecosystem and other models, but we also have a really strong platform where we’re basically taking out a lot of the complexity of building and running AI at scale across an organization. Then you look at our history.

Daniel Newman: Yeah, a ton of provenance.

Oliver Parker: 20 years. You have a lot of the history of building it. I know you met with Mark today from the infra side. You look at what we’re doing with TPU, GPU and obviously, the announcement today of Axion and Arm. That infrastructure has been built over time to serve at-scale, AI-centric applications, which are obviously our own applications. You take all of that goodness, history and culture, and you then apply it. I think that’s one of the reasons why we’re getting the call, but I’ll also say Vertex becomes a really important part of that conversation, because that’s the platform which we think becomes really the core AI platform for accessing all these different models over time.

Patrick Moorhead: Yeah, it’s interesting. We get a lot of feedback that talks about Vertex AI simplifying. I think you just talked about removing complexity. There are so many questions, and I feel like what you’ve done on Vertex AI, and then adding grounding techniques, RAG techniques, to be able to leverage some of the on-prem data, is going to be super important going into the future. One thing I don’t think you get enough credit for is the fact that Google actually has the largest data estate of any company on the planet.

Oliver Parker: Yeah, we’re pretty good with data, we think.

Patrick Moorhead: Yeah.

Oliver Parker: It’s our mission statement.

Patrick Moorhead: Been at it for a minute.

Oliver Parker: To organize the world’s information and make it universally accessible and useful. That’s our mission statement of the company, so we’re grounded in it.

Patrick Moorhead: Yeah. Then applying that to an enterprise, there just is not an enterprise that’s too big for you to service. Listen, I’m not confused. There’s a consumer data estate and a commercial enterprise data estate, but planning for that large of a data estate, whether it’s, again, the underlying compute, memory, storage or networking, is a very difficult thing. Then you add on the complexity of generative AI, where latency is everything. It seems like that would be a factor, too.

Oliver Parker: I think so. I often tell clients that infrastructure is not commoditized. I know you guys know infrastructure and chips better than most. You understand what we’ve built from an infrastructure standpoint, which is highly specialized and, honestly, unbelievably performant relative to other platforms.

Patrick Moorhead: TPU is incredible.

Oliver Parker: Yes.

Patrick Moorhead: The fact that you trained all of your Gemini models on the TPU is a mind-blower, when conventional wisdom is, oh, you can’t do an LLM without a GPU. Listen, I know Google Cloud loves GPUs, too.

Oliver Parker: We do. We love all platforms.

Patrick Moorhead: Exactly. I still think that is pretty good. There’s not even broad knowledge and understanding that you do some of your own networking silicon and that it’s custom.

Daniel Newman: I could talk about this all day, but we do have to let you go because I’m sure you have some places to be, Oliver. Talk a little bit about some of the most interesting things here at the event. What are the things you’re hoping the audience out there hears from your perspective about generative AI go-to-market?

Oliver Parker: I think some of the announcements today were pretty incredible. Not as much on the model piece, but actually on the platform. I’ll always come back to the platform. I spent close to 30 years selling to the enterprise, and I think you have to have that platform in place to be able to run these systems at scale. To your point, whether it’s the privacy, the safety, or the responsibility aspect, all the tooling we’re putting into Vertex, that, for me, is a really important part of our story. I think the other thing is to continue to watch the evolution of Gemini. I’ve actually just been in some meetings with Demis and some clients, and just the rapid innovation and how they’re thinking, not just about the next iteration relative to OpenAI, but actually way further ahead.

I think you’ll continue to see us show deep leadership around model capability. Then the third thing is, you talked about it, but we are big believers in open systems and open platforms. Yes, we obviously have our own first party, but the work we’re doing with Anthropic, the work we’re doing with Mistral, with Hugging Face, our commitment to the ecosystem is as central as it is to our own first party. I think that’s what people are now starting to realize about Vertex: really, Vertex becomes your control plane for AI. We see an ongoing proliferation of models, and Vertex is almost the entry point for running and building all your AI across a multiple set of models. We also think that over time there’ll be huge model proliferation, whether it’s at the foundational level like Gemini or even at the domain and task level. Those would be the three areas I’d say we feel very good about from an announcement standpoint today.

Daniel Newman: Yeah, a zero-sum game is not the reality. Both Patrick and I so often end up saying it’s never going to be a sum of zero, meaning everyone’s going to run on just first party.

Oliver Parker: Totally.

Patrick Moorhead: Enterprise has wanted choice since enterprise was invented. It’s funny that cloud started 15 years ago with one VM size, with one processor, one choice of memory and storage, but the reality is that enterprise wants that. They want that diversity.

Oliver Parker: I also think with some of the models, some of the stuff the open source community is doing, and as you get into smaller-size models, there’s going to be value in lots of different models, and I think we want to embrace that.

Daniel Newman: Yeah, I’d love to have you back some time to talk about that.

Oliver Parker: Awesome.

Daniel Newman: Oliver, thanks so much for joining us here on the Six Five.

Oliver Parker: Appreciate it.

Patrick Moorhead: Thank you.

Oliver Parker: Thank you. Thanks for having me.

Daniel Newman: All right, everybody. Hit that subscribe button. Join us for all of our coverage here at Google Cloud Next 2024 in Las Vegas. Of course, join us for all of our Six Five episodes but for this one, for Patrick Moorhead and myself, it’s time to say goodbye. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
