An Inside Look at Google’s Data & AI Cloud Innovation at Next ’23 – Six Five On the Road

On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead welcome Gerrit Kazmaier, VP & GM, Google Cloud Data Analytics, and Andi Gutmans, VP & GM, Google Cloud Databases, for a conversation on Google Cloud Next, opportunities around Generative AI, and what makes Google’s approach to the data ecosystem unique.

Their discussion covers:

  • Reflections on some of the highlights and common trends surfacing in their discussions with customers at Google Cloud Next
  • The biggest data challenges Google is helping customers solve today
  • What role Google plays in supporting an open ecosystem
  • How Google is helping customers navigate through the changes in this fascinating period of disruption for some industries
  • What customers can expect from Google in 2024

Be sure to subscribe to The Six Five Webcast, so you never miss an episode.

Watch the video here:

Or listen to the full audio here:


Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: Hi, this is Pat Moorhead, and The Six Five is on the virtual road at Google Cloud Next 2023.

Dan, it was an exciting week. You and I not only covered the event coming in, we were physically at the event in San Francisco for a few days. A great time.

Daniel Newman: It really was a great time. Pat, what is also a great time is when we can find that beautiful balance of being able to be at the event, be present, attend the sessions, talk to the executives, talk to the product builders and customers, and at the same time still get to pod with my bestie in the wake of an event like this. This was a big event, Pat.

Patrick Moorhead: Yeah, the topic du jour, which I think has been the topic for all analysts since November, is generative AI. One thing that sometimes we forget with all this generative AI magic is that it’s only as good as the data layer. Unsurprisingly, we have two of Google’s top data experts and business leaders, Gerrit and Andi, covering analytics and databases.
Welcome to The Six Five.

Gerrit Kazmaier: Good to be on, hey.

Patrick Moorhead: Thanks. Good to see you made it through the event and still smiling.

Daniel Newman: Are you guys fully recovered, feeling good?

Gerrit Kazmaier: We are still fully living it. It was a great week. Lots of good customer conversations. I think Andi and I are still jazzed up about Next.

Patrick Moorhead: Heck yeah. Well, that’s good because that’s exactly what we’re going to be talking about here.

Daniel Newman: Yeah, I came back buzzing. If you looked at my tweet stream and some of the stuff the team published this week, it was very compelling. Some really good customer conversations and, of course, your CEO, Thomas Kurian, was very generous with his time, gave the analysts multiple sessions, including some small group sessions where we hammered them pretty hard, but the answers came back positive and very assured. Of course, as analysts, we do like that a lot.

We hit on the fact that it’s been a busy week. There was a lot going on: there were customers, there were partners and, of course, analysts, most importantly, and there were a lot of announcements. I’d love to get both of your reflections on the event, everything from the trend lines that you saw, opportunities with gen AI, and of course your interactions with customers. Maybe any feedback you could provide from that.

Andi, I’ll start this one off with you.

Andi Gutmans: Yeah, absolutely. I definitely think gen AI was top of mind for all of our customers, but I think what got them really excited at the conference was it wasn’t just theory. We really were able to engage them in the things that they could do. We actually were able to show lots of demos, lots of technology, lots of product innovations that could actually help them with gen AI.

I would say there are really two different aspects of gen AI they’re really excited about. One is how they can leverage gen AI in their own applications to improve the customer experience and employee productivity, and the second one is assisted AI. We talked a lot about Duet AI at the conference and how we can actually help their employees achieve more for the business. In both of those dimensions, we had great conversations. Of course, data was at the center of that, because when you talk to enterprise customers, everyone is very interested in the Bard- and ChatGPT-like experience. However, for enterprise apps, it’s actually all around bringing this kind of gen AI, these large language models, together with enterprise data to deliver accurate, up-to-date experiences.

I think there was a very strong recognition that without having the data side figured out, there wasn’t really as much value in the large language model side. A lot of the innovation that my team showcased, and that Gerrit’s team showcased, was around bringing these two worlds together and truly delivering enterprise apps for financial services and healthcare that deliver that step up in value for their businesses and their customers.

Patrick Moorhead: Gerrit, anything you want to add to that?

Gerrit Kazmaier: Yeah, I think the red thread for me was really the step from taking AI to enterprise AI, in the sense of applying it to real business problems. As Andi said, when it comes to AI, the magic is all within the customer’s data, the most unique, the most differentiated asset.

People start to think differently about the data estate now. In the past, we were talking mostly about structured data. Now, with generative AI, we have fantastic opportunities to work with all sorts of unstructured data, to analyze and extract insights from video, documents, and voice recordings. Customers understand now that this is a really important part of my data landscape, from which I can drive real 360-degree insights, where I know what my customers are saying about me on the phone, or what they’re writing to me or writing about me. That’s, I think, a big step-up.

Another big step-up was the understanding of how we interconnect the data landscape, knowing that for AI models, the deeply hidden patterns are the most interesting ones. We are not looking for narrow data sets, but very wide ones, and really exploring cross-cloud analytics with Google’s Omni technology. Many customers who in the past could accept having parts of their data outside in Azure or AWS, decoupled from their analytics and AI stack, are now shifting to a world where cross-cloud data and cross-cloud analytics take a primary role.

I think lastly, one big theme that we had across many of our customer conversations was how we bring AI to our data. In the past, it was always bring the data to the AI system, and that made every AI project a data project in disguise. Now, customers are in the petabyte range themselves, sometimes in the tens of petabytes. Maybe you saw on the keynote, Yahoo… Andi, correct me if I’m wrong, but I think it’s at 550 petabytes of data. That may be the very high end, but it shows you that at these data volumes, you cannot bring data to AI anymore. You have to bring AI to your data, and you have to think differently about your data and AI stack as a consequence.

Patrick Moorhead: The talk of the show was generative AI, but the reality is that analytics, machine learning, deep learning, those aren’t going away. In fact, what generative AI did is create a tremendous opportunity, but it’s also more complex, particularly when it comes to data.

Gerrit, maybe holistically, when you talk to customers at the show, what were some of the biggest data challenges that you’re trying to help solve right now today? You talked about a few of them I think in relation to generative AI, but I’d like to hear holistically.

Gerrit Kazmaier: I think one of the big challenges that most customers are still living with is that they have sometimes 10, sometimes hundreds of data silos. They understand that now, in any sort of AI activation, they need to break through and interconnect all of these data sets. We had examples from customers like SGB Bank, who moved to BigQuery and interconnected 176 banks on a single data platform to really drive that interconnected data landscape. Be it data silos within legacy applications, data silos across clouds, or data silos across on-premises and cloud, it’s still a pretty big theme for companies to finally move past that. That’s a key stepping stone in their data maturity.

I think the second big challenge that customers face today is how they can make unstructured data a real strategic part of their managed data landscape. Also, data formats are proliferating. Data landscapes are no longer just a data warehouse system. You have open source file formats like Iceberg, Hudi, and Delta, and you have many different data streams now coming into your company. Whether it’s an open lakehouse or a data warehouse system, you still need a well-managed and well-secured data landscape, with compliance, with data residency, with all of the increased trust expectations that come with AI and that a company wants to give to their consumers and customers in return, and you have to bring that together. What we say at Google, connecting it all through the ultimate simplicity of BigQuery, to bring it down to a one-line statement, is, I think, still the key step for some of the companies to take. For others, maybe moving one step beyond that, it really is AI activation.

It’s exactly to your point. We have hundreds of machine learning operations today in BigQuery alone, and customers are using TensorFlow, ONNX, and XGBoost for various cases, be it customer sentiment analysis, cohort segmentation, or propensity-to-buy models, and really benefiting from the secrets that data holds to understand their customers better. Now, layering generative AI on top of that, of course, just makes the toolbox so much wider and so much more interesting.

Andi Gutmans: By the way, one thing I would add on that is you mentioned deep learning is not going away. That is absolutely the case. But I think what large language models are enabling is they’re enabling developers to also drive some of this innovation. There are probably 10 times more developers than there are data scientists out there, so the other thing we’re helping customers with is how do we equip developers to really go and build these gen AI experiences? That also means giving them vector support in databases and vector embeddings, connecting the unstructured data that Gerrit mentioned to vector embedding models and to operational databases.
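For readers who want a concrete picture of the “vector support in databases” pattern described here, the following is a minimal sketch, assuming a Postgres-compatible database (such as AlloyDB or Cloud SQL) with the pgvector extension enabled. The connection details, table, and column names are hypothetical placeholders, and the embeddings themselves would come from a separate embedding model.

```python
# Sketch: storing and querying vector embeddings in a Postgres-compatible
# database via pgvector. All names and credentials are hypothetical.
import psycopg2

conn = psycopg2.connect(host="127.0.0.1", dbname="demo", user="app", password="secret")
cur = conn.cursor()

# Enable pgvector and create a table that pairs each document with its embedding.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS support_docs (
        id SERIAL PRIMARY KEY,
        body TEXT,
        embedding vector(768)  -- dimension must match the embedding model used
    );
""")

# At query time, embed the user's question with the same embedding model
# (not shown here), then retrieve the nearest documents by cosine distance
# to ground the LLM prompt with enterprise data.
question_embedding = [0.0] * 768  # placeholder produced by an embedding model
vector_literal = "[" + ",".join(str(x) for x in question_embedding) + "]"
cur.execute(
    "SELECT body FROM support_docs ORDER BY embedding <=> %s::vector LIMIT 5;",
    (vector_literal,),
)
for (body,) in cur.fetchall():
    print(body)

conn.commit()
cur.close()
conn.close()
```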

I would say that we are definitely seeing a change in some of the personas that are contributing to the innovation, and then, as they really think about going into production, you also have a lot of challenges to figure out, such as cost and latency and so on and so forth. I often say the best model is not necessarily the best choice, because the best model may be high latency and very expensive, and it may actually be sufficient to use a smaller model that gets the job done and can deliver much more interactive experiences.

I think there’s also an element here of how you use your data to really make sure you understand the model quality and then compare that back with your data, so you can optimize both the customer experience and the cost elements of this, which is still a pretty hard problem and has not been completely solved yet.

Daniel Newman: Well, let’s drill down a little bit into differentiation, Andi, because I hear what you’re saying. The developer ecosystem is very, very important; that’s why you have a developer keynote at Google Cloud Next.

But Google, of course, is sort of well-known for search and its data-

Patrick Moorhead: Kind of well-known.

Daniel Newman: Kind of well-known. Humble. We’re a humble crew. The minute you think you’re bigger than… Well, they’re pretty big. I don’t know.

But anyway, this data estate that you have, it does present some pretty unique opportunities. A lot of what was being built, DeepMind, Brain, these projects that were being done, the world becomes a beneficiary because Google can invest in R&D and work on these interesting things. Of course, now that this is all in the mainstream, this broader portfolio, and you’re building this Cloud business on top of it, how are you thinking about the unification of data? Because that’s something every company is struggling with and trying to work around. And how do your data unification and your data estate play into a differentiation story that enables you to say, yeah, we truly are different than hyperscaler A, hyperscaler B, hyperscaler C, D, E, et cetera?

Andi Gutmans: Yeah, that’s a great question. I would say that we think about the unification of data at a variety of layers of the stack. Let me talk about the nuts-and-bolts side and then we’ll talk a bit about the higher levels. What’s very differentiated with Google is that we have a completely disaggregated compute and storage architecture, and we also own our own global networking. What that really means is, whereas other providers will typically anchor their data estate around blob storage that has tens of milliseconds of latency, our disaggregated compute and storage infrastructure actually gives us latencies in the 400-microsecond range or so in our ability to get to the data. Plus, it’s completely disaggregated.

What that actually allows us to do is build systems that are very, very unique in performance, price performance, latency, and availability versus any other provider out there. The reason why we’re able to do that is we actually had to solve that problem for ourselves. We built very unique technologies like Spanner that needed to do OLTP. We have OLTP transactional databases at over the 100-petabyte level, and I’m not talking about analytics. We had to solve these really big problems for ourselves internally. Of course, BigQuery is also one of those technologies that has come out of solving our own internal problems. It’s just been very exciting now that we’re actually able to take those technologies and externalize them to customers, and so customers are realizing a different level of scale, availability, and performance with us that they can’t get anywhere else.

That baseline unification does allow us to then deliver very differentiated features. For example, at Next we announced Cloud Spanner Data Boost. That is a way to go and process real-time transactional data in a workload-isolated manner. Because of this compute and storage disaggregation, we can go directly from a different part of compute to that storage, and we tightly integrated that with BigQuery and also with Spark on Dataproc. Basically, a customer now can go to BigQuery or Spark on Dataproc and do a very large-scale, heavy analytical query on maybe a petabyte of data that’s in Spanner, and it actually works. Not only does it work, it’s workload isolated, meaning it has no negative impact on the production workload.

Those are some of the unique things that I would say only we can build because of how we built the system bottom up. Now, at a higher level, we’re also adding a lot of out-of-the-box integration. A lot of our focus has been on really integrating the platform for customers so they don’t have to do all the pre-integration themselves and can really focus on innovation. We’ve announced capabilities like Datastream for BigQuery, which automatically ingests data from operational databases into BigQuery; BigQuery federation to our operational databases; of course, something like Spanner Data Boost that I mentioned; and we announced reverse ETL from BigQuery to Bigtable at the conference. Of course, in Gerrit’s area, there are things like Dataplex that also give you full cataloging and management of your data estate.
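As an illustration of the BigQuery federation mentioned above, here is a minimal sketch of a federated query issued through the google-cloud-bigquery Python client. The project, connection ID, and table names are hypothetical placeholders; the same EXTERNAL_QUERY pattern applies whether the connection points at a Cloud SQL or a Spanner database.

```python
# Sketch: joining live operational data with warehouse tables via a
# BigQuery federated query. All identifiers are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# EXTERNAL_QUERY pushes the inner SQL down to the connected operational
# database and returns the results into BigQuery, where they can be
# joined with analytics tables without a separate ETL step.
sql = """
SELECT o.customer_id, o.order_total, c.lifetime_value
FROM EXTERNAL_QUERY(
  'my-project.us.orders-connection',
  'SELECT customer_id, order_total FROM orders WHERE order_date = CURRENT_DATE'
) AS o
JOIN `my-project.analytics.customer_profiles` AS c
  ON c.customer_id = o.customer_id
"""

for row in client.query(sql).result():
    print(row.customer_id, row.order_total, row.lifetime_value)
```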

I would say the differentiation starts at the nuts and bolts, but definitely we’ve moved all the way up to the top of the stack to make sure that we’ve integrated this platform for customers in a very differentiated way, like no one else has really been able to do. That’s, I think, where customers really derive the maximum value, because in many other environments, they spend 60-plus percent of their time just on the plumbing. With us, most of their time can really be spent on building business features and business value.

Patrick Moorhead: Plumbers need jobs too. I’m just kidding.

By the way, I’ve written a lot on BigQuery and Spanner, and I’ll go one… Google has planet-scale capabilities that don’t get discussed a lot. It was nice to see the company talk a little bit more about those, as opposed to, “Hey, this is just what we’re doing in Google Cloud,” because I think what you’re doing in areas even like open source matters. The fact that you’ve started open source projects: a lot of people talk about, “Hey, I’m a participant in an open source project.” Google creates a lot of these.

Andi, I know you’re very passionate about that, but can you give me just maybe a summary or some sort of positioning as it relates to Google and open source? Gerrit, I want to make sure you get a bite at this apple too.

Andi Gutmans: You want to go first, Gerrit or…

Gerrit Kazmaier: You go first and I go.

Andi Gutmans: Yeah. I think we’re very passionate about being an open platform and really making sure that when our customers choose Google Cloud, they don’t feel locked in. A lot of our focus is really on open standards, open APIs, really supporting open source well, and our customers really appreciate that. That’s everything from really focusing on delivering the best experiences on managed open source, whether that’s things like Kubernetes, which we invented, Cloud SQL with MySQL and Postgres, or Memorystore with Redis, all the way to, I would say, open source-compatible services, like Spanner with its Postgres interface or AlloyDB being Postgres compatible, to deliver some of our differentiation, but in a way that gives customers the comfort that if at any point in time they need to run on premises, or they need to show the ability to do a stressed exit to a regulator, they’re able to do that.

I would say our engagement is both on open source and on open APIs, but really, the intention is to make sure that our customers have this open ecosystem, whether it’s data formats, APIs or so on that we support them with.

Gerrit Kazmaier: As you said, biting from the apple too, I think the key is to understand that data is not a solution; rather, data is an ecosystem, and it’s ultimately really important for our customers to facilitate that ecosystem. Our job at Google really is to create that open platform that Andi was talking about, which they can build their data ecosystem with. Innovation many times is driven from various angles. It’s sometimes hard to predict what a good solution would be for a certain challenge in the data processing space, which is why we have a commitment to open standards; we are supporting Apache Iceberg, Hudi, and Delta, for instance, as primary persistency formats. In machine learning, we just talked about gen AI, we just announced support to import XGBoost, ONNX, and TensorFlow models directly into BigQuery, again to have our customers benefit from pre-trained models and the open source frameworks and libraries around them. We truly believe that if we at Google are the ones helping this data ecosystem to thrive, ultimately, this is what’s going to help our customers the most.
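The model-import capability referenced here can be pictured with a short, hypothetical sketch: it registers a pre-trained ONNX model from Cloud Storage as a BigQuery ML model and then scores a table in place with ML.PREDICT. The project, dataset, and bucket paths are placeholders, and TensorFlow or XGBoost exports would use the corresponding MODEL_TYPE values.

```python
# Sketch: importing a pre-trained open source model into BigQuery ML and
# scoring data where it lives. All identifiers and paths are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Register a model exported from an open source framework. BigQuery ML
# also accepts MODEL_TYPE = 'TENSORFLOW' or 'XGBOOST' for those formats.
client.query("""
CREATE OR REPLACE MODEL `my-project.demo.churn_model`
OPTIONS (MODEL_TYPE = 'ONNX',
         MODEL_PATH = 'gs://my-bucket/models/churn/*')
""").result()

# Score rows directly in the warehouse instead of exporting data to the model.
predictions = client.query("""
SELECT *
FROM ML.PREDICT(MODEL `my-project.demo.churn_model`,
                (SELECT * FROM `my-project.demo.customer_features`))
""").result()

for row in predictions:
    print(dict(row))
```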

Daniel Newman: I want to talk about innovation and rapid-onset change. Gerrit, I’m going to throw this one your way, but Pat and I actually, one of the great things about our time is we actually talked to some of your clients. I don’t say actually as if it wouldn’t always be great, but I’m saying actually because a lot of vendors don’t necessarily give us lots of one-on-one, PR- and comms-free exposure to talk quietly to customers. We talked to some really big customers of yours and were able to ask the hard questions.

We’ve heard about digital transformation for two decades, three decades now, and it’s been an endless thing, but I’ve never seen change onset at this pace. The change that’s taken place with each industrial revolution, one, two, three, four, each digital one happens faster and faster and faster. But talk about that part, because it feels to me like the appetite, just look at the GPU sales numbers, the appetite for change is substantial. Every company, it’s a gold rush: whether it’s the hyperscalers or the enterprises, they’re buying all the AI capacity they can buy. But what we always forget is that digital transformation comes down to how much people can adapt. Can customers adapt, can the employees adapt, can IT people adapt, do we have enough data scientists, do we have enough developers, do we have the right applications built? How are you dealing with that part of actually saying, “Okay, we’ve got the capacity and the technology, but we can actually help you keep up and do this”?

Gerrit Kazmaier: The past 12 months were maybe one of the fastest innovation cycles ever in the industry. If you just followed Google Cloud from I/O earlier this year to Next, the amount of launches and innovations that we have brought to the market is, of course, amazing. It’s amazing to see, and certainly for our customers right now, looking at all of this coming out of Next, one of the questions is how do we adopt this, how do we apply this, how do we upskill ourselves with all of these new capabilities? I think, A, to a large degree, there is a mindset change that goes along with this, because where is this rapid innovation cycle driven from? From improvements in models, not improvements in software systems. Models iterate so fast, and once they do, they can scale their new capabilities so quickly, that I think a big part of this is making the step from being a functional company to a data- and AI-driven company and really understanding what that operating model means.

Then you’re actually not outpaced by innovation; every time AI capabilities develop and models develop, you’re immediately benefiting from it. I think that’s a big part of it. There is a big cultural and skill-based aspect to it, which really goes to how your company understands itself, and that is what will drive the value generation.

A great example of that is Wendy’s, from the quick-service industry. I hope you could check out their showcase on the show floor. Wendy’s burgers, an amazing example, because their AI innovation did not start with gen AI-driven customer service. It actually started a year ago, when they started to build their customer data platform, their customer profiles, their customer segmentation on BigQuery. That basically gave them all of the ingredients to adopt the gen AI models so quickly once they became available.

I think that’s a big part: really configuring yourself for data- and AI-driven value generation, and understanding that models will improve so fast that you want to build your services and processes in a way that you benefit from that immediately, versus building too much scaffolding around it.

I think the second piece is: give it a go. There is nothing like experience, and I think the biggest mistake that you can make right now is saying, “I want to get it right, so I go slow.” There are some aspects you have to get right, such as the data foundation, we talked about it, but then it comes to application. There is a certain element of experimentation and testing out ideas and really finding the right match between your business problem and gen AI technology. I think there are some really fantastic concepts for how you do incubation inside companies, how you start in critical aspects of your value chains, such as the customer experience, and really gain experience by applying it to your space, and with that, build up the culture, skill, and confidence to scale it out further. Yes, it’s a lot, and yes, you want to decide where you start, but I think the biggest step is to make the first step, and the time to make the first step is right now.

Patrick Moorhead: Yeah, I remember, I’ve been tracking Google Cloud since its existence. I think the first Next I went to was at a pier, on a boat, or not on a boat, but near a pier, and boy, have things changed. Your ability to turn technology into enterprise benefit has been substantial, especially when TK came in and made so many of those investments.

This has been a great conversation so far. Hopefully, you think the same, but I’d like to wrap this up, and I want to make sure we hear from both of you. You’re making big bets today. You talked the whole show about what you’re doing today. You put things on the roadmap, but can you talk about 2024 and what customers can expect from Google, maybe beyond the, “Hey, we’re going to go GA with the stuff that we said was in preview”? Maybe something a little bit deeper, and then maybe talk a little bit about why people should pick Google Cloud. Andi, we’ll start off with you.

Andi Gutmans: Yeah, absolutely. I think one of the anchor points for how we think about 2024 is the big interest not just in gen AI, but really in these data-driven applications and how to innovate faster, as Gerrit mentioned. There are a variety of areas that we’ve seen really be top of mind. The first is things like AlloyDB AI: how do we bring that AI development experience closer to the data, closer to the operational database, so customers can truly build enterprise apps? We had some announcements at Next, but we definitely think the sky’s the limit, and we can really do a lot to enable developers and practitioners who are innovating with AI. That’s a really big focus area for us, especially at that intersection between data and AI.

The second thing is we heard consistently from customers that Duet AI really opened their imagination around how their employees could get AI-assisted experiences to move faster, innovate faster, and get their jobs done in a better way. We think that the AI-assisted aspect of this is very, very important, and one where we think we have some real differentiation in how we can make an impact on behalf of customers.

The third thing, by the way, which is interesting: customers have been talking to me for quite some time about wanting to get off legacy databases and move to more modern and open platforms, but I think what we’re seeing is the urgency to really have the flexibility in the cloud to run where and when they want. Having these new gen AI capabilities has really accelerated the desire of customers to get off legacy databases like Oracle and SQL Server. I’d say that was definitely one of the big messages I heard consistently at Next, and so there’s a real opportunity there. We, of course, just GA’d our Oracle to Postgres migration service. We do think that assisted AI is another way to really boost that experience for customers, so that’s another opportunity.

Last but not least, I would end with the boring stuff that is probably the most critical to customers: availability, security, governance. We consider that our job zero; we have to make sure we continue to raise the bar there. I was really excited at this Next, where we demonstrated Cloud SQL planned maintenance in about five seconds for a database. AlloyDB does it in under one second. That’s really unheard of. We will also continue to raise the bar on what I call the table stakes, but most critical, dimensions of our customer experience, and continue to make sure that we are delivering industry-leading availability, security, data protection, and so on.

Patrick Moorhead: Well, Gerrit, how about you? I want to make sure you get the final at bat. I said the bite of the apple; now I’m going to use the at bat. I know that’s American slang, but sorry, sports analogy.

Gerrit Kazmaier: I love it. I love it. I think, actually, it’s going to change dramatically in 2024. As I said, we are living in a big innovation cycle, not a small innovation cycle, right now. Just take a look: we have today billions of queries and exabytes of data analyzed a day in BigQuery. You said it’s a global system. Yes, it is.

In Looker, the business analytics side, we have every month 10 million, give or take, distinct users logging in. How about we change the lives of all of those users in a profound way, in the BigQuery way? Yes, we talk right now about SQL and code generation and Python, but what Duet AI in BigQuery really enables is telling you how you could analyze your data, and that’s a big deal, because at least the four of us here in this room, we are all humans. We only find the stuff we are looking for. You only search for the keys under the light. But AI, by analyzing all of your metadata, all of your usage data, all of your query logs, can actually give you entirely new ways to analyze your data, cheaper, faster, in ways that you have never thought of yourself. I think that’s amazing.

The second piece, quick test with the two of you, you know how to use Google, right?

Patrick Moorhead: For a few years.

Gerrit Kazmaier: Quick question, but if I would just say, use Google to search the public web, that feels normal; it’s like me asking a question. It’s a silly question. Yes, everyone can do it. But if you would now say, well, can you build a dashboard on your internal data with a reporting tool, basically, how quickly could you search your enterprise data? I don’t know about this group here, but in a representative set, it’s a specialist role. Imagine a future where you could chat with your enterprise data the same way you chat with the public web through Google. This is exactly what we are doing with Duet AI in Looker, and I think this is going to change the world. It’s going to make insights and knowledge so much more accessible, and I think ultimately it’ll enable companies to run better, innovate faster, and do all of these amazing things that they thought they would achieve when they started digitization. I think in 2024, a lot of this will come to reality.

Daniel Newman: Gentlemen, it’s been a lot of fun to chat. I see the future, and the future is bringing together that public, open internet dataset, language, and then, of course, all that proprietary, high-use, high-value data that enables you to build narratives, content, stories, marketing, analytics, intelligence, and so much more. We could talk about this a lot more, but we’re not going to do it right now, so I want to say thank you. I want to say great Google Cloud Next show. I appreciate you both taking the time here to spend with Patrick and myself, and I hope we’ll have you both back again soon.

Gerrit Kazmaier: Thanks for having us.

Andi Gutmans: Thanks for having us. Thank you.

Patrick Moorhead: Thank you.

Daniel Newman: All right, everybody, hit that subscribe button, join us for all of our Google Cloud next coverage here on The Six Five. Of course, join us for all of our episodes. They’re all really good. But for this show, for this hour, I want to thank you all for tuning in. We’ll see you all very soon.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
