On this episode of the Six Five Webcast – Infrastructure Matters, hosts Camberley Bates, Keith Townsend, and Dion Hinchcliffe deep-dive into the worlds of cloud and AI, questioning if major hyperscalers are capable of meeting the exploding demand for AI capabilities.
Their discussion covers:
- The examination of recent cloud and AI earnings reports from tech giants like Amazon, Microsoft, and Google, questioning their capacity to support burgeoning AI demands.
- The debate on whether the hefty capex investments in AI infrastructure by these companies reflect genuine customer need or speculative hype.
- An inquiry into the real value and efficiency of current AI services such as Gemini and Copilot amidst skepticism from industry CTOs.
- The vital role of data quality and preparation in AI deployment success, with a commentary on common obstacles enterprises face.
- The potential for achieving super-intelligence through AI, the quest for more cost-effective AI model development, and the critical importance of efficient data storage and management in the AI era.
Watch the video below and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Webcast – Infrastructure Matters is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Camberley Bates: Good morning, folks, and welcome to Infrastructure Matters episode 70. We are back again with your favorite friends, Keith Townsend and Dion Hinchcliffe. We are going to be talking all earnings. We’re going to be talking earnings as it really pertains to Google Cloud … Or the cloud, not just Google Cloud, but the cloud stuff and AI. Also, we’re going to get into some really cool stuff that’s happening with the AI, in terms of where the cost is and what’s been going on. With that kicking off, I think Dion, you had some things to talk about in terms of the earnings, what we’re hearing about?
Dion Hinchcliffe: Well, we heard Amazon’s earnings yesterday. They came in just a tiny, tiny bit under estimates and got really punished for that. We saw similar things with Microsoft and Google. The story we’re hearing from folks like Andy Jassy is that they’re just struggling to meet cloud demand. From Microsoft, CFO Amy Hood is saying they just don’t have the space, the capacity. AI demand is off the charts and they simply can’t build enough. Amazon’s report yesterday said they’re going to put $100 billion into capex this year. That’s the biggest number we’ve heard. Microsoft is saying $80 billion, Google is saying $75 billion. That’s a quarter-trillion dollars in capex, and as I really dig into it, about half of that is AI infrastructure. There’s a narrative emerging that the hyperscalers can’t keep up. But we talk to a lot of practitioners, and it’s not clear that the demand is really there. This may be very speculative. Keith, I know that you had a sharp point of view on this.
Keith Townsend: Yeah. I’ve used both Gemini and Copilot as desktop assistants. I’m going to say not good. The capability isn’t meeting the promise, and both of these companies are asking quite a bit for it. When we talk in our CIO/CTO panels, most of the CTOs I talk to question the value relative to the overall capabilities. Then there’s still the debate around agentic AI and where these workloads are going to run. David Linthicum, who’s a long-term cloud watcher and management consultant, pretty famous in the space-
Dion Hinchcliffe: Very famous, yeah.
Keith Townsend: … was taking a victory lap this morning. A year ago, he warned that AI isn’t going to be the growth engine these companies expect, because they’re not building what customers want or matching how he assumes customers will consume AI. So there’s a little bit of debate. Is the demand truly there? Or are the numbers disappointing because the services aren’t meeting expectations?
Camberley Bates: Well, if I quote directly from their earnings calls, the Google CFO said that cloud capacity issues impacted revenue. Google Cloud was up 30% year-over-year; I’m not sure which of the numbers were specific to AI. Then if you go to Microsoft, Amy Hood, I believe, said that AI capacity constrained Q3. When they gave guidance for this next quarter, even though they’re expecting 31 to 32% growth on Azure, the AI piece has some impact, and this is where they’re pulling it back. I guess the market may just be over-valued right now, or something along those lines. Because when Microsoft talks about their AI growth, they’re equating $13 billion to it, and that is up 175% year-over-year. There is growth there; it’s just a question of how big this is going to be.
Dion Hinchcliffe: Well, is it strong growth? We’ve done some other work for other cloud vendors and with CIOs, and what we’re learning is that they’re giving away large amounts of promotional credits for some of these AI cloud services. In hopes that you’ll build something around it, and then have to keep paying for it because you’ll like what you build and then you’ll keep it. That is what’s driving this incredible demand. We know how big the cloud market is now. They’re saying they can’t build fast enough and they’re going to put a quarter-trillion dollars, which is by far the highest watermark ever. Is it strong growth though? Are they really giving away a lot of services right now on the books as cost of sales and hoping to hook enterprises on these AI services and it’s a total speculative bubble? I think that’s the real risk.
Keith Townsend: Time will bear this out. We will start to see the results in the market. It’s just like the early cloud providers: Microsoft leveraged credits early on in its Azure journey, and we saw the numbers, but we didn’t see the use. That eventually changed as the services got better. We could be seeing a little bit of both: they’re seeding the market by giving away credits and booking that revenue, and they’re getting ahead of the demand from a capacity perspective. We don’t know, but time will tell.
Dion Hinchcliffe: Yeah, we’ll find out soon enough.
Camberley Bates: Well, that ties back to the research work that you did, Dion. And also to what we’re seeing in both the CIO and CEO Insights work, and the conversations that have been happening, even coming out of Davos. The ability to deploy truly impactful AI technology is dependent upon the data: the data preparation, the data cleansing, all the things that go around making it safe and secure.
Dion Hinchcliffe: Absolutely.
Camberley Bates: That slows the process down. Even if the capacity is there. Which is it? Is it capacity that’s not there to be able to do the training that you’re looking to do, or is it the data that is having problems getting to the point that we can do something about it? Or is it purely being able to see, as you’ve talked about, as the ROI of these particular applications? I think all three of those areas have potential to slow this down to what I would think is a normal pace.
Dion Hinchcliffe: Yeah, exactly. Well, it is a breakneck pace. We did see a clear signal in our CEO survey in particular: among those reporting a significant number of stalled or failed AI efforts, 60% cited data as the primary issue. The leading issue holding back AI in the enterprise is that the data story is poor; many organizations have been under-investing. They may be trying out all these AI services and discovering that their data isn’t there. Yeah, that will have to be our next line of inquiry.
Camberley Bates: Yeah. Then coming up this month, maybe the end of this month, maybe the end of February, we’ll see the Dell announcements, so we’ll hear more about where they’re going in terms of servers. We saw IBM’s announcements on their earnings and what they’re talking about in terms of their growth; it’s been fabulous. I’m more or less expecting the same thing coming out of Dell, in terms of how they’ve been performing in those areas. When you’re saying the capacity is not there, and if the folks shipping on-prem are shipping as much as they can, then I’d say the issues around the cloud, Keith, would be more along the lines of what they’re saying, versus them not building the right things.
Keith Townsend: The other thing that’s hidden in the numbers, because I think so much of AI is focusing on training right now, we don’t know how much actual inference and agent adoption is happening.
Camberley Bates: Correct.
Keith Townsend: How much of the dedicated AI services are being consumed, versus people coming to the cloud, building their own, and just using EC2 instances and CPUs to do the lightweight inferencing they need to do? We need a lot more clarity and a lot more data to actually make this call.
Dion Hinchcliffe: Well, you bring up a good point. How much of this demand is actually artificially driven by venture investment? These are not necessarily your traditional enterprises; these are actually AI companies trying to build quickly in the cloud, creating artificial demand that may not hold, depending on how it all shakes out. It’s all very interesting what’s going to happen here.
Camberley Bates: Well, that gets me to the next topic. We’ll come back a little bit to OpenAI and that kind of thing. It’s under benchmarks. Stanford and the University of Washington released a paper on fine-tuning through distillation on one of Google’s reasoning models, Gemini 2.0. For 50 bucks. I don’t think the Stanford researchers were being paid, so you have to add in the cost of labor, which can add a lot.
Dion Hinchcliffe: Yeah. Well, I’m still going to go with my promotional credit story. I’m hearing this a lot.
Camberley Bates: It was 50 bucks. Somebody else said somebody else did it for 450, or whatever. Maybe one of you guys could do a little riff on what distillation is, what that means. Because I bet you 90% of the people listening to this don’t know what we’re talking about.
Keith Townsend: Yeah. Well, I’ll be honest, I’m one of the 90%. What is distillation?
Camberley Bates: Well, distillation is taking some of the data that you have and using some of these techniques to do the analysis on it. It doesn’t use as much processing power, and this is where OpenAI … not OpenAI, rather where DeepSeek came from. What they were doing was a distillation of somebody else’s model. That’s the model they’re assuming that was-
Dion Hinchcliffe: Well, that’s right. That’s the accusation, as some people would say, is that they distilled OpenAI’s model down into a more efficient model.
Camberley Bates: Yeah.
Dion Hinchcliffe: My understanding is distillation is taking a large model and saying, “All right, let’s create a smaller, more efficient model.” It doesn’t have everything in it, but maybe aimed at a specific use case or domain, or something like that. That way you get-
Keith Townsend: Ah, distillation. I feel like Homer Simpson when he looked at the word gym and mispronounced it. Then when he figured out what it was, he was like, “Oh, a gym.” Distillation. I’ve read this paper and it’s really interesting, what they did. They mention that they’re taking larger models, but what I don’t think they do in the paper is call out OpenAI directly. I think this is the amazing feat. Whether they did or not, I have no … Personally, if OpenAI takes my data and uses it to inform their models, I don’t have a problem with it. I think, generally speaking, none of us should have a problem with the ethics behind what this process does. But what’s really interesting, the innovation, is the reasoning. There was a lot of news, and we’ll get to that, around the reasoning capabilities that then come out of this. You come out with better models. With this Stanford project, we’re starting to see the fruit of the advancement of AI reasoning.
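For listeners who want to see the mechanics behind the technique the hosts describe here, below is a minimal, illustrative sketch of distillation (not the Stanford or DeepSeek recipe specifically; every model shape and number is made up for illustration): a small “student” model is trained to match the softened output distribution of a larger “teacher” model, so the student inherits the teacher’s behavior at a fraction of the size.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "teacher": a fixed linear map standing in for a large model.
W_teacher = rng.normal(size=(16, 4))
x = rng.normal(size=(256, 16))             # unlabeled example inputs
T = 2.0                                    # distillation temperature
teacher_probs = softmax(x @ W_teacher, T)  # the teacher's soft labels

# Toy "student": trained by gradient descent to imitate the teacher's
# soft labels via cross-entropy -- the core signal in distillation.
W_student = np.zeros((16, 4))
losses = []
for _ in range(200):
    student_probs = softmax(x @ W_student, T)
    loss = -np.mean(np.sum(teacher_probs * np.log(student_probs + 1e-12), axis=-1))
    losses.append(loss)
    # Gradient of softmax cross-entropy w.r.t. the student weights.
    grad = x.T @ (student_probs - teacher_probs) / (T * len(x))
    W_student -= 0.5 * grad

print(f"distillation loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point of the sketch is only the training signal: the student never sees ground-truth labels, just the teacher’s output distribution, which is why distillation can be so much cheaper than training a model from scratch.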
Camberley Bates: Where do you see that impact then?
Keith Townsend: Yeah. One of the biggest challenges with AI (I think I talked about this about a year ago, during my 100 Days of AI) is getting AI to think like your organization, if that makes sense. A manufacturing organization thinks differently than a healthcare organization. As a matter of fact, two organizations within the same business unit might think differently. Or, as they’re calling it, reasoning. How do you get a model to reason to the needs of your organization? When you send through a prompt or have an agent working, how do you get it to make decisions that are consistent with your organization’s strategy? And on top of that, do it at a reasonable cost? We’re getting to the point where techniques such as distillation and advanced reasoning, and the lower cost of retraining these models, are, in my opinion, going to drive really great advancements: smaller models positioned to make better decisions within your organization.
Camberley Bates: Also, Arvind Krishna, the CEO of IBM, had a post on LinkedIn which I thought was interesting. It was fairly long, reflecting on DeepSeek and how these models are finding new ways (and I talked about this last week) to lower the cost of development. He reflected on how we have done this over, and over, and over again in our industry. Call it Moore’s Law, call it whatever you want to call it. He cites “all the other technologies that we have been faced with that initially were extremely expensive, and then innovation brought us to this place.” This seems to be moving to that place faster than we expected. Perhaps. But I’m going to go back to the same thing. We are still running headlong at this, even though we get more efficient and possibly need less energy to get there, possibly being able to do all of this on somebody’s AI PC as opposed to an entire data center. We’ll probably end up with the big, huge, massive language models, then some other models that are unique to particular industries, which we’ve been talking about, and then the distillation this paper describes, which is very cost-efficient at reasoning. Then the question is going to be, what’s the quality of the reasoning coming out of these? We’ve seen some of the reasoning come out a little bit interesting.
Dion Hinchcliffe: Well, everyone right now is talking about something called the Jevons paradox. We’re talking about the big hyperscalers and saying that all of this leads to more compute and storage. You never use less compute and storage. If you get much more efficient and it’s cheaper, then you find all new applications, all new things that you can do with it. This is the pattern with energy, too. When you create cheap energy, people say, “Wow, I’ve got all these new applications I can do now that weren’t affordable before. I can now run off and do it.” The demand never goes down, no matter how much you try to conserve. No matter how much you try to get more efficient, it just creates more demand because you can do more things.
Camberley Bates: Closets are the same problem.
Dion Hinchcliffe: Yeah, right. Yes, exactly.
Camberley Bates: Don’t forget the closet. You’ll always fill them up, whatever you have.
Okay, have we hammered on this one enough? If not, we can go to a couple of other topics.
Dion Hinchcliffe: Yeah. One of my predictions is that we are on the cusp of what’s called super-intelligence. I’ve been promoting this idea because it seems clear if you look at the benchmarks, the suites that measure the IQ, the human IQ, of these models-
Camberley Bates: For super-intelligence, is cloning you, is that what that is?
Dion Hinchcliffe: No, no, no. By no means. This is when AI is smarter than any human that has ever lived, and that’s where we are. We are on the cusp of that. Proof of this is that OpenAI just released a new service called Deep Research. This is a model that does extended, formal, in-depth inquiry. It doesn’t just consult the models; it goes out there and looks at research, it queries the internet. It can take dozens of minutes to hundreds of minutes to run a single query. It’s only available in OpenAI’s highest tier of service. What’s interesting is that it blows the doors off the hardest AI benchmark, which is called Humanity’s Final Exam. Sorry, Humanity’s Last Exam. This is an AI benchmark where all of the questions are extremely difficult. Take this very hard to read, very obscure archaeological inscription of a dead language that almost no one in the world knows and transcribe it into these different languages. That’s the kind of test it has on there. It has math and geometry questions that most humans can’t solve. Most LLMs score very low: GPT-4 scores around 3.3%, and something like o3, OpenAI’s model, around 10%. Well, Deep Research scores 25%, by far the highest of any model on that benchmark. This is something where probably none of us could answer a single question, no matter how much time we were given.
Keith Townsend: Yeah. When I saw this announcement I’m like, “Oh, well, it looks like my career pivot to doing something other than research has started.”
Dion Hinchcliffe: Exactly.
Keith Townsend: We’re not quite there yet. Shriram Subramanian, who is a fellow independent analyst, a pretty sharp guy, spent some time with it trying to get the model to write a report-type output, not just answer questions. We see a little bit of this capability in the lower-end model; o3-mini does a little bit of this on the light side, and I’ve played around with it. It is absolutely great at answering questions and giving you a start. Shriram claims he wasn’t able to get it to produce anything more than a 1,000-word blog post, and the quality wasn’t that great. It was still hallucinating, it went off on tangents, it didn’t keep a consistent methodology. I think it’s a great indicator of what it will grow to be one day, but right now it seems much more like, again, a human-in-the-loop tool that gives you a good starting point for your research, rather than conducting deep research, as it’s called.
Dion Hinchcliffe: It also raises the question of how much do these AI benchmarks really tell us about what it will do, what the AI model will do for us in the real world? That’s the thing.
Camberley Bates: Yeah. Because what you were citing were equations, getting answers to equations that you said very few humans can do. But those are known equations as well. The question is, can it solve the equations that we can’t solve?
Dion Hinchcliffe: That’s my theory around super-intelligence. What they’re arguing is that those equations are known, but most people can’t actually do the math. It’s too involved, with too many variables, and we can’t come up with the answer. Whereas, supposedly, this can. The question is, again, how useful are these benchmarks in telling us what these models are going to do for us in the real world? It’s going to be interesting to see. But nevertheless, we see models like o3 coming in consistently at about 135 IQ. We’re going to see models that can do in excess of 160, which puts them roughly four standard deviations out, in terms of how many people can perform at that level. This is something that anyone can use. We’re all going to have certifiable geniuses in our pockets here soon.
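To put a rough number on that rarity: assuming IQ follows the usual convention of a normal distribution with mean 100 and standard deviation 15 (an assumption of this sketch, not something stated in the episode), the fraction of people above a given score is just a normal upper-tail probability:

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    """Upper-tail probability of a normal(mean, sd) distribution at `iq`."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

for iq in (135, 160):
    p = fraction_above(iq)
    print(f"IQ > {iq}: about 1 in {1 / p:,.0f} people")
```

Under these assumptions, an IQ above 135 is roughly a one-in-a-hundred event, while an IQ above 160 (four standard deviations out) is on the order of one in thirty thousand, which is the sense in which such a model would outperform nearly everyone it might work alongside.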
Camberley Bates: On your iPhone.
Dion Hinchcliffe: That’s right.
Camberley Bates: There we go. Okay. Let me switch to a couple of other things, of course on the data side. We have to talk about that because it’s me.
Dion Hinchcliffe: Yeah, of course.
Camberley Bates: Weka. We saw some of the companies on the data side (we’re still in the AI piece of it) that have had some layoffs. At the same time, on the flip side, Hammerspace, which is another software data offering, announced 10-times growth. Granted, they are a fairly small company, but still, that’s a very significant number to grow at, and it can really strain a company. Keith, Weka. You’ve been plowed into that one. I didn’t get plowed into it because of my travel.
Keith Townsend: Yeah. Weka’s an interesting story. If you follow HPC at all, you know Weka. They do storage fast; they are a fast storage system. You would think that would naturally translate into being a good system for AI, but according to reports, the overall challenge is that they didn’t pivot quickly enough into AI. They have strong revenue, a $100 million run rate, and they’re valued at about $1.6 billion based on their last round of funding. Fundamentally they seem sound, but they’re part of this larger debate: how much do you rotate to AI, and can you over-rotate to AI? Weka seems to be a case where they didn’t rotate to AI enough.
Camberley Bates: Well, there are a couple of things we’re seeing that the traditional file systems, like NetApp, Dell, and VAST leading the charge as well, have gone to. They’re announcing work where they’re integrating with a Starburst or other companies doing vector databases, et cetera. They’re integrating with all the pieces that need to come together to do the data management we’re talking about, in terms of private data and those kinds of things. They’re developing methodologies to import data more easily, so they would be the holders of all the data. The strategies vary: VAST and NetApp are building it into their systems, while Dell right now is partnering. Now take that alongside where Weka has been. Weka has been partnering, but not at that level, which is probably where they need to go, and that does take some engineering and design work. That’s what I suspect is going on with them, although I haven’t had a briefing with them, so that’s going to be my next thing here: to get some time to look at where the roadmap is going. Weka is a parallel file system and competes very well with BeeGFS, GPFS (which is IBM Storage Scale), and some of the others, and Lustre. That’s where they have traditionally played. Now they’re shifting and saying, “Am I going to compete? How do I compete with VAST?” Which is what I suspect they’re feeling.
The second part of that is, as I mentioned, Hammerspace has taken off and is doing very well. The reason they’re doing well is this tier-zero strategy that they have. For them it’s not DRAM; it’s tier zero. What they’re doing is integrating their metadata management capabilities to bring a very, very fast tier on top of a regular file system. They’re acting like a parallel file system, bringing the speed of what Weka brings to the table, along with some of the capabilities and ease-of-use of a regular file system. That seems to be really taking off, also with the partnerships that are out there. We’ll watch this space. As I said, my prediction is that this year is going to be the year we talk about how to do scale-out and what the differences are between these kinds of technologies: why Weka is doing this, why Hammerspace is succeeding, why we’re seeing all these investment areas.
Dion Hinchcliffe: Interesting times in infrastructure, for sure.
Camberley Bates: Yes, very interesting. With that, guys, do we have anything else? I think I’ve got my list here, we’ve been through it. We’re at the bottom of the hour.
Dion Hinchcliffe: Sorry, Keith, real quick. Only that Google dropped Gemini 2.0, their most advanced model, including-
Camberley Bates: Oh, I forgot about that one.
Dion Hinchcliffe: It includes Gemini Flash, which is a smaller model. Again, it’ll be able to do things cheaply. This is where IBM, for example, competes with Granite: being able to provide high-quality answers at much lower price points. The key point is that Gemini 2.0 is at the very high end of the benchmarks but hadn’t been generally available until now. This puts Google in a top-three slot on the AI benchmarks with a shipping model.
Keith Townsend: Yeah. Just from a tactical perspective, I’ve played around with Google’s runtime engine, GKE, and using that to call the services. It is a really interesting dichotomy of approaches. You can use Gemini to run queries like you would ChatGPT, but it’s less functional from a capability perspective. The platform for developing applications, though, is top-notch. It’s really interesting how easily you can call Gemini and other models from Google’s runtime. Again, there’s the competition of how to provide these platforms and how to enable them. In some of my internal discussions, we’re supposed to call LLMs commodities. I don’t know if we’ll go that far yet.
Dion Hinchcliffe: No, not yet. It’s heading in that direction.
Keith Townsend: But the platforms definitely are not commodities, in this ability to consume these models. I think that is a battle we should probably talk about one day.
Dion Hinchcliffe: Yes, indeed.
Camberley Bates: Maybe next week.
Dion Hinchcliffe: Maybe.
Camberley Bates: While I’m on vacation.
Dion Hinchcliffe: There you go.
Keith Townsend: Ah, I liked how you did that.
Dion Hinchcliffe: Keith and I will have to duke it out.
Camberley Bates: Yeah, you’ll duke it out. All right, guys, thank you very much for joining us on this Infrastructure Matters episode number 70. We will be seeing you … Well, I won’t see you next week, the two gentlemen will see you next week. I’m on vacation. With that, thank you very much. Have a great week.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm as Managing Director.
Her career has spanned all elements of sales and marketing, and she gained a 360-degree view of addressing challenges and delivering solutions by crossing the boundary between sales and channel engagement at large enterprise vendors and her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
Dion Hinchcliffe is a distinguished thought leader, IT expert, and enterprise architect, celebrated for his strategic advisory with Fortune 500 and Global 2000 companies. With over 25 years of experience, Dion works with the leadership teams of top enterprises, as well as leading tech companies, in bridging the gap between business and technology, focusing on enterprise AI, IT management, cloud computing, and digital business. He is a sought-after keynote speaker, industry analyst, and author, known for his insightful and in-depth contributions to digital strategy, IT topics, and digital transformation. Dion’s influence is particularly notable in the CIO community, where he engages actively with CIO roundtables and has been ranked numerous times as one of the top global influencers of Chief Information Officers. He also serves as an executive fellow at the SDA Bocconi Center for Digital Strategies.