On this episode of the Six Five Webcast – Infrastructure Matters, hosts Camberley Bates, Keith Townsend, and Dion Hinchcliffe dive deep into the latest trends and shifts in the technology landscape, focusing particularly on the infrastructure that powers today’s AI models.
Their discussion covers:
- The current state of the market, highlighting earnings and market share with insights on companies like Pure Storage, NetApp, and Nutanix, and the contrast between the European market’s softness and the robust US market.
- The impact of AI on enterprise storage, analyzing shifts in hyperscale environments, and the role of data lakes with perspectives from NetApp and Pure Storage.
- A review of the latest AI model releases including ChatGPT 4.5 and Google’s Gemini, capturing expert reactions and strategic giveaways to developers.
- Insights on IBM’s acquisition of HashiCorp, discussing the strategic moves around Terraform licensing and its broader implications for the IBM Cloud ecosystem against other infrastructure-as-code tools.
Watch the video below and be sure to subscribe to our YouTube channel, so you never miss an episode.
Or listen to the audio here:
Disclaimer: Six Five Webcast – Infrastructure Matters is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Camberley Bates: Welcome, everyone. We are back again for Infrastructure Matters, episode number 73, with my buddies here. Dion Hinchcliffe, and of course, Keith Townsend, who is coming in from Tennessee. You got your jet stream or whatever that thing is called? Your-
Keith Townsend: The Airstream is-
Camberley Bates: Airstream.
Keith Townsend: … parked in the middle of the forest. I posted earlier this week that if you want to learn anything about site reliability, engineering, try living off grid for any period of time and you’ll have a crash course, master’s level, in keeping a website up.
Camberley Bates: There you go.
Dion Hinchcliffe: Are you on a 5G or are you Starlink?
Keith Townsend: I am Starlink. Our platform is cool, though, it'll make up for it. I may break in and out in the real-time recording of this, but it records locally and uploads it. So, it's much better than nothing.
Dion Hinchcliffe: Yeah. A whole lot better.
Camberley Bates: The wandering man out in the wilderness. Okay. So guys, we've got quite a bit to go through this morning, so I'm going to jump into it. First, a short little piece on some earnings and reflections. We've had lots of earnings this week. I just spent the time this morning going through NetApp, Pure and Nutanix. I haven't gotten to Dell yet, so we're not going to talk about that one or where they were at. All in all, everybody's had good quarters. Pure Storage had an outstanding quarter. They've just blown out their numbers, with revenue up 12% year over year and the subscription business up 21%. Just all kinds of really good numbers coming from them. And then NetApp had a slight miss because of some delayed transactions, but they were up 2% year over year and still had a decent quarter. And Nutanix, of course, is up as well.
A couple of common themes ran through all of them. They're experiencing softness in the market over in Europe, because there's some uncertainty there, and the U.S. is probably carrying it all. The second piece: they all commented on, or got questions about, VMware transitions. Nutanix, of course, has doubled down on that; it's a key piece of their market. And it is happening, but slower than you would expect, because you've got licensing issues and hardware to migrate, although they commented on how fast you can move in the cloud, like with AWS, just because there's no hardware involved there. And the third one is AI, and we're going to get into that. Since these are all storage companies, primarily serving the enterprise, that's a slow-moving ship right now, or train, or whatever you want to call it.
Because most of that work is still going into the hyperscalers and the AI factory companies, the CSPs that are being put up, et cetera. But they are seeing it: NetApp, definitely, has seen a couple of big data lakes being put into place, where people take the environment they've already got and expand it. A little bit less conversation about this from Pure. So that's a bit on the transactional piece. And I will stop there, and then we're going to go on to some other things that are even more interesting. Okay. Let me go on to my partners in crime here, because we have a bunch of things going on with ChatGPT 4.5, Claude 3.7, and Google giving away Gemini Code Assist. So Keith, why don't you take it off first and then Dion can come into it.
Keith Townsend: Yeah. It's been a busy week for AI product and model development. Dion put, I think, both of these in. ChatGPT 4.5 hit yesterday as of this recording, and Amazon hit with Alexa+. I think ChatGPT 4.5 is probably the bigger story, in the sense that we've been waiting for 4.5 for quite some time. And I have to tell you that some of the early assessment from the AI experts is disappointment. It's not as big of a jump as it was from 3.5 to 4, or even from 4 to 4o. So Dion, I'd love to hear your thoughts on both of these.
Dion Hinchcliffe: Yeah. Well, it's the last big release before the much-expected GPT-5, and OpenAI says they put a lot more pre-training work into this to find more connections between all the data. The scuttlebutt, though, the word on the street, is that it takes a hundred times more resources to answer a query in GPT-4.5 than it does in DeepSeek-R1. That's two orders of magnitude. And can they be profitable on this? We'll see. Right now, only ChatGPT Pro users can use this. It's available today, unlike Alexa+, which we have no idea when it's actually going to ship; you have to get on a wait list for that, and it's some weeks in the future. But Plus users, which is the bulk of their subscribers, will have it next week. So we'll get more insight into it. It's an important bump. It still keeps them at the top of the leaderboards, so it is a highly capable model, it's not that it isn't. And somehow they found a lot more training data to throw at it. So they must be licensing, they must be spending to get access to data sets that are not openly available. But we'll see a lot of testing now that it's in people's hands as of yesterday morning. So it's going to be interesting to watch.
Camberley Bates: Okay. So how does that work? If I’m going to license training data because it’s not publicly available and now I’ve trained my model on it, it’s now publicly available, isn’t it?
Dion Hinchcliffe: But through the license. So they've paid to make it available, is the argument. So hopefully the licensees know what they're up against, because techniques are emerging to extract almost the original data set if you know how to query the model to get it out. So we'll see how that works.
Camberley Bates: I mean it feels once it’s in the wild, it’s in the wild. I mean-
Dion Hinchcliffe: Exactly.
Camberley Bates: … you’ve put it out there and that’s the big issue about why-
Dion Hinchcliffe: Well, that’s what model distillation is all about. You can actually get all the information out of a model in a way that you couldn’t out of a search engine. It’s very interesting.
Camberley Bates: Okay. Well what about the Google giving away Gemini?
Keith Townsend: Yeah. So a lot of these models are put into action, and probably the biggest area is AI coding. I'm looking forward to having a really great discussion with Brian Lau, who's a Senior Principal Architect, a developer at Amazon, not on the AI side, but he's been a big proponent of using AI in development. And Google has given away its Gemini 2.0 code assistant. So I can just light up my favorite IDE, and this code assistant is fully free. There is, I think, some ridiculous token limit that the average individual developer probably won't reach. The enterprise version still needs to be licensed for multiple users and work groups, but if you're an independent developer, the Gemini 2.0 code assistant is actually pretty good. So it's amazing.
Dion Hinchcliffe: It was at the top of leaderboards, right?
Keith Townsend: Yeah.
Dion Hinchcliffe: Gemini 2.0 is a super capable model. It's pretty brave of Google to give a huge number of completions away. So you can do a lot of work for free, and I think it's really smart to go after developers, because they are the kingmakers. They made AWS what it is, and I think this is right out of that playbook: let's get Google Cloud adoption, get people hooked on Gemini 2.0, get it in the hands of lots and lots of developers, because it's going to be the cheapest option for most to get their hands on a really powerful coding AI.
Camberley Bates: So the assumption is that if I'm going to code with Gemini, that code is going to run on Google Cloud?
Keith Townsend: Yeah. And I think that's a strong correlation, Camberley. We've had Google at Cloud Field Day a bunch of times, and the delegates were always surprised at how well integrated the platform is with the model. So you can use Google's runtime, Google GKE, its various serverless platforms to actually call Gemini, and it's really easy to do. Matter of fact, I would say it's probably easier to use Gemini from the Google Cloud platform than it is to just go to gemini.com, or whatever the website is, and use it similar to how we use ChatGPT. Google has made it a much better developer experience than maybe an end-user experience, if that makes sense.
Dion Hinchcliffe: I think that comes from the fact that they were third to cloud, which means they learned a lot of lessons from everybody. And they have, I think, the best cloud architecture, product architecture, of them all. It's the most modern. It's just not as used as the other two because, again, they came a little bit later to the party. So yeah, I think this will be key to helping them get an extra level of adoption, both in AI and in cloud.
Camberley Bates: Yeah. And these announcements don't surprise me, because we're coming into GTC, the big GPU conference put on by Nvidia. That's the 21st or something, like the third week of March. Right now we're getting briefed on announcements that are going to be piling in over the next three weeks. I'm not going to be at the conference, but I think you guys are, and that thing is going to be crazy. Absolutely crazy.
Dion Hinchcliffe: No doubt.
Keith Townsend: Yeah, it’s become the de facto AI conference of the year.
Camberley Bates: Yeah.
Keith Townsend: It’s going to be up there with supercomputing as just noise.
Camberley Bates: Yeah. And getting into that. So Dion, you raised a conversation for us today, a debate around whether AI, the technology of AI, is a layer on top of the current IT infrastructure, or whether it is what I see people talking about, this thing called the AI factory. Dell coined that term last June, or at GTC, at their big conference. They called it the Dell AI Factory, and then they've had the 18-wheeler rolling around all over the United States talking about the AI factory. And then I've seen that term picked up by multiple companies. So it's no longer a Dell term, it's an actual term that the market seems to be using for some reason. That's just recently happened, and it's come up in a couple of the briefings I've had recently coming into GTC. So let's talk about that. Why would it be part, or why would it be separate?
Dion Hinchcliffe: Well, there are arguments being made for both, but I think this is what's coming up: we've traditionally kept our data in SQL databases. We then moved to NoSQL and graph databases and document-oriented databases and so on. Then vector databases arrived, saying, "All right, you really need much more of your unstructured information to be deeply understood and more easily accessible and retrievable." And then of course, now we have foundation models and large language models, diffusion models, and they store data, there's no question about it. We were just talking about how you can extract some of the data that's under the covers. So, is it just part of the data layer that we've always had in our infrastructure? Or is it something new, because AI does things that these technologies didn't do before? If we look at AI agents, they actually take autonomous action in a way that we didn't predetermine. They determine how things will happen. So there's an argument that we now have a new AI layer on top of our infrastructure layer. I was just wondering what your take was on that.
Camberley Bates: Keith?
Keith Townsend: Yeah. So I'm going to say AI is just another version of compute. I think it does blur the lines between data and compute a little bit, but if we look at the vectorization of data, if we look at how models are trained, models don't keep all of their knowledge within the model forever. You don't have the same level of clarity around your data when you're talking about the model itself. Now, when you're using your own data adjacent to a model, that's a different layer. But again, that's compute. You're just saying, "I'm going to apply this compute, this application layer, against my data." So I don't see this as a new layer yet, I just see it as an advancement in compute.
Dion Hinchcliffe: Yeah. I view it as a new spike through the layer. It has both compute implications and data implications, I think. And it’s a new column in the infrastructure layer.
Camberley Bates: So I'm going to go back to some of the earnings calls I was sitting through, because these are all data people, if not necessarily the vector people. If you looked at NetApp, George Kurian talked about a couple of very big wins where people were creating a data lake. They were already NetApp customers, and they're broadening that base to create a data lake capability. Then you're talking about how you bring in your file, your block, which is your databases, and then the next piece, the multimodal piece, the videos, et cetera, that have to go into the training. So they're creating a separate system here. However, once you train on that data, depending on what the application is, it's potentially going to go against a transactional system.
So if I'm going to use AI to present information on my website based on, maybe, a retail transaction, or if I'm going to use AI on, say, insurance submissions and that kind of analysis, then you have this connection between the traditional transactional processing systems and the AI analysis that runs through them. Think also customer service: customer service that's part of that application, within that application. So it has to be integrated with that layer. You guys would know better than I do, because you're coders, or you've coded in that stuff, and I haven't.
Then I'm thinking about, okay, VAST just came out and added block to their file and object capability. So that's recognizing the data layer, and why they're doing it is because they recognize that transactional data has to come into that training piece and be part of the training. So they're expanding that piece of it. And then, okay, VAST and NetApp have talked about building vector databases within their data management systems. Dell has chosen a different way: they're just integrating with other vector databases, not incorporating them into their data management. That's a strategy difference, but you still see this integration of these pieces here. So it'll be interesting to see where this turns out. I'd like to get your comments; blow a hole in what I just said or whatever.
Dion Hinchcliffe: Yeah-
Keith Townsend: Yeah. It’s all-
Dion Hinchcliffe: … I mean quite frankly, yeah.
Keith Townsend: Yeah. And I think that represents a huge shift in the market. Just a few years ago, I'd be in briefings with HPE, Dell, NetApp, asking about the data layer. Not the storage bits, the zeros and ones and deduplication and all of the storage-level services they offered, but the actual data, and helping to make data easier to process. And none of those players wanted anything to do with the data. They said that was left up to ISVs, database providers, and basically SIs; they wanted to focus on the bits and bobs. Now the conversation has really changed, because we're seeing, to Dion's earlier comment, this explosion beyond a single layer and this blurring of what's needed. If I need to retrain my model on my latest transactional data, what's the fastest and easiest way to get to it? If my AI model and my data exist on the same storage system, isn't it best to do it at the storage layer?
Camberley Bates: Mm-hmm.
Keith Townsend: We’ll see.
Camberley Bates: Yeah. And that's the data management layer that we've been hearing them all talk about. For NetApp, it would be BlueXP; for Pure, it would be Fusion. I'm not sure what VAST is calling theirs, but it's probably just part of the VAST capabilities environment. But bringing that out to be able to manage, and then having a separate actual storage plane underneath with all the traditional capabilities. So, interesting. Thanks for bringing that up. It's a good conversation, and we'll see where it pans out over the long haul. And then you have to also think about how this connects with some of the people that have their data in the cloud. So that's all-
Dion Hinchcliffe: Well, I try to look at what's different in AI that wouldn't normally be found at the data layer. And the only example I can really come up with is the safety layer that has appeared in so much AI infrastructure, which makes sure no inappropriate information is being generated, no private information is being revealed, that the results are accurate, and that hallucinations are reduced. That's not something we've ever seen before in a data layer to that degree. So that's something new, but I still think it's something we probably need in our data layer anyway. So it still goes back to: for now, AI is a new element in our compute and data layers in the infrastructure stack, and we haven't seen anything yet that rises to the level of requiring an entirely new layer, some new third or fourth entity in the stack.
Camberley Bates: Well, and we're really in the early stages of the enterprise architecting this. We heard that very much so on the earnings calls we were on: we're looking at 2025, 2026 in terms of this really rolling out to the point that it's in applications. Much of the money going out right now is still going to the big foundation models, to the cloud service providers building GPU as a service, or to the ones that are already research labs of some sort. Maybe it's Harvard Medical, which was cited by VAST, or some other pharma that already has that environment and is expanding it, not as HPC but as an east-west architecture, to drive looking at new drugs and that sort of thing. So anyway, enough of me going on. Our next topic: HashiCorp. Big acquisition by IBM. Who wants to take that one?
Dion Hinchcliffe: I'll kick it off real quick. I'd love to hear what Keith has to say, though; he might be spending more time with it. But yeah, IBM made another very expensive acquisition last year, $6.4 billion for HashiCorp, an infrastructure-as-code and security firm with strong traction among developers. But around the same time as the acquisition, HashiCorp made a major licensing change to Terraform, which is their main product, and that really left a bad taste with a lot of developers. Developers really value the commitment to open source, and licensing is a religious topic. And so the question is: is IBM going to turn this into another Red Hat acquisition, or how is it going to work out?
Camberley Bates: So for our listeners, let's briefly say what Terraform is. Terraform is the open source project; HashiCorp is the company behind the distribution.
Keith Townsend: Yeah. So I guess it's important to understand why HashiCorp is getting acquired. Why is this unicorn, a six-point-something-billion-dollar, now publicly traded company, in a position where it can't grow organically? HashiCorp has a series of applications, Terraform being its most popular, followed by Consul and a bunch of other developer, Kubernetes, new-web-type applications. Terraform is by far the most popular of all of their offerings and projects. Its role is to let you programmatically describe and deploy infrastructure. So whether you're talking about AWS, Google Cloud, on-prem infrastructure, VMware vSphere, you can orchestrate your infrastructure as code. So I can say consistently-
Dion Hinchcliffe: And do it across multiple clouds. I mean, I think that's one of the biggest attractions. It abstracts cloud infrastructure.
Keith Townsend: Yeah. And they went the open source route and, frankly, it grew. We've seen the same thing happen with most open source: it grew to a point, and they couldn't grow it beyond that, and they couldn't really tell a great, cohesive story around the platform. I said two or three years ago that HashiCorp needed to sell itself to an IBM or a VMware. Did it make sense for one of those two companies to buy them? Well, we'll soon see, because IBM has made the purchase. In order to help CTOs and CIOs understand the value of HashiCorp, you need the white-glove service. You need the sales force, you need the account penetration. Terraform was one of those things where either you bought it or you wanted it free; there was no in-between. So a lot of the folks that are angry are the folks that wanted it free. Terraform and the Terraform product team would tell you it's mainly the competitors that are complaining that it's no longer open source and free in the traditional manner. But some developers especially, and Dion mentioned it, see this as a religious debate. It's either open or it's not.
Camberley Bates: Okay. And as I understand it, their primary competitors would be people like a Puppet or an Ansible or something like that, correct?
Keith Townsend: Yeah. So I don’t even know if they’re competitors as-
Camberley Bates: Much as complementary-
Keith Townsend: … looking at the same problem in a different way.
Camberley Bates: We just-
Keith Townsend: CloudFormation-
Camberley Bates: … Keith here.
Dion Hinchcliffe: Sorry. Well, the Starlink is probably switching to a new satellite, so he'll be back.
Keith Townsend: The old reliable Starlink. So I’m back. Okay.
Camberley Bates: Okay, I'll restart the competitive question. So I think Terraform competes with things like Puppet or Ansible, or maybe they're just more complementary, Keith?
Keith Townsend: Yeah. So IBM is going to have some work on their hands rationalizing Ansible versus-
Camberley Bates: Okay, we’re losing him again-
Keith Townsend: … and/or Terraform.
Dion Hinchcliffe: I wonder if we’re actually losing him, the upload might work just fine, right?
Keith Townsend: Dion, I’ll just let you answer that question because I go through these periods.
Dion Hinchcliffe: Yeah.
Keith Townsend: All right.
Camberley Bates: So then Terraform, just as positioning, competes with Ansible and Puppet. Dion, is that right?
Dion Hinchcliffe: Well, for some of what HashiCorp does, that's true. It started out really in secrets management. So if you look at CyberArk or Azure Key Vault or BeyondTrust, those are often considered the more original competitors of HashiCorp. But Terraform has become super popular for infrastructure management, and so HashiCorp now does multiple things, like so many cloud vendors do. So they have, I think, an array of competitors at different product levels.
Camberley Bates: Yeah. So many years ago, IBM, under the current CEO, did this once before. He wasn't the CEO yet, but he drove the purchase of Red Hat. At the Evaluator Group, we were all scratching our heads at $34 billion, doing the back-of-the-napkin math about how long it would take to return the investment on this thing, like it's never going to happen. I think it was 34 billion, maybe it was 43, I can't remember. One of those two.
Dion Hinchcliffe: It was a large number. It was-
Camberley Bates: It was a big-
Dion Hinchcliffe: … a huge number, yeah.
Camberley Bates: … big number, and this is not that big. But then again, it's not a complete platform play, though it is the platform play if what you're talking about is managing infrastructure as code across clouds. And they gave a lot of cred to Red Hat; they brought them into the fold. When you have client executives walking in to see the CEO and CIO, you're walking in with Red Hat. So I think there will be some similarities in the go-to-market on this one. Or is this just going to get rolled into the IBM stuff, as opposed to what happened with-
Dion Hinchcliffe: Well, I think a lot of people are hoping it doesn't get rolled into an IBM-only story. It's really valuable for IBM to be part of a bigger story. CIOs really want their IT to work with all the rest of their IT; they don't want these silos in their organization. So from the standpoint that HashiCorp provides IBM with credibility across clouds, that's a great story and one they should keep. They shouldn't mess with that, and I hope they don't. The IBM of old might not have really focused on preserving that, but the current IBM is very much on the upswing. People had almost written IBM off, but they are back, there's no question about it. And if they can do with HashiCorp what they did with Red Hat, this is going to do really well for them, even at this price.
Camberley Bates: So we won’t have Red Hat Summit now, we’ll have the Red Hat plus HashiCorp event or something?
Keith Townsend: Yeah. I'm really excited to see what the IBM Cloud folks do with this, because we don't talk about IBM Cloud enough. IBM Cloud does an amazing job working with some of the other hyperscalers. They augment your capabilities for running some of your traditional workloads in public clouds in a way that enterprises accept. And one of HashiCorp's original stories was: how do I take my mainframe and modernize it, and have a connector, this is what Consul did, from my new-world applications into my mainframe applications, and then run that in a cloud-like operating model? IBM will be able to expose some of the more interesting capabilities on the enterprise side of HashiCorp.
Camberley Bates: Yep. Well, we’ll see where this pans out. So thank you very much guys and thank you for listening in. And I think we got everything covered here that we’re going to do today. Did I miss anything?
Dion Hinchcliffe: No. That was a sweep.
Camberley Bates: Okay. There it is. That's a wrap, guys, and we will see you next week. Don't forget to like, follow, share, all that stuff, because you know what? Even the guy in the background doing all the video work is now listening to Infrastructure Matters, and that's really cool. Have a good day.
Dion Hinchcliffe: Thanks everyone.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm as Managing Director.
Her career has spanned all elements of sales and marketing, including a 360-degree view of addressing challenges and delivering solutions, achieved by crossing the boundary of sales and channel engagement with large enterprise vendors and through her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, "MAID," as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
Keith Townsend is a technology management consultant with more than 20 years of related experience in designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University. He is the President of the CTO Advisor, part of The Futurum Group.
Dion Hinchcliffe is a distinguished thought leader, IT expert, and enterprise architect, celebrated for his strategic advisory with Fortune 500 and Global 2000 companies. With over 25 years of experience, Dion works with the leadership teams of top enterprises, as well as leading tech companies, in bridging the gap between business and technology, focusing on enterprise AI, IT management, cloud computing, and digital business. He is a sought-after keynote speaker, industry analyst, and author, known for his insightful and in-depth contributions to digital strategy, IT topics, and digital transformation. Dion’s influence is particularly notable in the CIO community, where he engages actively with CIO roundtables and has been ranked numerous times as one of the top global influencers of Chief Information Officers. He also serves as an executive fellow at the SDA Bocconi Center for Digital Strategies.