In this episode of the Infrastructure Matters podcast, we cover the evolving landscape of IT infrastructure consumption and how traditional models have shifted with the rise of cloud computing and hybrid approaches. Pricing, procurement, performance, availability, scalability, security, and compliance all play a crucial role in determining the most suitable consumption model for different workloads. We also discuss considerations such as end-of-life planning and how to make sure the decision-making process aligns with business needs.
Topics include:
- Recent news items, including earnings releases from Microsoft, Google, Intel and Seagate
- The increasing presence of AI in earnings reports with the growth of generative AI
- N2W Software, a data protection provider focusing on AWS and Azure
- Evolving IT infrastructure consumption models due to cloud computing and hybrid approaches, including offerings from Dell, Hitachi, HPE, IBM, Kyndryl, Lenovo, NetApp and Pure Storage.
- Strategies for balancing business and technical requirements
You can watch the video of our conversation below, and be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.
Listen to the audio here:
Or grab the audio on your streaming platform of choice here:
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Transcript:
Announcer: This is the Infrastructure Matters podcast, brought to you by The Futurum Group. We explore the latest developments in hybrid cloud computing and the technology that underpins it. In each episode, we’ll dive deep into the latest trends and technologies that are shaping the hybrid cloud computing landscape. The Infrastructure Matters podcast is for information and entertainment purposes only. Please do not take anything reflected in this show as investment advice. Now your co-hosts Steven Dickens, Camberley Bates, and Krista Macomber of The Futurum Group.
Krista Macomber: Hello, everyone, and welcome to this next episode of the Infrastructure Matters podcast. My name is Krista Macomber, and I’m joined, as always, with Steven Dickens and Camberley Bates. Steven and Camberley, good morning. How are you both today?
Steven Dickens: Fantastic.
Camberley Bates: You know it’s a Friday, we’re recording.
Krista Macomber: Exactly. You know-
Steven Dickens: Always a highlight of the week to record this show with you two.
Krista Macomber: It’s a great way to close out the week for sure, and we do have a great discussion today. As always, we’re going to kick it off by discussing a few news items that occurred this past week. We’re recording this session on Friday, July 28, 2023. And then we have what I think is going to be a really cool discussion regarding the various consumption models for IT infrastructure. We’ll touch on what exactly that means as well as some considerations, so looking forward to digging into that. But first, to kick it off, Steven, why don’t we maybe start with you? I know that you covered a couple of earnings releases this past week, so any color you’d like to contribute on that?
Steven Dickens: Yeah, a big week for earnings. The big two names in the hyperscale cloud marketplace, Microsoft and Google, both announced earnings, solid sets of results for both. I just don’t know how the market reacts to these. One, the shares went up after trading. The other, the shares went down after trading. It makes zero sense to me. But the key point, I think, is that people were focused on GenAI, and there were lots of mentions of AI in their earnings announcements, unsurprisingly. I think there’s a buzzword bingo where CEOs and CFOs have to say the word AI at least 20 times in an earnings call these days.
But I mean, all joking aside, really solid numbers from Google, though it’s still too early. I got a chance to be on TV this week and talk about some of those numbers, and I published a Forbes article just this morning, which we’ll put in the show notes, that summarizes my take. The high-level message for me is a great set of numbers for Google; you could argue they may be taking some share. While everybody obsessed over Microsoft’s numbers slowing, 26% growth is still fantastic, by any measure, at their scale. Whilst the trajectory might’ve slowly tailed off, we’re talking about a tail off from north of 40% to 26%. So let’s all just get real and say a behemoth like Microsoft growing at 26% is still a fantastic set of numbers.
Krista Macomber: Absolutely. Steven, looking ahead over this coming quarter, what maybe is one thing that you’re keeping an eye out for each company?
Steven Dickens: Well, I think it’s going to be more and more about AI and how that starts to reflect itself in their numbers. We saw some product announcements at AWS’s New York Summit event that I managed to attend on Wednesday. I mean, I think this is going to be the consistent trend for maybe the next six months. Every set of announcements is going to have something in it from an AI point of view. We saw that from Dynatrace this week. We saw it from Splunk last week. I think every enterprise vendor’s going to be thinking, “How do I infuse more AI into our offerings?” I don’t see it as a standalone service. I think it’s just going to be infused in everything that enterprise vendors are doing.
Camberley Bates: Yeah, and that goes into… Like this week, we had the briefing with Dell and their latest generative AI items. They announced that at Dell Tech World: Project Helix, the server work they’re doing with NVIDIA. This was another progression of them getting out into the world. Hopefully, what they’ll be doing… I mean, they’re going to be on a pace here to kick all these things out, and this added services, this added an edge device to the offerings that they have, looking at where we’re going with the generative AI space. You are absolutely right, it’s going to be this constant drum roll for a while. At the same time, though, I’ll say that we’ve got so much work to do to get to where we can be really productive delivering, and trusting of, the generative AI that we’re going to deliver. There’s a lot, a lot of work to do. This is not a quick, “Let’s go,” flash in the pan and we’re there.
Steven Dickens: I think we’re in the early innings. This feels like the first…I mean, it’s interesting. I saw a post from IBM today on social that they’ve been doing AI for decades now. We can think back to Jeopardy, and we can think back to them beating chess masters. But I think, from a deployment of generative AI and these large language models, we’re still really, really early.
Camberley Bates: Right. Generative AI is a different game.
Steven Dickens: Yeah, for sure. For sure.
Camberley Bates: The other interesting thing about the earnings this week that we were watching, and going back and forth with the analyst team about, was both the Intel and the Seagate earnings announcements. What I love is that their earnings are down, their numbers are down, Seagate’s number is 39% down or whatever, and you can blast away at what those numbers are, but at the same time, the stock was up 7%. It’s like, “Okay, so how does that work?” Well, both these firms have been forecasting, saying, “This is how bad it’s going to look, guys. This is how bad it’s going to look.”
It’s kind of like, to me, going into taxes and thinking you’re going to owe a zillion dollars in addition to what you’ve paid, and the accountant comes back and goes, “You only owe half of that,” and you go, “Yay, I just won. I just won.” There’s a little bit of that going on. So with the Intel announcement, the big news there is that the trajectory of the slow-down is not going where it was before. It’s definitely slowing down and feeling like maybe, just maybe, especially on the PC side…
And I don’t track the PC side, so I’m not an expert there, but that’s what they were talking about, is that we’re hitting a bottom of a trough here, and can pick up on the Seagate side, which is all about hard drives, and who’s absorbing most of the hard drives right now is the public cloud. So that goes in line with what you’re talking about, the Googles, and the Azures, and that kind of stuff, and seeing where their numbers are at, and settling in. Now, they’re starting to absorb the technology that they’ve had, maybe sitting on racks or whatever, and bringing that to market.
Steven Dickens: So on the Intel earnings, it was interesting. I think for the Intel piece for me, they’ve reached the bottom. This isn’t stock advice, but the feeling I got away from it, and Daniel commented on this in some of his social media posts… He spoke to Pat Gelsinger this week after earnings. Kind of has a feel of they’ve reached the bottom of the bad news, from an Intel point of view. There’s a lot of catch up to do, especially in GPU for those guys. But I think it feels like the end of the bad news, from an Intel point of view.
Krista Macomber: They’re not going anywhere from that perspective, right? Yeah, that’s definitely interesting and a little bit of a correlation into our bigger topic. So not related to earnings, but I did, this week, have a chance to catch up with N2W Software. They are a data protection provider focusing on AWS and Azure environments. So it’s interesting that they’re picking their spot, choosing not to go up against some of the household names in data protection, if you will, that are long-established in more traditional workloads, and really trying to carve out that niche. They had a number of announcements with their updated software. The two that really jumped out at me: one, they have the ability to replicate data for long-term retention from AWS into Azure, and vice versa from Azure into AWS.
So when we think about parlaying our conversation into consumption models, it’s really a great option for customers because they have a new option for long-term retention and maybe a little bit more flexibility when it comes to placing their bets in terms of what is going to be most cost-effective for them over the long term. And then another feature that they added, among others, was the ability in Azure environments to have essentially disaster recovery playbooks that can be executed for testing purposes to prove that ability to recover, which is becoming very important, especially as we think about being resilient against ransomware and other cyber attacks, and to be able to automate that recovery as well.
So with that, why don’t we turn our attention now to starting to talk about what exactly is a consumption model. I can start by maybe giving a few comments, and then Camberley and Steven, I’m sure that you’ll want to chime in and add your view as well. So this has obviously been going on for quite some time now. We’ve been talking a lot about the shift to the cloud. I know in one of our earlier podcasts, we had a conversation regarding possible repatriation from the cloud and some dynamics that are going on there. So really, when we think about a consumption model, from my perspective, it breaks down into a couple of key areas.
The first is really centered on the pricing and procurement. So when we think about software, typically that’s moving away from a perpetual license that’s deployed on premises to subscribing to something on more of a subscription basis. Increasingly, we’re seeing that hosted in the public cloud. From an infrastructure perspective, it’s pretty similar as well. It’s really looking at, “Okay, I’m either buying a box and deploying it on-prem, using that upfront capital expense, or maybe I am shifting to using some of these infrastructure as a service offerings that are hosted in the cloud.”
The only other thing I think I would comment as well is that there is sort of a managed consumption model where maybe the infrastructure is either co-located in a secondary data center, or maybe it is still hosted on the customer’s site, but they’re working with a managed service provider for day-to-day management and to streamline some of those capabilities. So again, my thoughts on how I’d kick it off, but Camberley, Steven, I’d love to hear what you might have to add. You can go.
Camberley Bates: So I look at it as phasing. We started out, as you were saying, with the initial ones where I wanted to buy my compute, my storage, by the drink. And this is where Dell… I’ll rattle off some of the names so people understand who the players are. Dell has APEX, Hitachi has EverFlex, HPE has GreenLake, and they’re among the ones that started with some of this on-prem. IBM has the services that they bring. Kyndryl has their services; they’re well known as an outsourcer. You have TruScale from Lenovo. NetApp has Keystone. Pure has their as-a-service offerings as well as their Evergreen products. All of them have this wide variety of consumption models because there is no “one size fits all” with everybody’s financial books. One of the latest ones that we’ve seen is end users wanting to own the hardware, with the software on top of it allowing them to do the subscription model, consumption model, if you will.
Where we first started out was this managed service piece of it, much like the outsourcing… No, not quite like the outsourcing. But we did roll in a box and say, “Okay, so you’re going to use… Let’s take storage initially. I needed 500 terabytes, and I need it at this performance level, this SLA, et cetera, and that’s what I signed for, and I paid by the drink.” I usually signed a two-year or three-year contract, and as I needed more, I trued up every month with how much I used, so I could burst and come down. Since then, from that original view of it that was totally managed, we’ve come to where, in some instances, it’s less and less of the fully managed, and it’s just the by-the-drink thing, or it’s this slice and dice between the CapEx and subscription kind of area. So there are lots of flavors in there. We’re getting back to that very creative financial selling that I am sure all the sales reps are doing when they’re representing their products to the customer.
Steven Dickens: I think to echo some of Camberley’s points and go back to your original framing, I think we used to talk about on-premises and public cloud. It was very much, “Oh, I buy some infrastructure and I’ll CapEx it on-prem. And then my only other option, if I don’t want to own infrastructure, is to go to the public cloud.” I think if you look forward now, and to echo some of your comments, Camberley, there’s a plethora of options. On-prem doesn’t mean what on-prem meant five or six years ago. You’ve got the ability to put that in an Equinix, or a Digital Realty, or another co-lo provider’s data center and get “kit you own.” I’ll put that in quotation marks for people who are listening to this on audio only. Yes, you have got a relationship with the hardware provider and you sort of technically, maybe in some way, shape or form, are either owning, or renting it, or consuming it by the drink, but it doesn’t have to be in your data center.
A lot of the early conversations around public cloud were, “I don’t want to be in the data center business, therefore the answer is public cloud.” I think it’s a lot more evolved than that now, and you can get out of your data center but still have a sovereign cloud or a private cloud in a co-location. And then you go into the other piece that you mentioned. I think there’s two components of this for me, and Camberley was talking about it also. There’s the hardware and software component of this, but the other thing that customers and enterprises need to be thinking about is the financial model. It’s as much about the flex up and flex down, what the commitments are, whether it’s the length of time, the sizes of the increments, how much you’re paying on a regular basis. Can you truly flex down?
I think that’s a restriction with a lot of the models. It’s easy to flex up from a baseline. But can you truly flex down? If you go back to your example of, “I need 500 terabytes, but maybe next quarter I only need 300, can I go down from the 500 base?” That’s always an interesting discussion. So I think there’s kind of two components of it for me. Well, maybe look at it in three ways. There’s the, “Where is the kit?” And that’s a vector of, “Is it in my data center? Is it in a co-lo? Is it managed?” There’s, “What is the infrastructure itself?” And then there’s the financial model that wraps around it.
Camberley Bates: Yeah. If you think about it, we saw this definitely on the data side of the house where, when you typically buy a system, you would look at what your requirements are for usually three or five years. Maybe a five-year depreciation schedule that you want that box to last for. You buy a certain amount upfront, and you always want to have some headroom. If you’re crazy, you run up to 80%. If you’re less crazy, you’re running at 60% capacity so you have that bump kind of thing. In these scenarios, what you’re doing is buying just what you’re eating: “I’ve gone to the grocery store, and I buy dinner tonight. I’m not buying for the entire week or the entire year or whatever.” So this allows you to do that.
Now financially, you’ve got to run the numbers to see if this makes sense for you, depending upon how predictable your numbers are, because of course, once I buy a system, I can usually add shelves to it or something like that, so I’m not necessarily buying everything upfront. So those numbers have all got to be run through my financial tools. I know one of the things that we’ve been doing with some of our IT end user clients is running those models for them, helping them understand where the trade-offs are and those sorts of things.
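The trade-off Camberley describes, buying capacity upfront with headroom versus paying by the drink, can be sketched with a few lines of back-of-the-envelope math. Everything below (prices, the growth curve, the headroom assumption) is a hypothetical placeholder, not real vendor pricing:

```python
# A rough sketch of the buy-upfront vs. pay-by-the-drink comparison.
# All prices, growth figures, and the headroom assumption are hypothetical.

def capex_cost(peak_tb, price_per_tb_upfront, headroom=0.4):
    """Buy enough capacity upfront to run at ~60% utilization at peak."""
    purchased_tb = peak_tb / (1 - headroom)
    return purchased_tb * price_per_tb_upfront

def consumption_cost(monthly_usage_tb, price_per_tb_month):
    """Pay each month only for what is actually used (flex up and down)."""
    return sum(tb * price_per_tb_month for tb in monthly_usage_tb)

# Hypothetical three-year profile: usage grows from 300 TB to 500 TB.
usage = [300 + round(200 * m / 35) for m in range(36)]

buy = capex_cost(peak_tb=max(usage), price_per_tb_upfront=100)
rent = consumption_cost(usage, price_per_tb_month=4)

print(f"Upfront purchase (with headroom): ${buy:,.0f}")
print(f"Consumption over 36 months:       ${rent:,.0f}")
```

Which side wins depends entirely on how predictable the growth curve is, which is exactly why running these models per customer matters.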
So one of the other considerations here is, if you’re going to look at managed-type services, and these are usually called outcome services, “What are the outcomes and the commitments that you are getting from the particular vendor that’s providing it?” For instance, “What’s my SLA?” We all know the SLA up in the cloud is what? Three nines. But if you’re doing an outcome-based managed service, whatever it is, it’s got to be more than three nines on premises. So you’re looking at a five nines kind of consideration. That’s what most of the folks are doing when they’re bringing those kinds of offerings, as well as, “What are the management services behind it? Who is doing that work? How responsive are they, et cetera?”
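The “nines” shorthand in that SLA discussion maps directly to a downtime budget per year; a quick sketch of the arithmetic:

```python
# Translate "nines" of availability into allowed downtime per year.
# Three nines (99.9%) allows roughly 8.8 hours of downtime a year;
# five nines (99.999%) allows only about 5.3 minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(nines):
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

for n in (3, 4, 5):
    print(f"{n} nines -> {downtime_minutes_per_year(n):,.1f} minutes/year")
```

That two-orders-of-magnitude gap is why an outcome-based on-prem service promising five nines is a very different commitment from a standard three-nines cloud SLA.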
Steven Dickens: I think that’s an interesting point to make here. As enterprise architects look at, “Do I put this in the public cloud? Do I put it on-prem in my own data center? Do I buy the kit? Do I rent the kit? Do I go with a consumption model? Do I put it in a co-lo?” I think some of the rubric for that… You talked about SLAs and availability; I think performance is another rubric. We talked about some of this with workload placement. I think you can apply that same rubric and decision-making process that we talked about in workload placement, whether it’s performance, availability, scalability, security, economic factors or environmental factors.
There might be a case for putting a database workload on infrastructure that you own in an Equinix data center, cross-connected across the data center to an AWS point of presence, because you want an AWS front end, but you want the database, and the data, and the security. I was going to ask you this question as well, Krista. Maybe you need performance, maybe you need availability on the data layer, and maybe you need some enhanced security on that data layer. So I think it’s more evolved than, “I’m either doing all of it here or all of it there.” Are you seeing that from a data perspective, Krista, in some of the security conversations you’re in?
Krista Macomber: Absolutely. 100%, Steve. You really hit the nail on the head, and I’m glad you mentioned it because that was going to be the other component that I was going to bring up: security and compliance. I think as long as the public cloud has been around, security and compliance have been two potential barriers to adopting it. Of course, as usage has evolved, we’ve run into a number of these other factors that we’re talking about, things like overall cost economics, for example, that have, as we touched on in our other episode, in some cases encouraged workloads to be brought back on premises. But certainly, we’re seeing that customers have more fine-grained options in terms of using some of these hybrid and multi-cloud approaches to meet not only some of those areas that you’re mentioning, like performance and availability, but also their data security and data protection requirements.
You need to be able to meet your backup windows, and you need to have appropriate recovery times, depending on what application or piece of infrastructure is being protected. Those things certainly make a difference in the decision making, but so does the security and compliance end of things. We have customers, depending on their industry, who say, “I cannot utilize the public cloud for a backup repository, for a long-term retention storage implementation, or even for an air gap or data isolation type environment, simply due to the regulations of my industry, or due to concerns from the business that this infrastructure is not going to be secure.” So certainly, we’re seeing that can play a significant role in the decision making there.
Steven Dickens: Yeah, agree completely.
Krista Macomber: Absolutely.
Steven Dickens: The challenge for these enterprise architects is that all of these platforms are valid, and all of these options often make a lot of sense. The decision making about which one you use is the crucial piece, and when you use it, and for what workload. We’re back to workload placement, but it’s exactly the same for these consumption models.
Krista Macomber: Absolutely.
Camberley Bates: One of the other things, this is kind of… Go ahead, Krista. I’ll bring another setup. Go-
Krista Macomber: No. Please go ahead.
Camberley Bates: Well, one of the other things that we were looking at, we do create a matrix of considerations, or features, functions, and capabilities, when we evaluate some areas. One of the things on there is, “What is the consideration at end of life?” When you lease a car, that end-of-life situation is important, and your mileage may vary. So how does that work in terms of when you want to flow into the next thing?
Because very often, especially when we come from the data side, the IT guys would take the approach of, “I’d buy the best and the brightest for my top applications. And as that came to end of life, that older gear would then move into another life, if you will, handling some older data that I didn’t really care as much about, where I don’t have to have the highest availability, reliability, speed, or whatever.” So that changes maybe what’s on the footprint, what you have over the long haul, and how I manage my technology tiering, if you will. I think that happens the same way, and I’m not quite as familiar… Steve, you probably know it. Does that happen the same way with the servers as well, that they-
Steven Dickens: That end of life, sort of what goes on with the server. It was interesting. I was out in Lenovo’s factory in Hungary, and they’re taking a lot of kit back in, refurbing it for a second life of service, maybe in emerging markets, maybe for different workloads. Obviously, there’s an eco component of that. If we make these things and they only have a three-year life, and then we’re taking them to landfill, that’s not good for anybody when that server could have maybe another four- or five-year life doing something else. Maybe that’s emerging markets, maybe that’s a less critical workload. So I hadn’t put that in the equation, but I think it’s definitely something to think about, especially as ESG concerns come in.
Krista Macomber: It’s growing. We have conversations with clients on the vendor side that say, “This needs to be a part of the conversation with customers because they want to know what will the overall economic impact be.” The only other thing that I might add here too is the ability for the customer to access the data, especially as we move into managed services or hosted services. One thing that we encourage IT operations to look at when they’re looking at these contracts is their ability to access their data once the contract has terminated, because sometimes, there are some stipulations around that. So really understanding what access will there be to the data after that contract is concluded is important.
Steven Dickens: Exactly.
Camberley Bates: So it reminds… The thing to think about is, “Okay, so it’s no longer just a technology decision. It is a financial decision where you’re going to have to plug in the numbers and get involved with your controller to figure out what your budget looks like.” That’s the other piece of this, and that’s why some people are looking at it. Let’s say, as an IT guy, I’m running out of capacity someplace. I need more, but I don’t necessarily have to go back to the well and ask for the CapEx. I have a consumption that ticks up, and, “Wow, that is a whole lot less pressure on me as the guy that’s trying to manage that.” That’s one of the reasons why this is really super attractive, because I just don’t have to go through that process every time.
Steven Dickens: The flip side of that is FinOps because that can run away with itself very quickly.
Camberley Bates: As a person that’s run a business, I keep saying, “Death by 1,000 cuts.” So it’s like every time someone runs their credit card for 100 bucks, or 1,000 bucks, or 2,000 bucks, or whatever it is, it’s kind of like, “You know what? Pennies do matter, we’ll go back to Rockefeller.”
Krista Macomber: Right?
Steven Dickens: Yes. Watch the pennies and the pounds look after themselves.
Krista Macomber: Exactly.
Camberley Bates: That’s a good one.
Krista Macomber: I think that could be its own can of worms for its whole separate podcast.
Camberley Bates: Yeah.
Krista Macomber: So with that, I know we’re getting close to time here. I think that might be a great place to wrap up our conversation for today. We do want to thank everybody for watching, for listening, whatever your preferred platform choice is. Please make sure to like and subscribe to this channel, to this video. That way, you can make sure to receive updates on a weekly basis when our new episodes come out. And of course, Futurum Group, as a team, we’re very active on all the major social media platforms. So we, again, appreciate you joining today, and we look forward to having more conversations with you on those platforms. I want to thank Camberley and Steven, as always, for a great conversation.
Camberley Bates: Great.
Krista Macomber: We’ll see you next time.
Camberley Bates: Thank you, Krista.
Krista Macomber: Thank you.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.
Her career has spanned all elements of sales and marketing, including a 360-degree view of addressing challenges and delivering solutions gained from crossing the boundary of sales and channel engagement with large enterprise vendors and her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing it from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.
Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.