In this episode of Infrastructure Matters, hosts Krista Macomber and Camberley Bates walk through current perspectives on VMware transitions (or not), plus announcements from NetApp on cloud and Druva’s offerings.
Key topics include:
- NetApp announced BlueXP Workload Factory, aimed at automating and managing cloud workloads like SQL, Oracle, and SAP, as well as VMware environments.
- Other NetApp announcements covered include expanded services with cloud providers, including block storage offerings with AWS, data protection for Azure, and GenAI enablement.
- GenAI highlights, with NetApp ONTAP leveraging capabilities like FlexClone and Snapshots for efficient data management and model training.
- Druva’s expansion of threat hunting capabilities and the introduction of managed detection and response services within their backup environments.
- The critical decisions enterprises must make regarding VMware, from potential migrations to competing platforms like Kubernetes-based solutions, and the implications for business continuity and competitive edge.
- The strategic challenges enterprises face when considering VMware migrations, including the complexities of managing costs, operational impacts, and infrastructure readiness for alternative platforms.
You can watch the video of our conversation below, and be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Transcript:
Krista Macomber: Hello and welcome to episode 48 of Infrastructure Matters. I’m Krista Macomber, and I’m joined by my co-host, Camberley Bates. How are you doing today, Camberley?
Camberley Bates: Doing pretty good. I’ve been running around on a plane and trying to stay healthy. There’s too many people catching COVID right now on the third round around the world or something.
Krista Macomber: I know. I know. I’m hearing a bit about it going around. It’s very unfortunate. I think Steven, our third co-host, is on a beach somewhere, so hopefully he’s having a healthy and relaxing vacation. I’m sure we’ll hear all about it when he gets back.
Camberley Bates: Yeah.
Krista Macomber: Yeah, but so just to jump right in here, I know we’ve had a few summer announcements here. Camberley, you and I were both on a call earlier this week with NetApp regarding some updates to their portfolio, so why don’t you kick us off and give us an overview of what they’re investing in these days?
Camberley Bates: Sure. There was a huge number of announcements in this payload, line items kind of thing. We’re finding that a lot is happening, so it’s tough to keep track of everything. Chuck Foley did the briefing with us, and then I also got on the phone with him this week to go through, okay, so exactly, really, what does this mean and that kind of thing. So we’ll have a research note coming out on it, but I’ll give you the highlights. The big item that they announced is this thing called BlueXP Workload Factory. And BlueXP is more or less a control plane that enables you to manage your workloads on prem. And people have said, I’d like to use this for my cloud solutions, because their cloud solutions are first party up there. So what they are doing is, BlueXP Workload Factory is specifically for cloud workloads, and specifically to help customers deploy and automate some of the operations of cloud resources for very, very specific workloads.
And those are databases such as SQL, Oracle, and SAP, plus VMware environments, so deploying in those environments, and then some GenAI environments, and I’m still digging into that one. But more or less what’s happening here is you’re templatizing the workload best-practice deployments for these environments. And because they are complex, yes, it’s supposed to be easy to spin everything up in the cloud, but if you’re going to spin it up for high availability, across zones and all that kind of thing, it’s not just falling off the chair. It’s not that easy. This is a way to make sure that you’ve got consistency, that you’ve got everything buttoned down, that it’s automated. Once you select some of the items that you want to have, it automatically goes out and does the work. Right now it’s just for AWS; it will extend to Azure and Google by the end of the year. But it’s this mini control plane that they’re doing up there. So that’s pretty slick stuff. And I also think they have some stuff going on in data protection. Your area, right?
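To make the idea of a templatized, best-practice deployment a bit more concrete, here is a minimal sketch in Python. The template fields and the deploy_workload helper are hypothetical illustrations of the concept only, not the actual BlueXP Workload Factory interface.

```python
# Hypothetical illustration of a templatized, best-practice workload deployment.
# The template schema and deploy_workload() are NOT the BlueXP Workload Factory
# API; they only sketch the idea of encoding HA and protection settings once
# and reusing them for every deployment.

from dataclasses import dataclass, field
from typing import List


@dataclass
class WorkloadTemplate:
    name: str
    workload: str                  # e.g. "sql-server", "oracle", "sap"
    cloud: str                     # "aws" today, per the announcement
    availability_zones: List[str]  # spread across zones for high availability
    storage_service: str           # e.g. "fsx-ontap"
    snapshot_schedule: str         # data protection baked into the template
    tags: dict = field(default_factory=dict)

    def validate(self) -> None:
        if len(self.availability_zones) < 2:
            raise ValueError("HA template requires at least two availability zones")
        if not self.snapshot_schedule:
            raise ValueError("A snapshot schedule must be part of the template")


def deploy_workload(template: WorkloadTemplate) -> None:
    """Pretend-deploy: a real control plane would drive cloud APIs here."""
    template.validate()
    print(f"Deploying {template.workload} as '{template.name}' on {template.cloud}")
    for zone in template.availability_zones:
        print(f"  provisioning storage ({template.storage_service}) in {zone}")
    print(f"  applying snapshot schedule: {template.snapshot_schedule}")


if __name__ == "__main__":
    sql_ha = WorkloadTemplate(
        name="payments-db",
        workload="sql-server",
        cloud="aws",
        availability_zones=["us-east-1a", "us-east-1b"],
        storage_service="fsx-ontap",
        snapshot_schedule="hourly",
        tags={"env": "prod"},
    )
    deploy_workload(sql_ha)
```

The point of the sketch is the design choice: the hard-won deployment decisions (zones, storage, protection policy) live in the template and are validated automatically, rather than being re-decided by hand on every deployment.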
Krista Macomber: Yes. Yeah, and that was one thing I was going to mention. I thought it was interesting in Chuck’s presentation that, so what we’ve been seeing from NetApp the last couple years has been this emphasis on the fact that they bake in their data services as part of ONTAP, which is their operating system. So when we think about their Snapshot capabilities, for example, when we think about some of the recovery operations, that was one thing that they talked about in the context of this Workload Factory, was not only is everything templatized, but that includes, like I mentioned, your data protection capabilities. That’s important of course when we think about ransomware and all the cyber attacks that are happening, but also any other disasters that might need to be recovered from, plain old user error and data deletion. I was certainly… Again, it fit with the strategy that we’ve seen from NetApp certainly, but also, again, I was just kind of glad to see that emphasized there.
Camberley Bates: So they also used this time to, like a lot of other people, emphasize the capabilities that they have for GenAI, specifically RAG, where you’re doing training, and talking about, okay, so I want to use my own data, obviously sitting on a NetApp file system, to train my model, and what are they bringing to bear to enable that? And they have their FlexClone, their snapshots, their highly super-efficient capabilities, non-impactful on production. So that’s the ability to FlexClone, snapshot, et cetera, that data, and to be able to use that data to rapidly train, take another snapshot, train again, and also log those snapshots so you’ve got a record of where you’re creating the data. The other one that was kind of interesting was this thing called FlexCache that they have, where you’re able to mount a volume into a different location. So why do you really care about that?
And what you’re doing is, let’s say my data is sitting in the cloud in the UK and the GPUs, because we have fewer GPUs available around the world, are sitting in LA, and I want to move. So what I can do is I can use FlexCache to mount that volume to a different location and use the GPUs in that other location. So it’s kind of slick. I mean, it’s kind of a unique space of where it’s at, and I think it’s a temporary situation to address the GPU shortage, but it’s a where-is-your-investment kind of thing.
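As a rough picture of the snapshot-and-clone training loop described here, the minimal Python sketch below shows the pattern: snapshot the data, clone off production, train against the clone, and log which snapshot fed which run. The take_snapshot, clone_from_snapshot, and train_model helpers are hypothetical placeholders, not ONTAP or FlexClone API calls.

```python
# Hypothetical sketch of using storage snapshots for reproducible model training.
# take_snapshot(), clone_from_snapshot(), and train_model() are placeholders for
# whatever storage and training tooling is in use; they are not NetApp APIs.

from datetime import datetime, timezone
import json


def take_snapshot(volume: str) -> str:
    """Return an identifier for a point-in-time, space-efficient snapshot."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{volume}@{stamp}"


def clone_from_snapshot(snapshot_id: str) -> str:
    """Return a writable clone path so training never touches production data."""
    return f"/clones/{snapshot_id.replace('@', '_')}"


def train_model(data_path: str) -> dict:
    """Placeholder for the actual training or RAG-indexing job."""
    return {"data_path": data_path, "status": "trained"}


def training_run(volume: str, lineage_log: list) -> dict:
    snap = take_snapshot(volume)          # capture the data as of right now
    clone = clone_from_snapshot(snap)     # near-instant, space-efficient copy
    result = train_model(clone)           # train against the clone, not production
    lineage_log.append({"snapshot": snap, "model_input": clone})  # record lineage
    return result


if __name__ == "__main__":
    log = []
    training_run("corp_docs_vol", log)
    print(json.dumps(log, indent=2))      # auditable record of what data trained what
```

The lineage log is the part worth noting: because each run records the snapshot it trained against, you keep an auditable record of exactly which data produced which model.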
Krista Macomber: And I think from my perspective being our data protection person, the ability to use the snapshots for training, it’s funny, I’ve had so many conversations, especially over the last year regarding protection of the data that’s being used to train these models, protection of the models themselves, things of that nature, but not so much around using a snapshot or something like that to train the data. I think it’s, and to your point regarding GPU shortages and things of that nature, it’s an interesting use case and I wouldn’t be surprised if you start to hear a little bit more of that perhaps from other vendors moving forward.
Camberley Bates: Right. And if you’re in the cloud, which they have, and they’re talking about using the data in the cloud, having super space efficiency up there is important because-
Krista Macomber: Absolutely.
Camberley Bates: Another one, and I felt like they snuck this one in there, and I think it’s pretty significant actually: they announced, for FSx for ONTAP, which is AWS’s first-party offering and a file-based environment, an NFS offering, that they now have NVMe over TCP, which means it’s a block storage offering up there. And so what they’ve done is they’re delivering the capability to really serve the hard-to-please, performance-based database systems, VDI applications, maybe even VMware environments, that they’re operating on. So a nice announcement for them, an expansion of the market. As I said, this announcement was primarily looking at what I’m doing up in the cloud environments and how I’m operating up there. So it was a nice wrap-up there. As I said, a lot of little pieces, but they weren’t… Actually, when you unwrap it, they’re not that little. They’re pretty big.
Krista Macomber: I know. I know. And just reflecting back on the call, it’s kind of like, oh yeah, they mentioned this. And another thing I was just thinking that they were talking about on the call was around their data classification capabilities. And when we talk about the integration of these capabilities into BlueXP, for example, to apply them to these cloud workloads, there’s a lot that can be said there for the ability to not only identify your sensitive and your compliance data, which is obviously important, but also, we were talking about efficiency, so potentially identify redundant data, especially if you are thinking about running AI in the cloud like we were just talking about. Another thing that there are implications for, I know we’re going to get to this a little later in the conversation, but for anyone, for example, that’s thinking about migrating perhaps from VMware to the cloud, we’re hearing a lot of buzz about that. Obviously those data classification capabilities are going to be critical there. Anyway, like you mentioned, more to come. A lot of little pieces there, but certainly a powerful announcement.
Camberley Bates: Very much so. And you had some as well. I mean, it’s July, so we don’t get a whole lot of announcements going on, so maybe you want to talk about one of the other ones that came out this week.
Krista Macomber: Yeah, so this one is dropping, I believe, the day before this episode is going to air. And this is a very interesting development that I’m seeing across the data protection space. Specifically, Druva is expanding its threat hunting capabilities in terms of the indicators of compromise that it supports, and it’s also offering managed detection and response for Druva backup environments. I wanted to call this out because this is a trend that I was alluding to, in terms of data protection companies getting not only more into the ability to detect attacks that are occurring in that protection environment, but also the ability to support incident response. A great example is we saw Veeam acquire Coveware in late April of this year, and it’s a very smart-
Camberley Bates: Remind the viewers what that does.
Krista Macomber: Yeah, so Coveware is an incident response company. So what Veeam is doing is integrating Coveware’s intelligence on threat actors and their behavior, and Coveware is also consulting with clients on how to best approach incident response. The Coveware side is, on one part, information and technology that’s going to be integrated into the Veeam platform, but it’s also the ability to, again, consult around incident response. For Druva, they’ve long been driving this as-a-service delivery model. So what I’m seeing come out of this announcement is more of these capabilities baked into the platform, so kind of 24/7, 365 monitoring for the threat actors.
Having runbooks and automation there to not only maybe lock down backups or snapshots, but also maybe to kick off automated recovery processes to support those incident response workflows. I just find it very interesting because, to me, incident response is very strategic. It encompasses many different organizations in the company, and of course, when we think about reducing the amount of downtime coming out of a cyber attack, it’s again really just critical. And these announcements are just showcasing the role that data protection can play there, and it’s really helping these companies to evolve into that very strategic relationship with customers.
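As an illustration of the kind of runbook automation being described, the Python sketch below chains detection of an indicator of compromise to locking backups and kicking off recovery. The helper functions are hypothetical stand-ins for this pattern; they are not Druva or Veeam APIs.

```python
# Hypothetical incident-response runbook sketch. The helpers are placeholders,
# not vendor APIs; they illustrate chaining detection, containment, and
# recovery steps in one automated flow.

from datetime import datetime, timezone


def detect_indicators(backup_set: str) -> list:
    """Stand-in for threat hunting across backup metadata (IOC matching)."""
    return ["suspicious_encryption_rate"]  # pretend an indicator was found


def lock_backups(backup_set: str) -> None:
    """Make recent recovery points immutable so an attacker cannot alter them."""
    print(f"[contain] locking recovery points for {backup_set}")


def start_clean_recovery(backup_set: str, target: str) -> None:
    """Kick off recovery from the last known-clean point into an isolated target."""
    print(f"[recover] restoring {backup_set} into {target}")


def run_runbook(backup_set: str) -> None:
    iocs = detect_indicators(backup_set)
    if not iocs:
        return  # nothing detected, nothing to do
    print(f"[alert] {datetime.now(timezone.utc).isoformat()} IOCs found: {iocs}")
    lock_backups(backup_set)                                    # containment step
    start_clean_recovery(backup_set, "isolated-recovery-env")   # response step


if __name__ == "__main__":
    run_runbook("prod-vm-backups")
```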
Camberley Bates: Yeah, and that kind of glides a little bit into this next discussion. Randy Kerns and I were traveling this week and had the opportunity to really talk about some of the client engagements he’s involved with. Many of you know Randy from Evaluator Group, and he’s still involved with some of our customers and IT end users, that kind of thing. And part of that discussion was talking about what’s on the minds of the data guys in the IT operations places and the data infrastructure guys. We were chatting about what’s critical to them, because that’s who he’s talking to all the time. And while we have spent so much of our time talking about GenAI and AI applications, which is really at the BU side of the house and the executive side of the house, what’s hitting the guys on the floor, what they’re dealing with as he sees it, is two primary issues.
Number one, cybersecurity and the constant work that’s there. And I think you have been hearing it, because you’ve been talking to CISOs, et cetera, about the constant pounding that’s going on. It’s just over and over and over. And the other one was VMware and the consternations around… And so it’s not only him talking about it, but lately, I don’t know, it just seems like every conversation I’m having is, we need to talk about VMware, we need to talk about VMware. And the question is, what are you seeing? So I thought I would talk about what I’m seeing right now and where our predictions are, because they haven’t really changed from when all of this hullabaloo started in the spring.
Because most enterprises are looking at, what am I going to do? My bill just went up by 30%, 40%, or whatever it’s going up to, so I need to address the issues that are going on there. And so I’m holding true to a couple different things. One is that we’re going to see the small business guys probably migrate off of some of the VMware environments. Scale Computing and Nutanix have got some fabulous offerings that are there. They are virtualized, they’re integrated systems that can be implemented by these guys. So we’re probably going to see that migrate off of VMware. The next guys up, the larger companies, are going to make some decisions, and they have some hard decisions to make. And the biggest area where they’re having to make the decisions is around people, processes, and procedures.
Krista Macomber: Interesting.
Camberley Bates: Well, the amount of investment. The bigger the company is, the more processes and procedures you have, and with how much you’ve invested in those people, the cost is huge.
Krista Macomber: Oh, absolutely. I mean, I’ve been talking about it from the standpoint of, just like you mentioned, the complexity, the risk, the cost of migrating off of VMware for these large shops. But in my mind I wasn’t necessarily framing it from the standpoint of the people behind the scenes and sort of the processes. So it’s very, very true.
Camberley Bates: I mean, it’s the same thing we looked at when we’ve done any other big migration that comes over.
Krista Macomber: Yeah.
Camberley Bates: And they’re big migrations. So when we looked at it, we said, okay, so a lot of the new development… Maybe the new development is going to happen on a Kubernetes platform, OpenShift, CEC, whatever, Rancher. So I start developing up there. But the issue on that side is, while we’ve done an awful lot of work on the data infrastructure side for persistent data capability, is it really ready to take on some of these higher-availability database applications that have to deliver, or is it still really designed more for the web apps? Maybe the GenAI stuff is all going to get developed there. So we’ll still have a set of items that are going to stay on a virtualized server environment that are database oriented, very block oriented, et cetera, in that space.
And so those decisions about where we’re going to have… I still believe that we’re going to end up with multiple platforms, and also that what is available right now on maybe OpenShift or SUSE, as an open source virtualized server environment, is not a complete solution that could compete really well with an entire VCF kind of environment. It’s not as easy to use. So we’re going to still… Those choices that they’re having to make are extremely difficult. Then the last thing that I think will happen is there will be some cost or pricing negotiations going on. The question is, is that a one-time negotiation with VMware so that maybe for the next two years you don’t have to face it? And what happens after those two years? Where does that go? Frankly, I wouldn’t want to be the CIO having to make these decisions.
Krista Macomber: I was just thinking that, Camberley. It’s almost like we have the easy job, right? In this scenario we can kind of analyze, maybe provide some guidance, but your CIO has the difficult job, right? Absolutely.
Camberley Bates: And it’s not just a TCO decision, it’s not a total cost of ownership decision, because usually when you look at total cost of ownership, I’m looking at, okay, if I build this stack of systems, this is what it’s going to cost me to do this virtualization, this network, this data storage, this whatever. Maybe, maybe not. Maybe it’s somewhat less than the VCF stack. Okay, great. Whatever. But how do you put, and we often call them soft costs, but they’re not, how do you put a value on that? And then you have the conversion, or conversion costs, and what that takes in terms of, if I take my eye off the ball on the critical applications that I’ve got to deliver in order to enable this migration, do I fall behind competitively, and how do I put a… You have to put a dollar on that.
And that reminds me, many, many years ago when I was working for IBM, I was dealing with a client that had another vendor’s mainframe, that’s how long ago it was. And even though they knew that company eventually would probably be out of the mainframe business, which they are, we were talking about having them migrate to the IBM mainframe, this is the eighties. And we’re dealing at the board level in this situation, and the board comes back, the executives come back, and they say, “We know we should be on the IBM mainframe, it makes all the sense in the world strategically. It’s where we need to be, but we cannot stop development because we’ll be too far behind.” A competitive situation. And as a twenty-year-old sales rep working for IBM at the time, it was devastating, but… It’s all about me, right?
But then you think about what has to happen strategically, and their business decision was probably the right decision for the company: at least for right now, I can kick the problem down the pike about five more years before I actually have to make any kind of decision on it. So that is my long soliloquy on where VMware is and the decisions that are having to be made. But the recommendation to the IT guys is, you have to look at all of the factors in terms of the business impact, the operational impact, as well as the infrastructure cost impact. And there’s no easy answer here. There is no easy answer.
Krista Macomber: There is not. There’s definitely not. All right. Well, Camberley, unless you had anything else, I think we’re… Awesome. All right, well I think we’re-
Camberley Bates: Well, let everybody just, you know, tune off, kind of dust the sand off your feet, go jump in the water again, and then come back next week and listen to our next podcast.
Krista Macomber: Absolutely. A great conclusion. Thank you, Camberley, and thank you everyone for listening. Like Camberley mentioned, please make sure to subscribe so that you don’t miss any of our future episodes. We are producing these every week. Please leave us some comments. We love the engagement, we love to take your feedback, and we will see you on the next one.
Camberley Bates: Great. Thank you.
Krista Macomber: Thank you.
Author Information
With a focus on data security, protection, and management, Krista has a particular focus on how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.
Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led Evaluator Group, an information technology analyst firm, as Managing Director.
Her career has spanned all elements of sales and marketing; a 360-degree view of addressing challenges and delivering solutions was achieved by crossing the boundary of sales and channel engagement with large enterprise vendors and her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, created a new market category, “MAID,” as Vice President of Marketing at COPAN, and led a worldwide marketing team, including channels, as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing the company from $14 million to $500 million, and she built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.