On this episode of the Futurum Live! From the Show Floor, The Futurum Group’s Camberley Bates is joined by Nutanix’s Aaron Delp, Cloud Native & AI Marketing, and Luke Congdon, Sr. Director of Product Management, during KubeCon Chicago for a conversation on how Nutanix is delivering simplified Kubernetes and AI-anywhere solutions with Nutanix GPT-in-a-Box.
Their discussion covers:
- What Nutanix has done to bring its solutions to the Kubernetes market
- How the Nutanix Cloud Platform simplifies operations for cloud-native and AI applications, supporting most of the major Kubernetes distributions
- A closer look at Nutanix GPT-in-a-Box, and how it encourages customers to bring their own model and their own data to create generative AI applications
- How Nutanix’s AI solutions approach data privacy as a driver of on-premise AI deployments
You can learn more about Nutanix Cloud Native Solutions or Nutanix AI Solutions on the Nutanix website.
You can watch the video of our conversation here:
Listen to the audio below:
Or grab the audio on your streaming platform of choice here:
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Camberley Bates: Hi folks, it’s Camberley Bates. We’re live on the show floor here at KubeCon 2023 in Chicago. I am with Nutanix. Thank you, gentlemen, for joining me. Let me introduce my folks here. I have Aaron Delp, who is the cloud and AI guy. Thank you.
Aaron Delp: Yes.
Camberley Bates: Welcome. And Luke Congdon, who is the Senior Director of Product Management for the Nutanix Cloud Platform. Thank you very much for joining us.
Luke Congdon: Thank you for your time.
Camberley Bates: And before we get started, I need to say one thing. You guys have got cool socks here.
Aaron Delp: Yes.
Camberley Bates: Aaron just ran the Marine Corps marathon. Did you wear these?
Aaron Delp: I did. I did not. I should have, but I did not.
Camberley Bates: Okay. You guys need to come by the booth and get the socks. So there you are. So I’m going to start with, since we’re here at KubeCon and we’re talking about Kubernetes and everything about that, also AI, I’m going to start with a couple of trends that we found in some of the research work that we did last year or this… it was not even a year old actually right now. We found that about 85% of customers were looking to do something around standardizing their systems and what they’re putting in place, which is pretty amazing to me given how free-for-all CNCF can tend to be. And then 50% of them are looking for on-premises deployments, which is maybe a reasonable number considering governance and privacy. As a result, we’re seeing more offerings like what Nutanix is doing, which is Kubernetes in a box, if you will. It’s not exactly what you’re doing, but it’s kind of like cloud-like offerings that you’re bringing to the table. Luke, my first question for you is, can you talk about what you’ve done to bring Nutanix to the Kubernetes market? Because I know you’ve done a lot of work with Red Hat and SUSE and the long list of distributions that are out there.
Luke Congdon: Absolutely. Thank you. We’ve done a lot. We think that Nutanix Cloud Platform is the best platform for AI and Kubernetes all in together.
Camberley Bates: Okay, so I’ve got to challenge you on that. Why is it the best? Because that’s a big, big, huge claim.
Luke Congdon: Big, big claim. Because we’re on premises, because you are behind your firewall where your data is safe. And this is a particular concern for people running AI and ML. We have a model, we’re able to deploy it. We can give you full-stack turnkey infrastructure, but your data, data inside your company, must be private and must be secure. And there’s a really big concern amongst customers when they go to public cloud with an easy-to-deploy service with an inference endpoint. I can do that easily, but is my data safe? Customers are very, very concerned about this, and we have a very strong claim that with our data services with Kubernetes, whether it’s our own Nutanix Kubernetes Engine or, as you mentioned, partnering with OpenShift, great platform and products, we also support many of the other primary Kubernetes engines in the industry. So customers ultimately have the choice. They can choose their hypervisor, they can choose their Kubernetes, and they still get turnkey infrastructure with those baseline technologies of the data services that come in: file, block, volumes, and objects, with files in particular for cloud native and AI.
Camberley Bates: I want to dive into the data services, but before we do that, AI led on the keynote stage, because Kubernetes is basically the platform used for a lot of the stuff that’s going on in the ML and AI space as well as large language models. I know you guys are doing a whole lot of stuff in that arena. “Stuff” being a technical term that I use all the time.
Aaron Delp: Of course.
Camberley Bates: But you’ve got something called GPT-in-a-Box.
Aaron Delp: Yes.
Camberley Bates: Not GPU.
Aaron Delp: Nope.
Camberley Bates: GPT-in-a-Box.
Aaron Delp: Correct.
Camberley Bates: So let’s talk about that.
Aaron Delp: Yes, absolutely. So first of all, to build on what Luke was saying, what we’ve done is introduce an opinionated stack. And what I mean by that is there are lots of ways you can build an AI stack. There are lots of options. You can kind of plug in the pieces and, at the end of the day, get a solution. We developed this internally first in our AI development and then brought this to market, and it was the best fit of all of the puzzle pieces, if you will, to develop this stack. So what is it? As Luke mentioned, Nutanix, hyper-converged infrastructure at its core, then Kubernetes, then the CNCF project Kubeflow for AI and ML operations at scale. And then combining that with what we call the foundational models. There are a couple of open source foundational models in the industry: Meta’s Llama 2, MosaicML’s MPT, and Falcon. We’re encouraging customers to bring their own model, bring their own data, and create generative AI applications based off of this opinionated stack.
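To make the “bring your own model” idea concrete, here is a minimal sketch, assuming the Hugging Face transformers library, of loading an open foundation model such as Llama 2 and running a single generation. The model ID, prompt, and standalone-script form are illustrative assumptions only; in GPT-in-a-Box the model would be served through the Kubeflow-based stack Aaron describes rather than a script like this.

```python
# Illustrative sketch only: load an open causal language model and generate
# one completion locally. The model ID and prompt are placeholders, not part
# of the GPT-in-a-Box product itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # assumed model id; any open causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize our Q3 support tickets in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a deployed stack, the same model would sit behind an inference endpoint on Kubernetes, so applications call a service rather than loading weights directly.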
Camberley Bates: So they’re bringing their data, you’re bringing Llama 2 on-premises, allowing them to do the training in those environments.
Aaron Delp: Exactly.
Camberley Bates: Awesome. Awesome. So talk about the data services because what you do with the AI, I mean data is the bane of AI’s existence sometimes. I’m sure not everybody would say that, but data is often a sticky piece of it.
Luke Congdon: And this is what we’re finding. People need persistence, whether it’s with Nutanix database services for stateful databases or with Kubernetes. When Kubernetes first came out, everyone said stateless. What we’re finding in the enterprise is people need state. They’re running applications, they need to save the data, and this is where the data services come in. Often with AI and ML, we’re going to lean toward object storage and also file storage. File storage may be where you store the model; objects are where you store your additional values. And there are different ways to do that. You can fine-tune the model. You can do retrieval-augmented generation to point to another data store, which again can be Nutanix; that could be your object store with your company’s private data on it. And the fundamentally important thing for customers is to still keep it on-premises where it’s safe. That’s the number one conversation starter or opener.
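To illustrate the retrieval-augmented generation pattern Luke describes, here is a minimal sketch, assuming an S3-compatible object store holding a company’s private documents. The endpoint, bucket, credentials, and the naive keyword retrieval below are illustrative assumptions, not Nutanix-specific APIs; a production setup would typically use an embedding model and a vector index instead.

```python
# Illustrative RAG sketch: fetch private documents from an S3-compatible
# object store and fold the most relevant ones into a prompt. All names
# below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def retrieve(bucket: str, question: str, max_docs: int = 3) -> list[str]:
    """Return the documents sharing the most words with the question (naive ranking)."""
    keys = [o["Key"] for o in s3.list_objects_v2(Bucket=bucket).get("Contents", [])]
    docs = [s3.get_object(Bucket=bucket, Key=k)["Body"].read().decode("utf-8") for k in keys]
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:max_docs]

question = "What is our internal policy on GPU scheduling?"
context = "\n---\n".join(retrieve("company-private-docs", question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to the locally hosted model's inference endpoint,
# so both the documents and the model stay behind the firewall.
```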
Camberley Bates: You also have an offering in the cloud. So how does that play into this space, then, if you keep talking about on-premises? What are you guys expecting customers to do with the cloud environment?
Luke Congdon: Well, so for a long time now we’ve been on premises. We can go hybrid mode, we can go in cloud,
Camberley Bates: Right.
Luke Congdon: It’s still a VPC. So it’s still your infrastructure in cloud, secured by you, maintained with a single, simple operational model. So you may be in cloud, but in a VPC your data is still safe, considered on-premises. Whereas if I go to another popular vendor with an inference endpoint, I’m on somebody else’s cloud. I don’t know if they’re maintaining it. They probably are, but I don’t know that they’re maintaining it. Are the CVEs getting fixed? Are the patches regularly getting done? There’s reason to be concerned.
Aaron Delp: Yeah. And if I could add to that too, another thing we’re also seeing, especially with AI, is that the fine-tuning and training of the model is done with company-specific data. Yes, there are cloud services out there you can put your data into, but then what happens? And so what we’re hearing from our customers is they don’t want that data, especially in an AI context, to go out to the cloud. So again, bringing it back to the GPT-in-a-Box offering, what customers are telling us is there are three big advantages. Number one, simplicity. So this whole idea of the opinionated stack: get up and running, go very quickly, because a lot of IT operations teams don’t necessarily understand ML and a lot of data scientists don’t understand IT operations. So we’re bridging that gap for them. Then number two, security. We’ve mentioned that, but the security in the cloud and the data services.
Camberley Bates: Well specifically what have you guys done on the security? Because that’s such a high… It just explodes kind of when you talk to people about that.
Luke Congdon: Security is at every level, and it needs to be: baseline patching of CVEs across the Nutanix Cloud Platform, whether it’s the AOS storage fabric and operating system, the hypervisor, or Kubernetes. It’s incumbent upon us to make sure customers get updates, that it’s secure, it’s locked down, and it adheres to STIGs and compliance. So we make sure all that’s done.
Camberley Bates: So when you update on-premises, is your cloud also updated at the same time, in whatever cloud provisioning you have?
Luke Congdon: Our cloud product is not a managed service, so customers do still need to update that as well.
Camberley Bates: They have to do it on their own. Okay.
Luke Congdon: That’s right. And the cloud is just another cluster. I can have clusters-
Camberley Bates: Like any other cluster.
Luke Congdon: I’ve got another cluster. It happens to be in cloud. It’s the same operational paradigm. It’s equally as simple. Customers just get to choose to use that instead.
Camberley Bates: Okay. Depending on where they want to go and take advantage of whatever’s going on up there and that kind of stuff. So let me slightly shift. You guys are well known for, early on when you first released, it was all about vSphere, VMware vSphere, the integration of the systems, your hyper-converged systems. And for those of you who don’t know hyper-converged, this is when you take compute and storage and networking and the VMware layer and bundle it all together for a super easy-to-use environment. You also rolled out your own VM environment, AHV, as it’s called. All of that stuff was really, really super simple to use, and then you kept on adding services on top of that. What are you doing to make this simple? That was with vSphere, and there’s a lot going on there that we could talk about all day long with the Broadcom stuff that’s happening. But specifically, people are looking at SUSE and Red Hat OpenShift and some of these other distributions to deploy. What are you guys doing in integrating those systems onto your platform to make it as simple and easy to use as you did with vSphere?
Aaron Delp: You want to start and I’ll finish?
Luke Congdon: I’d be happy to. One of the philosophies I really love about Nutanix is the notion of customer choice. We have all the baseline products with AOS, your storage fabric, all the data services. On top of that, if you would like to run ESXi, we are perfectly fine with that. We think that’s a good combination if the customer chooses it, and we do have our own hypervisor environment. When it comes to the next level up, we’re going to start building Kubernetes clusters in these AHV VMs, or on ESXi if you choose. We also support customer choice there. So we’ve done a lot for integration with Cluster API, putting that into open source, and integrations and installers for Red Hat OpenShift into open source. So we’ve really made sure that customers can choose any Kubernetes platform they want, and if it’s CNCF compliant, they can use our data services, they can integrate it any way they want, and it doesn’t disrupt their business model. It’s good for customers. So we like to lean into that.
Aaron Delp: And if I could add, it is of particular benefit in an edge and core model. And the reason why is because you have the same benefits, the same operating tools, the same management of all of it. And if I take it back to AI and ML for a second, how do we think about this? Well, you’ve got the building of the models, which tends to take more GPUs, like the day-zero get-up-and-running. Well, you can do that in core. You don’t need as many resources to do inferencing or running the applications. You can do that at the edge. So we have a lot of customers very, very interested in the benefits of the same operating model, core to edge: take that data, go back and make it better, and then do the full data services lifecycle management of the models over and over.
Camberley Bates: And what I like about what you guys are doing in terms of the edge, there are much smaller devices out there, and you want to be able to control them from the corporate space in terms of upgrading new versions, et cetera, to keep the security locked down. And also, we’re looking at these areas where we’re getting into manufacturing and mining, especially when we have edge devices that are out there; those are the ones that come top of mind. Although we have utilities, we have all kinds of those areas where this plays out.
Aaron Delp: Yes, absolutely.
Camberley Bates: So we’re going to see them coming out with trained models to go and roll out there. And there are some great spaces for you guys to go play beyond that space. I’m going to shift to a little bit of the controversial stuff that’s going on right now with the VMware space and Broadcom. There’s a lot of anxiety about that acquisition and pricing and that kind of thing. You guys have had AHV for a long period of time out there. I understand that it’s now about 50% of the shipments that you have in terms of what you’re rolling out. Can you talk about what’s going on with the transition, or what we’re possibly seeing as the transition in the market?
Luke Congdon: Yeah, this is especially where our customer choice as a philosophy really comes to help us and ultimately help our customers. If customers prefer to stay on ESXi, they can. No trouble for us. But since we do have AHV and customers are seeing FUD, they have uncertainty, they’re not sure what’s going to happen next. There are some well-documented concerns people have about Broadcom buying VMware. They have the choice to come to us and stay where they are, or decide, we’re going to shift over. And by shifting over, they’re changing the hypervisor engine. We support all the same workloads. So customers really have the opportunity to say, “I can make a new decision and I’m going to get the same thing I got before while now also having a single vendor to handle all of my support queries and interests and concerns,” which I think is really inspiring a lot of the extra-large customers to knock on our door, and we’ve definitely been seeing that.
Camberley Bates: Very good. Any other comments?
Aaron Delp: No, I think Luke summarized it perfectly for that one.
Camberley Bates: Great. So what I want to do is wrap it up with a couple of things here. What are the things that customers don’t know about you that maybe they’re not customers today, but they should really know about you?
Luke Congdon: I would say we have the best NPS score in the industry: 90 points, for eight years running.
Camberley Bates: Congratulations.
Luke Congdon: Thank you. We’re proud of that and we invest in that. It’s very important to us because it’s important to customers. We’ve got a full-stack solution that goes all the way from the operating system up into virtualization and up into Kubernetes. We have products announced for AI and ML with GPT-in-a-Box, and there’s really not much customers can’t do with us. So where customers may have thought of us eight, nine years ago as a VDI solution, of course we still do that. We do everything else now as well.
Camberley Bates: And I want to also emphasize that early on, in the earliest stages of HCI, there was a difficulty in scaling. Today, you’ve broken apart some of the pieces to be able to scale compute and storage separately, which does a whole lot for the offering in terms of its cost-effectiveness in the environment: you don’t have stranded compute or stranded storage there. So there’s another point there; sometimes that’s kind of the old heritage view. What about you? What do you want them to know about you guys?
Aaron Delp: Yeah, I think the biggest thing is the evolution into a hybrid model. As Luke pointed out, in the early days of Nutanix, it was more of a point solution, but we are really able to cover cloud to core to edge and cover a broad spectrum of use cases as well, from high-performance databases all the way to AI and ML, cloud native architectures, and of course your virtualization environments as well. So I think, for us personally, there’s never been a better time for Nutanix in the industry.
Camberley Bates: You guys having fun?
Aaron Delp: Absolutely.
Luke Congdon: I love it.
Aaron Delp: Yeah.
Camberley Bates: Okay. Awesome. Luke, Aaron, thank you very much for joining us as we talked about Nutanix, as we talked about Kubernetes and all the stuff that’s going on here at the KubeCon show.
Aaron Delp: Yes, always a pleasure. Good to see you again, Camberley.
Luke Congdon: Thank you, Camberley.
Camberley Bates: Thank you very much, guys, for tuning in to Live On The Show Floor here at KubeCon with Nutanix.
Author Information
Camberley brings over 25 years of executive experience leading sales and marketing teams at Fortune 500 firms. Before joining The Futurum Group, she led the Evaluator Group, an information technology analyst firm, as Managing Director.
Her career has spanned all elements of sales and marketing. She gained a 360-degree view of addressing challenges and delivering solutions by crossing the boundary between sales and channel engagement with large enterprise vendors and running her own 100-person IT services firm.
Camberley has provided Global 250 startups with go-to-market strategies, creating a new market category “MAID” as Vice President of Marketing at COPAN and led a worldwide marketing team including channels as a VP at VERITAS. At GE Access, a $2B distribution company, she served as VP of a new division and succeeded in growing the company from $14 million to $500 million and built a successful 100-person IT services firm. Camberley began her career at IBM in sales and management.
She holds a Bachelor of Science in International Business from California State University – Long Beach and executive certificates from Wellesley and Wharton School of Business.