The Future Of Kubernetes On AWS – Six Five On The Road at AWS re:Invent


AWS re:Invent 2024 included big updates for Kubernetes on AWS.
Host Daniel Newman is joined by Amazon Web Services' Barry Cooks, Vice President of AWS Kubernetes, for an insightful conversation on the future of Kubernetes on AWS in this episode of Six Five On The Road at AWS re:Invent. They explore the latest innovations and the strategic vision shaping the growth of Kubernetes services within the AWS ecosystem.

Tune in for more on ⬇️

  • The announcements of Amazon EKS Hybrid Nodes, Amazon EKS Auto Mode, and other cutting-edge developments
  • Open-source collaboration: How AWS champions the Kubernetes community and contributes to its growth
  • The roadmap for Kubernetes on AWS: A glimpse into the exciting future of this powerful technology

Learn more at Amazon Web Services.

Watch the video at Six Five Media at AWS re:Invent, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Daniel Newman: The Six Five is On The Road here, at AWS re:Invent 2024. It’s been a very busy week. Everyone who tracks AWS knows they like to deliver a lot of innovation at every one of these events. And this is a company that’s constantly building, innovating, adding services, and doing it through what I, as an analyst, like to call the fire hose. It’s been a great week, though. We knew AI would be in focus, but we’re also seeing this big trend line of AI evolving to more practical enterprise decision-making for IT leaders and businesses. How do we get value out of AI?

And that comes down to not just AI for AI’s sake, but it comes down to systems, it comes down to development, it comes down to making the right infrastructure choices and, of course, picking the right partners to serve, implement, deploy, and build. And of course, another big part of that is everything from modernizing to how you distribute your applications, serverless, containers, and all that. And we’re going to talk Kubernetes in this episode. Really excited to have Barry with me today. Barry leads the Kubernetes business for AWS, and he’s here on the show. So very excited to have you here.

Barry Cooks: Yeah, thanks.

Daniel Newman: Barry, thanks for joining.

Barry Cooks: Great to be here.

Daniel Newman: How’s the week been for you?

Barry Cooks: It’s been a busy week for us. Like you said, at AWS, one of our favorite things to do is save it up for re:Invent and push a lot out the door, and Kubernetes is no different. A couple of really big announcements this week, a lot of interest, a lot of excitement amongst the customer base for these.

Daniel Newman: Yeah, it’s great, and obviously I want to have you spend a little time talking about all of the announcements from this week, Barry. Maybe start off, just talk a little bit about the role. This is an interesting part of the IT stack right now. The front-page news seems like it’s all GenAI, but in the end, these apps need to run somewhere. They need to be made available. You need to be able to leverage all these opportunities to optimize your infrastructure. That’s a big part of what Kubernetes is and what it’s been built for. So talk a little bit about the role and how you think about this Kubernetes environment right now.

Barry Cooks: Yeah, so my team’s responsible for Kubernetes across AWS, and so that obviously includes things like EKS, our flagship Kubernetes product, but it also includes customers who are running Kubernetes themselves on EC2, which is how everybody got started at AWS. And I think we see it exactly like you said. It’s really, how can I run my workload efficiently, how can I run my workload and scale it up, scale it down, have it respond to the environment that it’s running in? And interestingly enough, yes, GenAI tends to get a lot of attention, and GenAI training is huge on Kubernetes for us. It’s been an amazing business for us over the last year or so, and we continue to evolve things in that space to meet the needs of our big training customers.

Daniel Newman: Well, it’s interesting you say that. So I was going to ask you, when you’re building products for the Kubernetes ecosystem, who are you building them for? Who are the AWS customers that should be really excited about the opportunity to build with AWS and Kubernetes?

Barry Cooks: Yeah, it’s interesting, so from a customer perspective, it’s a huge mix. You can name an industry vertical or an area, and it’s likely we have quite a few customers in it. Within those customers, the interesting thing is we are really focused in on platform developers. These are the people who sit between the core developers of the applications and those deployments into AWS, and their job is to provide the right level of services for those teams, the consistency that they need, the operational back ends.

Daniel Newman: So you started the conversation, Barry, you were talking a little bit about the announcements.

Barry Cooks: Yeah.

Daniel Newman: Let’s dig into that a little bit. Just Amazon EKS alone had a number of announcements.

Barry Cooks: Yep.

Daniel Newman: Give everybody out there, all you, give them the background on what was announced in the biggest innovations of the week.

Barry Cooks: Yeah, so obviously I’m excited about our announcements. It’s been a really big show for us. The first one was Hybrid Nodes, and our goal here was to help customers who are on that migration journey. They’re trying to modernize their apps, but they’re still on-prem. If you look at Gartner numbers and things, they’ll tell you there’s a huge amount of workload still sitting on-prem trying to find a path to the cloud. And so, one of the things that Hybrid Nodes does is it allows you to take those assets that are on-prem and use them, but use them with EKS in its normal form.

So you get the operational efficiency that EKS can provide, but you’re still leveraging these assets. Maybe you have a seven-year amortization cycle or something like that, burn them down, and you can use those over time with EKS. You can start bursting workloads into the cloud, take full advantage of all the things that AWS and EKS can provide while still leveraging these assets. In the case of certain workloads, maybe you want them to stay on-prem; now you’ve got a mix, and we support that as well. So that’s been super exciting. Lots of strong interest in that.
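For readers who want a concrete picture of what that looks like, here is a minimal Python sketch of creating an EKS cluster that can accept on-premises hosts as Hybrid Nodes, using boto3. The remoteNetworkConfig shape, the CIDR ranges, and the ARNs are illustrative assumptions rather than values from the announcement, so check the EKS Hybrid Nodes documentation for the exact API.

# Hypothetical sketch, assuming boto3's create_cluster accepts a remoteNetworkConfig
# parameter for Hybrid Nodes; field names and values below are illustrative only.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

response = eks.create_cluster(
    name="hybrid-demo",
    version="1.31",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",  # placeholder ARN
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},  # placeholders
    # Tell the control plane which on-premises network ranges the hybrid
    # (on-prem) nodes and their pods will use.
    remoteNetworkConfig={
        "remoteNodeNetworks": [{"cidrs": ["10.200.0.0/16"]}],
        "remotePodNetworks": [{"cidrs": ["10.201.0.0/16"]}],
    },
)
print(response["cluster"]["status"])  # typically "CREATING" right after the call

Once the cluster is up, the on-prem machines are registered as nodes, so workloads can stay on existing hardware or burst into EC2, as Barry describes.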

Probably an even bigger announcement for us, a big shift as we’ve looked at the customer base and talked to them about where they are in their Kubernetes journey. Kubernetes has been around for a while; it had its 10th anniversary. And originally, you had a lot of tweakers, and they liked having that control. They really wanted to own and operate pieces themselves, and so there was always a constant back and forth with, say, the EKS team about how much are we going to do? And over the last 12, 18 months, that’s really shifted. And what we see a lot now is, please do more for us. We don’t want to do these things. We got it, it works well, but we don’t have the time to manage it. We want to focus on higher-level problems.

So EKS Auto Mode was announced on Sunday afternoon, and our goal with this is we’re going to really take on a lot more of the burden in the cluster. So we’re going to install and manage the different controllers that are the most common ones that customers have. We’re going to take care of version lifecycle management, one-click upgrades. And interestingly, we’ve introduced this new model with EC2. So we will actually control and spin up and manage your EC2 instances now, including patching. So if there are CVEs, you get our operational excellence to go and patch those CVEs. But at the same time, you can take full advantage of EC2, in terms of all of the capabilities around ODCRs (On-Demand Capacity Reservations), reserved instances, spot, all these things you can still take advantage of, but we’re going to go and manage that for you and drive your efficiency up, help you scale faster, up or down, to save costs.
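As a rough illustration of what opting into Auto Mode at cluster creation might look like, here is a minimal boto3 sketch. The computeConfig, kubernetesNetworkConfig.elasticLoadBalancing, and storageConfig fields, the built-in node pool names, and the ARNs are assumptions based on the announcement; verify them against the current EKS API reference before relying on them.

# Hypothetical sketch: create an EKS cluster with Auto Mode enabled so AWS manages
# compute, block storage, and load balancing. Field names are assumptions to verify.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_cluster(
    name="auto-mode-demo",
    version="1.31",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",  # placeholder ARN
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},  # placeholders
    # Let EKS provision, scale, and patch EC2 instances via managed node pools.
    computeConfig={
        "enabled": True,
        "nodePools": ["general-purpose", "system"],  # assumed built-in pool names
        "nodeRoleArn": "arn:aws:iam::111122223333:role/eksAutoNodeRole",  # placeholder
    },
    kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
    storageConfig={"blockStorage": {"enabled": True}},
)

The point of the sketch is the trade-off Barry describes: the instances are still EC2, so pricing options like reserved instances and spot still apply, but provisioning, scaling, and patching move to AWS.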

Daniel Newman: Yeah, it’s interesting, Barry. If you think about the evolution of the whole cloud platform, we’re seeing so much interest in managed things, whether that’s been the growth of serverless or Bedrock. Of course, there are always going to be those companies that want to play with all the levers and the knobs and piece parts. But it sounds to me like even in containers, and in Kubernetes in particular, you guys are seeing an increase in demand for, hey, how much can we automate? How much simpler can this get? I’m sure there will be toys for developers in Q and everything else that are going to start to be really valuable. And part of the allure of Kubernetes is, it’s this giant open-source project.

I know AWS, over the years, has had a little bit of an evolution of its own multi-cloud posture. I think the AI era has been a forcing function: all on-prem and cloud providers have had to accept that we’re going to have a certain amount of workloads distributed. But Kubernetes has always been an enabler of that. And I guess I’m interested in what your thoughts are around open source as a whole, and where AWS fits in, in the evolution of that.

Barry Cooks: Yeah, I think it’s a great question. We have evolved our thinking over time, for sure. We listen to our customers. We’ve heard a lot from customers around open APIs, like Kubernetes, and the power that gives them: the ability to feel like they’re going to be able to consistently drive their own operational behaviors. And for us, a lot of times, I mentioned Hybrid Nodes, on-prem versus cloud, that’s always been one of the big things we’ve heard from customers. I’m running a bunch of Kubernetes on-prem, I’d like it to be similar in the cloud. And so that’s led us down that path.

So I think for us, we see open source as super valuable to our customers, and we also see it as a community that we can help participate in. So we have a saying internally: “Kubernetes is bigger than just us. If Kubernetes went away, our product goes away.” And if we don’t contribute to the community, then it’s more likely not to be successful. So we see it as very valuable to us to be a part of that community. We continue to contribute to it, we continue to deliver projects and donations into the CNCF, because that’s what our customers need, that’s what they expect and want, and it lets us bring what we are really good at, and that’s operational excellence. It’s availability, scalability, security. Those are the things we provide by managing these components on behalf of our customers.

Daniel Newman: It sounds like the company, probably not even noted as much as maybe it should be, is really making a lot of efforts in that open space, open-source community, which is great.

Barry Cooks: We have, yeah. I think it’s very difficult in the open-source world. You can never give back as much as you get. I think that’s just generally true. A person here, at AWS, coined that internally, and I think it’s true, but you can give back. You can make the community stronger. Karpenter’s a great example, a scheduler we built for Kubernetes. In fact, if you look at EKS Auto Mode under the covers, Karpenter is managing those EC2 instances. We’re deciding what to fire up, what to scale up, scale down, and it’s driven by Karpenter. We took Karpenter earlier this year and we donated it to the CNCF to put it under neutral governance, to say, we think this is a great answer for the Kubernetes community, not just for AWS. And we’ve seen other hyperscalers pick up Karpenter. We’ve seen a lot of customers pick it up and deploy it themselves on-prem, and now we have EKS Auto Mode, which allows you to go leverage Karpenter, but with our operational excellence running it in the background for you.
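For context on how teams use Karpenter directly, it is configured through Kubernetes custom resources. The sketch below submits a NodePool with the official Python Kubernetes client; the karpenter.sh/v1 schema, the capacity-type requirement, and the EC2NodeClass named "default" are assumptions for illustration, so confirm them against the Karpenter documentation.

# Hypothetical sketch: register a Karpenter NodePool so Karpenter can decide which
# EC2 instances to launch and when to consolidate them. Schema is an assumption.
from kubernetes import client, config

config.load_kube_config()  # assumes kubeconfig already points at the cluster

node_pool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "general-purpose"},
    "spec": {
        "template": {
            "spec": {
                # Let Karpenter choose between Spot and On-Demand capacity.
                "requirements": [
                    {"key": "karpenter.sh/capacity-type", "operator": "In",
                     "values": ["spot", "on-demand"]},
                ],
                "nodeClassRef": {
                    "group": "karpenter.k8s.aws",
                    "kind": "EC2NodeClass",
                    "name": "default",  # assumes an EC2NodeClass named "default" exists
                },
            }
        },
        # Remove or replace under-utilized nodes to cut cost.
        "disruption": {"consolidationPolicy": "WhenEmptyOrUnderutilized"},
    },
}

client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=node_pool
)

With EKS Auto Mode, as Barry notes, AWS runs this machinery for you, so most teams never need to apply a manifest like this themselves.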

Daniel Newman: Yeah, it’s like nerd altruism.

Barry Cooks: That’s one way to put it. Yes.

Daniel Newman: So let’s wrap this with a little bit of an outlook.

Barry Cooks: Yeah.

Daniel Newman: So we’re seeing so much change in the industry, but obviously sometimes they say the more things change, the more things stay the same.

Barry Cooks: Sure.

Daniel Newman: A lot of the guts are still really built the same way. How fast you can do it, whether that’s Q or other tools helping you build code.

Barry Cooks: Sure.

Daniel Newman: Whether that’s obviously different types of compute horsepower required for different workloads, but in the end, you still have all the same challenges for compute. It’s a scale, in a lot of ways. So what do you sort of expect, in terms of AWS and Kubernetes over the next year? What are the developments you’re most excited about?

Barry Cooks: Yeah, I agree with you. I used to always joke with people: if you want to know what’s coming next, go read the IBM Redbooks from 1970-something and it’s all laid out, right?

Daniel Newman: Yeah.

Barry Cooks: Because these abstractions have happened repeatedly over time.

Daniel Newman: Yeah, the accordion.

Barry Cooks: Yeah, exactly.

Daniel Newman: It’s like client and mainframe.

Barry Cooks: Yeah, and so what we see now is, where is Kubernetes in that life cycle? Where is this container ecosystem sitting? And what we tend to see is this movement up the stack of management, like you were alluding to earlier: how do I operationalize this more effectively? How do I do it at a much, much larger scale? We see people trying to do these massive GenAI models on EKS. Training those models takes a lot of compute, it takes a lot of responsiveness, it takes a lot of really effective error handling. And these are all areas where we’re spending a bunch of time investing. And then looking at that boundary between that operating component set and the developers who are trying to deliver it, and looking at ways that we can help that be more effective and more efficient and scale better.

Daniel Newman: Well, Barry, it’s going to be an exciting year ahead. It was an exciting year this year.

Barry Cooks: Yeah.

Daniel Newman: And I’m very optimistic about how all this technology is really going to drive the enterprise of the future. And I’m really glad to see a little of the exuberance wear off and a little bit of the pragmatism come to the surface, and I definitely felt that here, at re:Invent. So congrats on the announcements, congrats on all that’s going on, and long live open-source Kubernetes, and have a great rest of your AWS re:Invent.

Barry Cooks: Thanks very much. Looking forward to next year, too.

Daniel Newman: And thank you very much for tuning in and being part of The Six Five On The Road here, at AWS re:Invent 2024 in Las Vegas. It’s been a great show. Hope you’ll subscribe and tune in to all of our coverage here at the event and be part of our community to watch all of The Six Five episodes. But for this one, it’s time to say goodbye. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
