
Journey to the Edge: Open Source, Kubernetes, and More – The Six Five In the Booth

On this episode of The Six Five – In the Booth, host Paul Nashawaty is joined by ZEDEDA’s Erik Nordmark, Co-founder and CTO, for a conversation on navigating the complexities of open source in the evolving landscape of edge computing at KubeCon Paris 2024.

Their discussion covers:

  • The concerns surrounding open source and the Linux Foundation’s role in addressing them
  • The impact of containers on legacy software and the role of Kubernetes in modern applications
  • Insights into the customer journey toward adopting Kubernetes, containers, and edge computing
  • Challenges specific to distributed edge computing, including issues related to physical security and connectivity
  • The role of AI-enabled applications in processing data at the edge

Learn more at ZEDEDA.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Paul Nashawaty: Hello and welcome to today’s session. I’m here in the booth with ZEDEDA, and my name is Paul Nashawaty. I’m joined by Erik from ZEDEDA. Erik, would you like to introduce yourself?

Erik Nordmark: Nice talking to you here. My name is Erik Nordmark. I’m a co-founder and CTO at ZEDEDA. And yeah, we say ZEDEDA one way, and other people say it another way, and that’s fine. We used to say that as long as they pay us money, they can pronounce it any way they want.

Paul Nashawaty: Perfect.

Erik Nordmark: So we’ve been focusing on the edge computing space, and what we like to call the edge is pretty big. It’s anything from something that’s just outside of the cloud, to something that’s running on an embedded computer on the factory floor. What we focus on is the distributed edge. This is actually things like IoT gateways, sort of small form factor industrial PCs, to potentially ruggedized servers that are sitting out on a truck, or at a retail location, or whatever. And in many cases it’s more physically exposed.

They might have less network connectivity. And typically there’s no IT staff there. There might not be any staff at all. We have cases out in solar farms and wind farms where there’s no one around. You need to send somebody there, and it will take hours or days before they get there. So that’s the domain of edge computing that we focus on. And then we’re looking at enabling both running legacy software in virtual machines, and containers, Kubernetes, whatever. That’s part of the evolution.

Paul Nashawaty: Well, that’s a lot to consider when you’re thinking about edge computing and what’s happening at the edge. And one of the things that comes up quite often in my interactions with clients and customers is the fact that open source is a challenge, right? It offers a lot of challenges, but also a lot of gains. And the Linux Foundation can offer a lot of advantages there. Could you speak a little bit about that?

Erik Nordmark: Yeah. And in terms of what challenges people see, it depends on where they’re coming from. So I think that we have some customers where it’s the IT department that’s coming, and they’re doing cloud-out strategies. They’re already used to running containers and Kubernetes in the data center, and they just want to be able to deploy that stuff out at the edge. So they’re familiar with this, because by and large, that all runs on open source. You have other customers, the other group of people we’re talking to, who are coming more from the embedded space, and they say, “We’ve been doing these embedded computing appliances, now we want to connect them. We want more agility, being able to deploy new things, move towards containers and AI, whatever.”

But they say, “But wait, we’ve heard that this open source thing is not secure,” because maybe they heard that from Microsoft 20 years ago. But then as Microsoft started using Linux in Azure, they changed their story, and these customers might not have picked up on that. So this concern that it might not be as secure is something that you have to be able to respond to by pointing out that it has actually changed. If you look today, whether it’s open source or closed source, there are plenty of challenges around how you keep your software supply chain secure. Because all the software is very complex, there are lots of different components behind it, et cetera. And the Linux Foundation is spending a lot of time on building the tools for this, as well as encouraging the various projects to use those tools to be able to build what’s called a software bill of materials.

Paul Nashawaty: Right.

Erik Nordmark: Being able to have ways of tracking CVEs against it, automatically figuring out what needs to be updated as part of your project, et cetera.
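To make that SBOM-plus-CVE-tracking idea concrete, here is a minimal sketch that checks a list of components against the public OSV.dev vulnerability database. The component list is hypothetical; a real pipeline would parse it out of an SPDX or CycloneDX SBOM rather than hard-coding it.

```python
# Minimal sketch: query the public OSV.dev API for known advisories against
# each component in an SBOM. The component list below is hypothetical; in
# practice it would come from a parsed SPDX or CycloneDX document.
import requests

components = [
    {"ecosystem": "Go", "name": "github.com/containerd/containerd", "version": "1.6.0"},
    {"ecosystem": "PyPI", "name": "requests", "version": "2.25.0"},
]

for comp in components:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={
            "package": {"ecosystem": comp["ecosystem"], "name": comp["name"]},
            "version": comp["version"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    vuln_ids = [v["id"] for v in resp.json().get("vulns", [])]
    print(f"{comp['name']}=={comp['version']}: {vuln_ids or 'no known advisories'}")
```

Wiring a check like this into the build is what turns an SBOM from a static inventory into the automated “what needs to be updated” signal Erik describes.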

Paul Nashawaty: Yeah, it makes a lot of sense. The thing I’m seeing in our research and customer conversations is that many organizations want to work with vendors that have support and open source activities. You had mentioned that it may not be secure, but the other view is that it’s tested and hardened by the community, because there are a lot of people touching it. So I think there’s an interesting perspective there.

Erik Nordmark: And it’s also that they can actually take it apart. I mean, you have people that do penetration testing, both black box as well as white box, where they actually look at the source code and say, “Hey, could we attack this stuff? Does it look a bit fishy?” Right? So open source actually helps with lots of this, makes it easier.

Paul Nashawaty: Absolutely. And you mentioned the aspects of having SBOMs, or software bills of materials. That’s important too for having the right delivery, especially within the CI/CD pipeline.

Erik Nordmark: Yeah.

Paul Nashawaty: So that’s really a big factor. But speaking of that, when we look at the modernization effort, we look at applications and what it means at the edge. How does Kubernetes impact the edge? And ZEDEDA has a lot to offer here. There’s a lot around containers, and how containers may be helping, but also could be hurting environments. What are your thoughts there?

Erik Nordmark: Yeah, I mean, we’re on a journey with our customers. The startup experience was, when we started this six years ago, we thought, “Okay, people are going to deploy things at the edge that are modern. They’re going to do serverless, they’re going to do unikernels, they’re going to do all of these things.” And then as we started talking to the customers, they said, “Oh no, but we’re running on Windows XP, so how can you help?”

Paul Nashawaty: Yeah.

Erik Nordmark: So for the customers, it is that journey: they have something deployed out in the field, often running on Windows as a standalone appliance, and they say, “How can we move this stuff to the future?” And they see that containers and Kubernetes are the future. And then, how can you actually guide them on that? You can start by deploying that as a virtual machine, running Linux with containers next to it. You can now figure out what data you want to expose, and how you actually build in the flexibility when you refactor that monolithic Windows system into separate containers as well. But everybody sees that the flexibility you get with Kubernetes and containers is something that they want to get.

Paul Nashawaty: It’s definitely a journey. We see, again, in the conversations, there’s that, I usually tell the story around past, present, future. You have your heritage or past applications that you need to do something with, whether you encapsulate them into a VM or Kubernetes, or maybe a fat container. But there’s really the impact of modernization and moving to containerization, microservices, and orchestration. So along those lines, there are very specific things that happen at the edge. How do you think Kubernetes, containerization, and orchestration are impacting things specifically at the edge, and what’s happening there?

Erik Nordmark: So, I think that people want to leverage these tools, and I think that they see that flexibility, and I think there are different levels. So today we have very large deployments where people are deploying single-node Kubernetes out at car dealerships, to basically have a secure, flexible way of delivering firmware into EVs, et cetera. But people also want to say, “Well, that’s step one on this journey.” Because then you want to say, “Okay, now I want to have redundancy, so I want to be able to build a cluster where I can actually get the services to fail over as the hardware dies,” et cetera. So there are these different things where, yes, it’s all about leverage, leveraging the software that’s actually been built in the Kubernetes ecosystem, and figuring out what you need to do uniquely for the edge. This distributed edge is different in a few ways, in terms of security threats and in terms of network connectivity.

Paul Nashawaty: Yeah, I was recently reading some industry research around the fact that over the next three years, organizations are anticipating 500 to a thousand edge applications at each edge location, worldwide. So this is just a tremendous amount of growth. And with that type of growth and modernization, obviously Kubernetes and containers are a faster way to scale, a faster way to deploy. So what you’re talking about in that space is really meeting the client where they are. They may be a little bit slower on their journey, but eventually move towards that rapid deployment. And in these other environments you’re talking about, you mentioned skill gap issues and some other issues that occur at the edge. One of the things I think about when I think about the edge and applications is some of the unique challenges that happen at the distributed edge: physical challenges, security issues, even connectivity issues, like you were mentioning. Can you expand a little more on how ZEDEDA can help with that?

Erik Nordmark: Yeah, so there’s certainly… We can start with the software infrastructure. If you’re deploying things in the cloud, there’s a bunch of things that the hyperscaler takes care of for you: you get a VM, and you don’t have to think about the fact that they have a bunch of physical security, they have the ability to update the underlying host software, et cetera, right? There’s stuff there that you get for free. And if you’re running on a bare metal box sitting out in the field, you don’t have that anymore. So you need to think about those things, and apply principles like… Just like we have immutability with containers and Kubernetes, you want the underlying operating system to be that way as well. So that’s why we actually started building EVE-OS, and contributing that to the Linux Foundation.

But then, connectivity-wise, deployment inside the cloud assumes that, “Well, whether I have a hundred different nodes in my cluster, or a hundred different clusters, I can go reach them all the time,” or most of the time. What we’re seeing out in the field is, well, sure, there are a lot more clusters. There might be 10,000 clusters with one node in them, or three nodes in them, but a significant percentage is going to be powered off, or they’re sitting in a shipping container on the way out to be installed, but they might already be onboarded from a software perspective. So you have to structure things differently in terms of how you actually go and provision things. The principles that we have with Kubernetes are great: I’m talking consistency and immutability, declarative configuration, et cetera. But the implementation needs to evolve to be able to deal with this: “Yes, I want to go update this piece of software. I want to go deploy this pod.” Okay, it will start when the device connects back in, which could be in a minute, or it could be in a week.
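The pattern Erik describes, where an operator’s change is accepted immediately but applied whenever the device next connects, is essentially a declarative reconciliation loop with no liveness assumption. The sketch below illustrates the idea under that assumption; all names are hypothetical, and this is not ZEDEDA’s actual controller.

```python
# Simplified sketch of eventually-consistent edge reconciliation: the
# controller records desired state per node, and computes a diff whenever a
# node checks in, whether that is a minute or a week after the change.
# All names are hypothetical; this is an illustration, not a real product API.
import time

desired_state = {}   # node_id -> set of workloads the operator wants running
reported_state = {}  # node_id -> (set of running workloads, last check-in time)

def set_desired(node_id, workloads):
    """Operator intent is recorded immediately, even if the node is offline."""
    desired_state[node_id] = set(workloads)

def node_checkin(node_id, running):
    """Called when a node connects; returns the diff the node should apply."""
    reported_state[node_id] = (set(running), time.time())
    want = desired_state.get(node_id, set())
    return {"start": sorted(want - set(running)),
            "stop": sorted(set(running) - want)}

# A box that sat in a shipping container for a week still converges on check-in:
set_desired("store-0042", ["pos-app", "vision-model"])
print(node_checkin("store-0042", ["pos-app"]))
# -> {'start': ['vision-model'], 'stop': []}
```

The design choice is that the control plane never assumes it can push to a node; nodes pull their diff on connect, which is what makes 10,000 intermittently connected clusters tractable.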

And then security-wise, in many cases these things are in exposed locations, or there are enough of them and enough value that customers realize, “Okay, people are going to steal these things.” Some people might steal them because they think, “It’s a computer, I can resell it.” Some people will steal them because of the data that’s on them. So you have to be able to deal with that from a physical security perspective. It’s one thing to say, “I can block access to this server, to this VM running in the cloud,” as opposed to, “No, somebody is actually there. They can unplug the disk, they can boot something different, they have access to the BIOS. What harm can they do?” So building that underlying infrastructure is the key enabler for this. Once you have that in place, you can take both your legacy VMs, as well as your containers and your Kubernetes workloads, and you can say, “I have a substrate to run this stuff on that is robust enough and trustworthy enough that it makes sense.”

Paul Nashawaty: I like where you were going with that, because when I think about that modernization journey and the adoption of containers, what we’re seeing in industry research is that portability of applications is critical. Actually, 20% of respondents in a 400-person survey indicated that portability was critical, and 67% of those respondents said it was very important to them. So to be able to move that application from core to edge to cloud, et cetera-

Erik Nordmark: Yeah.

Paul Nashawaty: Is incredibly important. And that’s where I think that ZEDEDA has a leg up there as well.

Erik Nordmark: It’s all about them being able to optimize things. They can test things in the cloud, and when they get more data coming in, they can say, “Okay, can I run the same thing out at the edge?” Right.

Paul Nashawaty: Right, right. No, that’s great. And so on our last topic, I would be kind of remiss if I didn’t talk about it. We have to talk about AI, because AI is a big factor, and everybody’s talking about it. We’re seeing it everywhere here at the show. But when we look at AI-enabled applications, what’s different from ZEDEDA’s perspective that helps AI at the edge, and those applications at the edge?

Erik Nordmark: So one thing is that people will actually be running AI in different forms. We have customers that have been running with GPUs, doing more traditional analytics out in the field, and they will end up doing more and more of this stuff.

I think the way people are talking about this today is that you actually train things on a data set in a more controlled environment, and then actually do the inferencing out at the edge. And then over time, figure out what it means in terms of the feedback loop, in terms of getting retraining in place. I’ve been talking to people here during the show, and most people worry about the day one problem: “How can I get it from the developer into deployment?”

But then the retraining part, and all of that stuff, that will be next year or the year after. It’s moving very quickly. But I think the other part of this is deploying the stuff at thousands of locations. You want to make it as automatically deployable as possible. So you drop-ship the hardware, somebody plugs it in, and it’s securely onboarded. And then one of the things that we bring is the ability to specify policy, so that when this device actually shows up and connects to ZEDEDA, the rules are already there saying you should actually deploy these things, right? You should run with these policies, et cetera. So it auto-deploys, downloads whatever it needs, and then you’re up and running.
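That onboarding flow amounts to matching a connecting device’s attributes against pre-registered deployment policies. Here is a minimal, hypothetical sketch of the idea; the attribute names and policy format are illustrative, not ZEDEDA’s API.

```python
# Hypothetical sketch of policy-driven onboarding: when a drop-shipped device
# first connects, its attributes are matched against pre-registered policies
# and the matching workloads are deployed automatically. Illustrative only.
policies = [
    {"match": {"site_type": "dealership"}, "deploy": ["fw-delivery", "telemetry"]},
    {"match": {"site_type": "wind-farm"},  "deploy": ["scada-bridge"]},
]

def on_device_connect(device):
    """Return every workload whose policy matches all of the device's attributes."""
    to_deploy = []
    for policy in policies:
        if all(device.get(k) == v for k, v in policy["match"].items()):
            to_deploy.extend(policy["deploy"])
    return to_deploy

# An electrician plugs the box in; no IT staff is needed on site because the
# rules were registered before the hardware ever shipped.
print(on_device_connect({"serial": "EVE-1234", "site_type": "dealership"}))
# -> ['fw-delivery', 'telemetry']
```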

Paul Nashawaty: So very much ease of use.

Erik Nordmark: Yeah.

Paul Nashawaty: That’s what it sounds like to me.

Erik Nordmark: Yeah. I mean, it’s all about that, because you want it to be robust, you want it to be secure, but it needs to be easy to use. Because you want to crank these things out. The local installer is an electrician, right? It’s not an IT person.

Paul Nashawaty: Yeah. Well, Erik, as we wrap up our session today, would you like to leave the audience with some parting words or where they can go to get started with you?

Erik Nordmark: Yeah. So if you want to find out more about ZEDEDA, you can go look at our website. And if you’re interested in the open source side of things, there’s also LF Edge on github.com; LF Edge EVE is Project EVE, the open source edge operating system. So please go and check that stuff out, and please join the community. We’re trying to build this stuff for the next generation.

Paul Nashawaty: Erik, I’d like to thank you for your perspective and your insights today. It’s been really wonderful having you on today’s show. And I’d like to thank the audience for attending today’s session. For more information, please go to futurumgroup.com.

Author Information

At The Futurum Group, Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release and operations. With a wealth of expertise in digital transformation initiatives spanning front-end and back-end systems, he also possesses comprehensive knowledge of the underlying infrastructure ecosystem crucial for supporting modernization endeavors. With over 25 years of experience, Paul has a proven track record in implementing effective go-to-market strategies, including the identification of new market channels, the growth and cultivation of partner ecosystems, and the successful execution of strategic plans resulting in positive business outcomes for his clients.
