
The Six Five In the Booth with Nokia’s Daniel Derksen at Mobile World Congress 2022

Six Five hosts Daniel Newman and Patrick Moorhead are joined by Nokia’s Daniel Derksen, Director of Regional PLM EMEA, to talk about the challenges service providers are facing today and how Nokia is addressing those.

Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Watch the episode here:

Listen to the episode on your favorite streaming platform:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Patrick Moorhead: Hi, this is Pat Moorhead with Moor Insights & Strategy. We are here for a Six Five Podcast in the Nokia booth at Mobile World Congress 2022 in Barcelona, Spain. It’s great to be here. We have Daniel, our co-host from Futurum Research, and we have a double Daniel today from Nokia. How are you doing?

Daniel Derksen: Thank you so much. Welcome. Welcome to our booth. Thank you for joining us. And yeah, we’re very good. We’re happy to be here in Barcelona again. Finally meeting people face to face. It’s a great change.

Patrick Moorhead: No, it is incredible. And this does actually feel like a real show. I went to CES in Las Vegas and not a whole lot of people went there, but I feel like this is really, really good.

Daniel Derksen: To be honest, I wasn’t entirely sure when we entered, but the traffic has been really good, feedback has been good. And it’s great to really see customers again, face to face. Obviously, we’ve all been doing Zoom and Teams meetings for forever and a day, it feels like. So, yeah. Great to be meeting people face to face again and talk about all the things happening in our industry.

Daniel Newman: For sure.

Patrick Moorhead: And as we headed back to the Nokia booth here, we’re in one of the demo areas, but this is a mega complex in itself back in hall three. And for anyone out there who hasn’t been to Mobile World, hall three is the big hall where the biggest companies have the biggest booths. And Nokia has maybe one of, if not the biggest booth in all of Mobile World Congress. There’s a lot of legacy and pedigree that the company has in the mobile space. I know my first cell phone was a Nokia. And I know you guys do a lot more than that now, and we’re going to talk about that.

Daniel Newman: Same for me.

Patrick Moorhead: So Daniel, let’s start out by talking a little bit about the challenges that service providers face today.

Daniel Newman: Absolutely.

Patrick Moorhead: What are you seeing? What are the big ones?

Daniel Derksen: So I’m sure you must have noticed that this whole 5G theme is big at this event. You may have noticed a few flyers here or there. So if I look at it from an IP infrastructure perspective, the move to 5G obviously is a very exciting thing. There’s a lot of change happening from a service offering perspective to end customers. It’s a great thing: new consumer offerings, as well as new innovative enterprise offerings. But from an infrastructure perspective, it’s a shift. We’re shifting from what used to be appliances, to virtualized, to now cloud native infrastructures. And that has implications on the network, like how the network is supposed to respond to these workloads.

What it means the network needs to do in order to get the benefits of moving to cloud native. And I think that’s one of the big themes, at least in this area, that we’ve been discussing over the last few days.

Patrick Moorhead: Yeah. It has been phenomenal to see the transition. I would say it started in the core, and maybe we saw it in the RAN, and now we’re starting to see it in all aspects of the network, to be able to, I’ll say, give the full benefits of 5G. Because it’s really just the beginning here, with smartphones. And IoT comes in, Massive MIMO, just great, great type of stuff. But the question that I have, though: cloud native has its challenges for service providers. And I’d love to hear your point of view on that, given how super, super closely you work with them.

Daniel Derksen: Absolutely. So I think with the move to cloud native, there are a few different angles to it. First of all, there’s a knowledge angle. There’s a bunch of new tools, new technologies that people need to master and adopt in order to get the benefits and the ability to actually operate this the way it’s intended to be operated. That’s the first thing. If I look at it from an infrastructure perspective, the move to cloud native, what it really means is we’re starting to chop up applications into smaller components, into microservices that are delivered in containers. And why are we doing that? It’s because we can independently scale components of the solution. You need a little bit more of function A, B or C, you spin up a few more pods to cater for that. It gives you more flexibility and basically gets elasticity in the solution, rather than let’s throw another big appliance at the problem. So that’s the whole reason why we’re doing this, which is a great thing. But of course, if you have elasticity at the application level, we need the network to follow.

Because if spinning up a bunch of pods to do more database transactions, or whatever the service function may be, isn’t connected to the network, it’s not going to be overly useful. And this is one of the main topics that we’re addressing from our perspective, with the solution that we’re introducing here called adaptive cloud networking, which is really bridging the gap between dynamicity at the application layer and the network.
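For readers who want to see what the elasticity Derksen describes looks like in practice, here is a minimal sketch using the official Kubernetes Python client, assuming a standard cluster and kubeconfig; the deployment name and namespace are illustrative placeholders, not Nokia components.

```python
# Minimal sketch: scaling one microservice independently with the official
# Kubernetes Python client. Names ("function-a", namespace "demo") are
# illustrative placeholders, not Nokia-specific components.
from kubernetes import client, config

def scale_function(deployment: str, namespace: str, replicas: int) -> None:
    """Ask Kubernetes to run `replicas` pods of a single microservice."""
    config.load_kube_config()          # use local kubeconfig credentials
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Need a little more of "function A"? Spin up a few more pods of just
    # that component instead of adding another appliance.
    scale_function("function-a", "demo", replicas=5)
```

The point of the example is the granularity: only the component under load grows, which is exactly the elasticity that the network then has to keep up with.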

Daniel Newman: It is really interesting how we are seeing cloud scale take over the conversation here in the mobile business. The ability to have the network operators think like public cloud hyperscalers, and the relationships, by the way: we’re starting to see more and more relationships between the service providers and carriers, the legacy infrastructure companies, and the public cloud providers. It’s actually really exciting because of the scale it’s going to create. And that leads me to what I want to ask you. Nokia, as you talked about in the beginning, has a ton of pedigree, a ton of legacy. How are you guys helping with this?

Daniel Derksen: So again, from an infrastructure perspective, if I look at the main problems that our customers are facing in adopting this infrastructure, it’s how to operationalize these cloud native infrastructures. We need to move them to Kubernetes infrastructure. We need to move them to data center infrastructures that are capable of dealing with the dynamicity that I was referring to. And from a solution perspective, we’re doing some work here on something we call adaptive cloud networking. It’s actually composed of a few different components. First, a data center solution for large scale data centers, like the more traditional telco cloud type of data centers, scaling from a few racks to many racks, depending of course on who you are and what kind of population you’re serving. On the other end of the spectrum, we are actually introducing solutions for edge. So the adoption of edge, or the move towards edge, is another theme that we’re seeing throughout the evolution of 5G now. And it’s creating a new set of problems. The way I like to talk about it is: when you have a central DC and you have a few others.

If you operate those independently from the network and you have some kind of stitching to make these things work together, with a few instances it’s okay. You have one operational team here, another operational team there, you make them talk to each other to do this interconnect every now and then, fine. But once we start moving to much larger scale distributed data centers, and you don’t have five or 10 data centers, but you have 50 or 100 or 200, at that point you cannot operate in silos anymore. We need to have a mechanism that basically glues these things together. And that’s basically what we’re doing with adaptive cloud networking.

Patrick Moorhead: So the edge is hot. Now, everybody has a different definition of edge, depending on what vector you’re coming in from. But for the service providers and for Nokia, I’m curious, what are some of the drivers of the edge use cases and business cases that you’re looking at for the service providers right now?

Daniel Derksen: So there’s a few things driving that edge discussion. There is a scale element to it. So obviously, when you start scaling up the centralized infrastructures, there’s a desire to, instead of building huge infrastructures, have a limited set of distribution. Whether you call that edge is another topic, because you might as well call it regional, or metro. I mean, if you talk edge in MEC-type terminology, we’re typically talking small. We’re typically talking about half a rack, or maybe a rack, or maybe two racks type of data centers. So that’s a different type of edge. If you look at what’s driving that, it is Cloud RAN, which needs to sit somewhere. It’s low latency applications that obviously need to sit somewhere. So if you want to take the benefit of low latency applications or low latency services, whatever the customer is trying to reach needs to be relatively close to the customer. Otherwise, terminating the connectivity close to the customer but having the content sit far away is really not going to help the overall service.

Patrick Moorhead: Yeah.

Daniel Newman: I mean, you hit on this: there are varying levels of granularity of what an edge is. But there is all this distribution, edge to cloud is maybe what we call it now. And you’ve got the big data center, the smaller, the tiny, the true edge. But these things need to be functionally operating harmoniously.

Patrick Moorhead: Absolutely.

Daniel Newman: Are you seeing a lot of growing use cases and interdependence in this whole edge to cloud scenario?

Daniel Derksen: There most definitely is. I mean, the first challenge when you start thinking about this kind of edge and distribution is, “How do I operate that piece? How do I operationalize it? Do I control it centrally, or do I do something in a more distributed fashion?” And if you look at what people are deploying at the edge, it’s typically container type infrastructure. You’re deploying your controllers, your Kubernetes controllers, in a distributed fashion. From a networking perspective, we’re convinced that that networking, the glue that is tying the application to the network, needs to be distributed too. It cannot be that you have a bunch of remote locations that have a dependency on a central component; if that component isn’t there, suddenly you’re in trouble in terms of deploying new workloads in this decentralized fashion. So we’re introducing a solution called ENC, Edge Network Controller, which is basically an extension of Kubernetes.

We’re providing a controller that is exposing the network, the network capabilities, as true native Kubernetes constructs, something called CRDs, custom resource definitions. So the way that the customer can consume the network is exactly the same as the way that they’re consuming compute.

So it obviously converges things from an operational perspective. That’s already a good thing. The other thing is, by leveraging Kubernetes and some of the constructs that it provides for our application, it becomes a very small footprint application. And again, that’s super important for that edge location. If we’re talking about edges that literally have a handful of computes, or half a rack, you don’t want to come with a solution to manage that infrastructure that is bigger than the workload. It makes no sense. So having this kind of lightweight approach to extending Kubernetes with ENC, I think, is a very good value prop to address some of the challenges in that space.
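To make the CRD idea concrete, here is a minimal sketch of the general pattern: network connectivity requested through the same declarative Kubernetes API that deploys compute. The resource group and kind below ("networking.example.com", "FabricAttachment") are hypothetical placeholders for illustration only, not Nokia ENC’s actual resource definitions.

```python
# Minimal sketch of the pattern described above: once network capabilities are
# exposed as Kubernetes custom resources (CRDs), workloads request connectivity
# the same way they request compute. All names below are hypothetical.
from kubernetes import client, config

def request_network_attachment(namespace: str = "demo") -> dict:
    config.load_kube_config()
    api = client.CustomObjectsApi()
    attachment = {
        "apiVersion": "networking.example.com/v1alpha1",
        "kind": "FabricAttachment",
        "metadata": {"name": "function-a-attachment"},
        "spec": {
            "workloadSelector": {"app": "function-a"},  # which pods to connect
            "vlan": 100,                                 # fabric-side parameters
            "qosProfile": "low-latency",
        },
    }
    # The same declarative API surface that deploys pods now provisions the
    # network attachment for them.
    return api.create_namespaced_custom_object(
        group="networking.example.com",
        version="v1alpha1",
        namespace=namespace,
        plural="fabricattachments",
        body=attachment,
    )
```

Because the controller consuming such resources is just another Kubernetes extension, it can stay lightweight enough for a half-rack edge site, which is the operational point being made here.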

Patrick Moorhead: So I want to wrap this segment by getting very micro here. I’m going to put you on the spot. There are a lot of technology companies talking about the cloudification of service providers, the edge, and the core network. What makes adaptive cloud networking unique in the marketplace, different from your competitors?

Daniel Derksen: So, first of all, I think… I mean, we talked about centralized cloud. We talked about distributed cloud, edge. We need to connect these things together too. Just having a wonderful way of automating one piece and the other piece, but nothing in the middle, makes no sense. So from an adaptive cloud networking perspective, we’re also covering the wide area network and automating the connectivity from edge to cloud, the whole chain. And I think that’s one of the differentiators from an overall Nokia perspective. We have solutions catering to all these pieces that we ultimately need to make work together to have this end-to-end horizontal plane, in which you can dynamically deploy workloads at the location in the network that makes sense. If I zoom in a little bit more on the actual offering: from a data center solution perspective, we’ve actually introduced a new fabric solution with our SR Linux data center fabric. It’s a set of devices. It’s a network operating system that we built from the ground up, which is, to be honest, an interesting topic by itself.

It’s a clean sheet approach to building a network operating system, fully based around model driven infrastructure. And why does it matter? We’re talking in this context about automation; it’s all about automation. But if you want to automate something, you need machine to machine interfaces to interact with it. And if you look traditionally at the way that we’ve been building IP equipment, CLI, SNMP, those things were not there for machine to machine. So with SR Linux, we have a fully model driven infrastructure. And that means that the interaction, or the interfaces with the system, can be described by models. And the good part about that is you can generate code based on those models. So machine to machine interaction becomes a lot easier and more robust. Actually, that’s a very important element. So that’s SR Linux. Then from a management perspective, we have an intent based management system for the centralized fabric called FSS. I think the whole idea of dealing with the data center in a more abstract sense, rather than a device here and a device there, is something that will be important from a manageability perspective.
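As a rough illustration of what model driven, machine to machine access looks like versus screen-scraping a CLI, here is a small sketch using the open-source pygnmi client against a gNMI-enabled network OS. The address, port, credentials, and YANG path are assumptions for illustration, not a verified SR Linux configuration.

```python
# Sketch of model driven, machine to machine access over gNMI, assuming a
# device with gNMI enabled. Address, credentials, and path are placeholders.
from pygnmi.client import gNMIclient

TARGET = ("192.0.2.1", 57400)   # hypothetical management address and gNMI port

with gNMIclient(target=TARGET, username="admin", password="admin",
                skip_verify=True) as gc:
    # The device advertises the YANG models it implements, so client code
    # (or code generators) can work from the models rather than parsing text.
    capabilities = gc.capabilities()

    # Reads come back as structured data keyed by model paths, which is what
    # makes automation easier and more robust than CLI/SNMP scraping.
    stats = gc.get(path=["/interface[name=ethernet-1/1]/statistics"],
                   encoding="json_ietf")
    print(stats)
```

The design point is that both the request and the reply are structured data derived from the device’s models, so they can be generated and consumed by machines end to end.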

And from an operational excellence perspective. So that’s on the central DC. For edge, as I just talked about with ENC, we are introducing something unique. I haven’t seen any… Actually, I was talking to one customer just earlier today, and he said, “Well, your approach to extending Kubernetes for this use case, I have not seen anywhere. And I’m really excited about it.” So we’ll have some interesting follow up as we go along. But I think if you look at that space, the way that we’re dealing with the lightweight nature of the controller, really operationalizing edge, is a key differentiator for us. Absolutely.

Daniel Newman: You guys are definitely bringing some exciting innovation. I love that you mentioned the automation and AI; that has to be a part of any story right now. Just with the scale and volumes of data and the interactions from machine to machine, companies have to be automating more tasks if they’re going to be able to scale.

Daniel Derksen: Absolutely. I think it’s fundamental, and this is the foundation that we’re putting in place. For AI and ML to really thrive, you need that foundation to be solid. You need telemetry for everything. You can’t make decisions in a machine, by a machine, in a machine learning fashion, if you don’t have data to act upon. So the starting point is to have that foundation, have those machine to machine interfaces. And then you can start building on all of the good things that AI and ML are hopefully going to bring to us as an industry.

Daniel Newman: Well, it’s very exciting, Daniel. I want to thank you for spending some time here with us at Mobile World Congress 2022 for this Six Five In the Booth episode. Pat, this is going to be a wrap for us, by the way. And then we’re out of here. Our last scene.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

