The Six Five On the Road: Distributed Infrastructure, AI, and How IBM’s Vision of the Future of Computing Extends to Edge

On this episode of The Six Five – On The Road, hosts Daniel Newman and Patrick Moorhead had the opportunity to sit down with key executives across IBM to talk about their full-stack infrastructure and the future of computing.

In this interview segment, Daniel and Patrick were joined by Nicholas (Nick) Fuller, VP, Distributed Cloud at IBM Research, to explore distributed infrastructure, AI, and how IBM’s vision of the future of computing extends to Edge.

Watch their other IBM conversation segments:

IBM’s semiconductor vision and ecosystem with Mukesh Khare, VP Hybrid Cloud at IBM Research

The benefits of fundamental science and technology innovation with IBM’s Ross Mauri, GM IBM Z and LinuxONE

How IBM’s Cloud fits into their full stack and impacts the future of computing with Hillery Hunter, GM, Cloud Industry Platforms & Solutions, CTO IBM Cloud, and IBM Fellow

How Quantum Computing is shaping the future of IT with Jay Gambetta, IBM Fellow & VP Quantum Computing at IBM Research

Watch the full episode: IBM’s Full Stack Approach to the Future of Computing

To learn more about IBM Research, check out their website.

Watch our interview here and be sure to subscribe to The Six Five Webcast so you never miss an episode.


Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.


Patrick Moorhead: That’s exactly right. And with that tee-up, thank you very much, we’re going to talk to Nick Fuller, who runs research for the distributed cloud. Talk about how AI basically permeates everything outside of the data center. So super exciting. Stick with us here, folks, this is going to be great.

Nick. It’s great to see you. Thanks so much for coming on the Six Five, first time.

Nick Fuller: Thank you, and good to be here, Patrick.

Patrick Moorhead: Absolutely. Yeah, I appreciate, and we’re continuing this conversation talking about, first of all, holistically in the market what clients are looking for in terms of the future of computing, where hybrid cloud comes into that. And hey, we’re really lucky today because you run research for distributed cloud, AI, data, everything. So thank you so much.

Nick Fuller: It’s great to be here with both of you. Thank you. Thanks for having me.

Patrick Moorhead: Yeah.

Daniel Newman: You didn’t use the E word, the edge. But yeah, Nick, reading your bio, first timer on the Six Five, super excited to have you here. You’re covering a lot of ground, and I think as we tell this future of compute story, you really can’t tell that without talking a lot about what’s going on in the distributed cloud, and like I said, the edge. Big story, big opportunity. We’re all hearing the numbers about the proliferation and exponential growth in volumes of data, and enterprises around the world, and every other organization that uses compute, are focused on the opportunity at the edge. So I’d love to get your take on how IBM perceives the edge, the opportunities, the challenges, and what you’re working on in this particular space.

Nick Fuller: Absolutely, Daniel. So when you look at edge, there’s a key vector that plays a role here, and that vector is the increased disaggregation and decentralization that’s happening from an infrastructure point of view. When the cloud emerged, going back to 2006, customers would move their workloads from their data centers to cloud, and those cloud data centers would be in specific locations. As time went by, you had the emergence of more providers like Ridge and Cloudflare. Compute got better, there was the advancement in AI with GPUs, accelerators, and so on, and customers needed to solve problems at the edge in addition to moving their workloads to cloud.

So that vector of disaggregation is a key one from an edge standpoint. Additionally for us, when you think through edge, because of that strong rise in hyperscalers there’s this notion of edge-in versus cloud-out. What does that really mean? Cloud-out simply means you take your cloud stack architecture and you run that stack on a customer’s edge, whether that be a retail appliance, a manufacturing floor, a warehouse, what have you. Whereas edge-in lends itself to being cloud agnostic, but also, really critically, elevates the data plane to a first-class citizen. From a cloud-out point of view, the control plane dominates and data is not really first class. We see that as a key differentiation for us as a company.

Patrick Moorhead: So the industry has had a lot of changes over the last 30 years. And one of the biggest ones, interestingly enough, is the move from the on-prem data center to the public cloud, even though that’s maybe only 25% of the data. And there were a ton of lessons learned there, right? It’s been 10 years; maybe it’s a glass-half-full situation, where you would have expected more to be there if it were so easy, but there are some things keeping that from happening. But IT has learned a lot. What are some of the lessons going from the on-prem data center and the colo data center to the edge that you’re seeing? Good lessons of how to do it, and maybe some things to stay away from.

Nick Fuller: Yeah, fantastic question. And when you go back to that original trend that you highlight, data privacy obviously, and compliance and regulatory issues as they relate to different geographies, all played a role. And then the value prop that many companies thought they would be gaining by moving their workloads to cloud, namely to enhance developer opportunities, to grow new business, and certainly to reduce technical debt, some of that didn’t actually pan out. But really, what still matters at the end of the day to an enterprise, to a CISO, to a CIO, to a CTO? The same factors rear their beautiful heads: security, the software delivery life cycle, and overall manageability of that portfolio. These things continue to be relevant from an edge standpoint. And we see that to be true as we look at the various enterprise clients with whom we work as it relates to the innovation we’re building for edge computing.

Patrick Moorhead: Interesting.

Daniel Newman: So IBM has clearly taken an all-in approach on a platform, basically a common platform for your distributed infrastructure. That was a mouthful, but I got it. How does this provide an advantage? Because this is one of the things I think a lot about, and I do have an answer for it, but I’m going to let Nick answer this. Talk about why you’ve gone down that route, the advantage it creates, and the challenges companies face as they try to move in this direction if they don’t use that common platform.

Nick Fuller: Yeah, fantastic. So the platform weighs into what architecture you ultimately are adopting from an enterprise standpoint as you go on that journey. At the heart of this all, you’re trying to solve some sort of challenge that grows your business. Whether that be from a savings standpoint, from a revenue standpoint, that’s really what an enterprise is aiming to address. Our platform, based on OpenShift and various extensions of that as it relates to footprint and location, so single node OpenShift, MicroShift, et cetera, running on a range of infrastructure, gives customers that flexibility as it relates to running their workloads on the edge to solve problems in quality control, for example, retail ordering, you might have seen IBM’s recent acquisition in the quick service restaurant space with McDonald’s.

All of these are critical. These types of workloads, whether it be natural language processing for order taking in quick service restaurants, visual inspection for quality control in manufacturing, or a range of other applications, when anchored on that platform, the various versions of Red Hat’s platform that I mentioned, MicroShift, single node OpenShift, and full-blown OpenShift, give you an architecture with data and AI capabilities that we’re building for what? Scalability. That ultimately becomes the challenge that you face moving from a proof of concept to getting into full-blown production.

Patrick Moorhead: So we talked a little bit in the green room about, this edge thing is new. Well, okay, well, we’ve had compute on the edge for a long time. What changed? And I think we can agree that first of all, there’s a whole lot more data being electronically captured on the edge versus maybe paper tallying, doing cycle counts. It’s automatic when somebody takes a loaf of bread off the shelf at a grocery store. And we finally have enough compute power and we have machine learning algorithms to run against it that are very efficient. But listen, we’ve talked a lot about the infrastructure, but I think I’d really love to hear about the data. And I want to hear about how IBM is leveraging AI, machine learning, and even let’s say a distributed data fabric to make all of this easier and more effective.

Nick Fuller: Fantastic question. And I touched on this briefly a second ago, leading into this question. When you solve a problem initially and you demonstrate feasibility, it gives you an idea of how practical that can be as far as addressing that issue, usually with AI and machine learning, whether that be NLP, whether that be computer vision, what have you. When it comes to going from that proof of concept to running that model in multiple places, running many more models scaled by orders of magnitude in a variety of locations, that cannot be done with the same type of infrastructure. The anchoring platform helps, but you need an architecture that’s scalable. And with that scalability in the architecture, you’re able to leverage that first vector we touched on, the disaggregation of infrastructure. And that choice is up to the client, right? Whomever you’ve chosen as your hyperscaler provider, that’s your choice, and that’s where you’ll do your model training.

That model will then be served at a particular location, a retail branch, a warehouse, a manufacturing plant, et cetera, for some type of issue to address. And let’s take the visual inspection one with manufacturing. You solve that issue, but then as you build more models, are you really going to take the point you made with all the data being generated at those locations back to the cloud? It’s not practical. You need an architecture that allows you to take some of that back to the cloud. And what we do here is build a range of data and AI capabilities that are platform centric. So for example, imagine you’ve generated a ton of data from various locations, and now you need to retrain the model because there’s no supervision there.

When you pull up, maybe you still do, I don’t, maybe none of us do any more for that matter, and you order something at McDonald’s and you say, “I want a large fry,” and they got it wrong, well, there’s a way for the model to be supervised there. But if there’s a shift on the manufacturing floor, no one’s supervising that. You need a way to ultimately determine if that model has drifted. You need a way to determine if you can take that set of data that has been generated and only take a sample of it, because the images are fairly similar. So we use AI and machine learning to cluster data, so we figure out what goes back to the cloud, what stays on the manufacturing floor. We infer whether the model has drifted or not using a variety of techniques. And that helps with not only the onboarding of new models, but the ability to scale that infrastructure from plant to plant to plant.
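As a rough illustration of the two ideas Fuller describes, clustering edge data to decide what goes back to the cloud and inferring whether a model has drifted without supervision, here is a minimal sketch in plain NumPy. The function names, thresholds, and the simple mean-shift drift check are illustrative assumptions, not IBM’s actual implementation, which Fuller notes uses a variety of techniques:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Tiny k-means: cluster feature vectors into k groups."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center.
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old center if cluster is empty
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

def select_for_upload(features, k=3):
    """Cluster edge images (as feature vectors) and keep one representative
    per cluster -- near-duplicate images stay on the plant floor."""
    labels, centers = kmeans(features, k)
    reps = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue
        dist = np.linalg.norm(features[idx] - centers[j], axis=1)
        reps.append(int(idx[np.argmin(dist)]))  # closest point to the center
    return sorted(reps)

def has_drifted(train_feats, live_feats, threshold=2.0):
    """Flag drift when the live data's mean shifts too far from the
    training distribution, measured in training standard deviations."""
    mu = train_feats.mean(axis=0)
    sigma = train_feats.std(axis=0) + 1e-9
    shift = np.abs(live_feats.mean(axis=0) - mu) / sigma
    return bool(shift.max() > threshold)

# Example: 200 "in-distribution" feature vectors from the plant floor.
rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (200, 4))
keep = select_for_upload(train, k=3)  # indices of images worth uploading
```

In practice the feature vectors would come from a vision model’s embedding layer rather than raw pixels, and the drift test would likely be a proper distribution-distance measure, but the flow is the same: sample representatives per cluster for cloud retraining, and trigger retraining only when the drift signal fires.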

Daniel Newman: Yeah, Nick, no question there’s a ton of opportunities and challenges being presented, and federating certainly can help solve some of them. I think about the examples that you give, and we certainly have the opportunity now, with all the data at our disposal, to keep getting better, to keep getting sharper, and to improve all the different types of edges: the retail edge, the bread on the shelf, the next generation of shopping certainly, and of course the factories of the future. The edge, the opportunity there, is significant. But the challenges created by distributed architecture are still palpable, still significant.

Nick Fuller: Absolutely.

Daniel Newman: And something that we expect and we’ll be watching as analysts for you and your team to continue to evolve and innovate upon. And Nick, we’ll look forward to having you back to talk more about all the things you’re doing at the edge, distributed, cloud, and more as part of our future of computing story.

Nick Fuller: Awesome. Thank you, Dan.

Patrick Moorhead: Thanks Nick, appreciate it.

Nick Fuller: Thank you, Patrick. Appreciate it. Pleasure.

Patrick Moorhead: So I really liked that conversation with Nick. Not only did it show the power of the edge, but also some things to think about in practical terms if you’re looking at moving a lot of your applications and moving a lot of data around. Data’s going everywhere, there are some things that enterprises really have to think about.

Daniel Newman: Yeah, we’ve spent the last few years doing a lot of research and spending a lot of time discussing the edge and what this looks like: the rapid proliferation of data, the exponential volumes of data that companies, enterprises, and organizations can benefit from. And it also creates immense challenges. The more you move around, the more “edges” you create, and that requires more thoughtfulness in how you build out your architecture and create that common platform that we talked to Nick about, because that edge is only going to get bigger. While data centers at some point have only so much physical footprint, and, by the way, maybe they even get smaller, the edges are going to be more prevalent, more voluminous, and that’s going to create a lot of challenges.

Patrick Moorhead: Yeah, I get the question a lot, Daniel, which is, what changed? We’ve had compute on the edge for 40 years. And the way that I like to explain it is that you have more machine data now. You have sensors that are 50 cents and you have cameras that are taking pictures of things going on, and not even necessarily for security, but things like inspection on an assembly line. “Is this part good?” And that massive amount of data that’s being computed can’t all be shipped up to the data center or the cloud to be worked on. It has to be done in a much more intelligent fashion, like you said.

Daniel Newman: And there are some organizations that probably wouldn’t mind if all that data went up to the cloud.

Patrick Moorhead: Exactly.

Daniel Newman: But it’s not sensible. And going back to the example you used with Nick about the loaf of bread, it’s not a sensor necessarily on the loaf of bread. It’s computer vision and it’s that computer vision taking snaps in real time over and over of a retail environment and everything that’s happening. And you need that algorithm to be able to process to say, “Hey, that loaf of bread went off the shelf. What does that mean for restocking? What does that mean for revenue turnover? What does that mean for our margins? What does that mean?” So this is both the opportunity and the challenge of the edge. But I love it, because basically it’s what also brings our world to life. It’ll be the creator of the metaverse. It’ll be the creator of the next generation of customer experiences. And of course it will be an opportunity for so many enterprises to do more and be more successful.

Patrick Moorhead: Yeah. So I feel like key message here is, listen, you have an architecture for your on-prem data center. You have your architecture for cloud. You need an architecture for the distributed edge and an architecture that ties all those together from a data perspective. So I think it’s a good way to end this here.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

