A Look at the Role of IBM Z in Digital Transformation – Futurum Tech Webcast Interview Series

On this special episode of the Futurum Tech Webcast – Interview Series, I am joined by Ross Mauri, General Manager for IBM Z, for a conversation that focuses on the future of IBM Z, digital transformation and what’s ahead for the tech industry. This conversation is the second in a three-part series with IBM Z.

In our conversation we discussed the following:

  • A look at the Red Hat acquisition and how it is helping IBM advance the hybrid cloud journey
  • A detailed exploration of IBM Z’s perspective on agility in digital transformation
  • What the future holds for confidential computing
  • How the mainframe will continue to evolve to keep up with the cloud

It was a great conversation and one you don’t want to miss. Want to learn more about what IBM Z is doing in this space? Check out their website.

Also, be sure to check out the other episodes in the series:

The Magic In Optimizing App Modernization: Finding the Right Approach for Each Workload

A Dive Into IBM Z’s AI Value Proposition

Don’t forget to hit subscribe down below so you won’t miss any episode.

Watch my interview with Ross here:

Or listen to my interview with Ross on your favorite streaming platform here:

Don’t Miss An Episode – Subscribe Below:


Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Transcript:

Daniel Newman: Hey everybody. Daniel Newman, principal analyst at Futurum Research. And I’m back again for this third video in a series I got to do here with Ross Mauri, GM of the IBM Z business. Ross, it’s been a lot of fun to have these conversations with you. And like so many of these kinds of sit downs, I want to end by looking ahead.

Ross Mauri: Okay.

Daniel Newman: Gazing into the future, something I’m sure you are asked to do from time to time. So I want to start by weighing in on hybrid cloud. So in the world of mainframe, we’ve already sort of talked about, maybe even dispelled a little bit, the idea that there is some sort of mutual exclusivity between mainframe and cloud. We’ve hit that, but IBM’s trajectory and the focus of the business is heavily cloud related. It’s one of your big, major, key strategic tenets. The $34 billion acquisition of Red Hat is sometimes an unsung hero in terms of helping the company continue to proliferate its hybrid cloud journey. But the mainframe has a lot of relationship with the cloud and your business in particular. Talk a little bit about how that journey is evolving.

Ross Mauri: So I think the acquisition of Red Hat was obviously a big moment for us, and the OpenShift platform and tools like Ansible, I think, are game changers when it comes to hybrid cloud. Because you want to do things one way, and you want to do it in a somewhat architected and open way, so that you can move your workload to a mainframe. You can move it to a public cloud. You can move it to a private cloud. And whether you’re moving your workload or not, the applications that support your business processes can most easily connect and share the data they need, wherever they run. And it’s that architecture that really, for me, defines the hybrid cloud platform and gives our clients the value. And speaking of clients, I mean, our clients have connected systems to mainframes for many decades.

So it’s how you’re connecting them now, whether it’s on the same data center floor, IBM Z to a cluster, right? Or if it’s outside the data center to multiple other public clouds or other types of services. It’s that connection that’s the most important thing. And that’s the thing I think we’re making much easier. And again, in an architected way, applications can be developed once and they can run anywhere. The data fabric and how you connect to get data is there, it’s clear. How am I going to get to my data when I want to, so I’m not transferring too much data, unnecessary cost, but I’m getting to the pieces of data that I need to provide a super end user experience or make a decision somewhere out in the ether? So I think that hybrid cloud is the key.

I think that I just have to say one thing, our development teams and the Red Hat teams really hit it off, because there was always this thing, is the culture of Red Hat going to fit within big blue? And it was interesting just to watch our development teams connect and port products and create products together. So I was really happy about that. There was no culture divide. It’s about, we share a vision. We are into open standards and open source and we want to innovate. And so, the marriage so far has been really good. And I think again, from a client point of view, the feedback I’m getting is that there’s a great hope on the journey to cloud now that it can be done with thinking about where experiences need to occur, what’s the best fit for purpose for the platform and the infrastructure that it’s on to deliver that experience.

How do we connect them together in a coherent way? So that performance management and things like that, that matter if you’re running a big system can be done once and kind of across the network. So it’s been fun and I’m really excited about the future actually.
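The build-once, run-anywhere idea Ross describes can be sketched in miniature: one workload image, several deployment targets behind a common interface. This is a toy illustration of the pattern, not the OpenShift or Ansible API; the `Workload` and `Target` names and the image string are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    image: str  # container image, built once and reused everywhere

class Target:
    """A deployment target in a hybrid cloud: mainframe, private cloud, or public cloud."""
    def __init__(self, label: str):
        self.label = label
        self.running: list[str] = []

    def deploy(self, workload: Workload) -> str:
        # The same image runs unchanged on any target -- the point of an
        # open, architected platform (greatly simplified here).
        self.running.append(workload.image)
        return f"{workload.name} running on {self.label}"

app = Workload(name="payments", image="registry.example/payments:1.0")
targets = [Target("IBM Z on-prem"), Target("private cloud"), Target("public cloud")]
placements = [t.deploy(app) for t in targets]
```

In this sketch, placement becomes a policy decision rather than a porting exercise, which is the "fit for purpose" point made later in the conversation.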

Daniel Newman: Yeah. There’s lots to be encouraged by. And I like that you mentioned that there are these stronger relationships and interdependencies created between these different groups. I think we’ve started to understand it with cloud and prem in terms of data center and creating the hybrid cloud. We’ve somehow even started to understand it with cloud-to-cloud and prem with the multi-cloud architectures that are growing, and then you’ve got Edge, which of course is an exponential addition to complexity, and mainframe. Like I said, for whatever reason, it’s taken more time than it really should for people to understand that there is this strong interdependence that is going to exist. And by the way, a lot of the technologies that you’re building, and I’m going to come back to that in a minute, but I want to cue you up, give you some time to think about it.

Like Hyper Protect, like Data Fabric are really kind of these critical technologies that are going to make cloud and mainframe more accessible to each other. Something else you mentioned though, that I want to touch on is agility. So, a lot of what drives this cloud first philosophy, despite the fact that it’s not as frequent as we like to think that companies are all cloud, but there is a growing cloud first philosophy within enterprises. That if the cloud is suitable, the workload goes there and then they work the architecture around that. And that’s been about agility, but what about your perspective and Z’s perspective on agility? Just like other parts of services that we’ve talked about in our past videos, your philosophy on agility is similar, yeah?

Ross Mauri: I think so. And I mean, it depends what person is looking for agility. Developers, application developers are clearly looking for agility because they’re more productive, they can spin things up and spin them down. All the automation in a CI/CD pipeline. I mean, that’s the type of agility a developer’s looking for. And again, you can get that on today’s IBM Z. It’s just a matter of the software stack, and we have it available. There’s other types of agility. There’s operational agility. If you’re in banking and your regulator says that you have to demonstrate that you can do site swaps with your workloads from a resilience, cyber resilience point of view. Our systems are perfectly designed to have a running bank or banking workload transfer hundreds, thousands of miles away to another data center and pick up the work there without anyone that’s accessing it knowing. So there’s agility in resilience and high availability, right?

And then I think there’s also agility that people look for and they talk about it. They talk about consumption models. How do I pay for this IT resource? In the long, long past you had to buy it, install it, put it on your premises, and run it all yourself and operate it. The cloud gives the ability to really not own anything. To just go up and use services. So the agility there is in the consumption model: to me as a business, I don’t have to worry about laying out the capital and worry about depreciation and all that. I can just go and use it. What people don’t really realize, Daniel, is that if you want variable consumption, you can get that on IBM Z today for your software and your hardware. We have models that breathe in and breathe out along with your demand. So agility comes in many flavors. It depends who’s looking for it. I’m trying to address agility on all dimensions to all personas that matter.

Daniel Newman: Absolutely. And there is no one size fits all. You talk about consumption and different accounting models. I mean that actually, every company has its own unique perspective. Some companies like depreciation, you know that? But…

Ross Mauri: Absolutely.

Daniel Newman: Not always a bad thing, but something else I want to talk about, we did some very interesting research over the past year. And we’ve noticed there’s an uptick in adoption and interest in investing in confidential computing. IBM is one of the companies involved in the Confidential Computing Consortium. One of the leaders, something that I’ve heard you talk about many times, but I’d love to hear a little bit more about where that’s at. I sometimes think it gets a little bit lost in the background of all the different technologies, but I think this is something that needs to be put out constantly in the light. Our research showed this solves a lot of problems. What do you see in there?

Ross Mauri: Well, if you look at the cyber threat and cyber statistics that are going on, let’s just take breaches. Last time I looked, 65 to 70% of all data breaches are caused by some type of insider threat. A compromised human, compromised credentials, right? That’s real and we all see the data breaches that are going on around the world. Well, in looking at that, in looking at the technology for security that we have built up over decades within IBM Z, and we constantly enhance it every year, every generation it’s enhanced. But we looked at that and said, “I bet we could make an enclave that if a client put their application and data in, they would sign the application so that we would know that the application was not tampered with anywhere in the supply chain, getting it from the client into this secure enclave. Data the same way.”

And once it’s locked down with the data always encrypted, we could make it so that the only person that could access that is the client with the key, with the right cryptography key. And that was very appealing. We’re actually on our fourth secure enclave generation. That’s out there in the market today. So we’ve been doing this for a while. And again, we did it so that, you’ve got a Java jar, and you want to run it in a secure environment. You literally can pick it up off of wherever it is today. Drop it into one of our secure containers and it will run. Only it will now run with complete security so that no one, whether it’s an on-prem instance of this. So no one within my company who’s compromised could get at it.

Or if it’s in the IBM public cloud where we’ve also surfaced the Hyper Protect services. No one in SRE, no one with system admin privilege, nowhere. We can’t access that data because again, you have the crypto key. So again, we took our knowledge of security that we developed with banking, again, over a long time and said, “Let’s bring it to the masses.” So now, whatever industry you’re in, if you care about your data and you really want to protect it, confidential computing is the thing for you. And again, it’s easy to use and easy to access.
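The tamper check Ross describes, signing an application so the enclave can verify it arrived unmodified through the supply chain, can be illustrated with a minimal sketch. IBM’s actual secure enclaves use asymmetric signatures and hardware attestation; this simplified version uses only a standard-library HMAC, and the function names and jar bytes are hypothetical stand-ins.

```python
import hashlib
import hmac
import os

def sign_artifact(artifact: bytes, key: bytes) -> str:
    """Produce a signature the enclave can check before running the artifact."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def enclave_admit(artifact: bytes, signature: str, key: bytes) -> bool:
    """Admit the artifact only if it is byte-for-byte what the client signed."""
    expected = sign_artifact(artifact, key)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)

key = os.urandom(32)          # held by the client, never by operators or admins
jar = b"\xca\xfe\xba\xbe..."  # stand-in for the client's Java jar
sig = sign_artifact(jar, key)

untampered_ok = enclave_admit(jar, sig, key)             # True: runs
tampered_ok = enclave_admit(jar + b"!", sig, key)        # False: rejected
```

The same principle applies to the data side: because only the client holds the key, neither a compromised insider nor a cloud operator can forge an artifact the enclave would accept.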

Daniel Newman: Yeah. We’re seeing a lot of momentum. There’s a ton of visibility out there. Breaches get a lot of attention. When you have personal data, PII data, and you’re really trying to make sure that you’re following the letter of the law, and that law is evolving constantly, that you’re well set up. And we know because some of the breaches have been in very sensitive places, whether it’s been government breaches, whether it’s been credit bureaus or credit… And what we feel is, as a society, is a huge lack of trust in those companies, and it’s been shown time and time again. They never recover. They never fully recover that trust that gets lost. And now there’s technology out there that can actually enable companies to, let’s say, without perfection, greatly reduce their risk to near zero, like five nines.

You’d almost have to wonder what company wouldn’t want to seriously investigate, if not start to move at least that most sensitive data into those environments. And it also brings up a point that you and I spoke about in the second video, but I think is worth repeating. And that’s the difference between technical assurance and operational assurance. Because most companies, when you’re signing an agreement will say, “We will be sure we don’t look at your data,” right? You pointed this out, I’m just reiterating it for the crowd in case they didn’t watch the second video. But technical assurance is saying you can’t.

So when you were mentioning that example, you talked about who holds the key? Well, there’s a key on both sides, like full encryption. Is that like, why do people like fully encrypted chat? Well, because even if the big tech company was asked to release it, they could only give the encrypted data. You would have to then consent as the user. So talk a little bit about just how important that is and how much that’s really driving customers. And are you seeing that uptick in demand?

Ross Mauri: Well, I think as privacy and security regulations grow, whether the company was on that point or not, now they kind of have to be. And seeing some of the fines levied in the last 12 months alone for GDPR breaches in the EU, right? So huge, huge fines. So what I think is going on, though, is that it’s not just the big companies that again can afford the technology and should be going after locking it down and leveraging confidential computing. It’s got to be for everybody, which is why, again, we surfaced the Hyper Protect services, so that any size company, even a startup, could access that level of security. And the startups are flocking to us. They’re blockchain based, they’re digital ledger companies. They’re early stage startups. We’ve also got some fully funded, up and running, I would say, young companies, and we’ve got some mature companies coming on. Because again, I want to get to this confidential computing, I won’t say dream, but this paradigm where I don’t have to worry about that anymore, I can worry about other things.

And it’s an easy way to do it. It’s actually literally easy to use the Hyper Protect services in the IBM public cloud. So I think that companies in all industries, as they become more aware of their data and what a breach could mean to them. Not just financial penalty, but also reputational damage, which could probably go far beyond that. They’re going to move more and more to confidential computing. We’re on our fourth generation, as I said, we’ve been on this a long time, but again, between my team and IBM Research, we’re always looking ahead and trying to anticipate the problems. And this is one we anticipated.

Daniel Newman: Absolutely. I think it was well designed and we have to always acknowledge the black hats. They’re very good and every time you move the chess board, they’re moving the chess board too. So having this level of sophistication, shouldn’t be understated. In fact, it should be reiterated. And if you’re a CIO or CSO or a CTO, you really have to say, if you know the technology is available that could protect you, then it really becomes not a question of if, but when and how. And so, getting a little bit more emphasis there.

Another thing you said a few times throughout our conversations, both these videos and just discussions that we’ve had is the cloud is not so much a destination. It’s an architecture. And so as we sort of wrap up the series, talking about the future, one of the things I think a lot of people wonder is how does the mainframe continue to be modernized in such a way that it keeps up with the cloud? And I think there’s kind of a technical answer to this question. And I would suggest there’s also an operational and almost a business answer to this question. So I’d love to have you hit it on all fronts, if you could. A great way to finish here.

Ross Mauri: Sure. So to think that the mainframe isn’t as modern as the cloud today just means you haven’t peeled back the onion and looked at the layers. We start at the hardware layer. We’re absolutely leading edge, state of the art. From security, HSMs to the microprocessors and the technology and the design techniques we use, all the way through the hardware, right? Absolutely state of the art, but what is really important then is the software that brings this hardware to life. It’s the personality. It’s what allows you to connect. It’s what allows you to run your applications. And the software, some of it’s been around for a long time. I mean, I started at IBM full-time in 1980.

And I think some of my code is still running within the z/OS. And you might say, “Well, that’s really old, isn’t it?” I’d say, no. We’re constantly evolving all this software so that it applies to modern desires, wants, and needs, modern paradigms.

And again, as I said before, more open standards based access. So we move the software fast. We’ve done so much, especially with Apache Ignite and so many things of these past five years that I don’t think that anybody that really looks under the covers would say, “Oh, no. This is not… ” This is a modern platform, but it does some things that Windows can’t do and Linux can’t do on its own. So we’ve got some things built in and that’s really about scale, high availability, and resilience. And again, that’s some of the things that we’ve learned over the decades that we build into our packages.

So I think we’re modern, and I always challenge clients when I meet with them, “Send me your CTO, send me some of your architects. Let’s have them sit down. We’ll sit down for a day or a week, however long you want to sit down for, and we’ll go through things. And when we’re done, then you can give us a rating on, are we investing in the right areas?” And when they take me up on that, actually it’s the best when they take me up on that. And they send a couple people that have not been involved with the mainframe, because when they walk away, their jaws are on the floor. They’re like, “We get it now. We’re ready to use this technology.”

Daniel Newman: Yeah. To use a sports analogy, you’re batting a thousand when you get them in the boardroom. You just need that chance to actually tell them. And another thing that probably is at least worth pointing out, as I suggested, is that business narrative. There’s this zero sum game philosophy, right? That as cloud grows, mainframe shrinks. And proportionally speaking, I think you gave me a number, something like three and a half times growth over this past decade or so of cloud. And maybe you could say cloud might be growing faster, but I just really hate that zero sum mentality, because, truth be told, whether it’s chips, whether it’s cloud workloads, whether it’s storage, all these things are growing.

These TAMs are consistently growing as the demand for data, as the demand for applications grow in every part of the technology stack, and the architecture stands to win. We always want to say things and be so emphatic about it. Oh, the cloud is going to replace the data center, the on-prem data center. The cloud is going to replace the mainframe and the actual growth over the last decade just proves that’s not the case.

Ross Mauri: Yeah. And what really proves that is that it’s cloud and mainframe together. And the 3.5 number that I gave you was the amount of installed capacity around the world for IBM Z. And that’s what’s grown 3.5 times over the last 10 years. So, that’s a demand statement. We’re not giving the boxes away, just like AWS and Azure aren’t giving their services away. We’re in business and clients value that, and are ramping up, because even in a hybrid cloud world, both sides are going to grow. And there’s synergy with both, if you would.

Both compute paradigms, or very bare metal infrastructure paradigms, and the software that connects them, I think, blend together and let fit-for-purpose workloads decide where they best go.

Daniel Newman: Yeah, absolutely. And you know what? You cannot, by any means, say that competition won’t exist and that you won’t have to continue to innovate, like all of technology. But from what you’re telling me, and from what I think everybody out there that’s listening should want to hear here is that there’s a lot to be encouraged by at IBM Z.

Ross Mauri: Absolutely. I think that the future is very bright. I can’t wait until the next generation comes out and we can really unleash all of the great AI hardware and software that we’re going to bring to our clients. I think that’s just going to take it to a whole other level. I’ve been around this technology game for a long time, but I’m so excited now. I think the future’s bright.

Daniel Newman: Ross Mauri, GM IBM Z. Thanks for doing these videos. It’s been great to chat with you.

Ross Mauri: Thanks Daniel. Thanks everyone for listening.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
