Leveraging the Hybrid Cloud for Operational Resilience of Mainframe Data – Infrastructure Matters

On this episode of Infrastructure Matters, host Steven Dickens is joined by BMC's Chad Reiber, Solution Engineer, and Tim Ceradsky, Director of Software Consulting, for a conversation on how modern enterprises can ensure operational resilience in the hybrid cloud environment, especially when dealing with mainframe data.

Our discussion covers:

  • The concept of operational resilience in the context of mainframe data within a hybrid cloud environment.
  • The importance of immutable copies of mainframe data for ensuring operational resilience, and how they differ from traditional backup methods.
  • The key benefits of strategically placing immutable copies of mainframe data across the hybrid cloud infrastructure to mitigate risks.
  • Common challenges organizations face when implementing and managing immutable copies of mainframe data in the hybrid cloud.
  • Strategies for leveraging immutable copies of mainframe data to enhance data availability and expedite recovery processes in hybrid cloud environments.

Learn more at BMC.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.


Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Steven Dickens: Hello. Welcome to another episode of Infrastructure Matters. I’m your host, Steven Dickens, and I’m joined today by Tim and Chad from BMC. Hey, guys. Welcome to the show.

Chad Reiber: Hey, Steve. How are you?

Steven Dickens: Yeah, good. Good.

Chad Reiber: Excellent.

Steven Dickens: Good to chat to you. We’re diving straight into it today. We’re talking about operational resilience from a hybrid cloud perspective, which I think is going to be really interesting. So maybe get us underway, Tim. Frame up what we’re talking about and give us some context for the listeners and the viewers.

Tim Ceradsky: Sure. So an interesting development from the last couple of years is that cloud storage, especially, has really skyrocketed. And when we say cloud storage, we mean object storage that’s either on premises, so regular storage arrays that have an object interface, or storage that’s at a hyperscaler, like AWS or Azure, and so forth. What’s interesting is, by tapping into that space, it really empowers the mainframe to have a very scalable, very flexible storage pool, and it opens up new vistas for mainframe storage that they’ve never been able to experience before. It tackles some of those thorny problems in data protection and resiliency that are, let’s say, difficult to deal with from a legacy perspective with tape and VTLs and so on.

Steven Dickens: So can you elaborate and take us maybe that next level down, Chad?

Chad Reiber: Sure.

Steven Dickens: How are people operationally deploying this object storage-based approach to mainframe data?

Chad Reiber: So the problem is when you store your data on the mainframe, it is potentially attackable, right? If that’s a word, right? So if-

Steven Dickens: We’re going to allow it. We’re going to allow it.

Chad Reiber: We’re going to allow it. All right.

Steven Dickens: You’re good.

Chad Reiber: So ransomware and cyber attacks are attacking your mainframe, and they could take any sort of action: deleting data sets, encrypting data sets. So if we can store the data out on the cloud, in object storage, it is not on the mainframe. So even if they attack your mainframe and take down your storage, it is separated. We call it immutable. It can be accessed either from a whole new system or from the original system once we recover it.
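Conceptually, an immutable copy behaves like a write-once/read-many (WORM) store: once written, an object can be read but never overwritten or deleted. A minimal Python sketch of that behavior (illustrative only; in real deployments this property is enforced by object-storage features such as S3 Object Lock, not by application code like this):

```python
class WormStore:
    """Illustrative write-once/read-many (WORM) key-value store.

    Once an object is written it can be read but never overwritten
    or deleted -- the property that keeps an off-mainframe copy
    usable after a ransomware attack on the primary system.
    """

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Refuse to overwrite an existing object.
        if key in self._objects:
            raise PermissionError(f"object {key!r} is immutable")
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

    def delete(self, key):
        raise PermissionError("deletes are not permitted on a WORM store")


store = WormStore()
store.put("backup/DS001", b"mainframe data set contents")

# A later "attack" that tries to encrypt the copy in place fails:
overwrite_blocked = False
try:
    store.put("backup/DS001", b"encrypted garbage")
except PermissionError:
    overwrite_blocked = True
```

The same contract is what makes the vault copy recoverable from a clean system: the attacker can reach the primary storage but cannot alter the separated copy.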

Steven Dickens: So that gives us flexibility. It gives us the security. It helps from an availability posture point of view. Starting to see some of those things come through with DORA regulations, particularly in Europe. Are you starting to see that factor into the conversation?

Chad Reiber: So we’re getting pulled in a lot out in EMEA. A lot of the banks and a lot of the insurance companies out there, because they’re pushed by DORA to say, what happens if your mainframe’s attacked? What is your plan? And everyone has a plan, but do they take it to that level, right? So there are always those “what if” questions. What if someone does this? What if someone does that? Having that data separated out and backed up on object storage, where it is protected and encrypted, gives you that capability to recover.

Steven Dickens: And Tim, I know from chatting off camera, you spend a lot of time chatting to customers. What are some of those challenges as they try and deploy this model? As you mentioned, a couple of years, still relatively new, people are starting to think through, they see the regulations come through, they start to see, hey, if we can get this data up onto the public cloud, we may be able to connect it to an AI or a more public cloud-based service. It’s not just from an availability point of view, but what are some of those challenges?

Tim Ceradsky: So that’s where the things that we’re doing and being able to provide the simple connection of… If you look at other vendors in this space, a lot of times what they’re doing is grafting a single module onto their already existing strategy, because they don’t want to lose what they’ve already built with that customer. What we’re doing is saying, you know what? Let’s take a fresh sheet of paper. And what we’ve built is a started task that runs on the mainframe that allows you to use zIIP engine MIPs as opposed to your regular MIPs. Let’s say non-billable. That’s nice.

We suck all that data off and chunk it up and do it in parallel, so when we send it to object storage, either on premises or in the cloud, we can move it at a very high rate of speed; but we don’t make the customer change their methodology with their applications. We don’t have to change JCL, we don’t have to change our mentality. So we’ve kind of cracked that code a little bit.
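The chunk-and-parallelize pattern Tim describes can be sketched as follows. This is a hypothetical illustration, not BMC's implementation: the chunk size, key naming, and in-memory "object store" are all stand-ins. The idea is simply to split the data into fixed-size chunks, push the chunks concurrently, and verify the reassembled copy against a checksum.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1024  # hypothetical chunk size; real systems use far larger chunks

object_store = {}  # stand-in for S3-compatible object storage


def upload_chunk(args):
    """Store one chunk under a key that records its position."""
    index, chunk = args
    object_store[f"dataset/part-{index:05d}"] = chunk
    return index


def parallel_backup(data: bytes) -> str:
    """Chunk `data`, upload the chunks concurrently, return a checksum."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(upload_chunk, enumerate(chunks)))
    return hashlib.sha256(data).hexdigest()


def restore() -> bytes:
    """Reassemble the chunks in key order."""
    return b"".join(object_store[k] for k in sorted(object_store))


payload = b"mainframe data " * 1000
checksum = parallel_backup(payload)
restored = restore()
```

Zero-padded part numbers keep lexicographic key order equal to chunk order, so restore is just a sorted concatenation; the checksum comparison is what confirms the copy is bit-for-bit intact.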

Steven Dickens: Reducing friction and making it simple.

Tim Ceradsky: We reduce the friction, we make it simple, we make it less complex to manage. Having a simple web GUI that anybody who has a small amount of training can really be able to step in and say, I understand what I’m seeing here. I can tell whether something’s been successful. It even has a simple capability of being able to simulate the backup. And so you can actually see what was getting backed up to make sure that you didn’t just back up 300 terabytes of something that you didn’t expect to get, that you’re getting exactly what you expected. You can set up all your parameters, you can do it in minutes. I literally would say it would take somebody who’s a reasonable technical person to create a backup environment… I’m not saying everybody could, but I mean anybody that understands a little bit about the environment would be able to create a backup policy in minutes. It’s simple.
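Tim's "simulate the backup" capability is essentially a dry run: evaluate the policy's selection rules and report what would be copied, and how much, without moving any data. A small hypothetical sketch of that idea (the data set names, sizes, and wildcard pattern are invented for illustration):

```python
import fnmatch


def simulate_backup(catalog: dict, include_patterns: list):
    """Dry-run a backup policy: return the matching data sets and their
    total size in bytes, without copying anything."""
    selected = {
        name: size
        for name, size in catalog.items()
        if any(fnmatch.fnmatch(name, pat) for pat in include_patterns)
    }
    return selected, sum(selected.values())


# Hypothetical catalog of data sets (name -> size in bytes).
catalog = {
    "PROD.PAYROLL.DATA": 50_000_000_000,
    "PROD.ORDERS.DATA": 120_000_000_000,
    "TEST.SCRATCH.DATA": 300_000_000_000_000,  # 300 TB you did NOT mean to include
}

selected, total = simulate_backup(catalog, ["PROD.*.DATA"])
print(f"{len(selected)} data sets, {total / 1e9:.0f} GB would be backed up")
```

The value of the dry run is exactly the scenario Tim mentions: the oversized `TEST` data set is excluded by the pattern, and you see the real footprint before committing to the transfer.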

Chad Reiber: I think the key is performance, right? Everyone says, oh, you can’t beat this, or you can’t beat that. It is scalable. So depending on what you’re trying to back up and how fast you’re trying to back up, we can scale. There’s many different knobs that we can turn or put in processes to make it run faster. So that is key, I think, to this solution.

Steven Dickens: That’s fantastic. How are organizations actually taking advantage of this and deploying it in their shops, and what are they seeing?

Chad Reiber: With the AMI Cloud solution, there are really three legs, and one of them is the vault. That is the third copy that they’re moving out to the cloud, wrapping it with an S3 wrapper. It’s secure and encrypted out there. But there are two other parts. One part that’s important is AMI Data, and that’s where we’re replacing some old legacy software that customers are utilizing that’s slow and expensive to run. With AMI Cloud, we can do it a lot faster and a lot cheaper, and provide that cost savings not only in VTL, but also in processing.

Steven Dickens: And what are you seeing, Tim?

Tim Ceradsky: And that third leg, I think, is… What’s interesting about it is that it’s such a value driver for organizations, because you can make all that investment in building a data protection strategy, a ransomware strategy, or whatever you want to call it, and do a great job with it. Traditional approaches work. There’s a reason they’ve been around for 40, 50 years: because they work. But it’s a little bit of a dead end when you think about what else you can do with that data. And that’s where I think cloud opens up entirely new vistas, because when you put it into the cloud, that data is now sitting in a spot where we can apply our third leg, analytics, and pull that data back out in a format that your data scientists can then use.

80% of a business’s critical information lives on the mainframe, and it’s always been a struggle. It’s always been difficult to take advantage of that investment and really use it in the data science world. Entire companies have been built around ETL, and organizations spend a lot of money to get that data out, but it costs a lot of MIPs. It costs a lot of money to do that. It’s another system. You’ve got to put it back into your data protection strategy and everything else. It’s really complex. The promise of what we do with AMI Cloud here with that third leg is the ability to take that investment and get another use out of it, which is great, and-

Steven Dickens: Especially in the era of AI where people are looking to harness the corpus of data and do something with it to drive the business forward.

Chad Reiber: And the data is closer to those applications.

Steven Dickens: For sure.

Chad Reiber: It’s already moved there for one reason or another. It’s closer. It’s available.

Steven Dickens: If you want to leverage the cutting-edge generative AI technologies, which are typically going to live on the cloud, you need to get the data close to those services.

Tim Ceradsky: Right.

Steven Dickens: Well, guys, this has been a fantastic discussion. Thank you so much for joining us.

Tim Ceradsky: Thanks. Thanks for having us.

Chad Reiber: Thank you for having us, Steve.

Steven Dickens: Appreciate it. You’ve been watching another episode of the Infrastructure Matters podcast. Please click and subscribe and we’ll see you on the next episode. Thanks so much for watching.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
