
AI Assistants and the Mainframe – Six Five In The Booth at IBM Think

On this episode of the Six Five In The Booth, host Steven Dickens is joined by IBM’s Tina Tarquinio, VP, Product Management, IBM Z and LinuxONE, for a fascinating conversation on integrating AI with the mainframe, a cornerstone of modern IT infrastructure. Their discussion delves into IBM’s recent advancements and offers a look ahead at future developments.

Their discussion covers:

  • The crucial role of the mainframe in hybrid cloud environments
  • How IBM Watsonx Code Assistant for Z is revolutionizing application modernization with AI
  • The introduction of Watsonx Assistant for Z, enhancing productivity through conversational AI and generative AI technologies
  • The synergy between semiconductors like Telum and the mainframe in powering AI
  • A look back at IBM’s recent earnings strength and a forward-looking, non-promissory glimpse into next year’s refresh

Learn more about the announcements from IBM Think 2024 at IBM.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Steven Dickens: Hello and welcome. My name is Steven Dickens, and you’re joining us live from IBM Think with The Six Five. We’re coming to you with Tina Tarquinio. Hey, Tina.

Tina Tarquinio: Hey, Steven. Thanks for having me, and thank you for coming to Think and sharing this experience with us.

Steven Dickens: So we know each other really well-

Tina Tarquinio: We do.

Steven Dickens: … but tell the listeners and viewers what you do for IBM.

Tina Tarquinio: So I have the privilege of leading product management for IBM Z and LinuxONE. I have what I think is the best job at IBM. That means my team and I are the ones that introduce all of the new offerings for IBM Z and LinuxONE, so IBM z16. We’re working on zNext and everything in between.

Steven Dickens: So you mentioned z16 there. Let’s dive straight in. I track your earnings really closely. In the last quarter, you guys delivered 5% constant currency growth for the platform eight quarters into the cycle. I think that’s unprecedented. I think you guys are really rocking it with some of those numbers. Tell me why you think that’s resonating so strongly with clients and why you’re able to drive that growth so late in the cycle.

Tina Tarquinio: Awesome, and you’re absolutely right. I would first have to give kudos to all of the IBMers and our clients that work together to decide what technology we would put into the system because that really is what’s driving this growth. As you just heard Arvind mention in the keynote, we talked about the Telum processor. That has AI inferencing embedded on the processor, and that has really been a game changer for our clients.

We have over 150 clients that are doing proofs of concept, pre-production, or production work using that technology, which is really spectacular. The other is just the movement within regulated industries. There’s a lot of technology on IBM Z that helps our clients keep pace with fast-moving regulations across the different countries they have to serve. It’s really been outstanding to talk to our clients and learn how they’re harnessing this technology, and I think that’s really at the heart of the growth. But thank you for the recognition.

Steven Dickens: Well, so you mentioned the Telum processor there and you mentioned AI. I don’t think people would naturally think AI and mainframe unless they’re deep in the technology. Tell us a little bit about what IBM’s doing and what you’re seeing clients adopt when it comes to Telum. I think there’s a really good story there around transactional AI, but maybe if you could double click, that’d be great.

Tina Tarquinio: Yeah. When we were designing the processor, we were, as many of our clients know, a little bit obsessive about getting their input on what we were going to put into the next set of offerings. When we were really digging into how they could harness the power of AI, there were two roadblocks we heard about most often. The first was that there was no technology available with low enough latency to do the inference in the transaction. So they were making decisions only if the transaction was over a certain amount; they would offload the scoring and then bring it back, or they’d do it in batch at night or over the next day. That really was impacting their ability to have true fraud detection.

The second roadblock, once we solved the latency part by putting the inference on the processor, was that deployment of the models was taking too long. So you’d have a data science team that wrote a model in R or Python, pick your favorite, and they’d throw it over the wall to the production team, which would have to rewrite the model. Then the cycle would go back and forth, and that’s not really good for agile deployment of models, which change rapidly.

So with the IBM z16, not only did we introduce the accelerator on the processor, we introduced technology so that you can build and train your model on any platform and deploy it right on IBM Z, and that has really been a game changer. We’re seeing use cases from fraud detection to one of the major U.S. health providers using it to do pre-screening on X-rays, and everything in between. It’s really amazing.
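
To make the build-anywhere, deploy-on-Z flow Tina describes a little more concrete, here is a minimal sketch of one common pattern: training a small fraud-scoring model off-platform and exporting it to the portable ONNX format, a typical interchange format for this kind of deployment. The model, features, and file name are illustrative assumptions, not details from the conversation.

```python
# Minimal, illustrative sketch: train a fraud-scoring model on any platform,
# then export it to ONNX so the same artifact can be deployed elsewhere.
# The features, model choice, and file name are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Synthetic transaction features: amount, merchant risk score, account age (days).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 3)).astype(np.float32)
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # toy "fraud" label

model = LogisticRegression().fit(X, y)

# Convert the trained model to ONNX, a portable format that deployment
# tooling on other platforms can consume.
onnx_model = convert_sklearn(
    model, initial_types=[("transaction", FloatTensorType([None, 3]))]
)
with open("fraud_scoring.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```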

Steven Dickens: So I think the key thing you mentioned there for me was inference in the transaction. You blew past it really quickly, but I think that’s super key. You look at the fraud use cases, credit card transactions, so many of those workloads are transactional on the mainframe, and being able to do that inferencing in real time, in the transaction, is what matters. We hear a lot about generative AI and all the fancy use cases for creating images, but really, the transactional integrity and bringing that inferencing in-line is key. You mentioned 150 clients there. Is that what’s driving that adoption?

Tina Tarquinio: That’s where it starts, right? Until now, clients had to either offload the scoring or do it later in the day. This allows them to score every transaction, everything from my favorite Starbucks order all the way up through a large purchase, and to score it in real time. If anybody has ever looked at their mobile banking app, you may see that you add a transaction and it says, “Pending.” That means the bank is still working out whether the transaction has settled and whether the money has moved back and forth.

This allows that to speed up, because you can score the transaction and settle the payment quicker, and that’s good for everybody. If you have a fraudulent transaction, and I’m sure we’ve all been on the other end of an email that says something’s been declined or your credit card’s been used, the mitigation after the fact is really painful. So not only can you avoid all that after-the-fact mitigation, you can just have a safe transaction up front, and doing all of that in real time, with less than one millisecond of latency, is really incredible.
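
As a companion sketch, this is roughly what scoring a single transaction in-line against the exported model above could look like from application code, using the ONNX Runtime Python API. The feature values are illustrative assumptions; the sub-millisecond latency Tina cites comes from the hardware path, not from this snippet.

```python
# Minimal, illustrative sketch: score a single transaction in-line using the
# ONNX model exported above. Values and file name are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("fraud_scoring.onnx")
input_name = session.get_inputs()[0].name

# One incoming transaction: amount, merchant risk score, account age (days),
# matching the feature layout the model was trained with.
transaction = np.array([[42.0, 0.1, 365.0]], dtype=np.float32)

# The converted classifier exposes a predicted label and class probabilities.
label, probabilities = session.run(None, {input_name: transaction})
print("flagged as fraud" if label[0] == 1 else "looks ok", probabilities)
```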

Steven Dickens: You talked about that from a customer service point of view. There are obviously lost transactions, and providing that transactional integrity is huge for the bank, but also for the merchants and for the customers. So one of the other areas I know you guys are innovating on is this whole hybrid cloud piece.

There’s lots going on with containerization and microservices, again, not something somebody would naturally associate with the mainframe. But you guys have been doing a good job of embedding some of that containerized technology directly into the operating system. Maybe you can talk to some of that.

Tina Tarquinio: Yeah, I’ll share a few. There are a lot of ways we are doing that, and we have been on this journey for, I would say, several years and generations. Most recently, in March, we announced z/OS containers. This is the ability to have native containers on z/OS that act like any other container on any other platform. Nothing unique to Z, which is really incredible in z/OS.

Steven Dickens: We’re talking about it. We’ve been tracking this for a while, z/OS and containers in the same-

Tina Tarquinio: It’s really incredible.

Steven Dickens: … sentence, that technology embedded directly into the operating system.

Tina Tarquinio: Right, and we did it so that it was on par with other containers in the industry, so similar terminology, similar movement. That was our goal, to not be different, and it’s really incredible. We’ve had Linux on Z for over 20 years now, with a full container platform on there. We’ve introduced the OpenShift platform on Linux on Z and z/OS. Again, you can run OpenShift on z/OS.

It’s really a testament to our teams and our clients paving the way to have the mainframe be this superpower asset that drives the rest of the hybrid cloud. If you think about the data and the transactions that reside on Z, you want that to charge up the rest of your hybrid cloud rather than duplicating it, and to take advantage of both the resiliency and the security that you have.
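
To illustrate the “nothing unique to Z” point from the application side, here is a small hedged sketch using the standard Kubernetes Python client, which works unchanged against an OpenShift or Kubernetes cluster regardless of the platform backing the nodes. The “payments” namespace is an illustrative assumption.

```python
# Minimal, illustrative sketch: the same client code works whether the pods run
# on Linux on Z, in z/OS containers, or on any other platform backing the cluster.
# The "payments" namespace is an assumption for illustration.
from kubernetes import client, config

config.load_kube_config()  # reuses your existing kubeconfig / oc login context
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="payments").items:
    print(f"{pod.metadata.name} on node {pod.spec.node_name} ({pod.status.phase})")
```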

Steven Dickens: So what’s been the reaction? I think you guys are on fire when it comes to that development journey, making the mainframe a tier 1 participant in the hybrid cloud story. We saw it in the keynote today, with Arvind talking about the mainframe. What’s been the customer reaction to what you guys are doing in the hybrid cloud space?

Tina Tarquinio: Yeah, once they figure out the right use case to get started. I would say almost 10 years ago we introduced just exposing APIs so you could take advantage of business logic, and that’s a little crumb to get started. From there, now we’re into Ansible Automation and all of this. It’s really unlocked capability they didn’t know they had or couldn’t take advantage of. It’s unlocked their ability to work across their enterprises with different teams. Most of what we do in life is a team sport. So when that starts to happen across clients and they work together using all the technology they have at hand, the outcomes are pretty incredible.

Steven Dickens: So as you mentioned at the very top of our discussion, you’ve got what you think is the best job in IBM.

Tina Tarquinio: Yes.

Steven Dickens: I’ve heard you say that for years now. One thing I know you and your teams are focused on is not only the Next box, but what comes after that.

Tina Tarquinio: Yes.

Steven Dickens: There are two, three generations in the cycle, and I probably can’t ask you too much, but tell me what you can about what we should be expecting for said zNext.

Tina Tarquinio: Yeah, so Steven is right. We are always working on two or three of the next generations at any given time. So zNext is well underway. We’re in the final test stages of that, and we’re working on zNext+1 and zNext+2. IBM is incredibly unique around the mainframe in that we design everything from the silicon all the way up to building and manufacturing the systems.

So to make that early decision on what’s the right technology node, three nanometer or two nanometer, we’re making those decisions now for zNext+2. In general, you can expect us to continue to deliver accelerators in the system. On z16, we had the AI accelerator. Before that, we had compression and encryption. So we’ll continue to look at accelerators where we can really speed up a transaction.

Steven Dickens: Might be a bit of AI in there, I’d imagine?

Tina Tarquinio: I’m sure it will.

Steven Dickens: That’s not too speculative, I guess-

Tina Tarquinio: Yes.

Steven Dickens: I think.

Tina Tarquinio: We’ll have the next generation of AI acceleration, and we’re going to double down on other infrastructure in the system for AI. You’ll continue to see us focus on security. We’re still the industry’s first and only-

Steven Dickens: Obvious, right?

Tina Tarquinio: … quantum-safe system. Then I think what you’ll see is how do we combine these technologies? How do we combine AI for security and make compliance easier, less friction for our clients? How do we combine AI with parts of the system that are maybe doing workload management or other monitoring type of things on the system?

So you’ll see us start to put things together. Way far out, you’ll see us start to figure out how we put quantum and Z together for our clients’ use cases. I say I have the best job at IBM because I get to learn about all of that and work with the teams that make it happen.

Steven Dickens: So I think that’s a fantastic way to wrap us up here. For people who are watching this, what would be the three key takeaways you’d have them leave with, not only from IBM Think, but about how they should be thinking about the mainframe?

Tina Tarquinio: Well, at IBM Think there are going to be really great announcements about the IBM Watsonx Assistant for Z, so stay tuned for that. There are really incredible ways IBM is bringing our unique information to help clients with skills on IBM Z. For the mainframe, I would have you think about what business problem you are still trying to solve, ’cause I would almost guarantee we have a way to help you get closer to solving it with the IBM z16. And I’d be remiss if I didn’t mention AI; there are many ways we can help you accelerate that journey and take advantage of the data you have on the platform.

Steven Dickens: Well, Tina, always great chatting to you.

Tina Tarquinio: Thank you for having me.

Steven Dickens: You’ve been watching another episode of The Six Five, brought to you live from IBM Think. I’m your host, Steven Dickens. Please check out the other episodes, and we’ll see you next time. Thank you very much for watching.

Author Information

Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.

Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.

Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.

Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.
