In this episode of the Futurum Tech Webcast Interview Series, I’m joined by Eddy Ciliendo, Chief Strategy Officer at Model9, for a conversation about the performance of Model9’s cloud solution.
So how fast is fast? That’s exactly what Eddy and I covered in this conversation. Here are some of the highlights:
- Eddy opens the conversation by explaining the physics behind Model9’s powerful performance.
- We discuss how cloud storage can be faster than FICON.
- We explore how Model9 is approaching increased efficiency with their product.
- We discuss Model9’s architecture and how it leverages the speed of parallelized deployment.
- Eddy shares a significant customer success story where Model9’s solution brought increased efficiency to operations.
Learn more at model9.io.
You can view the video of the conversation here:
Or grab the audio on your streaming platform of choice here:
If you’ve not yet subscribed to the Futurum Tech Webcast, hit the ‘subscribe’ button while you’re there and you won’t miss an episode.
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Steven Dickens: Hello. Welcome to the Futurum Tech Webcast in collaboration with Model9. I’m joined by Eddy Ciliendo. Eddy, welcome to the show.
Eddy Ciliendo: Thank you, Steven.
Steven Dickens: So I’m really looking forward to this one. Performance. I can see you smiling. Lots of people ask the question. We’ve spent some time talking about Model9 and the company. Now we want to drill down. You’re going to have to help me here. Connecting a mainframe to a public cloud. How is that faster than connecting a mainframe to on-prem storage, often in the same data center, maybe even next to the mainframe? You’re going to have to explain that one for me.
Eddy Ciliendo: Yeah, and you’re right, I’m smiling because I think we get this question every time. People are oftentimes in complete and utter disbelief that we could even be close to the same speed as on-prem infrastructure, which, as you know, is usually connected via fiber optic cables.
Steven Dickens: Mm-hmm.
Eddy Ciliendo: So I think there are two main reasons why Model9 is just as fast as, or actually faster than, this on-prem infrastructure. The first reason is physics, right? So two things.
Steven Dickens: You’re going to take me back to school, are you?
Eddy Ciliendo: I might, actually. So one of the important things is first of all to understand: are we talking about latency or are we talking about throughput? Right? Latency, absolutely. As soon as you go over the network, regardless if we’re talking cloud or even in your own data center, you will incur latency. That usually starts somewhere around a millisecond and goes up to, whatever, 10 or 20 milliseconds.
Steven Dickens: And that’s just simple physics, the light has got to go down a cable.
Eddy Ciliendo: Absolutely.
Steven Dickens: You can’t avoid that.
Eddy Ciliendo: Absolutely. But if you look at our use cases, right, whether it’s creating third data copies, backing mainframe data up to cloud object storage, or moving bulk data for AI and machine learning purposes, you’re not that much concerned about how fast your first transaction gets there, as you would be in an OLTP workload, right? You’re concerned about throughput. And now we get to the second part of this short physics lesson: since we care about throughput, well, there’s a reason why everybody outside the mainframe world has been embracing network-defined architectures. The network has gotten so fast, we’re talking 100 gig Ethernet infrastructures, whereas FICON is still stuck at 25 gig.
So the pipe, even though the bits are moving slightly slower, the pipe is so big that from a throughput perspective, we can achieve huge throughput just going through that ethernet pipe. So that’s kind of the physics aspect of how we can achieve our performance levels.
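To make the “size of the pipe” point concrete, here is a rough, illustrative calculation, not a Model9 figure: moving 100 TB over a nominal 25 Gb/s FICON link versus a 100 GbE link. The 80% link-utilization factor is an assumption for the sketch.

```python
# Back-of-the-envelope throughput comparison (illustrative only;
# real-world FICON and Ethernet efficiency varies by workload).
TB = 10**12  # bytes

def transfer_hours(data_bytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_bytes over a link of link_gbps (gigabits/s)
    at the given (assumed) utilization efficiency."""
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return data_bytes / bytes_per_sec / 3600

ficon_hours = transfer_hours(100 * TB, 25)      # nominal 25 Gb/s FICON link
ethernet_hours = transfer_hours(100 * TB, 100)  # nominal 100 GbE link

print(f"100 TB over 25 Gb/s FICON: {ficon_hours:.1f} h")
print(f"100 TB over 100 GbE:       {ethernet_hours:.1f} h")
```

With these assumed numbers, the wider Ethernet pipe finishes the same bulk move roughly four times sooner, which is Eddy’s point: for bulk data movement, bandwidth dominates latency.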
Steven Dickens: So let’s just unpack that from a latency point of view. You’re not in the transaction given the use cases that you’ve got, it’s third copies, it’s being able to move data off to the cloud, so you’re not needing that sort of in-transaction latency piece. And then because you’re moving large amounts of data to the cloud, it’s not about the speed of the pipe, it’s the size of the pipe. Is that a simple way of thinking about it?
Eddy Ciliendo: No, no, you’re spot on, right. So, you know, to kind of simplify and break it down: if you have a million IOPS, DB2, OLTP, core banking infrastructure, whatever, you want to do that FICON-attached to one of the large mainframe storage vendors. That’s nothing that we are going to replace, at least not just yet.
Steven Dickens: Is there something you want to tell me about future roadmaps? But no, I mean, I think that’s really interesting too, because people will hear this, they’ll hear about Model9, and they’ll think, “Oh, I can throw away all of my mainframe storage, I can move everything to the cloud.” And it’s a bit more nuanced than that, I think.
Eddy Ciliendo: Absolutely.
Steven Dickens: It’s for those transactional workloads that you’re still going to need FICON-attached storage.
Eddy Ciliendo: Absolutely.
Steven Dickens: But for most everything else, you’re going to be able to do that with object storage. Is that the right way of thinking about it?
Eddy Ciliendo: No, you’re spot on. You’re spot on, right. And again, in those other use cases that we just discussed, whether it’s cyber resiliency, third data copies, moving data efficiently to the cloud, or backing up data to an object store that is on-prem but network-attached, for all of these use cases we’re talking terabytes, petabytes of data that we need to move efficiently. Again, the size of the pipe becomes much more important than the actual latency of that pipe.
Steven Dickens: And that’s how you’re competing, or coexisting is probably a better phrase, with FICON. Is that the right way of thinking about it?
Eddy Ciliendo: Absolutely. Yeah.
Steven Dickens: Fantastic. Eddy, anything else we should be thinking about as we think about performance and Model9?
Eddy Ciliendo: Yeah, again, so as I said, the first topic was physics. People have to understand we’re not talking about latency, we’re talking about throughput, and looking at the physics of Ethernet these days and what kind of offerings are available, the pipes are getting so big, so fast, so efficient, whether that is in your data center or going out to some of the major public cloud providers, right. So that’s the first piece of the equation. Now, the other thing that I think is just as important is rooted in the architecture of Model9, right. A lot of our competing products look at the storage media, whether it is tape or virtual tape, in a very serial fashion. Even if you have a virtual tape server with a bunch of flash drives in there, you still access that device in a serial fashion, because you’re still thinking in terms of tape, or even virtual tape.
And since we’re such a new company in the market, we did not have to deal with any of that technical debt. We were able to start from a clean slate and think, “Okay, so how can we make things more efficient?” And what we’re doing is reading data from the mainframe, from (inaudible), from disk, from tapes in a parallel fashion, and ingesting that data. We use all the bells and whistles of the modern Z platform. We leverage the zIIP engines: the more zIIP engines you throw at Model9, the faster we work; the more OSA cards you throw at Model9, the faster we work; the more crypto engines you throw at Model9, the faster we work. You get the idea.
Steven Dickens: And is that, being able to parallelize the deployment is where you get that speed?
Eddy Ciliendo: Absolutely. Right. So it’s again, it’s parallel ingestion, leveraging, again, the huge IO capabilities of the mainframe, right? We’ve always been talking…
Steven Dickens: It’s a beast. Always has been.
Eddy Ciliendo: Absolutely right. So we are leveraging that capability. We read massive amounts of data into the Model9 agent, ingest that, again, in a very parallel fashion, and we use all those coprocessors. And then as we move data out, we’re not sending a single stream out to the cloud, out to object storage. By default, we chunk the data up and write in 10 streams of five megabytes each. You can obviously configure that depending on your object storage target. But you see, there’s a huge level of parallelism. And then the target also makes a big difference, right? I mean, the public cloud providers, the hyperscalers, can absorb massive amounts of data being sent to them. And the same is true with a lot of the on-premises object storage platforms. They’re all-flash systems these days that are extremely fast; they can absorb or write out, again, tons of data. So we’re much faster both in moving data to object storage for backup purposes, and also much faster in getting data back from object storage for restore purposes.
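Model9’s actual implementation is not public, but the chunk-and-stream pattern Eddy describes can be sketched generically. The 10 streams and 5 MB chunks echo the defaults he mentions; `put_object` here is a hypothetical stand-in for a real object-storage client call, not Model9’s API.

```python
# Generic sketch of chunked, parallel writes to object storage.
# Numbers follow the defaults mentioned in the conversation;
# put_object() is a hypothetical stand-in for a real PUT call.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB per chunk
STREAMS = 10                  # parallel upload streams

def put_object(key: str, chunk: bytes) -> str:
    """Stand-in for an object-store PUT (e.g. an S3-compatible API)."""
    return key  # pretend the upload succeeded and return the object key

def chunked(data: bytes, size: int):
    """Yield (index, chunk) pairs covering data in fixed-size pieces."""
    for i in range(0, len(data), size):
        yield i // size, data[i:i + size]

def parallel_upload(data: bytes, prefix: str) -> list[str]:
    """Upload all chunks concurrently across STREAMS worker threads."""
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        futures = [pool.submit(put_object, f"{prefix}/part-{n:05d}", c)
                   for n, c in chunked(data, CHUNK_SIZE)]
        return [f.result() for f in futures]
```

The design point is the one Eddy makes: many independent streams let the sender saturate a wide Ethernet pipe, and object stores on the receiving end are built to absorb exactly this kind of concurrent write load.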
Steven Dickens: It’s always great to get that architectural view and get the physics view. But what are customers saying? You’ve now got some pretty big deployments, you’ve moved large volumes of data, I’d imagine for those clients, what has the customer experience been?
Eddy Ciliendo: Oftentimes shock.
Steven Dickens: Yeah, I can imagine. I know I’m probably the proxy for that. I think just trying to understand that, and you’ve made a lot of sense there, but what are they seeing? Have you got any data points or any sort of customer examples?
Eddy Ciliendo: Yeah, absolutely. So there are two examples I would like to share. One is in the backup and restore space, and again, when I said shock, I literally mean shock. They thought that our product wasn’t working because their backup jobs completed so quickly. So they thought something must be wrong. But no, the backup job ran properly, and they went from a full-volume backup of one of their large infrastructures that was taking almost 23 hours down to less than an hour.
Steven Dickens: That’s pretty significant.
Eddy Ciliendo: Yeah. And obviously that’s probably on the higher end. But I would say on average, people see their backup and restore windows shrink by somewhere around 80%. And you know the impact of that, right? Your backups complete sooner, which means your whole batch cycle can also start, or complete, a whole lot sooner.
Steven Dickens: And backup’s a key part of that overnight batch, so being able to shrink that down so dramatically is going to have an impact.
Eddy Ciliendo: Yeah.
Steven Dickens: So Eddy, if you were to summarize performance from a Model9 perspective, what would be the key takeaway?
Eddy Ciliendo: Parallelism, parallelism, parallelism, right? Think about that. Understand that we’re in the game of throughput and not latency. And again, a new architecture like Model9 can do away with a lot of the legacy that has been built into some of the other products over the decades. So we can do things, we’re really cloud native and a whole lot more efficient.
Steven Dickens: Fantastic overview. Eddy, where can customers find out more?
Eddy Ciliendo: So the first stop should always be our website, model9.io, where we have a learning portal for prospective customers. They can go in there, see demos, and read white papers around performance that go into more detail if they’re interested, and they also have the ability to set up either a custom briefing or a custom demo where we go deeper.
Steven Dickens: Fantastic. You’ve been listening to the Futurum Tech Webcast, brought to you in collaboration with Model9. My name’s Steven Dickens. We’ll see you next time. Thanks very much for watching.
Author Information
Regarded as a luminary at the intersection of technology and business transformation, Steven Dickens is the Vice President and Practice Leader for Hybrid Cloud, Infrastructure, and Operations at The Futurum Group. With a distinguished track record as a Forbes contributor and a ranking among the Top 10 Analysts by ARInsights, Steven's unique vantage point enables him to chart the nexus between emergent technologies and disruptive innovation, offering unparalleled insights for global enterprises.
Steven's expertise spans a broad spectrum of technologies that drive modern enterprises. Notable among these are open source, hybrid cloud, mission-critical infrastructure, cryptocurrencies, blockchain, and FinTech innovation. His work is foundational in aligning the strategic imperatives of C-suite executives with the practical needs of end users and technology practitioners, serving as a catalyst for optimizing the return on technology investments.
Over the years, Steven has been an integral part of industry behemoths including Broadcom, Hewlett Packard Enterprise (HPE), and IBM. His exceptional ability to pioneer multi-hundred-million-dollar products and to lead global sales teams with revenues in the same echelon has consistently demonstrated his capability for high-impact leadership.
Steven serves as a thought leader in various technology consortiums. He was a founding board member and former Chairperson of the Open Mainframe Project, under the aegis of the Linux Foundation. His role as a Board Advisor continues to shape the advocacy for open source implementations of mainframe technologies.