The State of Persistent Memory with Intel’s Kristie Mann Part 2–Futurum Tech Podcast Interview Series

In this special episode of the Futurum Tech Podcast Interview Series, Daniel Newman welcomes back Intel’s Kristie Mann, Sr. Director of Product Management for Intel’s Optane DC Persistent Memory products. In the first part of the interview, Daniel and Kristie explored the impact memory has had on businesses. In this episode, the two dig a bit deeper into Optane technology to understand how it works in the real-world technology ecosystem.

Like most technologies, Optane persistent memory works within an “ecosystem” of other technology partners, including OEMs, software providers, cloud providers, and hyperscalers. According to Kristie, the Optane ecosystem works like a system of concentric circles, with OEMs located in the middle and software providers in the next circle out. For the strongest optimization, each level needs to be deeply integrated.

Even in its early stages, Optane is gaining momentum in terms of ecosystem partners. On the OEM side, Intel has partnered with HPE, Dell, Lenovo, and other providers to make the technology available to a massive number of customers. On the software side, companies like VMware, Oracle, and SAP, along with open source projects like Apache Spark and Linux, have all been partnering for “deep integration” with Optane as well.

Kristie noted that Oracle was one of Optane’s early development partners. In 2017, they did a demo of the technology on stage to illustrate data replication. The two companies saw the benefit of software optimization early on—on the level of 10x performance.

That optimized performance is one reason so many other software providers have jumped on board. According to Kristie, a second reason is that Optane uses open standards programming, so their software partners know that any investments they make into Optane persistent memory will also translate into other persistent memory technology they use in the future. In other words, there’s no fear of lock-in with Optane at this stage of development.

Daniel brought up the issue of hyperscalers and the role they play in the future of architecture in IT. Kristie shared that Optane is working with hyperscalers in two main ways: internal infrastructure (efficiencies and consolidation) and services infrastructure (deep engineering, proof of concept, beta deployments, etc.)

Before wrapping up, Kristie shared three use cases and success stories from Optane customers. The first, content delivery provider Qwilt, doubled its in-memory cache at the same cost by moving video content closer to the edge in regional data centers using Optane. This decreased buffer time and made for happier customers. Second, an online retailer (name confidential) saw 8x faster query response times and was able to offer real-time (versus next-day, batch-processed) recommendations for customers after adopting Optane. Lastly, game server host Nitrado was able to balance CPU and memory use to increase game instance density by 175 percent using Optane technology.

Want to know more? Check out our recent study, The State of Persistent Memory, which forecasts how businesses are using persistent memory. The study, created in collaboration with Intel, includes useful data and information that will help businesses navigate the years to come.


Daniel Newman: Welcome to this edition of the Futurum Tech Podcast, The Interview Series. I’m Daniel Newman, Principal Analyst of Futurum Research and your host for this episode, in which we have a returning guest, Ms. Kristie Mann, who works with Intel on the Product Management Team for their Optane DC Persistent Memory. Very excited to have her back. She joined us a few weeks ago, maybe a month ago, to talk a little bit about the data centric strategy at Intel. This time I want to bring her on the show to talk about something else, but before I share with you what “else” is, I need to go ahead and do that little disclaimer and say, this edition of the Futurum Tech Podcast Interview Series has been sponsored in part by Intel, and this show is for information and entertainment purposes only, so please do not use or take anything we’re saying as financial advice or recommendations to buy stock. Whew. Got that out of the way. Kristie Mann, welcome back to the Futurum Tech Podcast. How are you doing today?

Kristie Mann: Hi Daniel. I’m great. Glad to be here.

Daniel Newman: I think you can tell I’m slap happy. You wouldn’t know it, but I’m recording this on a Friday afternoon. Or maybe you could tell because of just the way I feel. But welcome back to the show. It was great having you on last time. And I know last time we talked a little bit about this general data centric strategy, and this time I’m going to have you talk a little bit more, even deeper, about kind of what you’re really focused on at Intel. But before I start asking you questions, and firing away, and fire away I will, for everybody out there that either didn’t listen to the first edition or that just doesn’t memorize every guest that’s been on our podcast, can you go ahead and introduce yourself, tell them a little bit about your work at Intel?

Kristie Mann: I sure can. So I’m the Director of Product Management for Intel’s Optane DC Persistent Memory Products, and I manage the product managers and marketing teams that develop the solutions and the products for persistent memory.

Daniel Newman: Oh, right. And really it’s a lot cooler than even that. Right? So you’re really in charge of building some really neat and interesting technology. And I have a handful of kind of specific questions that, I will tell everybody, I have warned her about, but I’m still going to make this tough. But right before we jump in, obviously I throw out Optane DC persistent memory or Optane persistent memory, and I throw this out there, but not everybody out there necessarily knows what that is. And before I dive into some of the questions about what you’re doing, can you just kind of tell everybody what Optane is?

Kristie Mann: Yeah, that’s a great place to start. So Optane persistent memory is actually a new product category. It’s something that we haven’t had in the data center before. It’s a little bit like memory and a little bit like storage, but it’s actually not like either of them. So if you took the elements of memory, and for me that’s the speed and the byte addressability and the fact that it sits right next to the processor, and then you took the best elements of storage, that’s the high capacity and the fact that the data is persistent, or you don’t lose it when you lose power, and you merged them together and provided that to your customer at a very affordable price, unlike DRAM, then you would have a product. And that’s what we’ve done. We’ve created this product that actually looks like a memory DIMM. It’s physically and electrically compatible, sits on the DDR4 bus, but it’s built using 3D XPoint media instead of DRAM, and it starts to make amazing things possible.
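The byte-addressable-but-persistent combination Kristie describes can be sketched in a few lines. In this illustrative Python snippet an ordinary memory-mapped file stands in for the persistent media (a real deployment would map a DAX-enabled persistent-memory device and flush CPU caches, rather than use a regular file and `flush()`):

```python
# Sketch of byte-addressable persistence: data written through a memory
# mapping survives after the mapping is gone. An ordinary file stands in
# for the persistent media here.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_standin.bin")

# Create a fixed-size backing region, as a persistent-memory range would be.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it and update individual bytes in place -- no read()/write() calls,
# which is the contrast with block storage that "byte addressability" draws.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:13] = b"hello, optane"
    mem.flush()          # analogous to flushing CPU caches to the media
    mem.close()

# The data is still there after the mapping (and the "power") goes away.
with open(path, "rb") as f:
    print(f.read(13))    # b'hello, optane'
```

The same load/store-style access pattern is what lets persistent memory sit beside DRAM on the memory bus instead of behind a storage stack.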

Daniel Newman: And really quickly, put that into plain speak, it integrates very, very smoothly, right? For those that are running their data centers on Xeon Scalable Cascade Lake, it’s almost-

Kristie Mann: That’s correct.

Daniel Newman: … the DEM plugs right in and it’s integration and software and everything has really been predetermined, right?

Kristie Mann: That’s right. We’ve worked really hard with our OEM service providers and our software providers, and it’s integrated into the platform design so that it sits right in the DDR4 slots right next to DRAM, and as long as you have a Cascade Lake processor and an OEM system that’s been designed for the persistent memory, it plugs right in and works.

Daniel Newman: Beautiful. And I will do that from time to time. Kristie, because I know memory and semiconductors and chips and some pretty geeky stuff, but everyone out there, we have a really great audience.

Everything from executives running companies to highly technical architects in network and software. So I try to always kind of bring things back to make sure everybody can get onto common ground. So, all right. Out of the way.

We know what Intel DC Optane or Intel Optane DC persistent memory is, but let’s talk a little more about how the business is going now. It’s starting to get into that more mature phase. The journey’s moving forward, and it’s seemingly getting some traction. I know that as I’ve been in analyst conferences around the country, I’ve been hearing the name more and more. Talk a little bit more about the ecosystem. Right? And what’s going on with Intel Optane persistent memory.

Kristie Mann: Yeah. And it’s funny that you say it’s getting to that mature phase because I don’t know. Eight months doesn’t feel mature to me yet, but every day we’re getting a little bit more mature. So it’s an exciting time. And I will say that we’ve been on a multi-year journey. We’ve been working with our ecosystem partners for several years now and Intel’s in a unique position in that we partner with OEMs, software providers, cloud service providers, and we’ve been working with them for a long time now and we’re really starting to see the fruits of our labor.

So let me start with the ecosystem. When I talk about the ecosystem, I kind of have a vision in my mind of concentric circles, with the center really containing our OEM partners. When we think about how the world works, our OEM partners are the foundation of the infrastructure that everybody uses to build out their data centers. So we need a system that has both the DRAM as well as the SSDs, and now the persistent memory, where the firmware and the BIOS have been optimized to understand what to do with persistent memory. We need our OEMs like HPE, Dell, Lenovo, and Cisco to be providing these systems and make them available globally so that customers can buy the persistent memory.

And I’m just so excited to say that we have all of the global OEMs now offering these systems worldwide with Intel persistent memory. And then outside of that we have powerhouses globally, like Inspur, Fujitsu and Vintech, ZTE and all the ODMs and systems integrators. So at that center of the circle we have a very strong foundation: all major OEMs are now shipping with Optane DC persistent memory.

Daniel Newman: Yeah, there’s a lot.

Kristie Mann: If you move-

Daniel Newman: Oh, I’m sorry.

Kristie Mann: Yeah. Go ahead.

Daniel Newman: Keep going. No, I was just saying that’s a lot. That’s like, stop, let’s take a breath. Wow. What a system you’ve built there. I said, I did say maturing. All right. And if I did say mature, I’m going to say that I said maturing.

Kristie Mann: Yes, that makes me feel much better. And then moving out from there, Daniel, you know, let’s talk about the software component. So with a memory DIMM or an SSD, you don’t really have to think about the software, but with a solution like Optane DC persistent memory, there are multiple operation modes. And to get the most out of this technology, if the software knows what to do with the persistent memory and how to place the data, you can actually allow the application to place the data, and you start to see even better performance. So we’ve been working with our ISV and OSV partners to optimize the software, and we now have a very large ecosystem of operating systems and applications that have been optimized as well. So we have VMware, Microsoft, Oracle, Redis Labs, SAS, Cloudera, Apache Spark, Aerospike, just to name a few.
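The application-directed placement Kristie mentions can be illustrated with a toy example. Everything here is hypothetical (a dict stands in for DRAM, a JSON file stands in for the persistent tier); the point is only that software aware of a second memory tier decides where each piece of data lives:

```python
# Toy two-tier store: the application, not the hardware, decides which data
# stays in the fast volatile tier and which moves to the persistent tier.
import json
import os
import tempfile

class TieredStore:
    """Hot keys stay in a volatile dict (stand-in for DRAM); cold keys are
    demoted to a JSON file (stand-in for the persistent-memory tier)."""

    def __init__(self, pmem_path, hot_limit=2):
        self.hot = {}                 # "DRAM": fast, volatile
        self.pmem_path = pmem_path    # "persistent memory": survives restarts
        self.hot_limit = hot_limit
        if not os.path.exists(pmem_path):
            with open(pmem_path, "w") as f:
                json.dump({}, f)

    def put(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.hot_limit:
            # Application-directed placement: demote the oldest hot key.
            old_key = next(iter(self.hot))
            old_val = self.hot.pop(old_key)
            with open(self.pmem_path) as f:
                cold = json.load(f)
            cold[old_key] = old_val
            with open(self.pmem_path, "w") as f:
                json.dump(cold, f)

    def get(self, key):
        if key in self.hot:
            return self.hot[key]
        with open(self.pmem_path) as f:
            return json.load(f).get(key)

store = TieredStore(os.path.join(tempfile.mkdtemp(), "cold.json"))
store.put("a", 1)
store.put("b", 2)
store.put("c", 3)          # "a" is demoted to the persistent tier
print(sorted(store.hot))   # ['b', 'c']
print(store.get("a"))      # 1, served from the persistent tier
```

Real App Direct software makes this decision per data structure (hot indexes in DRAM, large persistent pools in Optane), which is why the ISV optimization work Kristie describes matters.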

Daniel Newman: Just to name a few, and seemingly more and more, Kristie, with each question I ask you. So, let’s narrow it down to one here. Recently at Oracle OpenWorld, boom, Optane DC persistent memory was announced to be integrated into its new Exadata X8M. That’s a big win. I mean, when you talk about databases, no matter what people’s thoughts are around Oracle, that company has a humongous presence in that space, so that had to be a really big success for the company. How did that deal come together?

Kristie Mann: Yeah, that was a big day for us. Oracle has been one of our early development partners for this product, and I’m not sure if you recall, but back in 2017 they actually did a demo of Optane persistent memory. They did data replication at Oracle OpenWorld onstage. That was right after we had samples, so they’ve been a longtime partner, and we’ve been working together for a long time. They immediately saw the benefit of having more hot data closer to the CPU, and they saw the value of the software optimization that their database software had by optimizing for App Direct. And clearly it paid off, because if you saw when they made their announcement, they saw 10X the performance of other comparable offerings, based on the claims that Larry Ellison made during his keynote. So it’s been a close partnership with them. It has elements of technology collaboration, competitive features, and of course a compelling performance-per-TCO story in my mind.

Daniel Newman: And nobody makes bigger claims from the stage than Larry Ellison. I say that with admiration. I’ve always found Larry to be so intelligent and so bold on stage. So I was cheering, knowing that we’d spent about a year working side by side with you and your team doing research around non-volatile memory and the adoption of it. When I saw it being picked up by Oracle, that seemed like a really, really good thing. So, but beyond Oracle, I’ve actually been coming across more wins. I did a little research, I saw SAP, saw VMware, I saw Linux, all claiming deep integration with Optane persistent memory. Talk a little bit about the common threads and what drove these various significant players, all in different parts of the food chain, to also align with you and the team building Intel Optane persistent memory.

Kristie Mann: It doesn’t surprise me that you were doing research again.

Daniel Newman: Oh, yeah. There I go.

Kristie Mann: Yep. So I think there are probably three common threads for why we’re seeing really successful deep integration across the various ecosystem partners. You know, I think first and foremost, this is a real problem that we’re solving with this technology. The gap has been acknowledged for a lot of years. Anytime the gap in performance gets greater than 10X, you typically see a new tier emerge, whether it’s cache on the processors or SSD versus HDD. When you see these large gaps, you typically see a new technology emerge. In this case, it’s been hard to tackle the big memory-storage gap without software help. And that’s why we have this unique hardware/software approach, and they recognize that there’s value in us doing this together.

But the other thing is that I think Intel has chosen more of an open standards approach. By allowing an open programming model and influencing through standards bodies like SNIA, any investment that these software companies are making is transferable to other persistent memory solutions. So it makes the investment decision easier, and it makes it so that they don’t feel like they’re locked into one technology. We’re all advancing the industry together. And I think that that’s very powerful. We’re partners, not a supplier-customer relationship.

And then I think last, Intel is uniquely positioned to be able to influence both hardware and software partners and we’re not competitors. So through that ability to influence both ecosystems, we have the technical expertise and the collaboration models to be working across with all of these partners.

Daniel Newman: There is a common thread. If there’s a company that does partner marketing and kind of just understands that ecosystem so well, Intel’s always impressed me. And as an analyst being in the scene, I can’t tell you how frequently we’re engaged with a different OEM or a hyperscaler or a network company on a project to find out Intel had some part of this project we’re working on because the company really puts a lot of value into its partnership. So it’s a really smart strategy and it certainly has to work well for you and your team as you’re trying to expand the presence and where Optane persistent memory is being deployed.

Speaking of which, Kristie, I just said hyperscalers, and if you read anything I write and if you follow my work, you know that I’m very bullish on hyperscalers driving the future of architecture that will be used for enterprise IT. I’m not saying private cloud or on-prem is going to go away. I’m just simply saying that public cloud and IaaS are going to grow. It’s still actually in… Crazy as I… I called your product mature, and I’m going to call cloud in its infancy, Kristie. How about that for a laugh. But, you know, only 20 to 25% of cloud workloads truly deploy to the cloud, and hyperscalers are making more and more investments in things like AI and ML and custom ASICs and specialized technologies and platforms, for instance, that can help companies work more efficiently and get more of what they need from their IT from public cloud. So I imagine you’re thinking about the public cloud and the hyperscalers for Intel’s Optane persistent memory. Talk a little bit about that relationship.

Kristie Mann: Yeah, the hyperscalers are really core to our adoption strategy for this technology in the future. And when I think about how we work with the hyperscalers, it’s in a very different working model depending on which type or which part of the business you’re interacting with because they’re huge and they have a lot of different needs. And so I tend to think of them maybe in two separate business models, although there are probably twenty, if I was really going to be honest about it. But the easiest way for me to separate it is really whether I think about their internal infrastructure, which is just as big as their services infrastructure.

And so if I’m thinking about what they’re doing for their internal infrastructure down to every single service provider or hyperscaler, they are looking at what can they do to drive efficiency and consolidation within their own infrastructure to be more efficient. And I would say that there’s a huge play there for persistent memory.

And then when we look at what they’re doing in their services or in their software as a service or unique things that they do, like video recommendation or AI services, the sky’s really the limit. And so I would say that we are working with them with deep engineering collaboration. We’re looking at proof of concepts. We’re even as far as beta and deployment on a lot of different working models there. But those things do take time. It’s not a short cycle, and it’s definitely a huge part of our strategy.

Daniel Newman: Yeah, no question. And an interesting thing to watch will be the adoption, and then the actual utilization of these solutions in the public cloud by the hyperscalers’ customers. So I’ve got one more question for you, and thank you so much, by the way, Kristie, for taking the time to join us here on the Futurum Tech Podcast. You know, we’ve talked a lot about the industry, so we’ve talked about the ecosystem, we’ve talked about the hyperscalers, we’ve talked about some of the partners that have adopted it. What about the customers? So the customers that you’re really deploying this for. Can you share any customer successes, wins, challenges, really at that end user level? And maybe just a little bit about what you see as the future of growth for Optane persistent memory.

Kristie Mann: Sure. Yeah. And these are the fun stories that I like to tell, and we’re starting to see more and more of them. So I selected three, you gave me a warning. So I selected three for you that we could talk about today. One of them is a content delivery provider. I have an online retailer whose name I wasn’t allowed to use, but I’ll still tell you about them. And then I have a gaming hoster. So I think these are three great examples. So I’ll tell you a little bit about each one, and then you can ask any questions that you have.

But let’s talk about Qwilt. So they are a content delivery provider, and their strategy has been to move more and more of their video content to the edge in regional data centers so that they can improve their customer experience.

They’re finding that they want more and more of the video content right near their customer so they don’t have as much buffer time. And they found that by adding the DC persistent memory to these edge systems, they were able to double the amount of cached content in memory with the Optane DC persistent memory at the same cost as their prior solution. So they were super excited about that. They let us announce this to the press at the Korea Memory Day event that we had back in September. So they are a very excited and happy customer.
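The “double the cache at the same cost” arithmetic works out roughly like this. The prices below are purely hypothetical stand-ins (the interview gives no numbers); the only assumption is that the persistent tier costs about half as much per gigabyte as DRAM:

```python
# Hypothetical per-GB prices (not from the interview) showing how a cheaper
# memory tier yields 2x the cached content at the same spend.
dram_price_per_gb = 10.0   # assumed relative price of DRAM
pmem_price_per_gb = 5.0    # assumed: half the per-GB cost of DRAM
budget = 1280.0            # fixed memory budget per edge server

dram_capacity = budget / dram_price_per_gb
pmem_capacity = budget / pmem_price_per_gb
print(dram_capacity)                  # 128.0 GB of cache with DRAM alone
print(pmem_capacity / dram_capacity)  # 2.0x the cache at the same cost
```

Whatever the real prices were, the ratio of per-GB costs is what sets the capacity multiplier at constant spend.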

Daniel Newman: Yeah, that’s a good one. I remember reading about that a little bit. So it sounds like it’s significant when you start talking about those percentages, you can be talking six, seven, eight figures depending on the scale of the company.

Kristie Mann: Exactly. Exactly. It’s amazing to think of what they can do, and the fact that the end consumer watching Netflix or watching your local football game doesn’t have to wait for it to buffer when you rewind or fast forward. I mean, who loves to wait for buffering, right?

Daniel Newman: Now, when you say football, which football are you talking about?

Kristie Mann: Oh, you know I am not a sports fan, Daniel. But any kind of NFL or soccer, either one.

Daniel Newman: Okay. I’m a soccer guy, so I just want to know how good friends we were going to be when this was over. Keep going. I know you had two more.

Kristie Mann: Okay. So the next one is a large online retailer, and I don’t have permission to talk about who it is, but they were using a DRAM and hard-drive solution to respond to online queries for online retail. And they were doing batch processing overnight to make recommendations for the next time the customer shopped or for when they needed to make site improvements. They worked with our solution architects to create a system with less DRAM and they added Optane DC persistent memory. And after doing that, they were able to achieve 8X faster query response times and they were able to stop batch processing overnight and provide real time recommendations. Imagine what that did to their ability to shape customer demand while they’re doing their shopping.

Daniel Newman: Yeah, that’s pretty impressive. Now I want to know who it is.

Kristie Mann: Yeah, I know.

Daniel Newman: I’m sorry you can’t share it on the air. I’m going to ask her again when we go off the air, but since I can’t tell you, she probably won’t tell me anyway. All right, so I only have just a minute. So give me the last one, because these are always the funnest part. Hearing the customer stories.

Kristie Mann: I know, right? Okay. The last one is Nitrado, which is an online gaming hoster. And I’m already excited about this one because we tested this running Minecraft, and both of my kids play Minecraft endlessly. So Nitrado was struggling with providing developers and gamers with the right amount of memory for their gaming needs. And they would typically run out of memory before their CPUs were fully utilized. So by utilizing Optane DC persistent memory, they were able to balance their CPU and memory utilization, and they achieved a 175% increase in game instance density without sacrificing any performance. So by improving their data center efficiency, they were able to keep hosting prices low and still provide the same gaming experience for their customers.

Daniel Newman: Wow. Are you a gamer?

Kristie Mann: I’m not, but my kids are and now they’re in love with Nitrado.

Daniel Newman: Well they better be.

Kristie Mann: I know.

Daniel Newman: It pays for dinner.

Kristie Mann: That’s right. But they sleep with persistent memory under their pillow too, so…

Daniel Newman: Well, we all do that. You know, you put down a pillow and you see if it grows. That’s terrific, Kristie. You know, I want to thank you so much, because I probably kept you a little longer than I told you when I asked you to come on over and do this episode. But after the first one, there was just so much more I wanted to cover on Optane persistent memory, and I only got to scratch the surface on the first one, so I figured, hey, let’s do another one. Let’s dive deeper. So appreciate you coming on. Appreciate you bringing the customer stories.

But as you can kind of see, anybody out there that’s listening, you can hear, from the ecosystem to the hyperscalers, which is kind of part of the ecosystem, definitely part of the ecosystem, to the customers, it’s being adopted on many levels. Which means, let’s say it’s not mature, but it is maturing. And so just like cloud, I think it’s going to continue to be adopted. It’s going to continue to see scale. The research that we did with Intel showed a certain amount of interest, especially at the application and data level, with database applications being especially important for non-volatile memory. And the Intel Optane persistent memory solution is really unique, and if your computing server is built around Cascade Lake, this is such a natural direction to take your memory strategy.

So thanks so much for tuning in. Can’t wait to have you back at some point. I’ll probably have to give it a few months, because you know I can only hammer you so many times on this stuff. So many great insights. And for everyone out there, really appreciate you tuning in. Please hit that subscribe button. Please stick with us. Come back for more interviews with really smart execs from some of the world’s greatest tech companies, and tune in for additional episodes of the Futurum Tech Podcast, where we cover the whole industry weekly. But for now, for the Futurum Tech Podcast Interview Series, for Daniel Newman, joined by Kristie Mann, we’re out of here. Talk to you again very soon.

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Thank you to Intel for sponsoring this edition of Futurum Tech Podcast.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

