The Six Five – On the Road with Pure Storage CEO Charles Giancarlo

On this episode of The Six Five – On the Road, hosts Patrick Moorhead and Daniel Newman are joined by Pure Storage’s Chairman and CEO Charles Giancarlo for a conversation on Pure Storage’s advancements and strategic position regarding Artificial Intelligence (AI).

The discussion covers:

  • The evolution of Pure Storage’s stance on AI during Charles Giancarlo’s 7-year tenure
  • Current state of Pure Storage’s AI deployments
  • The critical role of flash technology in driving AI innovation
  • Insights into Pure Storage’s partnership with NVIDIA
  • Pure Storage’s use of AI for enhancing storage operations for system users
  • A deep dive into Evergreen//One

Learn more at Pure Storage.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.


Patrick Moorhead: The Six Five is on the road here in Santa Clara at Pure Storage’s world headquarters during GTC. Dan, what a week. What an event.

Daniel Newman: Yeah, it’s great to be here. It’s beautiful here in Santa Clara. But this week, the story of AI and the future has been building, and we’ve had so many great opportunities to hear from the leadership at NVIDIA. We’re also talking to so many of the partners.

Patrick Moorhead: That’s right.

Daniel Newman: Those that are driving the business and that’s what’s brought us here to Santa Clara.

Patrick Moorhead: That’s absolutely right. And the other thing, we know that it really takes a village here: GPUs, networking. I think one of the only things NVIDIA is not doing now is storage, and we just happen to have Charlie here, CEO of Pure Storage. Charlie, welcome back to The Six Five.

Charles Giancarlo: Well, thank you. And welcome to Pure Headquarters.

Patrick Moorhead: It is beautiful here.

Charles Giancarlo: Thank you.

Patrick Moorhead: A lot of Pure Storage orange, which is amazing and…

Charles Giancarlo: A lot of green as well.

Patrick Moorhead: A lot of green. Moss art is in and trendy. It’s a beautiful campus.

Charles Giancarlo: Oh, thank you. Thank you very much.

Patrick Moorhead: I heard it’s pretty new.

Charles Giancarlo: It is. We moved in last June and the troops love it, we love it. I feel like we finally got out of the dormitory and have our own place.

Daniel Newman: It looks great. It’s beautiful here. Charlie, you joined Pure Storage about seven years ago. You’ve been speaking about AI since you joined, really, and you were focused on the three legs of infrastructure: storage, networking, and compute. Talk to us a little bit about the position you had seven years ago, where it came from, and how it’s evolved now.

Charles Giancarlo: Sure. We have to go back in the time machine a little bit, and seven years ago, the general view was that everything was going to the cloud, right? You remember that very well, right?

Daniel Newman: Oh, absolutely.

Charles Giancarlo: Literally everything was going to the cloud, and storage in particular was headed for white boxes, open source code, fully commoditized-

Daniel Newman: JBOD.

Charles Giancarlo: Right. And frankly, practically everybody in the market, all the major vendors, viewed it that way. And so what does that mean? That meant they stopped investing. And I had a somewhat contrarian point of view, which was: if we believe that computers, software, and AI are going to continue to change everybody’s life, then data centers, whether in the cloud or, in my belief, hybrid, meaning that many enterprises were going to keep running their own data centers, were going to continue to be important. And if we continue to see advances in compute and advances in networking, then we have to see advances in storage as well. That was going to leave the opportunity open for a challenger to really challenge the major players who were no longer investing in it. And so when Pure came calling, I responded. I thought this was the company that, by actually investing in storage, was going to change the trajectory of storage and maybe win the day, and we’re seeing that happen.

Patrick Moorhead: Yeah, the foresight, it’s easy to see now, right? But it was really a hard call at that point. As an analyst, I’m constantly trying to educate on the importance of what I call the quadrangle: processing, networking, storage, memory, and accelerators. Storage is such a key part of the AI pipeline. And as you said, this is not new for you, you’ve had multiple AI deployments for years. But I have to ask, what’s the state of those deployments? And maybe we can go from ML, DL, generative AI, fill in the blanks here.

Charles Giancarlo: Yeah. Well, we have hundreds of AI customers, and I have to smile a little bit when I now talk about traditional AI, right? Because there’s nothing traditional about AI. But AI has been around for at least a decade, doing things such as protein folding, simulating stock market reactions for high-speed trading, or controlling robots in a factory. And we’ve been selling into those environments for many, many years. Now, of course, the world has changed seemingly overnight with ChatGPT, with Gen AI, and now with RAG and others. And undoubtedly it’s going to continue to build upon itself over the next several years.

Fortunately, we do have a good background, but we also have to run like hell, pardon my French, to keep up with this market. It’s great to have a great partnership with NVIDIA and others as we go down this path. A number of our customers, especially our largest customers, are driving architectures and driving software that demand a lot out of our systems. So it is great to be working with these companies. And just to put a bow around this picture, I personally believe AI is not only going to drive the highest end of storage for things like large language models and training; it’s going to raise the level of need for all of the storage in customers’ environments as they start to want to run models against all of their data.

Patrick Moorhead: Makes sense.

Daniel Newman: You mentioned traditional AI, Charlie, and it almost makes me think of the term legacy AI.

Patrick Moorhead: That doesn’t really exist yet.

Daniel Newman: Although Pat and I love to talk about how the earliest algorithms go back four decades, so it’s also not that new. But for storage, I think there are well-understood legacy architectures, and of course Pure has been very focused on flash. Now, with this AI inflection that we’re seeing, talk a little bit about why flash, and why you see it as so important for driving this AI innovation.

Charles Giancarlo: Right. Well, there are several reasons. And not only are we highly focused on flash, we’re only focused on flash. We have no disk whatsoever. Flash is important for multiple reasons. One is that there’s a lot of data. In fact, the majority of the world’s data is on hard disk.

Daniel Newman: Sure.

Charles Giancarlo: Okay. Which is hard to believe. When we think about modern computers, we don’t think about mechanical systems. Remember, go back to ENIAC; it had vacuum tubes, right? Well, there are still mechanical systems in most data processing today, and they’re called hard disks. Those hard disk systems just barely have the performance levels necessary for whatever application they’re hiding behind. And because of that, if you also want to leverage that data for AI, you have to copy it out of there and put it in something more performant. But we now have the ability to just replace those hard disk systems with a similarly priced flash system, which will have four to five times the performance, at 1/10 the space, power, and cooling of that hard drive system.

So that’s another thing: 1/10 the power and cooling. Well, as you start adding GPUs, what do you need a lot of? You need a lot of power and cooling. And data centers tend to be limited in terms of the amount of power that they have. Data centers are not sold in square feet anymore; they are sold in megawatts. And if you’re pressing up against the edge of your power envelope, you’re stuck. There’s no more power to be had in most locations in the world, including in the US right now. And if you are forced to expand to another data center or to bring in more power, you’re talking about millions of dollars and years of activity and expense. So by being able to reduce the power and cooling footprint of your storage, you could save something on the order of 20% of your total data center power, which you could then reuse for GPUs. So whether it’s power or whether it’s performance, flash just has it all over hard disk. The press loves simple projections, and we’re now on track to eliminate all disk-based systems in the next four years.

Patrick Moorhead: Yeah, it’s interesting when you put the performance, CapEx, and OpEx together, that combination. And you talked about millions of dollars; we’re talking hundreds of millions if not billions. I know there’s a difference between a hyperscaler and maybe a colo or enterprise data center, but you’re in the tens if not hundreds of millions of dollars now. And that’s accelerating. You see factoids about how much power next-generation data centers will draw across the entire globe, and we’re looking at doubling data center power draw in two years. So it’s not just an economic CapEx play, or an economic OpEx play, or even performance; when you have all three, like you have with flash, it’s very compelling. And by the way, every year I have heard that flash is going to get rid of hard drives by the end of the year-

Charles Giancarlo: Oh, yeah-

Patrick Moorhead: Yeah. We’ve seen that, but it’s like we’re getting closer and closer there. So GTC, NVIDIA, you and NVIDIA have been partners for a long time. Can you talk a little bit, frame the relationship? What have you worked on? What are you working on now together?

Charles Giancarlo: Well, it’s a strong technical relationship. We have delivered together some of the largest, if not the largest AI supercomputers in the world. When you are pressing the boundary of performance, there’s always another bottleneck to get through. A lot of times, it’s software, by the way, it’s drivers or it is the performance of the actual application software that’s running. And together we’ve run up against a number of those hurdles and we work together to be able to eliminate them. So that’s been great.

Now, as you know, one of the latest areas of focus for NVIDIA is what they call RAG, or Retrieval-Augmented Generation. In RAG, what you want to be able to do, and this is something that we’re very focused on together, is access a large fraction, if not all, of the data inside an enterprise. And again, that means you have to be able to get access to it. There are two reasons why it’s very, very difficult to get access to all the data in an enterprise right now. One is, as I mentioned before, data is largely hidden behind the application it serves today, right? It is not a first-class citizen from a network perspective.

Patrick Moorhead: ERP with ERP.

Charles Giancarlo: That’s right. So if you want to copy the data, you often have to do it through the application. If you want to access the data, you have to do it through the application. And if the performance level is just enough for the application, again, that’s another barrier. So between the fact that the data itself is not networked and that it doesn’t have the performance necessary for RAG, as I mentioned, this is an opportunity to raise the level. Pure helps with that in two different ways. One we mentioned, which is the performance level. The second is that all of our systems operate on the same operating environment. In data storage today, even if you have a single vendor that’s not Pure, generally they’re supporting that full environment with four or five different hardware-software combinations.

They’re not unified. We have one software environment, we call it Purity, and it exists on all of our systems. That allows us to leverage it with something we call Fusion to network the data storage underlying the applications it serves. And so this allows data to be accessible to things like AI, even while that data is supporting its primary application.

Daniel Newman: Yeah, it’s really interesting. We’ve entered a world now where AI is sort of infused in every product. And of course, every one of our businesses, Charlie, is using AI.

Charles Giancarlo: Yeah.

Daniel Newman: I have to imagine storage has evolved in a lot of ways. You talked about Purity, you’ve talked about your software layer. How is Pure thinking about using AI to improve storage operations for the practitioners using your systems? Is AI becoming part of that story?

Charles Giancarlo: It is. We just saw a demo of this last week at our sales kickoff. Simply stated, one of the things Pure was founded on was the idea of simplicity. Our individual systems are very simple. What Fusion is doing is making a full-scale deployment of lots of different systems able to be accessed as if it’s a single pool of data. And now we’re looking at using AI to give it a natural language interface for customers. So a customer could say, “Listen, I want to provide X number of terabytes of storage to this application that I’ve just deployed.”

And instead of having to define LUNs or actually write the code necessary to make it happen, it would be generated automatically. Or let’s say you’re a retailer and you’ve been running Pure Storage for the last year or two. You could say, “Look, I’m expecting a 30% increase in traffic through the Christmas holiday. Will the system be able to handle it, or are there things I need to enhance?” You could ask in that kind of language in a prompt, and it would come back and tell you what you might need to do differently, or order, or change in order to meet that surge in demand.

Patrick Moorhead: It’s been really fun. In my heart, I’m a product person, seeing the points of differentiation and the incremental value to customers. I like the way that you were software first, and it was really about the experience. I like the architecture that enabled, “Hey, you need new storage. Let’s just pull it out and put an upgrade in,” very sustainable, and you also have Evergreen. How does Evergreen fit into, it’s hard for me to say this with a straight face, legacy AI? But how does it play into this newer flavor of generative AI?

Charles Giancarlo: Well, the great thing about Evergreen, and now especially Evergreen//One, is that it gives customers optionality for the future. If they had purchased a product, they would be stuck with what they purchased and have to utilize that, right? With Evergreen//One, they can basically sign a contract, and then as they go forward and their needs change, they can swap out different parts of their storage for other parts. Whereas in the past, a customer would decide, well, I’ve got this environment that I need storage for over the next 10 years, and they would buy it. How much can we predict five years, 10 years out-

Patrick Moorhead: That’s really hard…

Charles Giancarlo: … in the IT environment. Evergreen//One gives the customer the benefit of not having to decide today what the next five or 10 years looks like. It gives them all that flexibility they can change underneath.

Patrick Moorhead: Yeah, I mean we’ve seen just crazy progress: ML, DL, generative AI, mixture of experts. It seems like every two years we’re coming up on that next new thing, and surely researchers are working on something. So somebody can buy Evergreen//One and have some sort of guarantee that they can protect some of their investment, even though you might be putting new tech in.

Charles Giancarlo: Correct, new tech or different tech, right? Or higher performance or frankly, I need less of X and we can do that.

Daniel Newman: I mean, Pure really built the Evergreen//One model around looking and feeling a heck of a lot like cloud and how people consume it.

Charles Giancarlo: And actually I’ll tell you that it is cloud now. It is SaaS.

Daniel Newman: I was just setting you up to say that.

Charles Giancarlo: Yeah, thank you. No, it is a SaaS model. The only difference is the infrastructure for that SaaS model. It could be on a hyperscaler, it could be in a colo, it could actually be on the customer’s premises. But to really wrap a bow around that SaaS model: if it’s on the customer’s premises, we pay them for the space and power, because we’re hosting it there.

Daniel Newman: But it very much feels like how enterprises want to consume, and you’ve given them the flexibility to do that, to stay up to date, to stay upgraded with the newest technologies. You can continue to evolve it and then deliver it to them.

Charles Giancarlo: Exactly right.

Daniel Newman: And as I see it, it’s interesting, Pat, so many people at GTC are talking about how big of a moat NVIDIA has. And you and I both, we endlessly like to talk about how there are the chip mercenaries, right? Oh, you’ve got a GPU, you’ve got an ASIC, you’ve got… And then there are the systems, and then there’s the whole stack-

Charles Giancarlo: CUDA.

Daniel Newman: …the software. And if you look at that, that’s sticky. With Evergreen//One, I think there’s an argument that you’ve put some years time, energy, effort into building a pretty big moat. How big is that moat?

Charles Giancarlo: It’s a big moat. We have four things. First, we think you have to start your software from scratch in order to mimic what we do. Evergreen is a good example of that: this whole idea that we can do non-disruptive upgrades, literally forever, of every component in the system without causing any application downtime for our customers. If you haven’t built your software from scratch to be able to do that, you’re not going to be able to. I don’t think our competitors can, by the way; we’ve had it for 10 years and no one’s caught up yet. Second, we have our DirectFlash technology. DirectFlash means we don’t use SSDs, which were designed to mimic hard disks. Since when do you design a semiconductor to mimic a mechanical device? So we have our DirectFlash. It’s what’s going to allow us, in my opinion, to penetrate even the hyperscalers with our technology.

Third, we have what we call our cloud operating model. This is the idea that the customer interacts with us and with the systems entirely through the web. And with Evergreen//One, they never have to touch a system; we manage that entirely. And the fourth is the fact that we alone have a highly consistent portfolio, by which I mean we have one operating environment, which we call Purity, and one management environment, which we call Pure1, to manage all of our products. Whereas the storage industry has been characterized by having different hardware-software combinations for every storage niche that exists out there.

What flash has allowed us to do is really unify, to have one environment. Imagine we had a half dozen different networks in every customer; how would you be able to network those applications? You wouldn’t, right? With a half dozen or more different storage systems, you can’t really network that data conveniently or efficiently. Having one operating environment across all our products makes it easier for our customers to manage, gives us the ability to have Fusion, whereby the data is able to be networked, and it just reduces the complexity for everyone involved.

Daniel Newman: There’s a bit of a data management story in there.

Charles Giancarlo: There’s a data management story in there, exactly.

Patrick Moorhead: And by the way, I have to hit on this. I’m never the first analyst to ask the question, but one thing I’ve definitely seen in the storage industry is storage companies recasting themselves as data companies. And then it’s like, okay, well, what about the data management companies? How do these two come together? So there’s a little bit of marketing going on, but is there also a little bit of reality, where the data is right there, so let’s activate it, let’s do something?

Daniel Newman: It’s prompting, it’s GenAI.

Charles Giancarlo: So there’s a little bit of both, as you point out. And the difficulty is that there aren’t the right words yet to describe this. So for example, data storage companies generally don’t know the exact data that’s in the system. But what we will be managing is not the individual bits of data, but the data sets themselves, the whole data sets. Think about how data is exploding and how there are different data sets that need to be used and managed. Because when you make a copy of a data set, don’t you want to know where that copy is?

And when you modify that copy and then make another copy of it, you want the provenance of all these different data sets and how they relate to one another. That’s what we can manage. Now, there’s still going to be the whole ETL chain, somewhat disrupted by AI, but there’s going to be a whole ETL chain, and there’s data management that goes on there as well. So if I were to be even more exact about what we, Pure, do when we manage data, I’d call it data set management.

Daniel Newman: Interesting. Kind of a file-versus-object thing, and there are some complexities in that. And by the way, Pat, you were kind of nice, but I’ll give Charlie the last word on this: there are some companies that are storage companies that are proclaiming to be data management companies.

Charles Giancarlo: Oh, yes,

Daniel Newman: And that’s going on, and we’re doing a lot of work internally to sort it out. There’s a certain amount of that, to your point, that can be done. But there is a cutover.

Charles Giancarlo: There’s a cutover between data sets and data itself. That’s right.

Daniel Newman: So wrap us up here. Give us the outlook for the next year. I mean, geez, it feels like a year doesn’t give you enough runway, but realistically, how much innovation happens in a year now? What’s the next year for Pure?

Charles Giancarlo: Well, we’ve done an amazing… I mean, really, our product proliferation over the last year has been nothing short of tremendous. Not even a year ago, we introduced our first E Family product, which was FlashBlade//E.

Patrick Moorhead: That’s right.

Charles Giancarlo: We’ve had tremendous success with the E Family. This is the first product line that can address cheap disk at a similar price point, but at 1/10 of the space, power, and cooling. And it’s what allows us now to claim, with the one operating environment, that we can satisfy all of a customer’s needs. Because before, flash was just the high end; we weren’t addressing the majority of needs. Now we can. Fusion is going to allow us to network that data, or those data sets, so that customers can get access to it regardless of where it is inside the entire environment.

And you’re going to see us, of course, really addressing some of the highest performance needs, even for training in the AI environment. So: a full range on one operating environment, one set of products that are consistent from AI all the way down to archive, block, file, and object, and everything from small scale to large scale to exabyte scale. It’s a pretty broad and amazing product line to be fully integrated on one operating system.

Daniel Newman: Charlie, thanks so much for sitting down with us here.

Charles Giancarlo: No, it’s really my pleasure, always.

Patrick Moorhead: It’s great stuff. It’s great to see the office. Great message. I mean, it’s fun stuff.

Daniel Newman: Excited to follow the journey and I’m sure we’ll be talking really soon.

Charles Giancarlo: Thank you very much for the time.

Daniel Newman: All right, everybody, hit that subscribe button and join us for all of our episodes here of The Six Five. We are on the road at Pure Storage’s beautiful headquarters here in Santa Clara during the GTC 24 conference. We’ve got to go for now, but we’ll see you all really soon. Bye bye.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

