The Critical Role of High-Capacity Storage in AI – Six Five On The Road

Just 25 years ago, SSDs held a mere 1 or 2 GB; fast-forward to 2024 and we are at 122 TB!

Now that AI has BIG dreams, it needs seriously BIG storage to make them come true! Patrick Moorhead and Melody Brue are with Solidigm’s Dave Dixon and Greg Matson for a special panel edition of Six Five On The Road. The group sits alongside industry leaders Chloe Jian Ma from Arm, Renen Hallak from VAST Data, Jacob Yundt from CoreWeave, Sophie Kane from Ocient, and Roger Cummings from PEAK:AIO for a conversation on the evolution of high-capacity storage and its pivotal role in AI’s future.

Highlights from the panel include:

  • The introduction of Solidigm’s new 122TB drive and the implications of QLC-based high-cap SSDs in the AI realm
  • Tech transitions over the years from the era of minicomputers to GenAI, focusing on how each shift impacts the compute-memory-storage-networking spectrum
  • The challenges and solutions surrounding storage and data management for AI, and the critical need for power and space efficiency
  • How Solidigm and its partners are addressing the surging demand for energy-efficient AI infrastructures and the benefits of high-capacity SSDs and QLC technology
  • Forward looking thoughts on growth areas in AI, storage innovation, and the role of efficient data centers in sustainable technology advancement

Learn more at Solidigm.

Watch the video below at Six Five Media and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: Six Five On The Road is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On The Road here in New York City. We are at Solidigm’s 122-terabyte launch event. It is exciting to be here, Mel. And as a recovering product person, I love product announcements.

Melody Brue: It is exciting.

Patrick Moorhead: It is for sure. And as analysts, we have to reflect on the industry and how this announcement fits in. And it is pretty clear that with every major inflection point we’ve seen, whether it’s minis to client-server, the PC revolution, social, local, mobile, generative AI, web, e-commerce, we always put pressure on the subsystems for infrastructure, whether that’s CPUs for compute, networking, memory, and storage. We’ve heard a lot about GPUs with generative AI. We’ve heard a lot about photonics with networking. We’ve heard a lot about HBM with memory on GPUs, but there’s just not a lot of conversation about the storage element. And what really excites me about this announcement, and I think it should excite everybody out there in the tech world, is that it not only brings higher performance. It brings higher reliability. It brings higher sustainability. And with energy out there hitting us big time, that’s important.

Melody Brue: It is important, and we really do have to address both the space and the power efficiencies.

Patrick Moorhead: That’s absolutely right. So without further ado, let’s dive in. We’re going to have a conversation. You’ve heard what we think about this, and you’ve heard what Solidigm thinks about this. Let’s go in and talk to some of their partners. Yeah, why don’t we start with Chloe. I noticed everybody, I think, is in alphabetical order here. Just maybe give your name, who you work for, what your company does. I know a lot of people know what your company does, but just for the sake of the audience, and we are getting this on video.

Chloe Jian Ma: Sure. My name is Chloe Jian Ma and I run go-to-market for Arm’s IoT and Embedded business. Strangely, actually, from our line of business, we cover the storage sector. Arm, we actually went public in New York about a year ago. September 14th, 2023, we became a public company on NASDAQ. And Arm is a compute platform company. And our roots are really in embedded and edge computing. I don’t know about you, but I cannot live without my phone. And if you are using one of these smartphones, there is a 99% chance you are on Arm. But we have made a lot of progress from the edge to basically the core and data centers. And most recently, since about 2018, you have probably heard about AWS Graviton from gen 1 to gen 4, Microsoft Cobalt, Google Cloud’s Axion. These are all hyperscaler self-developed, Arm-based server CPUs. And in the AI era, NVIDIA’s Grace Hopper and Grace Blackwell, that Grace CPU is also based on the Arm architecture.

So we’ve been making a lot of progress from edge to cloud. Arm is the most pervasive compute architecture. And we’re not only going from cloud to edge, but even within the cloud and data center, we’re in compute, and we’re in networking and interconnect like the NVIDIA DPU. The BlueField DPU has Arm processors inside. And we’re also in storage. A lot of storage controllers are based on Arm. I think that can offer a lot of imagination. And this pervasiveness of Arm allows data to be processed where data gravity dictates.

Patrick Moorhead: Excellent. We’re just going to go right down the line. Roger.

Roger Cummings: Hi, everyone. My name’s Roger Cummings. I’m the CEO of PEAK:AIO. We are helping enterprises be successful in AI and HPC workloads. How we do that, and you folks know this, is you invest in the GPU infrastructure. Well then, you find out that your legacy storage infrastructure isn’t keeping up with that GPU investment. So we have a software-defined layer that turns every common utility node into a supercharged AI and HPC server for you to take full advantage of that investment. We’ve had some great… We center on three things: performance, density, and power. And it really correlates to Solidigm’s message as well. You see the physical footprints getting smaller and smaller. We live at that edge. We provide intelligence at that edge. We can not only run your models but run the inference associated with them, where we have some great QLC technology that we offer as well. We’ve got some wonderful use cases with Solidigm, and we have many more to come, so I look forward to working with you guys.

Jacob Yundt: My name is Jacob Yundt. I’m the director of Compute Architecture at CoreWeave. I usually introduce myself as the server guy, so if you see me around, the server guy. But if you’re not familiar, CoreWeave is a specialized cloud service provider. We focus on accelerated compute. You can kind of think of it as HPC as a service. But right now, we’re focused on building the biggest, baddest training clusters the world has ever seen. We’re also focusing a bunch on other types of accelerated compute like inference. But right now, similar to the messages that we’ve been hearing today, we’re focusing on power efficiency and scaling, and the story of this high-cap QLC drive is tied to that.

Sophie Kane: And I am notably not Dylan Murphy.

Melody Brue: I was going to say, that’s a lot to go after and also you’re not Dylan.

Sophie Kane: Yes. Yes. Dylan couldn’t be with us tonight. He got stuck on a train on the way from Boston, but I’m happy to step in. So I’m Sophie Kane and I’m the director of Growth Marketing and Business Development for Ocient. And Ocient is a data analytics software solutions company. And we specialize in providing analysis for always-on, compute-intensive workloads for both data and AI. And we do that by taking advantage of what we’re going to talk about today, which is putting the compute next to the storage. And on average, we typically decrease cost, energy, and footprint size by 50 to 90%.

Patrick Moorhead: All right, Renen.

Renen Hallak: I’m Renen, founder and CEO of VAST. VAST is eight years old now. We built a new type of data platform that has extreme levels of capacity, performance, resilience, cost, and ease of use, and primarily we are used, as you may have guessed, for AI workloads these days. We built a data store for unstructured data, file and object, a database for structured data, and we’re now adding a data engine for the compute aspects of it. It’s that software infrastructure layer that sits above the hardware and underneath the application.

Patrick Moorhead: All right. Great introductions there. Let’s go.

Melody Brue: All right. Chloe, we’re going to start with you. You already talked a little bit about your pervasive footprint from cloud to edge. But I want to talk a little bit more about energy-efficient foundations and the importance of power-efficient hardware, such as compute and storage, in addressing those types of challenges.

Chloe Jian Ma: Well, first, I want to thank Solidigm for quoting Arm’s CEO about data center power consumption and the urgent need for us to design more power-efficient AI infrastructure. Basically, for the last 20 years or so, data center and cloud infrastructure have become more efficient. And in terms of data center power consumption, for the last 10 to 20 years, it has kind of stayed flat, because the PUE, the measurement of data center infrastructure efficiency, has improved, so we’re not consuming a whole lot more data center power. But that’s going to change with the latest round of Gen AI, ever since the LLM was born. And so currently, data centers as a whole are consuming about 460 terawatt-hours of power. That’s about equivalent to the power consumption of Germany as a country. But it’s going to increase significantly.

For example, I think Meta is building a 100,000 H100-based cluster to train its Llama 4. And the power consumption is about 370 gigawatt-hours, and that’s equivalent to powering about 34 million American households. So we have to think about new ways to make data centers more efficient. I saw some stats at one of the energy-related conferences. Out of the AI data center power consumption, about 40% is on AI compute. And then another 40% is on liquid cooling to just cool the AI compute. And then the rest, maybe around 20%, is in networking and storage. So seemingly, maybe storage doesn’t consume a lot of power to start with. But the GPUs and the AI compute, they’re like the engine, and the storage and networking are actually feeding the fuel, the data, into this engine. You don’t want to keep the engine idle, and you want the engine to be running at its maximum efficiency. So that’s why storage and networking are all very, very important to making AI infrastructure more efficient. I’m very excited to see this launch of the biggest SSD ever.
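As a rough sanity check on the magnitudes Chloe cites, a back-of-the-envelope sketch is below. The Germany consumption value is an assumed public estimate used purely for illustration; it is not a figure from the panel.

```python
# Back-of-the-envelope check of the data-center power figures cited above.
# GERMANY_TWH_PER_YEAR is an assumed rough public estimate, not from the panel.

DATACENTER_TWH_PER_YEAR = 460   # cited global data-center consumption
GERMANY_TWH_PER_YEAR = 500      # assumed annual consumption of Germany

# The two magnitudes are roughly comparable, as noted on stage.
ratio = DATACENTER_TWH_PER_YEAR / GERMANY_TWH_PER_YEAR

# Approximate AI data-center energy budget split cited on stage.
budget = {"ai_compute": 0.40, "liquid_cooling": 0.40, "network_and_storage": 0.20}

print(f"data centers vs. Germany: {ratio:.0%}")          # -> 92%
print(f"budget shares sum to {sum(budget.values()):.0%}")  # -> 100%
```

The point of the split is that even though networking and storage are the smallest slice, they gate how busy the 40% spent on AI compute actually is.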

Patrick Moorhead: Yeah, Chloe, a great history lesson too. And I’ve been chronicling Arm’s move into the data center. And there was a day that people said Arm would never make it in the data center. I think you first came in storage, and then you moved to offload, and then you became… I’m a little compute-biased maybe, but you were at the big table with CPUs, and you’ve done a full run of all the hyperscalers with that. And I think we may even have worked on analysis that said, “If Arm were inside of every data center in every server, there might be 30% of power that could be left over to do other things.” But anyways, thanks for those comments.

Renen, you’re up next here, buddy. Hey, congratulations on your big Cosmos announcement, and also on being part of the big xAI cluster, 100,000 nodes, in Tennessee. I think that’s super cool. And also, again, as an analyst firm, we’ve been chronicling how you’re really breaking all the rules here by compressing the stack. We were joking in the green room, okay, maybe the fourth floor up there, that, “Well, wait a second. Is it a storage company or a data company? Storage company? You don’t actually sell storage. You sell software that people run on their storage devices.” But I want to ask you, in the context of this announcement, where do QLC’s value and performance fit into what you are trying to deliver to your customers? Why are you on stage?

Renen Hallak: Sure. They invited me, and so I came. The reason I came is because Solidigm has actually been with us for seven years now. The company is eight and a half years old. Ever since they were Intel, they believed in us when we were very, very small. And even though we were very, very small, they saw our vision and aligned to it. And in many ways, they are responsible for a big part of our success. So thank you for that, and thank you for inviting me up here today. We are definitely a storage company and a data company. We are proud of our storage roots. That’s where we started. And on top of that, we added a lot more over the years. Today, I sometimes use the analogy of an operating system versus a storage system. That middle-of-the-sandwich software infrastructure stack that abstracts hardware away from these new AI applications, but not within a computer, not even within a data center. It’s a global machine. We have what we call the VAST DataSpace that allows us to build one global namespace across geographies. And yes, we get used by the biggest of the big. One of our partners is here on stage, Jacob. And I think wherever you’ll find a big deployment, VAST is there, and I think also CoreWeave is there. But I won’t speak for Jacob.

Jacob Yundt: It’s okay for me. It’s fine.

Renen Hallak: In terms of QLC, Solidigm was the first one with QLC drives. And I remember when it wasn’t entirely clear that there would be a market for QLC drives because they didn’t have enough write cycles. And we told them, “Don’t worry about that. As many as you can deliver, we will sell.” We were this tiny, and so it was hard to believe us back then. But I think we fulfilled that promise over the years. And over the years, we’ve been growing and asking for larger and larger drives, which is why having the 122 is so exciting. It enables a move away from hard drives. I think you saw the previous speaker say that 90% of data storage is still on hard drives. That is not for long. This new drive and these types of drives will definitely cause that to shrink down to, in my opinion, nothing. We will still have tape somewhere in a warehouse, but there is no need for the hard drive anymore.

And especially as you move to these new AI applications, they require much larger capacities and much faster access because we’re no longer analyzing numbers. It’s now pictures and video and sound, genomes, natural language. Somebody gave the example of when cell phones switched from text messages to multimedia messages, and how much more storage capacity they required. We’re now seeing the exact same thing in AI as we switch from large language models to multimodal. And I think, or I know that these GPUs are very, very hungry for information. And so you need it to be large. You need it to be fast. That’s why we’re called VAST.

Patrick Moorhead: Oh, I love that. By the way, I really appreciate… Y’all appreciate the origin story there. I didn’t know the intersection there. But just a follow-up here. What does larger capacities actually mean to you? I mean obviously, people can store more, but does it change the way you architect your product?

Renen Hallak: It does not. We architected it from the very early days to be able to sustain extreme levels of capacity and density. Most architectures fall over because the blast radius becomes too large. But we have a shared-everything architecture where all of the nodes can see all of the devices, and that means there’s no problem going up to 120. Now, we’re asking Solidigm for 240, and it’ll keep going. I’m sure we’ll see a petabyte drive out there in the not-too-distant future.

Patrick Moorhead: I don’t know. I’m putting this date on my calendar for next year. So, Solidigm, don’t let me down here. You’re on a roll. I appreciate that. Let’s move to CoreWeave. Jacob, your customers aren’t advertised or public, but they are some of the most important names out there in the AI business. Congratulations on that. You’re scaling like crazy. So you said you were the compute guy, focused on efficiency and performance. Can you talk about some of the challenges of designing for things that almost contradict each other? Performance, efficiency, and scalability, all the way down to the component level.

Jacob Yundt: Yeah, definitely. Also, Renen and Greg stole all my good lines and content for stuff. But yeah, we are absolutely scaling like crazy. Basically, every time we have either a forecast or we have some sort of plan, we get it wrong. The good news is that we have customers that start consuming our platform and they’re like, “Oh, this is good. We like it. Good.” We’re like, “Okay, we’ve designed storage accordingly for this customer. We’re good. We got it.” And then they come back and they say, “That’s great. Now, we want all of it.” I’m like, “What do you mean all of it?” They’re like, “We want all of it.”
And so then we are scrambling because it’s like, “Oh, now we need a bajillion more racks of storage.” And what we are doing wouldn’t be possible without these high-cap drives. We’re talking about HDDs and how they’re just terrible and how the legacy hyperscalers are still using them. None of these things would be possible if we were trying to do any of this stuff with hard drives.

And so we have already scaled internally from 16T to 30T to 60T drives, and I can’t wait for this drive because we need it essentially yesterday. And that’s primarily because our customers are saying, “Hey, again, I need all of it. And how do we scale the storage accordingly?” Another part of this is the power discussion. So we talked a lot about power, about how the power demands for AI are just bonkers. And yeah, this gets a little bit nerdy here. But if I can have a 20, 25-watt drive, and I can double the capacity on it, and I’m still just trying to get a performance profile that’s faster than HDDs, why wouldn’t I do that? That’s a no-brainer. If I have a footprint that’s designed for X amount of racks at Y power density, and now I can just double the storage capacity of that? Absolutely, give me that drive. Go get me the 240. Tell me when I can get a petabyte drive. So everything we do is designed to scale. It’s designed around efficiency. It’s designed to feed these just insane GPU workloads. And truly, none of that would be possible unless we had a super high-cap drive like this. And the roadmap is just going more dense, which is good.

Patrick Moorhead: Jacob, by using more efficient drives, can you actually pack more GPUs into the same area? Is that how you look at the world sometimes?

Jacob Yundt: A little bit. The way that we look at this is that we would normally need X amount of racks to drive Y amount of GPUs. And if we can increase the density of that and say, “Hey, now I don’t need X amount of racks for Y. I need X divided by two,” or something like that. Yeah, that leaves more power that I can have for either more GPUs or, like Greg’s slide had it, more storage, which is great. Because when customers show up and they say, “We love your product, I need all of it,” we don’t have to panic and scramble and figure out how we’re going to get a bajillion more racks in there to support their storage needs.
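The "X divided by two" arithmetic Jacob describes can be sketched in a few lines. All of the numbers here (total capacity to deploy, drives per rack) are illustrative assumptions, not CoreWeave figures; only the doubling from a 61TB-class to a 122TB-class drive tracks the announcement.

```python
import math

def racks_needed(total_capacity_tb: float, drives_per_rack: int, drive_tb: float) -> int:
    """Racks required to hold a fixed total capacity, rounded up to whole racks."""
    return math.ceil(total_capacity_tb / (drives_per_rack * drive_tb))

# Illustrative assumptions: 100 PB of storage to deploy, 500 drives per rack.
TOTAL_TB = 100_000
DRIVES_PER_RACK = 500

racks_61tb = racks_needed(TOTAL_TB, DRIVES_PER_RACK, 61)    # 61TB-class drives
racks_122tb = racks_needed(TOTAL_TB, DRIVES_PER_RACK, 122)  # 122TB-class drives

# Doubling per-drive capacity halves the rack count for the same footprint,
# freeing that rack space and power budget for GPUs or more storage.
print(racks_61tb, racks_122tb)  # -> 4 2
```

The same per-drive power envelope at double the capacity is why Jacob calls the trade a no-brainer: the freed racks come with their power budget attached.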

Patrick Moorhead: How far out do you plan? Meaning, are you looking at ’26, ’27 right now, and you’re seeing where everything’s falling together, and you have a power budget at the rack level, the fleet level, and the entire data center?

Jacob Yundt: I’ll say yes. But again, we get it wrong every time. So we looked at our 2025 roadmap, and we’re like, “Oh, we got this. We know what the power density is going to be. We know what our GPU count is going to be. We understand which customers we’re targeting.” And then we get it wrong. I know I keep saying this, but this story is so consistent amongst all of our customers. Someone’s like, “Great, I want to buy so many GPUs from you.” And then they’re like, “Oh, we realize that GPUs are not fungible with all of these other cloud providers. I want all of your GPUs.” And so that’s how we get these forecasts wrong, because we need to get hundreds more megawatts now, or we need to go increase power density because customers are asking for slightly different products or whatever. But yes, we are planning out through 2027 and beyond. I suspect we’ll be changing that all the time because everything’s just changing all the time.

Patrick Moorhead: Are you getting your nuclear engineering certification anytime soon?

Jacob Yundt: I don’t remember if that was in the media training, but I think no comment is the official answer.

Patrick Moorhead: No, that’s good. Good sport on that last one. I appreciate that.

Melody Brue: That’s good. All right. We’re going to shift to Sophie, who is not Dylan. We’re going to take a little bit of a turn on the solutions approach here. We talked a little bit about this prior to you coming on stage as not Dylan. Efficient hardware is essential for infrastructure, but how does a comprehensive solution approach, including software, address your customers’ challenges?

Sophie Kane: Yeah, great question. So, as I mentioned, Ocient is a data analytics software solutions company, solution being the key word here. When we’re on stage with Solidigm, one of the key messages is that it’s hardware plus software. It’s a better-together story. And one of the key things that we’re seeing, obviously on stage tonight, but also in the market, is that efficiency and sustainability are top of mind. And the reason we know this, in part, is because we run an annual survey of data and IT professionals. And what we’re seeing is an emerging concern. Over half are concerned. They have these very real fears that energy is just becoming a problem that they can’t get a handle on.

And this is new. This isn’t something we’ve seen before. So again, going back to the software and the hardware play, what we’re talking about here is that software sits on top of the hardware. And when the solution, software plus hardware, is working great, it’s great. When it’s not, we’re seeing these very real concerns around cost. We’re seeing these very real concerns around footprint, and we’re seeing these very real concerns with our customers around energy consumption. And that has huge implications in the innovation game.

Melody Brue: What kind of efficiencies are your customers seeing?

Sophie Kane: Yeah, another great question. At Ocient, we typically work with a number of industries: ad tech, telecommunications, government, vehicle telematics, including geospatial, and financial services, to name a few. And again, tonight, we’re here with Solidigm. You are an incredibly innovative company. You just made this big announcement, and your innovations allow us to be more innovative. And the efficiencies that we’re seeing, again, when we couple this hardware-plus-software solutions approach, are typically a decrease across the board with all of our customers of between 50 and 90% in cost efficiency, energy efficiency, and footprint efficiency, which, bottom line, goes back to the innovation game. It allows our customers to do a lot more than what they’ve been able to do in the past.

Melody Brue: Talking about efficiency, PEAK addresses some unique efficiencies and challenges at the edge due to data growth. And we just read today that PEAK announced… PEAK:AIO, sorry, I’m not saying the whole thing. You announced that you achieved 400% growth in US sales over the past year. Congratulations. That’s amazing.

Patrick Moorhead: Congratulations.

Roger Cummings: Thank you. Thank you.

Melody Brue: That’s huge. And this expansion was driven by your high-performance energy-efficient solutions, right?

Roger Cummings: Correct. Yeah. There’s a ton of examples where we’ve really been very efficient. These edge devices are getting more and more intelligent, and the ability for Solidigm to give us the density that we need really gives the applications we run the potential to do many, many more workloads across the AI lifecycle than they can today.

Patrick Moorhead: And just a quick follow-up. Edge means a lot of things. We heard the edge was the smartphone.

Roger Cummings: Yeah.

Patrick Moorhead: Chloe, thank you very much. But are we talking about retail stores? Are we talking about manufacturing? Warehousing? Is that the edge you’re talking about?

Roger Cummings: That’s a great question, because it is interpreted differently in every industry we go into. So it’s anything from an MRI machine, to a camera sitting, taking pictures of some… You’ve seen the story of Solidigm, and that’s taking pictures of hedgehogs, to a drone, or a box sitting behind a Jeep kind of scenario. So the edge is proliferating across all of AI. We’re seeing examples of that across a myriad of different verticals that we work with today. It’s the ability to pull that information from various sources, and not only run the algorithm but understand and make decisions at that edge, that is becoming the norm. And there are some verticals talking more about the edge where the data can’t move around. So how do you move that? Well, you can move some of the inference associated with it. So those edge devices need to be very intelligent. And to be very intelligent, you need the density associated with it.

Melody Brue: Roger, what are you seeing as the real drivers of growth at the edge and what are some of the challenges that that presents?

Roger Cummings: Well, I mean, the edge is… More and more data is being collected, more and more information is being gathered, so it’s truly a huge data problem at the edge. And some of the challenges of doing that, I mean, Solidigm is helping overcome some of those challenges with density. That density putting… You can imagine the ability to put petabytes at the edge will overcome a lot of those challenges that we have right now with the data. And not only running the model, but collecting the inference and doing something with that data at the edge, that’s something that is here today, and it’s going to become more and more complex. And I guess how we overcome that is by building these infrastructures that are intelligent enough to communicate with various edges, because that’s what we’re doing now. We’re just doing it, talking about it from a multi-node environment. But really, that intelligence sometimes, and many times, has to stay at that edge, and then the inference needs to travel across those multi-nodes.

Melody Brue: Pat asked this question before. What do you want to see a year from now?

Roger Cummings: I’m sorry.

Melody Brue: Pat asked this question before. So what do you want to see a year from now?

Roger Cummings: A year from now, I would love to see that we are making intelligent decisions, whether it be at the home, with ethics, with governance, with understanding of those decisions at the edge. Because we can apply the technology, we can have all the infrastructure we want, but there’s still a human element to understanding how that decision is being made and having visibility into how that decision is being made. And I think once we get there, and once people are comfortable with that, I think we’ll be far, far along in the growth of AI. And I think that we’re seeing that today, us personally, within life sciences and the healthcare space. And I think if we can get more visibility around that, we’ll be in a good place.

Patrick Moorhead: I just have a follow-up here. I mean, if you look in the last 40 years of history of these different paradigms, ultimately, the compute makes its way to the point of origin, and the storage has to come with that and the memory has to come with that. With generative AI, there don’t seem to be a lot of people who are talking about generative AI on the edge. I have seen it starting to get baked into the silicon, getting baked into the GPUs and the SDKs. I’m curious, how are you looking at specifically generative AI at the edge? And I’m curious, does storage density and performance and efficiency have an impact on that curve?

Roger Cummings: Yeah. I can answer from my perspective and in our environment, living at the edge. From what we see right now, I’ll be honest, people are gathering a tremendous amount of information and not really knowing what to do with it yet. They’re bringing it to the cloud, or they’re bringing it to another platform, on which to run their models. And I think that’s great. It’s like the practice of long ago, when we used to move data around to analyze it. But as that edge matures, and the maturity level of AI, in my opinion, we’re still very early in the practice of AI. The practice of how we measure, how we manage, how we understand what decisions are being made and how they’re being made, is still in its infant stage. And I think as that matures, we’ll have better practices around it. Now, if you want to go technical about it, my counterparts here could probably do a much better job from a technical perspective, but that’s what I see. It’s just the maturity of AI, and I think we need a best practice associated with it.

Patrick Moorhead: And enterprise AI, I mean, all the data we have clearly shows that we’ve exited that experiment stage, and we’re in POCs, but we are not even close to scaling enterprise AI at all. And the reason there’s so much action in the hyperscaler data centers is the big training models.

Roger Cummings: Training.

Patrick Moorhead: We have to come in and have that happen. We also believe as an analyst firm that the models will be more specific, whether you want to call them vertical models or smaller horizontal models that enterprises can RAG off of.

Roger Cummings: One thing I’d have to say too is it’s not just that we are running these training algorithms; often we don’t have the right data. So just understanding what data you have, how complete that data is, and how that data is changing over time is really important. And I think that a lot of companies are running these models with data where they don’t know if it’s out of compliance, if it’s insecure, or if it is complete enough. So that level of insight, I think, is a level of maturity we need to get to as well.

Patrick Moorhead: Data is the number one impediment, data management-

Roger Cummings: Yeah, by far.

Patrick Moorhead: … to enterprise adoption of AI. My firm doesn’t always get it right, but we get most of the big ones right. It was kind of heresy to say two years ago that data would be the biggest impediment. But now, everybody wants to talk about data, and I think everybody on this panel is seeing this. I mean, Renen, you’re knee-deep in it. Ocient, I mean, you’re making it happen there. So folks, really appreciate your time. Dylan, thank you. Just kidding.

Sophie Kane: Happy to be here.

Patrick Moorhead: Yes, great to have you here. But I really appreciate bringing the context to… I think it’s great. We can bring big-picture context to a 122-terabyte drive here. This is great, Mel. It was great hearing from all the partners up here to really, what’s the right word, accentuate everything that Solidigm said today, and hopefully validate a lot of what I discussed and you discussed upfront.

Melody Brue: Yeah, I really liked hearing about the sustainability benefits that Solidigm is delivering to its partners.

Patrick Moorhead: Yeah, I just love what this announcement means to the overall industry. Like I said, storage doesn’t get enough attention in the generative AI conversation out there. But when you combine the right storage solutions with the right compute, the right memory, and the right networking, amazing things happen. Performance, reliability, and efficiency at the optimal power draw for those data centers. So I just want to thank everybody for tuning in to The Six Five. Check out all of our content for Solidigm. We’ve had a couple of great interviews over the past year, diving into other announcements, other product lines, other interviews with the executives. Hit that Subscribe button. Take care.

Author Information

Six Five Media

Six Five Media is a joint venture of two top-ranked analyst firms, The Futurum Group and Moor Insights & Strategy. Six Five provides high-quality, insightful, and credible analyses of the tech landscape in video format. Our team of analysts sit with the world’s most respected leaders and professionals to discuss all things technology with a focus on digital transformation and innovation.
