Analyst(s): Ron Westfall
Publication Date: March 21, 2025
Solidigm and NVIDIA jointly showcased a cold-plate-cooled enterprise Solid State Drive (SSD) at GTC 2025, removing the need for fan-based cooling and unlocking a fully liquid-cooled AI server design. The offering introduces a proprietary SSD casing that redistributes heat across the drive surface, countering the inefficiencies of single-sided cooling. Set for commercial availability in the second half of 2025, this launch positions Solidigm’s technology to meet growing enterprise SSD demand driven by gen AI workloads and AI server expansion.
What is Covered in this Article:
- Solidigm launched a cold-plate-cooled enterprise SSD, the D7-PS1010 E1.S, at GTC 2025
- Developed with NVIDIA, the SSD eliminates the need for fan-based cooling in AI servers
- Proprietary case design resolves single-sided cooling and enables hot-swapping
- Supports compact 1U server builds and 100% liquid-cooled infrastructure
- Product targets rising SSD demand from gen AI server deployments and workloads
The News: Solidigm, a US-based subsidiary of SK hynix, debuted its latest innovation at GTC 2025 – the D7-PS1010 E1.S, an enterprise-grade SSD with full liquid-cooling support. Developed in collaboration with NVIDIA, the drive replaces traditional air-based cooling methods with a cold-plate architecture that seamlessly integrates into fully liquid-cooled AI training servers.
Unlike conventional SSD direct liquid cooling (DLC) solutions that cool only one side of the device – compromising thermal balance and making hot-swapping impractical – Solidigm’s new design enables complete serviceability. It also supports ultra-compact server configurations, including 1U builds that operate without fans. The D7-PS1010 E1.S is expected to launch in the second half of 2025.
GTC 2025: Can Solidigm’s Liquid-Cooled SSD Redefine AI Server Infrastructure?
Analyst Take: Solidigm’s announcement at GTC 2025 represents a key advancement in enterprise storage design, introducing a liquid-cooled SSD purpose-built for rapidly evolving AI infrastructure. As data centers face mounting pressure around thermal efficiency, serviceability, and physical footprint, the D7-PS1010 E1.S directly addresses long-standing technical hurdles. The solution is well-aligned with both current and anticipated trends in generative AI infrastructure investments and enterprise SSD (eSSD) adoption.
Tackling Cooling and Serviceability Constraints in AI Server Design
AI training servers generate substantial heat across multiple system components, yet SSDs have largely remained air-cooled due to engineering challenges. Traditional DLC configurations cool only one side of the SSD, often leaving the opposite surface thermally stressed and making drives ill-suited for sustained, high-throughput workloads. Moreover, these setups typically restrict hot-swapping, complicating in-field maintenance.
Solidigm’s D7-PS1010 E1.S introduces a proprietary enclosure that conducts heat from the non-contact surface to the cold plate contact area, enabling full-surface heat dissipation. This breakthrough not only ensures balanced, high-performance operation but also restores full serviceability – including hot-swap support. As a result, the SSD is well-positioned for deployment in fully liquid-cooled AI servers, eliminating key bottlenecks related to airflow dependence and downtime during servicing.
From my view, Solidigm needed to unveil its D7-PS1010 E1.S product to gain a definitive time-to-market advantage over rivals in driving market adoption of server SSD DLC solutions. For instance, main rivals such as Samsung, Micron, and Western Digital have yet to announce plans for direct liquid cooling of their server SSD products. Kioxia’s ER3 series Enterprise SATA SSD solution can use immersion cooling; however, DLC is currently not part of its portfolio mix. As a result, I find that Solidigm, alongside NVIDIA, is in a prime position to win mind share and channel influence during the nascent stage of implementing liquid-cooled SSD solutions across AI server infrastructure environments.
Addressing Infrastructure Needs for Dense, Energy-Efficient AI Servers
With AI-specific servers rapidly gaining market share – growing at a compound annual rate four to five times higher than general-purpose systems – the demands on system infrastructure are intensifying. Many of these servers now require over 30TB of storage each, and with projections estimating over 2 million AI server shipments in 2025, efficiency and density are taking center stage in data center planning.
The D7-PS1010 E1.S helps solve these challenges by enabling full fan removal from server chassis and supporting compact 1U configurations. This reduction in HVAC dependency and physical space requirements allows operators to lower cooling costs and increase server density – two crucial factors in hyperscale environments. A fully liquid-cooled architecture also streamlines system design while maintaining enterprise-grade thermal performance.
Through DLC capabilities, today’s solutions can support up to 80kW per rack, enabling AI infrastructure designers to deploy more servers per rack with shorter interconnects between servers in the cluster. This approach takes advantage of water being roughly 23 times more efficient than air at transporting heat. As a result, energy otherwise consumed by air cooling can be redirected to additional servers or other hardware.
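To put the 80kW-per-rack figure in rough perspective, the back-of-envelope sketch below estimates the coolant flow needed to carry that heat away using the standard sensible-heat relation Q = ṁ·c_p·ΔT. The 10K supply-to-return temperature rise and water-like coolant properties are illustrative assumptions on my part, not Solidigm or NVIDIA specifications.

```python
# Back-of-envelope estimate of coolant flow for a liquid-cooled rack.
# Assumed values (illustrative only): 80 kW rack load, 10 K coolant
# temperature rise, water-like coolant properties.

RACK_LOAD_W = 80_000   # heat to remove per rack, watts (the article's 80 kW figure)
DELTA_T_K = 10.0       # assumed supply-to-return temperature rise, kelvin
CP_WATER = 4186.0      # specific heat of water, J/(kg*K)
RHO_WATER = 997.0      # density of water, kg/m^3

# Sensible-heat relation: Q = m_dot * c_p * delta_T  ->  m_dot = Q / (c_p * delta_T)
mass_flow_kg_s = RACK_LOAD_W / (CP_WATER * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / RHO_WATER * 1000 * 60

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {volume_flow_l_min:.1f} L/min")
# With these assumptions, roughly 1.9 kg/s (~115 L/min) of water removes 80 kW,
# a flow rate well within reach of a rack-level coolant loop.
```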
Supporting Long-Term SSD Demand Growth from Gen AI Workloads
The eSSD market is projected to grow from 181 exabytes (EB) in 2024 to 1,078 EB by 2030, an annual growth rate of 35%. This surge is driven by the accelerating adoption of generative AI workloads, which rely heavily on high-performance, low-latency storage. AI training servers alone are expected to increase their SSD capacity from 30TB currently to around 100TB by 2030, while inference servers, retrieval-augmented generation (RAG), and media storage will contribute additional demand.
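As a quick sanity check on the cited growth rate, the snippet below derives the compound annual growth rate implied by the 2024 and 2030 exabyte figures; the endpoints come from the article, and the formula is the standard CAGR calculation.

```python
# Implied compound annual growth rate (CAGR) from the article's eSSD projections.
start_eb, end_eb = 181.0, 1078.0   # exabytes in 2024 and 2030 (figures from the article)
years = 2030 - 2024

cagr = (end_eb / start_eb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~34.6%, consistent with the ~35% cited
```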
To accommodate a wide range of deployment environments, Solidigm offers the D7-PS1010 E1.S in two form factors: a 9.5mm version optimized for liquid-cooled systems and a 15mm variant for air-cooled configurations. This flexibility gives enterprises the ability to scale their infrastructure according to cooling capabilities, cost considerations, and workload profiles. Positioned at the intersection of rising AI complexity and escalating storage needs, the D7-PS1010 E1.S is designed to meet both immediate demands and the long-term evolution of AI compute infrastructure.
What to Watch:
- Adoption of the D7-PS1010 SSD across AI training server deployments, which are projected to reach 2 million units in 2025
- Effectiveness of Solidigm’s dual-form factor strategy (liquid- and air-cooled variants) in supporting diverse AI infrastructure needs
- Uptake of fully liquid-cooled 1U server architectures by hyperscalers aiming to reduce HVAC and cooling-related CapEx
- Solidigm’s ability to capture share in a rapidly growing eSSD market, which is expected to expand from 181 EB in 2024 to 1,078 EB by 2030
- Enterprise interest in deploying scalable, hot-swappable SSDs that can handle increasing storage loads from training and inference workloads
See the complete press release on Solidigm’s launch of the liquid-cooled eSSD at GTC 2025 on the Solidigm website.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Solidigm and Broadcom Extend SSD Partnership to Power AI’s Next Growth Phase
Solidigm Expands SSD Portfolio, Ups Focus on AI Data Pipeline
Perspectives on the AI Data Pipeline with Solidigm – Six Five Webcast
Author Information
Ron is a customer-focused research expert and analyst with over 20 years of experience in the digital and IT transformation markets, working with businesses to drive consistent revenue and sales growth.
He is a recognized authority on tracking the evolution of, and identifying key disruptive trends within, the service enablement ecosystem, covering a wide range of topics across software and services, infrastructure, 5G communications, Internet of Things (IoT), Artificial Intelligence (AI), analytics, security, cloud computing, revenue management, and regulatory issues.
Prior to his work with The Futurum Group, Ron worked with GlobalData Technology creating syndicated and custom research across a wide variety of technical fields. His work with Current Analysis focused on the broadband and service provider infrastructure markets.
Ron holds a Master of Arts in Public Policy from the University of Nevada, Las Vegas and a Bachelor of Arts in political science/government from William and Mary.