AI Field Day: Solidigm Storage Options for AI Training

At AI Field Day in Santa Clara, Solidigm explained the value of NVMe SSD storage for AI workloads, particularly with multiple options for training. Supermicro took the stage to show Solidigm storage options for AI that span a range of use cases, from data center rack-scale solutions to IoT solutions that might be mounted on a power pole.

Solidigm (the SSD business unit that Intel sold to SK Hynix at the end of 2021) continues to make the case that its NVMe SSDs are the best option for storing data you expect to access with any urgency. Data for AI training and inference gains the same benefits as many other data types, as Solidigm showed at AI Field Day in Santa Clara. There was little that was brand new in the Solidigm presentation: NVMe SSDs are still extremely fast, and Solidigm offers them in a variety of performance, capacity, and physical form factors. We revisited the TCO comparison of physically smaller, more power-efficient all-flash storage against the higher whole-of-life cost of getting the same transactional performance from hard drives. The conclusions were familiar: all-flash storage lowers storage latency, improving application performance without excessive cost.
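The shape of that TCO argument is easy to see with a little arithmetic: when an array is sized to hit a transactional (IOPS) target rather than just a capacity target, slow hard drives need far more spindles, and every extra spindle draws power for the life of the system. The sketch below illustrates the logic only; every figure in it (IOPS per drive, unit prices, wattage, electricity cost) is an illustrative assumption of mine, not a number from Solidigm's presentation.

```python
import math

# Hypothetical TCO sketch. All numbers are illustrative assumptions,
# not vendor data: ~200 IOPS per HDD, ~400k IOPS per NVMe SSD, and
# rough unit prices and per-drive power draws.

def drives_needed(target_iops, iops_per_drive, target_tb, tb_per_drive):
    """Drive count that satisfies BOTH the IOPS and the capacity target."""
    return max(math.ceil(target_iops / iops_per_drive),
               math.ceil(target_tb / tb_per_drive))

def five_year_tco(drive_count, unit_price, watts_per_drive,
                  usd_per_kwh=0.12, years=5):
    """Purchase cost plus electricity over the service life."""
    energy_kwh = drive_count * watts_per_drive * 24 * 365 * years / 1000
    return drive_count * unit_price + energy_kwh * usd_per_kwh

# Target: 500k IOPS and 1 PB (1000 TB) of usable capacity.
hdd_count = drives_needed(500_000, 200, 1000, 20)        # IOPS-bound
ssd_count = drives_needed(500_000, 400_000, 1000, 30.72)  # capacity-bound

hdd_tco = five_year_tco(hdd_count, unit_price=400, watts_per_drive=8)
ssd_tco = five_year_tco(ssd_count, unit_price=2500, watts_per_drive=12)
print(f"HDD: {hdd_count} drives, ${hdd_tco:,.0f} over 5 years")
print(f"SSD: {ssd_count} drives, ${ssd_tco:,.0f} over 5 years")
```

With these assumed numbers the HDD build is IOPS-bound at 2,500 spindles while the SSD build is capacity-bound at 33 drives, so flash wins on whole-of-life cost despite a higher price per drive. Real comparisons also fold in rack space, cooling, and drive replacement, which push further in flash's favor.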

Supermicro’s Portfolio for Solidigm

Supermicro joined Solidigm’s presentation to showcase their joint products, ranging from rugged industrial PCs to rack-scale deployments of multi-petabyte storage. Supermicro’s ability to package a server with a Solidigm NVMe drive and a GPU into a sealed unit mounted on a utility pole was interesting news; this rugged hardware delivers Solidigm storage and AI at the far edge. There were more Solidigm storage options for AI in Supermicro storage server chassis, including dense, high-performance storage using NGSFF (ruler) format drives. The NGSFF chassis is likely to support CXL, the next-generation interconnect for pooling memory and storage, once it is fully standardized. One of the surprises from my previous work with Supermicro was the extensive networking hardware lineup; there was even Supermicro 400 Gbps InfiniBand switching in one of the rack-scale architectures, delivered already built, integrated, and tested.

Solidigm Storage Options for AI

While the message from Solidigm and Supermicro is much the same as it has always been, that does not detract from its value. Like every other high-performance workload, AI benefits from well-designed storage that keeps the processors fed with data. As with any infrastructure, no single design decision suits every situation, so having cost, performance, capacity, and form factor options is vital. There are plenty of Solidigm storage options for AI.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Dell Enhances PowerScale NAS to Aid Generative AI

HPE Enhances GreenLake for File Storage for AI Workloads

VAST Data Unveils New Data Center Architecture to Accelerate AI

Author Information

Alastair has made a twenty-year career out of helping people understand complex IT infrastructure and how to build solutions that fulfil business needs. Much of his career has included teaching official training courses for vendors, including HPE, VMware, and AWS. Alastair has written hundreds of analyst articles and papers exploring products and topics around on-premises infrastructure and virtualization and getting the most out of public cloud and hybrid infrastructure. Alastair has also been involved in community-driven, practitioner-led education through the vBrownBag podcast and the vBrownBag TechTalks.

