IBM Releases Storage Scale System 6000

The News: IBM released the latest model of its Storage Scale hardware solution. The new Storage Scale System 6000 replaces the previous 5000 model and is being launched with a focus on AI and creating a global data platform. Read more about the new IBM Storage Scale System 6000 on the IBM website.


Analyst Take: IBM released the new Storage Scale System 6000 model, which replaces the ESS/Storage Scale System 5000. Storage Scale System, previously known as ESS, is the family of hardware offerings based on IBM's Storage Scale software, previously called Spectrum Scale. The new model brings a technology refresh that improves performance and focuses on supporting a new wave of AI workloads.

The new Storage Scale System 6000 is a 4U appliance with dual AMD Genoa 48-core processors and is capable of housing 48 storage devices. The system supports NVMe SSDs with a capacity of up to 1.4 PB in 4U, with future support planned for IBM FlashCore Modules, similar to those found in IBM FlashSystem, for even greater capacity density. In addition, the system supports HDD JBOD expansion for an additional 16 PB, and as with other Storage Scale System/ESS offerings, it provides massive scale-out capabilities.

IBM claims a 2x throughput improvement over the previous model and up to 7 million IOPS per system. The system supports NVMe-oF over both 400 Gb InfiniBand and 200 Gb Ethernet. Notably, the IBM Storage Scale System 6000 also adds support for NVIDIA GPUDirect Storage.

Storage Scale (Spectrum Scale) and Storage Scale System (ESS) have long been leading storage solutions for high-performance computing (HPC) workloads. With the release of Storage Scale System 6000, IBM has intentionally positioned the system toward a related and quickly growing area: AI. This is not to say that Storage Scale System 6000 will not be an effective solution for other HPC workloads; it certainly will be. However, IBM is positioning the system improvements as its solution for a global data platform for AI. As the diagram shows, IBM's vision for Storage Scale is as the central point of a global data platform that connects various data sources, data lakes, and compute resources.

Diagram: IBM's global data platform vision for Storage Scale. Image Source: IBM

The vision for this global data platform makes sense, and Storage Scale is a solution that is uniquely equipped to take on this central role. AI is a demanding workload that requires high performance and massive data capacity, and in many cases that data is spread across various systems, data lakes, and clouds. AI is also heavily reliant on graphics processing units (GPUs), and the underlying data infrastructure must be able to adequately feed data to and from these resources to avoid wasteful idle time. Storage Scale possesses the performance and scalability required to support AI workloads while offering multiprotocol support, including GPUDirect Storage, to connect the various data sources and components involved in AI workloads.

Although the Storage Scale System 6000 is mostly a standard technology update from the previous 5000 model, rather than an entirely new AI solution, the improvements hit upon the right characteristics to build this global data platform and support AI workloads. The 6000 brings updated hardware, improved performance, and support for GPUDirect Storage, all while building upon a successful product in ESS/Storage Scale System.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and are based on data and other information that might have been provided for validation; they are not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Cohesity Data Cloud on RHEL

IBM Looks to Generative AI To Match Job Seekers To Needed Job Skills

NetApp Gets Insightful on Generative AI, Cyber Recovery

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group's research by highlighting the latest in data center and information management solutions. Mitch's coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today's digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.
