SC22 Recap and HPC Trends

The annual Supercomputing Conference was held last week in Dallas with the theme “HPC Accelerates.” After going fully virtual during the pandemic year of 2020 and drawing a smaller in-person crowd in 2021, SC22 “accelerated” the conference back to normalcy with a large turnout and a steady stream of announcements.

Much of the excitement at SC each year centers on the Top500 list, which ranks the 500 most powerful supercomputing systems in the world. The headline this year was that the Frontier system at Oak Ridge National Laboratory – the first system to publicly record more than an exaflop on the HPL benchmark – held on to its number-one spot on the list. The Green500 list, which instead ranks systems by energy efficiency (sustained performance per watt), was topped by the Henri system at the Flatiron Institute in New York.
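
For context on the Green500 metric itself, the short sketch below shows how a Green500-style efficiency figure is derived – sustained HPL performance divided by power draw, expressed in gigaflops per watt. The input values are illustrative placeholders, not published results for any listed system.

```python
# Illustrative only: how a Green500-style efficiency figure is derived.
# The inputs below are placeholder values, not actual Top500/Green500 results.

def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    """Convert sustained HPL performance (PFLOP/s) and power (kW) to GFLOPS/W."""
    gflops = rmax_pflops * 1e6   # 1 PFLOP/s = 1,000,000 GFLOP/s
    watts = power_kw * 1e3       # 1 kW = 1,000 W
    return gflops / watts

# Example: a hypothetical 2 PFLOP/s system drawing 35 kW
print(f"{gflops_per_watt(2.0, 35.0):.1f} GFLOPS/W")  # ~57.1 GFLOPS/W
```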

Beyond the benchmark results, Evaluator Group noted a number of trends at SC22:

Sustainability is a Priority

As the growing attention on rankings such as the Green500 suggests, sustainability is becoming a priority within the world of HPC. Computing consumes large amounts of energy and natural resources, especially at a “super” scale, and the HPC community can no longer ignore its environmental impact. During the conference, Evaluator Group met with a number of the technology vendors present – almost all of which emphasized that they are increasing their focus on sustainability. Sustainability will likely remain a top-of-mind trend in HPC for the foreseeable future.

Increase in Cloud and “aaS” Offerings

The HPC market has typically favored traditional on-premises infrastructure acquired through CAPEX spending, for both technical and economic reasons. While that certainly isn’t going away anytime soon, vendors are increasingly offering more flexible options, whether deployed in the cloud or delivered through consumption-based models. Examples include Weka’s expanded cloud support, Dell’s APEX HPC and HPC on Demand offerings, Lenovo’s TruScale HPCaaS, and HPE GreenLake for HPC.

HPC Needs Data Management

It is well understood that HPC and AI require powerful CPUs – and, increasingly, GPUs and other accelerators – as well as high-performance, highly scalable storage. A less recognized component is data management. The large volumes of data in these environments create a growing need for management capabilities, including data visibility, protection, movement, and lifecycle management. SC22 showcased a number of solutions that tackle data management for HPC in different ways, including Arcitecta’s Mediaflux data management fabric, Hammerspace’s global file system, and Panasas’ recent integration of Atempo’s analytics and data movement functionality. These solutions take different approaches and offer different capabilities, but together they represent a clear trend: HPC needs data management.
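
To make the lifecycle-management piece concrete, below is a minimal sketch of the kind of policy such tools automate: scanning a fast scratch tier and relocating files that have not been accessed within a retention window to a cheaper archive tier. The paths and the 30-day threshold are assumptions for illustration, and the code is not based on any vendor’s actual API.

```python
# Minimal, illustrative tiering policy: move cold files from a fast scratch
# tier to an archive tier. Paths and the 30-day threshold are assumptions,
# not any particular vendor's defaults.
import shutil
import time
from pathlib import Path

SCRATCH = Path("/mnt/scratch")   # hypothetical high-performance tier
ARCHIVE = Path("/mnt/archive")   # hypothetical capacity/archive tier
MAX_IDLE_DAYS = 30

def migrate_cold_files() -> None:
    cutoff = time.time() - MAX_IDLE_DAYS * 86400
    for path in SCRATCH.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            dest = ARCHIVE / path.relative_to(SCRATCH)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), dest)  # relocate, preserving directory layout
            print(f"archived {path} -> {dest}")

if __name__ == "__main__":
    migrate_cold_files()
```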

CXL is Here (and just getting started)

Evaluator Group has been tracking the development of the CXL (Compute Express Link) interconnect for a number of years, and hardware based on the 1.1 specification is finally making its debut. CXL 1.1 is focused primarily on memory expansion, but it is only the tip of the iceberg: later specifications (CXL 2.0 and 3.0, already published) bring greater functionality such as switching, memory pooling, and memory sharing. The presence of CXL at SC22 was notable, with broad vendor support and a large number of demos showcasing the technology. While CXL is still in its early stages, the HPC community should be aware of its development and future impact.
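
To illustrate what the memory-expansion use case looks like from software, note that on Linux a CXL memory expander is commonly exposed as an additional, CPU-less NUMA node. The sketch below simply lists the NUMA nodes the OS sees and flags the memory-only ones; it assumes a Linux system with the standard sysfs layout and is an illustration rather than a CXL-specific tool.

```python
# List NUMA nodes visible to Linux and flag CPU-less (memory-only) nodes.
# CXL memory expanders commonly surface this way; this is an illustration,
# not a CXL-specific tool, and assumes a standard sysfs layout.
from pathlib import Path

NODE_DIR = Path("/sys/devices/system/node")

def list_numa_nodes() -> None:
    for node in sorted(NODE_DIR.glob("node[0-9]*")):
        cpulist = (node / "cpulist").read_text().strip()
        meminfo = (node / "meminfo").read_text()
        # First meminfo line looks like: "Node 0 MemTotal:  131072000 kB"
        total_kb = int(meminfo.splitlines()[0].split()[3])
        kind = "memory-only (possibly CXL-attached)" if not cpulist else f"cpus {cpulist}"
        print(f"{node.name}: {total_kb // 1024} MiB, {kind}")

if __name__ == "__main__":
    list_numa_nodes()
```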

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.
