
SC23 Recap: Liqid

The News: Liqid announced a new UltraStack L40S reference architecture at SC23 that combines Dell PowerEdge servers with NVIDIA graphics processing units (GPUs) to help organizations quickly meet high-density GPU demand. Read more about the announcement on the Liqid website.


Analyst Take: GPUs came up constantly in Futurum Group analysts’ conversations with IT organizations at SC23: everyone wants them, and a whole lot of them. Liqid has long helped organizations maximize the efficiency of their GPU deployments with its composable infrastructure technology. With the solution it unveiled at SC23, Liqid is now also helping customers tackle GPU density.

The challenge for many customers is keeping up with demand for GPU power by deploying GPU-dense systems. Liqid’s response is a new reference architecture, UltraStack L40S, which pairs Dell PowerEdge R760xa servers with 8 or 16 NVIDIA L40S GPUs, NVIDIA ConnectX-7 NICs, NVIDIA BlueField DPUs, and Liqid NVMe IO Accelerator cards.

This comprehensive solution helps organizations get up and running quickly with a large amount of GPU compute power. Liqid claims that the UltraStack L40S solution can deliver “35 percent higher performance, while ensuring cost savings with a 35 percent reduction in power consumption and a 75 percent reduction in software licensing costs when compared to lower GPU capacity servers.”
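The licensing claim is consistent with simple consolidation math if software is licensed per server rather than per GPU. The short Python sketch below is illustrative only; the 4-GPU baseline server is an assumption made here for comparison, not a figure from Liqid’s announcement.

    # Illustrative consolidation math (assumed 4-GPU baseline, not from Liqid)
    gpus_needed = 16
    baseline_gpus_per_server = 4       # assumed lower GPU capacity server
    ultrastack_gpus_per_server = 16    # UltraStack L40S dense configuration

    baseline_servers = gpus_needed / baseline_gpus_per_server      # 4 servers
    ultrastack_servers = gpus_needed / ultrastack_gpus_per_server  # 1 server

    # With per-server licensing, the license count falls by 75 percent.
    reduction = 1 - ultrastack_servers / baseline_servers
    print(f"License reduction: {reduction:.0%}")  # -> 75%

If licensing is instead per GPU, the savings would not follow from density alone, so the actual benefit depends on each customer’s licensing model.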

The reference architecture announced by Liqid solves a clear problem for customers focused on AI, high-performance computing (HPC), or other GPU-driven workloads. These customers need to deploy a large amount of GPU power, often as quickly as possible, and the new UltraStack reference architecture gives them a simple, efficient way to stand up GPU-dense infrastructure for these workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

SC23 Recap: Quantum

SC23 Recap: IBM

SC23 Recap: Nyriad

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

