Weka Achieves NVIDIA DGX BasePod Certification

The News: Weka announced that its Weka Data Platform received certification for an NVIDIA DGX BasePod reference architecture. The certification provides a streamlined approach for deploying Weka with NVIDIA hardware for AI applications and sets the path for Weka to pursue NVIDIA DGX SuperPod certification. See Weka’s press release for more details.

Analyst Take: Weka announced a new certification for an NVIDIA DGX BasePod reference architecture based on the Weka Data Platform and NVIDIA DGX H100 systems. Weka claims the solution is capable of 600 GB/s of throughput and 22 million IOPS in 8 rack units, which it states is 10x the throughput and 6x the IOPS of a previous NVIDIA DGX BasePod configuration built on NVIDIA DGX A100 systems.
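As a back-of-envelope check, the claimed multipliers imply rough figures for the prior A100-based configuration. This is a sketch under the assumption that the 10x and 6x factors are measured against that earlier BasePod configuration, as the announcement suggests; the implied baseline numbers below are derived, not published figures.

```python
# Claimed figures for the DGX H100-based reference architecture (per Weka)
h100_throughput_gbps = 600       # GB/s
h100_iops = 22_000_000           # 22M IOPS

# Implied figures for the prior DGX A100-based configuration,
# assuming the 10x / 6x multipliers are relative to it
implied_a100_throughput_gbps = h100_throughput_gbps / 10   # ~60 GB/s
implied_a100_iops = h100_iops / 6                          # ~3.7M IOPS

print(f"Implied prior throughput: {implied_a100_throughput_gbps:.0f} GB/s")
print(f"Implied prior IOPS: {implied_a100_iops / 1e6:.1f}M")
```

In other words, if the multipliers hold, the new configuration moves the baseline from roughly 60 GB/s and under 4M IOPS to 600 GB/s and 22M IOPS in the same 8-rack-unit footprint.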

This certified reference architecture is all about streamlined deployment and optimized performance for AI workloads. The solution combines a fundamental building block for AI workloads, high-performance and scalable data storage, with top-of-the-line hardware. Along with NVIDIA H100 GPUs, the solution features Intel Xeon processors, NVIDIA ConnectX-7 NICs, NVIDIA Quantum-2 InfiniBand switches, and NVIDIA Spectrum Ethernet switches. By utilizing this reference architecture, organizations can get up and running quickly with infrastructure that is optimized for AI.

The other notable takeaway from this announcement is that it sets the stage for Weka to pursue further NVIDIA certifications. Alongside the NVIDIA DGX BasePod certification, Weka announced it has set its sights on an NVIDIA DGX SuperPod certification. Adding the SuperPod certification would bring further flexibility and deployment options for Weka Data Platform customers.

Overall, the combination of the Weka Data Platform and NVIDIA hardware makes perfect sense for AI deployments. AI workloads depend on feeding large volumes of data to GPUs as quickly as possible, and this reference architecture is designed to do exactly that by pairing Weka’s high-performance data platform with NVIDIA GPUs. By providing a certified reference architecture, with a SuperPod certification potentially on the way, Weka and NVIDIA are giving organizations an efficient way to quickly deploy infrastructure capable of supporting their AI workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Answer.AI R&D Lab Aims to Bring Practical AI Products

2024 Trends and Predictions for Data Storage

Dell Releases InsightIQ 5.0

Image Credit: Weka

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

