Weka Achieves NVIDIA DGX BasePod Certification

The News: Weka announced that its Weka Data Platform received certification for an NVIDIA DGX BasePod reference architecture. The certification provides a streamlined approach for deploying Weka with NVIDIA hardware for AI applications. It also sets the path forward for Weka to receive NVIDIA DGX SuperPod certification. See more in Weka’s press release here.

Analyst Take: Weka announced a new certification for an NVIDIA DGX BasePod reference architecture based on the Weka Data Platform and NVIDIA DGX H100 systems. Weka claims the solution is capable of 600 GB/s of throughput and 22 million IOPS in 8 rack units, which the company states is 10x the throughput and 6x the IOPS of a previous NVIDIA DGX BasePod configuration based on NVIDIA DGX A100 systems.

This certified reference architecture is all about streamlined deployment and optimized performance for AI workloads. The solution pairs a core building block of AI infrastructure, high-performance and scalable data storage, with top-of-the-line hardware. Along with NVIDIA H100 GPUs, the solution features Intel Xeon processors, NVIDIA ConnectX-7 NICs, NVIDIA Quantum-2 InfiniBand switches, and NVIDIA Spectrum Ethernet switches. By utilizing this reference architecture, organizations can get up and running quickly with infrastructure that is optimized for AI.

The other notable takeaway from this announcement is that it sets the stage for Weka to receive further NVIDIA certifications. Alongside the DGX BasePod certification, Weka has stated that it is now setting its sights on NVIDIA DGX SuperPod certification. A SuperPod certification would bring further flexibility and deployment options for Weka Data Platform customers.

Overall, the combination of the Weka Data Platform and NVIDIA hardware makes perfect sense for AI deployments. AI workloads depend heavily on feeding large volumes of data to GPUs as quickly as possible. This reference architecture is designed to do exactly that by pairing Weka’s high-performance data platform with NVIDIA GPUs. With a certified reference architecture available now, and a SuperPod certification potentially on the way, Weka and NVIDIA are giving organizations an efficient way to quickly deploy infrastructure capable of supporting their AI workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Answer.AI R&D Lab Aims to Bring Practical AI Products

2024 Trends and Predictions for Data Storage

Dell Releases InsightIQ 5.0

Image Credit: Weka

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever-changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.
