
Weka Achieves NVIDIA DGX BasePOD Certification


The News: Weka announced that its Weka Data Platform received certification for an NVIDIA DGX BasePOD reference architecture. The certification provides a streamlined approach for deploying Weka with NVIDIA hardware for AI applications, and it sets the path forward for Weka to pursue NVIDIA DGX SuperPOD certification. See more in Weka’s press release here.


Analyst Take: Weka announced a new certification for an NVIDIA DGX BasePOD reference architecture. The reference architecture is based on the Weka Data Platform and NVIDIA DGX H100 systems. Weka claims the solution is capable of 600 GB/s of throughput and 22 million IOPS in 8 rack units, which it states is 10x the throughput and 6x the IOPS of a previous NVIDIA DGX BasePOD configuration based on NVIDIA DGX A100 systems.

This certified reference architecture is all about streamlined deployment and optimized performance for AI workloads. The solution combines a basic building block required for AI workloads—high-performance, scalable data storage—with top-of-the-line hardware. Along with NVIDIA H100 GPUs, the solution features Intel Xeon processors, NVIDIA ConnectX-7 NICs, NVIDIA Quantum-2 InfiniBand switches, and NVIDIA Spectrum Ethernet switches. By adopting this reference architecture, organizations can get up and running quickly with infrastructure that is optimized for AI.

The other notable takeaway from this announcement is that it sets the stage for further NVIDIA certifications. Alongside the NVIDIA DGX BasePOD certification, Weka announced that it is now setting its sights on NVIDIA DGX SuperPOD certification. The addition of the SuperPOD certification would bring further flexibility and deployment options for Weka Data Platform customers.

Overall, the combination of the Weka Data Platform and NVIDIA hardware makes perfect sense for AI deployments. AI workloads depend on feeding large volumes of data to GPUs as quickly as possible, and this reference architecture is designed to do exactly that by pairing Weka’s high-performance data platform with NVIDIA GPUs. By providing a certified reference architecture—with a future SuperPOD certification potentially on the way—Weka and NVIDIA are giving organizations an efficient way to quickly deploy infrastructure capable of supporting their AI workloads.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.


Image Credit: Weka

Author Information

Mitch comes to The Futurum Group through the acquisition of the Evaluator Group and is focused on the fast-paced and rapidly evolving areas of cloud computing and data storage. Mitch joined Evaluator Group in 2019 as a Research Associate covering numerous storage technologies and emerging IT trends.

With a passion for all things tech, Mitch brings deep technical knowledge and insight to The Futurum Group’s research by highlighting the latest in data center and information management solutions. Mitch’s coverage has spanned topics including primary and secondary storage, private and public clouds, networking fabrics, and more. With ever changing data technologies and rapidly emerging trends in today’s digital world, Mitch provides valuable insights into the IT landscape for enterprises, IT professionals, and technology enthusiasts alike.

