The News: NVIDIA’s MLPerf 2.0 AI training test performance leadership continues, as NVIDIA and its partners earned the highest overall AI training scores in the latest round of MLPerf 2.0 comparisons. The MLPerf benchmarks, collected by the open engineering consortium MLCommons, measure how quickly systems can train machine learning models to a defined quality target, which is critical for demonstrating real-world AI performance for applications and systems. Read the full NVIDIA blog post.
NVIDIA MLPerf 2.0 AI Training Test Leadership Continues
Analyst Take: The strength of NVIDIA’s latest MLPerf 2.0 AI training test results comes as no surprise, and it is a testament to the engineering savvy of NVIDIA and its partners in the always-competitive AI marketplace.
This continues a strong pattern for NVIDIA, with the company and its technology partners repeatedly posting the top performance in earlier rounds of the MLPerf AI benchmarks as well. That is certainly laudable for their combined technologies and platforms.
None of this should be a surprise given NVIDIA’s engineering strengths in AI, particularly its NVIDIA A100 Tensor Core GPUs, which are based on the NVIDIA Ampere architecture and were unveiled two years ago to excellent reviews and promise for a wide range of industries and use cases.
The importance of achieving strong, high-performance MLPerf results is clear for vendors like NVIDIA. The continuing AI-powered industrial revolution depends on accurate, high-performance AI models that are proven with tests such as these so they can be used in fields including language and speech recognition, conversational AI, recommender systems, molecular modeling, computer vision, virtual worlds, robotics, autonomous vehicles, and more.
With the latest NVIDIA MLPerf 2.0 AI results, NVIDIA and its partners are again validated as strong leaders in the AI training market. Particularly notable in the latest MLPerf results is that NVIDIA continues to offer the only platform to submit systems on all eight tests in the MLPerf industry benchmarks, and that NVIDIA and its partners still lead competitors in overall AI training performance. The NVIDIA A100-based systems showed impressive results, delivering the fastest performance on six of the eight tests.
NVIDIA’s AI platform dominated the entries in the latest benchmarks, with NVIDIA and its partners accounting for 90 percent of the total submissions. Again, that is impressive and points to NVIDIA’s leadership in the AI accelerator marketplace.
The industry-standard MLPerf 2.0 benchmarks represent a wide range of popular AI use cases, including speech recognition, natural language processing, recommender systems, object detection, image classification, and more. NVIDIA has been submitting its AI platforms for testing since the first MLPerf benchmarks were established in December 2018.
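For context on how these training benchmarks are scored: MLPerf reports "time to train," the wall-clock time a system needs to train a model to a defined quality target (for example, the ResNet-50 image classification test targets 75.9 percent top-1 accuracy). The following is a minimal Python sketch of that idea only, not the official MLPerf harness; the train_epoch and evaluate callables are hypothetical stand-ins for a real workload.

```python
import time

# Minimal sketch of MLPerf-style "time to train" scoring (an illustration,
# not the official MLPerf harness). train_epoch and evaluate are
# hypothetical callables standing in for a real training workload.

def time_to_train(model, train_epoch, evaluate, quality_target, max_epochs=100):
    """Train until the quality target is met; return elapsed seconds."""
    start = time.perf_counter()
    for _ in range(max_epochs):
        train_epoch(model)                     # one pass over the training data
        if evaluate(model) >= quality_target:  # e.g., held-out top-1 accuracy
            return time.perf_counter() - start
    raise RuntimeError("quality target not reached within max_epochs")
```

The key point is that MLPerf training scores reward reaching a fixed quality bar quickly, so raw hardware speed, software efficiency, and interconnect performance all factor into the result.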
NVIDIA MLPerf 2.0 Testing Details
Sixteen NVIDIA AI partners submitted MLPerf testing results using the NVIDIA AI platform in the latest round of benchmark tests. Those partners included ASUS, Baidu, CASIA (Institute of Automation, Chinese Academy of Sciences), Dell Technologies, Fujitsu, GIGABYTE, H3C, Hewlett Packard Enterprise, Inspur, KRAI, Lenovo, MosaicML, Nettrix and Supermicro, according to NVIDIA. Most partners submitted results using NVIDIA-Certified Systems, which are servers that are validated by NVIDIA for performance, manageability, security and scalability for enterprise deployments.
NVIDIA’s in-house AI supercomputer, named Selene, turned in the fastest time to train on four out of the eight MLPerf 2.0 benchmark tests. The Selene supercomputer is based on the modular NVIDIA DGX SuperPOD and is powered by NVIDIA A100 GPUs, the NVIDIA software stack, and NVIDIA InfiniBand networking.
Closing Thoughts on the Latest NVIDIA MLPerf 2.0 Results
NVIDIA and its A100 GPUs are proving with these continuing, strong MLPerf 2.0 benchmark results that the GPU maker is further cementing its place in the top tier of AI accelerators, AI software, and related fields. Across AI in manufacturing, retail, vehicles, science, customer service, healthcare, financial services, the metaverse, and more, NVIDIA continues to lead both the technology and its future possibilities.
For years, NVIDIA has kept its focus on the future of AI and what it can do across verticals for a wide range of use cases. These latest MLPerf 2.0 results show that NVIDIA’s hard work is paying off and that it continues to play a leadership role in delivering the promise and performance of AI to the world.
This is an exciting time to be watching NVIDIA and its work with its partners in AI, and I expect they will continue delivering performance successes and “wow” moments in the continually evolving and advancing world of AI in the years to come.
Disclosure: Futurum Research is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum Research as a whole.
Other insights from Futurum Research:
New NVIDIA TAO Toolkit Capabilities Ease AI Deployments
Computex: NVIDIA Grace CPU-Powered Servers Coming 1H 2023