
MLPerf Update: NVIDIA Pushes The Boundaries of the GPU

The News: NVIDIA delivers the world’s fastest AI training performance among commercially available products, according to MLPerf benchmarks released on July 29, 2020.

The A100 Tensor Core GPU demonstrated the fastest performance per accelerator on all eight MLPerf benchmarks. For overall fastest time to solution at scale, the DGX SuperPOD system, a massive cluster of DGX A100 systems connected with HDR InfiniBand, also set eight new performance milestones. The real winners are customers applying this performance today to transform their businesses faster and more cost effectively with AI.

This is the third consecutive and strongest showing for NVIDIA in training tests from MLPerf, an industry benchmarking group formed in May 2018. NVIDIA set six records in the first MLPerf training benchmarks in December 2018 and eight in July 2019.

Read the full release from NVIDIA here.

Analyst Take: This past week’s MLPerf update should serve as a reminder of just how strong NVIDIA’s position remains in the AI training space. The company’s overall position in AI is gaining momentum, as is its growing ability to package hardware, software and frameworks to up-level the GPU from training hardware to full-stack acceleration for both training and inference.

The numbers are fairly self-explanatory and can easily be accessed in the release itself. In short, the company stayed ahead of all of its key competitors while breaking its own performance milestones and shortening time to solution at scale. This comes on the back of the A100 Tensor Core GPU as it has been packaged to build powerful supercomputing capabilities. Add the Mellanox acquisition, and the company appears well positioned to continue innovating at scale.

Recommendation Systems, Conversational AI, Reinforcement Learning Showing the Depths of AI for NVIDIA

The MLPerf benchmarks — backed by organizations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford — constantly evolve to remain relevant as AI itself evolves.

This time around, the benchmarks added a few real-world use case tests that particularly caught my attention for enterprise AI applications: the first being recommendation systems and the second being conversational AI. NVIDIA has made significant progress in these areas and furthered it with its recent Merlin and Jarvis announcements. These capabilities are two of the most sought-after applications for AI and are priming further debate around CPU vs. GPU for inference.
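To make the recommendation-systems workload a bit more concrete, here is a minimal, illustrative PyTorch sketch of a toy recommender in the general style of the models used for this task. The model, sizes and names are my own simplifications, not NVIDIA’s or MLPerf’s benchmark code; the point is simply that these models mix large sparse embedding lookups with a dense scoring network, which is exactly the kind of work accelerators are built for.

# Illustrative sketch only: a toy embedding-plus-MLP recommender (assumes PyTorch).
import torch
import torch.nn as nn

class TinyRecommender(nn.Module):
    def __init__(self, n_users=10_000, n_items=50_000, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # sparse user features
        self.item_emb = nn.Embedding(n_items, dim)   # sparse item features
        self.mlp = nn.Sequential(                    # dense interaction layers
            nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)
        i = self.item_emb(item_ids)
        return torch.sigmoid(self.mlp(torch.cat([u, i], dim=-1)))  # click probability

model = TinyRecommender()
users = torch.randint(0, 10_000, (256,))   # a batch of 256 user/item pairs
items = torch.randint(0, 50_000, (256,))
scores = model(users, items)               # shape: (256, 1)

In production systems the embedding tables can run to billions of parameters, which is where GPU memory bandwidth and frameworks like NVIDIA Merlin come into play.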

[Chart: conversational AI and recommender system users. Image: NVIDIA]

NVIDIA has been able to proclaim significant wins in both conversational AI and recommender systems. One of its biggest involved using its GPUs to support Alibaba’s recommender engines, helping to power more than $38 billion in sales in a single day during Alibaba’s Singles Day event late last year.

We can no longer kid ourselves by thinking strictly in terms of GPUs for training and CPUs for inference. NVIDIA is ambitiously testing that assumption with its advancements.
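For readers who want to test that assumption on their own hardware, a rough sketch of comparing inference latency on CPU versus GPU with PyTorch follows. The model, batch size and timing loop are placeholder assumptions for illustration only and bear no relation to MLPerf’s inference methodology.

# Rough illustration only: compare inference latency on CPU vs. GPU (assumes PyTorch).
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
batch = torch.randn(512, 1024)

def time_inference(mod, x, device, iters=50):
    mod, x = mod.to(device), x.to(device)
    with torch.no_grad():
        mod(x)                                   # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()             # wait for queued GPU work
        start = time.perf_counter()
        for _ in range(iters):
            mod(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"CPU: {time_inference(model, batch, 'cpu'):.4f} s/batch")
if torch.cuda.is_available():
    print(f"GPU: {time_inference(model, batch, 'cuda'):.4f} s/batch")

The gap you see will vary with model size and batch size, which is precisely why the CPU-vs-GPU inference debate is workload dependent.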

Overall Impressions of NVIDIA MLPerf Benchmark Updates

NVIDIA continues to show its dominance in the AI training space. This has long been its bread and butter, and with the A100 it is seeing even greater momentum as it pulls away from its competition.

The roughly 4x improvement in performance in just 1.5 years is less about benchmarks and more about real-world AI performance. Success in high-growth workloads like conversational AI and recommendation systems serves as an indicator of that progress.

While I do believe these types of benchmarks matter a great deal for understanding innovation and market position, I’m more focused on the applications. That is something NVIDIA has continued to excel at, and it is what will be most important in driving revenue, growth and, of course, value for its shareholders.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Read more analysis from Futurum Research:

Mercedes-Benz partners with NVIDIA to Deliver the Next Generation of Automotive Innovation

Google Extends Work From Home Policy Through End of June 2021

Qualcomm Delivers a Big Q3 Powered by 5G and Licensing Agreements

Image Credit: NVIDIA

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
