MLPerf Update: NVIDIA Pushes The Boundaries of the GPU

The News: NVIDIA delivers the world’s fastest AI training performance among commercially available products, according to MLPerf benchmarks released on July 29, 2020.

The A100 Tensor Core GPU demonstrated the fastest performance per accelerator on all eight MLPerf benchmarks. For overall fastest time to solution at scale, the DGX SuperPOD system, a massive cluster of DGX A100 systems connected with HDR InfiniBand, also set eight new performance milestones. The real winners are customers applying this performance today to transform their businesses faster and more cost effectively with AI.

This is the third consecutive and strongest showing for NVIDIA in training tests from MLPerf, an industry benchmarking group formed in May 2018. NVIDIA set six records in the first MLPerf training benchmarks in December 2018 and eight in July 2019.

Read the full release from NVIDIA here.

Analyst Take: This past week’s MLPerf update should serve as a reminder of just how strong NVIDIA’s position remains in the AI training space. The company’s overall position in AI, and its growing ability to package hardware, software, and frameworks to elevate the GPU from training hardware into a full-stack accelerator for both training and inference, is gaining momentum.

The numbers are fairly self-explanatory and can easily be accessed in the release itself. In short, the company stayed ahead of all of its key competitors while breaking its own performance milestones and shortening time to solution at scale. This comes on the back of the A100 Tensor Core GPU, packaged to deliver powerful supercomputing capabilities. Add the Mellanox acquisition, and the future looks bright for continued innovation at scale.

Recommendation Systems, Conversational AI, Reinforcement Learning Showing the Depths of AI for NVIDIA

The MLPerf benchmarks — backed by organizations including Amazon, Baidu, Facebook, Google, Harvard, Intel, Microsoft and Stanford — constantly evolve to remain relevant as AI itself evolves.

This time around, the benchmarks added a few real-world use case tests that particularly caught my attention for enterprise AI applications: the first being recommendation systems and the second being conversational AI. NVIDIA has made significant progress in both areas and furthered it with the recent announcements of Merlin and Jarvis. These are two of the most sought-after applications for AI, and they are priming further debate around CPU vs. GPU for inference.


NVIDIA has been able to claim significant wins in both conversational AI and recommender systems. One of its biggest was using its GPUs to power recommender engines for Alibaba, helping to drive more than $38 billion in sales in a single day during the company’s Singles Day event late last year.

We can no longer kid ourselves with the simple equation of GPU for training and CPU for inference. NVIDIA is ambitiously testing that theory with these advancements.

Overall Impressions of NVIDIA MLPerf Benchmark Updates

NVIDIA continues to show its dominance in the AI training space. This has long been its bread and butter, and with the A100 it is seeing even greater momentum as it pulls away from its competition.

The roughly 4x performance improvement in just 1.5 years is less about benchmarks and more about real-world AI performance. The success in high-growth tasks like conversational AI and recommendation systems serves as an indicator of this.

While I do believe these types of benchmarks matter a lot for understanding innovation and market position, I’m more focused on the applications. That is where NVIDIA has continued to excel, and that is what will be most important in driving revenue, growth, and, of course, value for its shareholders.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Read more analysis from Futurum Research:

Mercedes-Benz partners with NVIDIA to Deliver the Next Generation of Automotive Innovation

Google Extends Work From Home Policy Through End of June 2021

Qualcomm Delivers a Big Q3 Powered by 5G and Licensing Agreements

Image Credit: NVIDIA

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A seven-time best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
