
Is NVIDIA’s GB300 NVL72 Power Shelf the Answer to AI’s Grid Strain?

Analyst(s): Ray Wang
Publication Date: July 31, 2025

NVIDIA’s GB300 NVL72 platform integrates energy storage and power smoothing technologies to reduce grid disruption from synchronized AI workloads and lower peak demand by up to 30%.

What is Covered in this Article:

  • NVIDIA introduces power smoothing and energy storage features in GB300 NVL72
  • Capacitor-based energy storage reduces peak power demand by up to 30%
  • GPU burn and power cap strategies improve load ramp-up and ramp-down control
  • The GB300 system enables more intelligent data center provisioning and denser racks

The News: NVIDIA has rolled out a power smoothing system for its GB300 NVL72 platform to tackle the sharp, synchronized power swings caused by massive AI training jobs. These sudden changes, driven by thousands of GPUs working in lockstep, strain the power grid and complicate data center operations.

To deal with this, NVIDIA has introduced a few key features: startup power caps, capacitor-based energy storage to balance loads, and GPU burn techniques to ease power usage at shutdown. Together, these updates can cut peak power draw by up to 30%, make power loads easier to predict, and help pack more hardware into the same space. The same upgrades will also roll out to GB200 NVL72 systems.


Analyst Take: The NVIDIA GB300 NVL72 platform directly addresses a significant challenge in AI-scale computing: managing extreme and unpredictable power consumption. Unlike older workloads that spread power needs out over time, AI training runs create massive spikes and drops. NVIDIA's approach mixes hardware and software to smooth them out: capacitors absorb transients, measured ramp-ups cap startup draw, and regulated GPU wind-downs prevent abrupt load loss, making rack power far easier for the grid to accommodate. These changes make power use more efficient and give data centers the flexibility to fit more racks into the same power budget or to reduce the total power they provision.

Addressing Synchronized Load Volatility

AI training jobs drive thousands of GPUs simultaneously, creating sharp power swings that traditional data centers were not built to handle. NVIDIA's heatmaps show these jobs causing clusters to jump between full throttle and near-idle, which puts extra stress on the power grid. This synchronized behavior can trigger voltage sags on sudden ramp-ups and leave excess energy to absorb on abrupt shutdowns. The GB300 is designed to address these grid challenges by incorporating energy storage and enabling smoother ramping, turning irregular power curves into a profile the grid can manage.
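To make the problem concrete, the sketch below simulates why lockstep GPU activity produces rack-level swings that staggered activity would not. All wattages, GPU counts per phase, and duty cycles here are hypothetical illustration, not NVIDIA figures:

```python
# Illustrative sketch (hypothetical numbers, not NVIDIA data): each "GPU"
# alternates between a compute burst and a brief communication/idle phase.
# In lockstep the swings add up; staggered, they largely cancel.

N_GPUS = 72            # one NVL72 rack's worth of GPUs
P_BURST = 1200.0       # watts during a compute burst (assumed)
P_IDLE = 200.0         # watts during the sync/idle phase (assumed)
PERIOD = 10            # timesteps per burst/idle cycle

def gpu_power(t, phase):
    """Square-wave load: burst for half the period, idle for the rest."""
    return P_BURST if (t + phase) % PERIOD < PERIOD // 2 else P_IDLE

def rack_profile(staggered):
    profile = []
    for t in range(100):
        total = sum(
            gpu_power(t, (g % PERIOD) if staggered else 0)
            for g in range(N_GPUS)
        )
        profile.append(total)
    return profile

sync = rack_profile(staggered=False)
stag = rack_profile(staggered=True)

print(f"synchronized swing: {(max(sync) - min(sync)) / 1000:.1f} kW")
print(f"staggered swing:    {(max(stag) - min(stag)) / 1000:.1f} kW")
```

In lockstep, the full burst-to-idle delta of every GPU lands on the feed at once; tightly synchronized training jobs cannot simply stagger themselves, which is why the smoothing has to happen electrically at the rack.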

Energy Storage Smooths Load Profile

The GB300 NVL72 adds electrolytic capacitors inside its power shelves that store energy during low-demand periods and release it during spikes. Built with LITEON Technology, this power shelf design helps smooth power use right at the rack. These capacitors take up nearly half the space in the power supply and provide 65 joules of energy storage per GPU. This buffer helps reduce fluctuations in power draw, as demonstrated in tests comparing GB200 and GB300 units running the same workload. The GB300 cut peak grid usage by 30% while delivering similar power to GPUs.
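A rough simulation can illustrate the mechanism. The 65 J-per-GPU storage figure comes from the article; the burst/idle wattages, duty cycle, timestep, and control loop below are assumptions, not NVIDIA's actual controller. The capacitor bank discharges to cover any load above a grid-draw cap and recharges from the headroom during lulls:

```python
# Illustrative simulation (not NVIDIA's controller): a per-rack capacitor
# buffer holds grid draw below the load's peak. 65 J per GPU is from the
# article; power levels, the 10 ms timestep, and duty cycle are assumed.

N_GPUS = 72
STORE_J = 65.0 * N_GPUS              # 4,680 J of usable capacitor energy
DT = 0.01                            # 10 ms timestep (assumed)

P_BURST = 1000.0 * N_GPUS            # rack draw during bursts, W (assumed)
P_IDLE = 400.0 * N_GPUS              # rack draw between bursts, W (assumed)
GRID_CAP = 0.7 * P_BURST             # grid draw capped at 70% of peak

# Load alternates: 100 ms burst, 100 ms lull (assumed duty cycle).
load = ([P_BURST] * 10 + [P_IDLE] * 10) * 5

energy = STORE_J                     # capacitor bank starts full
grid = []
for p_load in load:
    if p_load > GRID_CAP:
        # capacitor covers everything above the cap
        energy -= (p_load - GRID_CAP) * DT
        assert energy >= 0.0, "buffer undersized for this duty cycle"
        grid.append(GRID_CAP)
    else:
        # recharge from the headroom under the cap during lulls
        recharge_w = min((STORE_J - energy) / DT, GRID_CAP - p_load)
        energy += recharge_w * DT
        grid.append(p_load + recharge_w)

print(f"load peak: {max(load) / 1000:.1f} kW")
print(f"grid peak: {max(grid) / 1000:.1f} kW")
```

The buffer only works for transients shorter than its energy allows; with these assumed numbers, 65 J per GPU covers roughly 100 ms of burst at a 30% cap, which is why the storage targets fast workload oscillations rather than sustained load.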

Power Cap and Ramp-Down Strategies

Besides capacitors, the GB300 limits power spikes with a stepped startup system that gradually raises GPU power draw; the startup cap is lifted slowly to match what the grid can handle. At the end of a job, NVIDIA's GPU burn feature keeps GPUs active a little longer, so power tapers off instead of dropping all at once. These behaviors are governed by settings such as minimum idle power and ramp-down timing, which can be adjusted via NVIDIA SMI or Redfish. This fine-grained control over power behavior reduces pressure on the grid and makes job scheduling more predictable.
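The combined effect of a startup cap and a gradual wind-down can be sketched as a slew-rate limit on rack power. The power levels and ramp rate below are assumptions for illustration; the real controls live in firmware and are exposed via NVIDIA SMI and Redfish, per the article:

```python
# Illustrative sketch (not NVIDIA's firmware): a stepped startup cap and a
# gradual "GPU burn"-style wind-down modeled as a slew-rate limit on rack
# power. All levels and the ramp rate are assumed.

P_FULL = 120.0        # rack power at full load, kW (assumed)
P_IDLE = 10.0         # rack idle power, kW (assumed)
RAMP_KW = 10.0        # max allowed change per timestep, kW (assumed)

def slew_limited(target, current):
    """Move toward target, but never faster than RAMP_KW per step."""
    delta = max(-RAMP_KW, min(RAMP_KW, target - current))
    return current + delta

# Unmanaged: the job starts and stops instantaneously.
raw = [P_IDLE] * 5 + [P_FULL] * 20 + [P_IDLE] * 5

# Managed: the same schedule, passed through the slew limiter.
managed, p = [], P_IDLE
for target in raw:
    p = slew_limited(target, p)
    managed.append(p)

def max_step(profile):
    """Largest step-to-step power change in a profile."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

print(f"unmanaged max step: {max_step(raw):.0f} kW")
print(f"managed max step:   {max_step(managed):.0f} kW")
```

The trade-off is visible in the model: the managed profile reaches and leaves full power later, burning some energy during the dwell, in exchange for a bounded slew rate the grid can follow.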

Implications for Data Center Design

With this smarter PSU setup, data centers no longer have to be built around peak power needs. Instead, they can be sized closer to average use, meaning more hardware in the same space or lower overall energy costs. In addition, since the power smoothing happens inside the rack – not fed back to the grid – operators have more control over their power use. This mix of hardware and software gives a scalable way to make data centers grid-friendly, whether using GB200 or GB300 NVL72 systems.
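A back-of-the-envelope calculation shows the provisioning effect. The facility budget and per-rack peak below are assumed figures; the 30% peak reduction is the article's number:

```python
# Back-of-the-envelope sketch (assumed feed and rack figures): how a 30%
# peak cut changes rack counts against a fixed facility power budget.

FEED_KW = 5000.0            # facility power budget (assumed)
RACK_PEAK_KW = 140.0        # unmanaged per-rack peak draw (assumed)
PEAK_CUT = 0.30             # peak reduction from power smoothing (article)

racks_before = int(FEED_KW // RACK_PEAK_KW)
racks_after = int(FEED_KW // (RACK_PEAK_KW * (1.0 - PEAK_CUT)))

print(f"racks provisioned to raw peak:      {racks_before}")
print(f"racks provisioned to smoothed peak: {racks_after}")
```

Under these assumptions, the same feed supports roughly 1/0.7, or about 1.4x, as many racks when provisioning can track the smoothed peak instead of the raw one.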

What to Watch:

  • Adoption of similar power smoothing mechanisms by other GPU and data center vendors
  • Broader industry validation of capacitor-backed rack-level energy storage
  • Role of energy smoothing in mitigating grid instability from synchronized AI loads
  • Adjustments to power provisioning, rack density, and data center layout based on revised peak power assumptions

See the full blog post on NVIDIA’s GB300 NVL72 power smoothing features on the NVIDIA website.

Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.

Other insights from Futurum:

Nvidia Q4 FY 2025: AI Momentum Strengthens Despite Margin Pressures

Is NVIDIA Building the Defining Infrastructure for AI-Powered Robotics?

Is GM’s AI Bet With NVIDIA a Turning Point for Its Vehicle and Factory Roadmap?

Image Credit: NVIDIA

Author Information

Ray Wang is the Research Director for Semiconductors, Supply Chain, and Emerging Technology at Futurum. His coverage focuses on the global semiconductor industry and frontier technologies. He also advises clients on global compute distribution, deployment, and supply chain. In addition to his main coverage and expertise, Wang also specializes in global technology policy, supply chain dynamics, and U.S.-China relations.

He has been quoted or interviewed regularly by leading media outlets across the globe, including CNBC, CNN, MarketWatch, Nikkei Asia, South China Morning Post, Business Insider, Science, Al Jazeera, Fast Company, and TaiwanPlus.

Prior to joining Futurum, Wang worked as an independent semiconductor and technology analyst, advising technology firms and institutional investors on industry development, regulations, and geopolitics. He also held positions at leading consulting firms and think tanks in Washington, D.C., including DGA–Albright Stonebridge Group, the Center for Strategic and International Studies (CSIS), and the Carnegie Endowment for International Peace.

