Can Synopsys and NVIDIA Redefine Chip Design Timelines with 30x Speed Gains?

Analyst(s): Richard Gordon
Publication Date: April 14, 2025

Synopsys Inc. has announced significant updates to its ongoing collaboration with NVIDIA to accelerate chip design using NVIDIA’s Grace Blackwell platform. The announcement, highlighted at the 2025 GTC keynote, includes projected performance gains of up to 30x in Synopsys PrimeSim circuit simulation and 20x in Synopsys Proteus computational lithography, using the GB200 and B200 Blackwell architectures, respectively.

What is Covered in this Article:

  • Synopsys PrimeSim is projected to achieve 30x simulation acceleration using Grace Blackwell
  • Synopsys Proteus is expected to deliver up to 20x faster computational lithography
  • More than 15 Synopsys EDA tools are being optimized for the NVIDIA Grace CPU architecture
  • NVIDIA NIM microservices integration is expected to double Synopsys.ai Copilot’s output speed
  • Early GPU-based acceleration is being applied to TCAD and materials engineering across Synopsys’ portfolio

The News: Synopsys Inc. announced at GTC 2025 the expansion of its strategic collaboration with NVIDIA, focused on accelerating electronic design automation (EDA) workloads using the NVIDIA Grace Blackwell platform and CUDA-X libraries. As part of the announcement, Synopsys highlighted performance improvements of up to 30x in circuit simulation and 20x in computational lithography tasks.

In addition to hardware optimization, Synopsys is integrating NVIDIA inference microservices (NIM) into its generative AI tool, Synopsys.ai Copilot, targeting a 2x improvement in productivity. The company is also preparing more than 15 EDA solutions for NVIDIA’s Grace CPU architecture in 2025.

Analyst Take: Synopsys and NVIDIA’s expanded partnership reflects a deliberate strategy to accelerate multiple phases of chip development – from design and verification to manufacturing and materials research. The collaboration is grounded in measurable performance benchmarks, targeting specific high-compute EDA workflows with clear hardware alignment. By integrating CUDA-X libraries and optimizing for NVIDIA’s Grace Blackwell architecture, Synopsys is methodically upgrading its EDA suite with a balance of compute acceleration and generative AI enablement.

Acceleration Across Design and Manufacturing Workflows

Synopsys reports substantial projected performance gains in two key tools – PrimeSim and Proteus – via NVIDIA’s latest hardware platforms. PrimeSim, used for SPICE-level circuit simulation, is expected to deliver up to 30x speedups when run on the Grace Blackwell platform. The company has already observed up to 15x improvements on GH200 Superchips. These gains significantly reduce simulation runtimes from days to hours, enhancing throughput and enabling more iteration within design cycles.
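
To make these vendor-reported multipliers concrete, the short Python sketch below converts them into projected wall-clock runtimes. The 15x and 30x factors are the figures Synopsys cites; the 72-hour CPU-only baseline is purely an illustrative assumption, not a published benchmark.

    # Illustrative arithmetic only: translate the speedups Synopsys reports for
    # PrimeSim into projected wall-clock runtimes. The 15x (GH200) and 30x
    # (Grace Blackwell) multipliers come from the announcement; the 72-hour
    # CPU-only baseline is a hypothetical example, not a published benchmark.
    BASELINE_HOURS = 72.0  # assumed CPU-only PrimeSim run of three days

    speedups = {
        "CPU baseline": 1,
        "GH200 Grace Hopper (observed, up to)": 15,
        "GB200 Grace Blackwell (projected, up to)": 30,
    }

    for platform, factor in speedups.items():
        print(f"{platform:<42} {factor:>3}x -> {BASELINE_HOURS / factor:5.1f} h")

Under that assumed baseline, the arithmetic alone explains the days-to-hours framing: a three-day run drops to roughly 4.8 hours at 15x and 2.4 hours at 30x.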

On the manufacturing side, Synopsys Proteus, which supports optical proximity correction (OPC) and inverse lithography techniques, is expected to achieve up to 20x acceleration using the B200 Blackwell GPU. With current deployments already delivering 15x gains using H100 GPUs and NVIDIA’s cuLitho library, Proteus continues to evolve as a high-performance tool for computational lithography. These two tools serve as anchors for performance-driven validation and process modeling at advanced nodes.

Broad Optimization Across the EDA Stack

In parallel with GPU-focused acceleration, Synopsys is expanding support for the NVIDIA Grace CPU architecture. More than 15 EDA solutions are being optimized in 2025, spanning circuit simulation, static timing analysis, physical verification, and functional verification. This widespread integration reflects a strategic intent to align Synopsys’ full-stack EDA capabilities with NVIDIA’s CPU roadmap. By embedding acceleration into the broader workflow – not just isolated tasks – Synopsys aims to improve end-to-end productivity across design teams.

Generative AI Integration for Engineering Productivity

Synopsys.ai Copilot, the company’s generative AI assistant, currently delivers a 2x productivity boost over prior design workflows. With the integration of NVIDIA’s NIM, the tool is expected to double that gain again, effectively quadrupling the speed at which users access critical design knowledge. Rather than simply embedding AI into backend tasks, this approach aims to bring intelligence directly into the engineering interface, potentially streamlining information discovery and interaction.
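
Because the two multipliers apply to the same workflow, they compound rather than add, which is where the quadrupling figure comes from. The brief sketch below works through that arithmetic; the 40-hour task baseline is a purely hypothetical illustration.

    # Illustrative arithmetic: Copilot's reported 2x gain and the additional ~2x
    # expected from NIM integration compound when applied to the same workflow.
    # The 40-hour task baseline is hypothetical, not a Synopsys figure.
    baseline_task_hours = 40.0
    copilot_gain = 2.0   # current Copilot productivity multiplier (reported)
    nim_gain = 2.0       # further multiplier expected from NIM integration

    combined = copilot_gain * nim_gain   # 2 x 2 = 4x overall
    print(f"Combined gain: {combined:.0f}x, "
          f"projected task time: {baseline_task_hours / combined:.0f} h")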

Accelerating Simulation for R&D and Process Innovation

Beyond core EDA and AI, Synopsys is also targeting performance improvements in research and materials science. Sentaurus Technology Computer-Aided Design (TCAD), the company’s simulation platform for process and device modeling, is projected to reach up to 10x faster results with GPU and CUDA-X enhancements. Meanwhile, QuantumATK has already achieved up to 100x acceleration on NVIDIA Hopper GPUs, enabling faster atomic-scale modeling of materials. These advancements expand the partnership’s relevance into pre-silicon research and innovation layers that underpin long-term semiconductor advancement.

What to Watch:

  • Synopsys must demonstrate real-world validation of the projected 30x and 20x speed gains across diverse customer workloads.
  • Integration of NVIDIA NIM microservices into Synopsys.ai Copilot must deliver tangible productivity gains without disrupting established workflows.
  • Optimization of 15+ Synopsys tools on Grace CPU must meet performance expectations across design, verification, and manufacturing tasks.
  • Competing EDA vendors may accelerate their own AI and GPU strategies, potentially impacting adoption momentum for Synopsys-NVIDIA solutions.

See the complete press release on the collaboration between Synopsys and NVIDIA to accelerate chip design using Grace Blackwell on the Synopsys website.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

Focus on Execution as Synopsys Reports Solid 1Q FY 2025 Financial Results

Synopsys Introduces New HAV Tools to Address Growing SoC Design Complexity

Talking AWS, Intel, Salesforce, Pure Storage, Synopsys, & More

Author Information

Richard Gordon

Richard Gordon is Vice President & Practice Lead, Semiconductors for The Futurum Group. He has been involved in the semiconductor industry for more than 30 years, first in engineering and then in technology and market research, industry analysis, and business advisory.

For many years, Richard led Gartner's Semiconductor and Electronics practice, building a 20-person global team covering all aspects of semiconductor industry research, from manufacturing to chip markets and end applications. Having served on Gartner's Senior Research Board and as Gartner's Chief Forecaster, Richard has extensive experience in developing and implementing methodologies for market sizing, share, and forecasting to deliver data, analysis, and insights about the competitive landscape, technology roadmaps, and market growth drivers.

Richard is a sought-after technology industry analyst, both as a trusted advisor to clients and as an expert commentator speaking at industry events and appearing live on networks such as CNBC.
