Cadence and NVIDIA Double Down on AI-Driven Engineering—Accelerated Computing Bridges Simulation and Verification


Cadence and NVIDIA have expanded their partnership to integrate agentic AI and accelerated computing across the engineering workflow, with a focus on bridging simulation, verification, and physical design for the AI era. The announcement, highlighted in Cadence CEO Anirudh Devgan’s CadenceLive keynote, underscores how GPU acceleration and domain-specific AI agents are converging to automate and optimize every stage of design, from RTL to analog and 3D ICs. This move signals a new phase where accelerated computing reshapes engineering productivity and the competitive landscape for Physical AI platforms.

What is Covered in this Article

  • Cadence and NVIDIA’s expanded collaboration in agentic AI and accelerated engineering
  • How accelerated computing is transforming simulation and verification workflows
  • Key announcements from CadenceLive and their strategic context
  • The competitive battle for workflow ownership as AI, simulation, and verification converge

The News: Cadence and NVIDIA have announced an expanded partnership to embed agentic AI and GPU-accelerated computing into engineering design and simulation platforms. The collaboration brings together Cadence’s computational software expertise and NVIDIA’s GPU infrastructure to automate and optimize workflows across RTL design, verification, analog design, and 3D IC simulation. At CadenceLive, CEO Anirudh Devgan emphasized the launch of new AI-powered agent stacks—ChipStack for RTL and verification, InnoStack for back-end design, and ViraStack for analog design—each leveraging accelerated computing to deliver up to 10x productivity improvements. The integration extends to system-level simulation, including real-time, physics-accurate digital twins for data centers and physical AI applications, enabled by GPU acceleration. This partnership positions Cadence and NVIDIA to address the growing complexity and scale of engineering challenges as AI adoption accelerates across industries.


Analyst Take: Cadence and NVIDIA are moving beyond tool integration to create an AI-accelerated workflow that bridges simulation, verification, and physical design. The expanded partnership could redefine how engineering teams approach automation and cross-domain optimization. The open question for buyers is how quickly they can adapt legacy flows to these agentic, GPU-powered environments, and whether the benefits of tight integration outweigh concerns about flexibility and vendor lock-in.

Accelerated Computing: The New Backbone of Design, Simulation, and Verification

Cadence’s strategy, as outlined by CEO Anirudh Devgan at CadenceLive, is to leverage GPU acceleration for AI model training, as well as for design, simulation, and verification across engineering disciplines. The Millennium supercomputer, for example, now accelerates both circuit simulation and 3D IC signoff, delivering up to 10x speedups for tasks like thermal, power, and stress analysis. This shift enables more of the design and verification process to happen in-silico, closing the gap between simulation and real-world deployment. The partnership with NVIDIA ensures that Cadence’s computational software can fully exploit the latest GPU architectures, making high-fidelity, real-time simulation practical across emerging domains such as physical AI, robotics, and advanced packaging.

In data centers, Cadence distinguishes itself by shifting the focus from silicon simulation to the strategic optimization of tokens per watt. By integrating its data center simulation tools with NVIDIA’s Omniverse DSX Blueprint, Cadence offers a system-of-systems digital twin that captures the interplay between GPU logic, cooling architectures, and thermal constraints before a physical site is ever built. This allows Cadence to move beyond technical specifications and drive AI Factory ROI, making the case that simulation-driven design can unlock billions in incremental revenue and positioning the company as an indispensable partner for hyperscalers deploying next-generation Blackwell and Vera Rubin architectures.
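To make the tokens-per-watt framing concrete, here is a minimal illustrative sketch (all figures hypothetical, not drawn from Cadence or NVIDIA disclosures) of why facility-level factors like cooling enter the metric: total power in the denominator is IT power scaled by PUE, so thermal design improvements lift throughput per watt without any change to the silicon.

```python
def tokens_per_watt(tokens_per_second: float, it_power_w: float, pue: float) -> float:
    """Inference throughput per watt of total facility power.

    PUE (power usage effectiveness) folds cooling and other facility
    overhead into the denominator, which is why data-center-level
    simulation of thermal architecture shows up in this chip-level metric.
    """
    return tokens_per_second / (it_power_w * pue)

# Hypothetical comparison: identical silicon and workload, but a
# simulation-optimized cooling design lowers PUE from 1.5 to 1.2.
baseline = tokens_per_watt(1_000_000, 500_000, pue=1.5)
optimized = tokens_per_watt(1_000_000, 500_000, pue=1.2)
print(f"baseline:  {baseline:.3f} tokens/W")
print(f"optimized: {optimized:.3f} tokens/W")
```

The point of the sketch is only that the optimization target spans system boundaries: the same GPUs deliver more tokens per watt when the facility around them is better designed, which is the gap a system-of-systems digital twin is meant to close.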

Agentic AI: Automating the Full Engineering Stack

The new agents announced at CadenceLive—ChipStack, InnoStack, and ViraStack—represent a step change in workflow automation. These agents use domain-specific AI and GPU acceleration to automate RTL generation, verification planning, analog migration, and back-end implementation. For example, ViraStack can autonomously migrate analog designs between process nodes, while InnoStack orchestrates RTL synthesis, floorplanning, and signoff with built-in and customizable skills. This approach moves beyond point solutions to an orchestrated, AI-driven environment that adapts to user intent and legacy design data. Early customer feedback indicates 3x to 10x productivity gains, especially in previously manual analog flows.

Strategic Context: Bridging EDA, Simulation, and Physical AI

Cadence’s vision, reinforced by its partnership with NVIDIA, is to bridge the traditionally siloed worlds of EDA, system simulation, and physical AI. The integration of GPU-accelerated simulation with agentic AI enables digital twins that model not only chips but also entire data centers and physical systems in real time. This is critical, as industries such as automotive, aerospace, and robotics demand higher accuracy and faster iteration in both virtual and physical domains. Vendors offering vertically integrated, AI-powered platforms spanning simulation, verification, and physical design will have a strategic advantage. However, enterprises must weigh the benefits of this integration against the need for openness and workflow portability.

Read the press release on Cadence’s website.

What to Watch

  • Hyperscaler adoption of DSX architecture in data center construction
  • Access to GPUs for accelerated computing
  • Enterprise migration of legacy workflows to agentic AI and GPU acceleration
  • Standardization of real-time, GPU-accelerated digital twins for system-level simulation and physical AI applications

Declaration of generative AI and AI-assisted technologies in the writing process: This content has been generated with the support of artificial intelligence technologies. Due to the fast pace of content creation and the continuous evolution of data and information, The Futurum Group and its analysts strive to ensure the accuracy and factual integrity of the information presented. However, the opinions and interpretations expressed in this content reflect those of the individual author/analyst. The Futurum Group makes no guarantees regarding the completeness, accuracy, or reliability of any information contained herein. Readers are encouraged to verify facts independently and consult relevant sources for further clarification.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Read the full Futurum Group Disclosure.

Other Insights from Futurum:

Agentic AI or Pipeline AI for Code Reviews? Why the Architecture Decision Now Shapes Dev Velocity

GitHub Copilot’s Compliance Breakthrough: Enterprise Procurement Barriers Fall, Not Just Features Added

Author Information

Brendan Burke, Research Director

Brendan is Research Director, Semiconductors, Supply Chain, and Emerging Tech. He advises clients on strategic initiatives and leads the Futurum Semiconductors Practice. He is an experienced tech industry analyst who has guided tech leaders in identifying market opportunities spanning edge processors, generative AI applications, and hyperscale data centers. 

Before joining Futurum, Brendan consulted with global AI leaders and served as a Senior Analyst in Emerging Technology Research at PitchBook. At PitchBook, he developed market intelligence tools for AI, highlighted by one of the industry’s most comprehensive AI semiconductor market landscapes encompassing both public and private companies. He has advised Fortune 100 tech giants, growth-stage innovators, global investors, and leading market research firms. Before PitchBook, he led research teams in tech investment banking and market research.

Brendan is based in Seattle, Washington. He has a Bachelor of Arts Degree from Amherst College.
