NVIDIA Getting into DC AI ASIC Market?

The Six Five team discusses the rumor that NVIDIA is getting into the data center AI ASIC market.

If you are interested in watching the full episode, you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: I got like 20 questions. I’m going to make you just… I’m going to grill you on this one because you put out a post on social. It was so juicy and I felt like it didn’t get the pickup because I don’t think most people realize what you were saying. It’s too complicated. But this rumor, man, there’s a lot of implications.

Patrick Moorhead: Yeah, so let’s step back here. NVIDIA is the data center GPU training king and, I think, the generative AI inference king at this point. I think there was an article that said they had 95% data center market share. Probably makes sense. Maybe it’s closer to, I don’t know, 90, but it’s in the 90s. It’s big, big, big. And as we’ve talked about many times on the show, there are different ways to do training and different types of technologies and chips. I like to look at it as kind of a barbell, right? Infinite flexibility on one end: you have the CPU, which can run basically anything. The most efficient, over on the other end, is the ASIC, which stands for application-specific integrated circuit. It does one thing and does it great; it’s just hard to program. And the GPU is kind of in the middle, maybe a little closer to the ASIC.

And just to confuse matters more, some people would call the GPU an ASIC, but let’s not go there. So in the market right now for custom ASICs, you have Broadcom and Marvell, and then on the merchant ASIC side, right, you’ve got Intel Habana, you’ve got Groq, you’ve got companies like Untether AI, and we are seeing where it goes. Now, Dan, you and I have always talked about the AI ASIC being much more efficient and what’s going on here, but the rumor says, and this came out of Reuters, that NVIDIA is chasing the $30 billion custom ASIC market out there. And yeah, I got a little bit provocative, and Dan, you read between the lines of what I was saying, which is, first of all, if this is true, it totally justifies the market for data center AI ASICs, which has been all GPUs so far. NVIDIA already ships tiny ASIC blocks on its data center GPUs; they’re called Tensor Cores.

And by the way, on their Jetson platform, they have some specific ASICs, I think for convolutional networks; I forget what it’s called, DLA, maybe. If this is true, you would totally expect NVIDIA to play the CUDA compatibility software card, because right now, people are all in on GPUs, because they can get at least three generations of AI goodness out of them, if we look back historically. And naturally, if they were going to do that, they would make this compatible with the CUDA tool set, the CUDA frameworks, and the CUDA models when it comes to generative AI. And the one thing that just popped into my head yesterday was, where’s AMD in all this? AMD is a custom ASIC provider, a custom SoC provider, to companies like Microsoft and Sony. It seems like they would be phenomenal at something like this, but hey, we’ll have to see what happens. I give it a 90% chance that NVIDIA gets into this custom market.

Daniel Newman: Hey, Pat, so let me interview you real quickly here, because you haven’t done a CNBC, so I’ll be CNBC, you’re going to be there. Just how hard, Pat, is it to enter the custom market?

Patrick Moorhead: Well, it’s a huge, huge commitment, and typically in any custom ASIC market, there are hundreds and hundreds of millions of dollars of R&D that you need to spend upfront before you get anything out there. Timeframe is key, too. I mean, the quickest reasonable ASIC that I’ve seen pop out of the oven is maybe four years after inception, maybe three on a good day. Some companies will take what’s called NRE, non-recurring engineering, payments upfront. We’ve seen that with AMD. When AMD did some of the first Microsoft and Sony ASICs, or custom SoCs, they got big R&D payments, but they had to take the cost in their gross margin, which made the gross margin look low. As a net margin business, though, it’s really good. So net-net, it’s hard, and it’s a pretty big commitment, because you’re also saying you’re going to support the software for a very long time.

Daniel Newman: Okay, two more quickies. So for companies like Marvell and Broadcom, Pat, is this an attack or is this complementary?

Patrick Moorhead: So I think the market is so big that it would be complementary. I don’t think this makes sense for the big hyperscalers, just to be brutally honest. They already have a way to get custom done. I don’t know. I don’t see yet the incremental value that NVIDIA could bring over something custom, and it’s hard for me to imagine how it would be custom if NVIDIA is layering it in. It’s hard for me to kind of wrap my brain around that.

Daniel Newman: All right, well, listen, I’ve had fun interviewing you, Mr. Moorhead, thanks so much for joining CNBC.

Patrick Moorhead: Please have me on again.

Daniel Newman: Listen, there are a couple of things that occurred to me too, but first and foremost is what you just said: look, on the AI ASIC, the hyperscalers are going to go down their own route. But there is a little bit of an implication here that’s super interesting: we’ve heard about DGX Cloud and NVIDIA partnerships with companies like CoreWeave and others, and if you’re building competition, there is a benefit to potentially having this kind of IP in-house. I also, like I said, wonder, it’s about a $30 billion business, whether it’s worth chasing, how much incremental revenue they’re going to get out of it, and where the incrementals can come from. It’s not going to be so much from the enterprise. So I have to imagine there’s some net revenue expansion coming out of this new AI data center concept, and being able to do it at scale.

But it’s going to be really interesting, Pat, because there’s a lot of speculation as to what’s going to happen next. I saw something, and this maybe transitions us really nicely to topic two, which is Intel’s IFS event that’s coming up.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

