Broadcom Wins 1st & 2nd Generation AI ASIC Programs From OpenAI?

The Six Five team discusses Broadcom Wins 1st & 2nd Generation AI ASIC Programs From OpenAI?

If you are interested in watching the full episode, you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: I shared a rumor from a good source, I think, but OpenAI, are they going to Broadcom? Are they going to go to Broadcom for some help on a chip?

Patrick Moorhead: Yeah. So this was a JP Morgan report that came out that said Broadcom has “recently won OpenAI’s first and second generation AI ASIC programs, positioning it as OpenAI’s fourth AI ASIC partner”. So this brings up more questions than answers. But, first of all, if you look at a lot of what’s driving Azure right now as an IaaS service … And, yes, Microsoft has an Azure OpenAI PaaS service out there, but it’s OpenAI. That’s what’s driving a lot of this super growth. In fact, there’s so much growth that Microsoft has a partnership with OCI for GPUs. What it does is it begs the question of, hey, is OpenAI going to be going more outside of Microsoft and Azure for its IaaS services? I think that’s interesting, where they might direct it to these new GPU houses that are out there, like the ones that Chaitin from AWS went to, CoreWeave, right?

Daniel Newman: Yup.

Patrick Moorhead: Then the second thing is, wait a second, one of four ASIC providers? What? If you remember the big cameo picture shot with Jensen from Nvidia was him dropping off the first Blackwell GPU system, or at least that’s what the tweet said or the photos said. It was with Jensen and Sam Altman. But then it’s the fourth ASIC provider. Like fourth, okay. I know that Broadcom is the clear leader in this. I think Charlie had said $10 billion for XPU this year, moving to 11, and the rest, they’re picking up on networking. I think the overall was 50 billion for AI. By the way, that 10 billion number to 11 is just going to absolutely catapult.

So you have Broadcom, and then the other player in here could be Marvell. Marvell is absolutely in the hunt. Broadcom did say it had another consumer play at their AI investor day, but I’ve got to tell you, I had teed that up as Apple. They called it a consumer play, and maybe that’s just Broadcom’s way of putting it, or Meta. I was thinking Apple or Meta, not OpenAI. So you’ve got Broadcom, you’ve got Marvell. Could this possibly be Intel’s Gaudi, or is this going to be something like Groq? And that’s Groq with a Q. So very provocative, very juicy. A JP Morgan note is not like a rumor that you pick up from some nameless, faceless thing out there on the Twitter. So amazing opportunity. It puts the exclamation point on it: if you want to do something more efficiently, do it with an ASIC, whether it’s training or inference.

Daniel Newman: Pat, one of the things that’s really interesting too is that these TPUs and XPUs are … They’re not flexible like GPUs, but you’re seeing the logic cores and the combinations of head nodes and logic cores being created where they can be … They’re not so narrow that you can only … Like you’re seeing what we’ve heard about Gemini being trained up entirely on a … People did not think that was really plausible-

Patrick Moorhead: Exactly.

Daniel Newman: … and now you’re seeing mega builds happening on XPUs. So this raises a huge opportunity. So when you heard Sam Altman running around talking about trillions of dollars raised to do the future of infrastructure … And, again, this was not just the silicon. That’s where there’s a lot of metrics and numbers being derived is there’s all the silicon, then there’s the systems, and then there’s the actual infrastructure, and then there’s the cooling, and then there’s the thermal, and then there’s the actual racks and then the cabling, and then you go out to the fricking materials.

I mean there’s a lot that goes in. This is what we’ve been talking about throughout the … This is not just … Because a lot of people are like, “How big is the market?” Well, we’ll talk a little bit about the market itself here soon, but what we’re really talking about is the chip as part of this bigger system and the rack scale, building up the racks top to bottom. What are all the components in the … Nvidia’s got a lot of parts in that now. But, anyways, my point going back in all this is companies, whether it’s the hyperscale cloud providers, they want to vertically integrate. Look, none of them want to say that. In fact, I’m pretty sure some of them are banned from using that word in anything that they talk about, but we can talk about it. They want to vertically integrate because they make more money when they do that.

Also, they want to own their own silicon and silicon design. It’s a differentiator, Pat. It’s kind of like the data in generative AI. Having their own silicon and their own design is differentiating. I mean part of Google’s prowess and why it’s been able to power up so much into the AI era has been … It was doing this for a long time. It was doing this long before it was a thing. Before the cloud providers were really thinking about it, it had its own silicon for its own workloads; it had designed an ASIC for itself, the TPU. It wasn’t planning to sell it in the cloud. It just so turned out that it was usable when it got to that point and that people wanted to-

Patrick Moorhead: Yeah.

Daniel Newman: So I mean, Pat, you mentioned this. Broadcom is the 800-pound gorilla here. Marvell is the next op right now. They’re fighting for a handful of key designs. But, look, I mean the OpenAI opportunity, the Apple opportunity is a big one. Everybody’s going to be thinking about … I think every major hyperscaler is going to build their own. I think you’re going to even see with what we’re looking at with mega enterprises and with what Synopsys can do and what … You’re going to see mega enterprises starting to build, I think, some of their own … When they understand their AI needs and workloads closely enough. But right now, Pat, this is a really fast-growing market and it’s a really interesting thing.

But one other thing I think you said that is really important is there is real competition, the Gaudis, the off-the-shelf TPUs and off-the-shelf Inferentias and such. OpenAI would be crazy not to look at that. There’s a lot of R&D and work that’s gone into them. So do they want to build their own? My guess, like Meta, Pat, is they’ll end up somewhere in between. They’ll want to have some that’s going to be very specific for their need. They’re going to find some that’s off-the-shelf. They’re going to keep buying NVIDIA for things that need that level of flexibility. But I think they’re going to take more control into their own hands.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and in hundreds of other outlets around the world.

A 7x best-selling author, Daniel’s most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
