
Talking NVIDIA, Apple, Broadcom, Intel, Micron, Synopsys

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. NVIDIA GTC 2024
  2. Broadcom “Enabling AI Infrastructure” Investor Meeting 2024
  3. Intel $20B Chips Act Funding
  4. Apple Sued by DoJ for Illegal Monopoly
  5. Micron Tech Q2 FY24 Earnings
  6. Synopsys Investor Day 2024

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey everyone. Welcome back to another episode of The Six Five Podcast, episode 209. 209 times on a Friday, Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday that Mr. Moorhead and I have been here in the chair ready to talk tech. Pat, I’m particularly excited this week about the slate that we have. I love this pod. I love all of you.

Patrick Moorhead: Can I get in here too?

Daniel Newman: I was going to say, how are you feeling? Dude, what a week.

Patrick Moorhead: What a week. It’s kind of crazy how, so many things going on. It’s just like a, I don’t know, a cornucopia. It’s like a play toy, infinite play toys of tech.

Daniel Newman: Oh my God.

Patrick Moorhead: So good.

Daniel Newman: So good, and by the way, what a week we’ve had, you’ve had. Brother, you were slaying it on the Twitter this week. Can I call it Twitter?

Patrick Moorhead: Yes.

Daniel Newman: What the heck do we call it?

Patrick Moorhead: I don’t know.

Daniel Newman: I totally understand that X is the ultimate app that we’re building, but I still feel like we’re tweeting. But, I got to say, I got a little Twitter envy this week. I went back and looked at you dropping bombs throughout Jensen’s keynote, some of the best tweets I think I’ve ever seen you put out, and not only timely, but thorough. If I was a technologist out there and I wanted to know what Jensen was up to, I would’ve just followed you right to the moment that you and Jensen were on stage together during his keynote. That was pretty awesome too, by the way.

Patrick Moorhead: Hey, bestie, by the way, I appreciate that, and I’ve got a lot of long form content coming out from the team and myself.

Daniel Newman: That’s great. Sorry, I just wanted to give you a little-

Patrick Moorhead: Thank you.

Daniel Newman: … because it was super good. I was scrolling this morning as I was prepping for the pod. But, listen, we’ve got a great show this week. We had the NVIDIA event, GTC, Broadcom had earnings, Broadcom had an investor day, Synopsys had an investor day. Micron had a big blowout earnings day. The Apple had the DOG sue it this week for antitrust and monopolistic behavior. And we’ve got one more topic and it slipped my mind, but I’m going to get it back right now. It is, Intel, $20 billion in grants. Pat, that was huge news, but somehow in this week it actually fell right into the middle of the pile.

Patrick Moorhead: I’m certain that the White House governed Intel on the timing.

Daniel Newman: It is, it just gets slipped in there but, I guess, for everyone out there, just remember, this show is for information and entertainment purposes only. And while we will be talking a lot about publicly traded companies, please don’t take anything we are saying here on the show as investment advice. Pat, I’m going to kick off. I know we’ve got only about 40 minutes to run through the six topics, five minutes each. So, we’ll see if we can actually do it with the slate we have this week, Pat. But, let’s start off and talk about GTC. Was there a bigger moment? It was the Woodstock of AI. And Pat, I got to be candid, man. Jensen was in his element in the middle of the SAP center. We couldn’t even get onto the floor.

Patrick Moorhead: It was crazy.

Daniel Newman: There was so much demand. It was a rock concert for AI lovers and he didn’t disappoint. Now, I’m going to give you two things, and there’s a lot of oxygen here. No matter how much I talk, there will be a lot of oxygen here. But, I want to give you two things to really, I came into this looking for. One is, I felt like this was the moment that NVIDIA had to secure its place as the technological leader in AI. Meaning, what is the company going to put forth that clearly says nobody’s catching us? Sure, there’s other options. Sure, there’s other SoCs being developed, ASICs. There’s other GPU players, there’s software abstractions, but we still are the Apple of AI, ironic this week to say that.

The second thing was, I was really dying to understand how the company was going to advance its customer lock-in, meaning, it’s been so clever with the developers and CUDA building that abstraction layer that basically has made it so sticky and it’s like, can they come up with what’s next? Can they come up with something else that’s going to be as sticky as CUDA has been? Especially because you’ll hear the competition talking about new compilers, you’ll hear them talking about ROCm and oneAPI and JAX and PyTorch, and you don’t need to. You can move the workloads from hardware to hardware.

Well, what if you come up with a solution that makes it even stickier there to be running everything on NVIDIA’s hardware? So, of course, I will comment briefly on Blackwell. Now, again, Blackwell is a chip, but Blackwell is part of the GB, the Grace Blackwell, and basically Blackwell is going to be a system. There’s really, nobody’s going to buy a Blackwell chip. You’re going to buy a system. It’s going to be a large system and it’s going to be, what did someone on the competitive side say? An AI mainframe. We are in the era now where we’re going to stitch all this together.

Patrick Moorhead: That’s funny.

Daniel Newman: It’s going to be connectivity, it’s going to be compute, it’s going to be GPUs, CPUs, cores. It’s going to be NVLink. It’s going to be InfiniBand. It’s going to be in a massive rack. And by the way, it’s going to be more economical to do it that way if you’re an NVIDIA shop, and it’s going to take up less real estate. And by the way, that’s really important, people. Data centers have limited space, limited power, and this is something that he was very prudent to be speaking to. So, long and short of it, I’ll give a couple of specs. You’ll probably give some other ones too. They’re talking about a workload performance increase on the inference side with Grace Blackwell of about 30 times depending on the floating point, and energy cuts by as much as 25 times. So, this was a big topic, because we’ve talked a lot about what power hogs GPUs can be.

So, they made some big advancements on inference, which has been something that AMD had made some big strides on, and then they made some advancements on lower power. They also, just to give you a relative data point, so a previous training run of a 1.8 trillion parameter model would’ve taken 8,000 Hopper GPUs and 15 megawatts of power. NVIDIA is saying now that 2,000 Blackwell GPUs, about a quarter as many, will do it at about a quarter of the power, around four megawatts. So, that was a really interesting data point. So, because we’re moving fast, I’ll just tell you one other thing I wanted to talk about, which was the NIMs platform. One of the things I think NVIDIA really wanted to get stickier with too is going to be the on-prem data and the industry specific LLMs. He talked about a weather opportunity.

He talked a lot about healthcare LLMs, but being able to take a microservices architecture with a container, be able to put in the libraries, the software, the hardware infrastructure, on-prem, off-prem, so cloud and hybrid architectures connected to APIs, and enable a company to take like a ServiceNow architecture, combine it with NVIDIA hardware, and do so in basically a drag and drop, “more or less,” container, I think that’s really interesting. And Pat, why do I think it’s so interesting? Because, they don’t yet own everybody’s prem data. But, now you take all that prem data, put it in the container and make it available for compute, for accelerated compute, that’s really sticky.

So, final thought, what I said is, did they achieve the two things? One, technological superiority? I think they did for the moment, and it’s not over. I don’t know that they’ve compelled anybody. Stock didn’t move a ton because of this. And two, this whole NIMs architecture, super powerful in terms of connecting the prem data, private LLMs that are going to become more pervasive as these big LLMs are limited to such a small number of customers. Pat?

Patrick Moorhead: Buddy, you covered a lot and there’s just so much to cover. We could have done this entire-

Daniel Newman: It could have been.

Patrick Moorhead: So, here’s what I’m going to focus on. First of all, springboarding off of NIMs. NIMs is the next new lock-in. So, 13 years ago, 15 years ago when CUDA came out, it was literally the driver set, and then that moved to developer tools, that moved to ML frameworks, that moved to models from NVIDIA that you can use to a full up enterprise stack that gets preloaded on Dell, HPE and Lenovo infrastructure. What NIMs does is it really takes that to the next level and makes it easier for application development providers like Adobe and SAP and ServiceNow to connect with data platforms like Cloudera, Snowbricks, NetApp, folks like that, and then connecting the big model builders with the AI infrastructure.

And this will make it easy for people if you’re all in on NVIDIA. You cannot, however, leverage this to an AMD, an Intel, a Groq, an Untether AI. So, enterprises and partners do need to weigh the potential lock-in. But, I got to tell you, at the beginning of a boom cycle, you probably have to do this because none of the competitors are close to offering something like this. So, two edges on that. On Blackwell, absolutely amazing. It’s an absolute total beast. I want to get Signal65’s Ryan Shrout on some of the claims. The claims for energy efficiency and inference have to do with not just the chip, but an entire cluster, and that’s the comparison I want to see.

And as a reminder, in comparison to both AMD and Intel and all the AI SoCs out there, this is something that doesn’t ship till the end of the year. And what’s being compared, let’s say even on the MI from AMD, is what is shipping right now. But, that’s not to take any credit away from NVIDIA at all. On the hardware platform side, and I do consider NIM part of the software platform, the networking in the rack and the switch, and then connecting rack to rack, is amazing to see. Once we get to Broadcom, we might have the debate on ethernet versus what NVIDIA is cooking up here, but overall, NVIDIA did nothing but gain ground. They certainly didn’t lose ground.

Daniel Newman: That’s great analysis. Like you said, I think we could do a whole show on this, had to move quickly, but I love the fact that you pointed out what’s to market today, Pat, versus what will be in the market in the future. Let’s move on and talk about a really compelling counterpoint to some of that. When Broadcom hosted its enabling AI infrastructure investor meeting you and I, Pat, were there and I think we both walked away impressed.

Patrick Moorhead: It was an investor meeting and there were three other analyst firms there, which is a very tight-knit group. There were no press, no pictures were allowed. It was quite the event. And this is really a coming out party for Charlie Kawwas, president. He runs all of semiconductors, all 17 business units. And this focus was really on what is Broadcom doing in terms of AI connectivity and an AI accelerator. Coming right off the NVIDIA event people wanted to know, hey, what are all these accelerators that Broadcom is working on? I learned a lot. I’m not one of these analysts who says, oh, I know everything, I don’t, and it was a learning exercise for me. First off, they brought out a third design. All three of them are consumer. Rumor has it that the first two are Google and Meta. What’s the third one? Who knows?

These are also not networking. These are not AI ASICs. Sure, it includes a compute ASIC, but these are full up SoCs without a CPU. It has compute, memory, network IO, storage on there. By the way, the other brain explosion here was the one year from design beginning to ramp. And when you think about an ASIC, you’re typically looking at three to four years. So, hypothetically they could crank out a new one every 18 months for a customer, and that is just absolutely absurdly fast. A lot of this comes from them doing this for the last decade. So, pretty amazing. I think the second thing was ethernet versus InfiniBand, connecting GPUs, custom accelerators and merchant accelerators. I got to tell you, if you’re a hyperscaler or an enterprise that wants to connect all three types of accelerators, ethernet is absolutely what you need for scale out.

I came away convinced. A 10% performance advantage, a 30% reliability advantage, cost advantage, the standards, it’s just crazy. And let’s not forget Thor 2, not a smart NIC, but heavy duty RDMA for a single memory plane, pretty amazing. I did like them going through and showing how many clusters, Amazon, Oracle, Meta, ByteDance, I think 130,000 AI clusters with ethernet. So, I came away really just all in on ethernet. It’s not that fiber channel is bad, it’s just that PCI, sorry, ethernet is even better. So, in the rack, Jas Tremblay got up, did a nice discussion on inside the rack, and essentially that’s connecting CPU, GPU, NICs, AI accelerators and storage, and they dominate this market by the way. You can look at Dell, HPE, Lenovo, Supermicro systems, and the PCIe switch from Broadcom has been the staple of that, and I don’t see any reason why that’s not going to be the case in the future.

And a little flashback to AMD’s AI event was PCIe Gen7 switch. So, PCIe, low latency. Again, this company just dominates here. Co-packaged optics, right? So, instead of having 140 transceivers in a system, you have this super-duper cool CPO, co-packaged optics module. And this has been in discussion forever, but they’re actually shipping this now. So, if you want lower power, increased reliability, lowest cost per bit, CPO looks like an amazing option. It’s almost like a no-brainer. I left wondering, what am I missing here? What do I not understand from this? And then, finally I’ll end this. SerDes is the basis of all analog goodness, and they’ve got this 200 gig SerDes based on three nanometer that bodes well for the future of Broadcom quite frankly, and gets in the business of Marvell with PAM-4. So, great stuff.

Daniel Newman: Take a breath buddy. Listen, I’m going to make this, I’m going to really try to dumb down my thoughts here.

Patrick Moorhead: And by the way, I meant ethernet versus InfiniBand, sorry.

Daniel Newman: Absolutely. I’m just going to try to get at what I came away with here. I want to do an analogy. We’re going to talk about Apple later, but here’s my analogy: we’re going to start to see the market shift to a normalization in a two-player space. Player one is NVIDIA, end to end, everything. Player two is everyone else, literally the rest of the ecosystem. It’s all the other chip makers, it’s other infrastructure providers. It’s going to be a collaboration. This is kind of that open AI, open ethernet, all these consortiums. And so, NVIDIA is Apple in this case, and InfiniBand, NVLink, the whole building of chips and infrastructure and hardware and connectors and cables, it’s going to be this totally vertically integrated solution. And then, on the other side of it, ethernet is going to be the open one, it’s going to be Google and Android.

This is how I’m looking at this thing. And the reason I’m saying this is, in the end they’re both very viable solutions, and that’s really what I came away with. Jensen got on the stage and goes, it’s not viable. And we went to this thing and it comes back. It’s like, well, it seems viable. You get at least equal if not better performance in optimal settings. You actually do it for less money, which is economical, which is important to people, and there’s a couple of other things. I thought it was really prudent that they talked about the fragility of InfiniBand. They talked about that InfiniBand has a higher fail rate, having those cables sitting and ready at any given point versus ethernet, which is a little bit more stable. But, the way I kind of walked away from it is, Broadcom’s going to be at the center of Android. And so, this network back plane and this network plane that’s going to have to connect all these XPUs or all this compute is going to be done one of two ways.

And we’re actually seeing this war playing out, by the way, in other places. We’re seeing it play out with the hyperscalers. The hyperscalers are playing it out right now, because some are adopting the all in NVIDIA approach and reselling it, and others are saying, well, we want to control the network. We’re not going to use their connectivity. We’ve got our own plane. And I think people could figure out who we’re talking about. But, in the end, I think the market ends up being split and I think it ends up being much closer to parity, and NVIDIA is going to be really big. And by the way, this is a multi-trillion dollar ultimate market with hundreds of billions of dollars of GPUs, and then all the peripherals and stuff, and I think that’s how it plays out. Now, a couple of just really quick things on the overall AI story.

Look, everything about Broadcom has been about VMware, but the semiconductor business is growing healthily. They were able to basically say now that in ’24, 25% of its revenue is AI related, and actually they’re accelerating that forecast now to 35%, about $10 billion. They’ve got three mega customers now that are planning to work on their XPUs. No one knows who they are. There is some suspicion of who they are. I think it might be a company that is fruit related. It could be this new big customer. But, again, no speculation, heard nothing. This is just my opinion. But, I want it on the record in case I end up being right about this later.

Pat, there’s so much more. We’ve got a lot of podcast left to do and not a ton of time. So, I’m going to move to the third topic, which is Intel finally gets some money from the CHIPS Act. Now, I want everyone to remember when this CHIPS Act actually came into play. It was approved in ’23? No, it was in 2022. So, you want the government to drive the future of innovation, look how fast they move. Now, just remember, when they actually approved this, nobody was using ChatGPT yet.

Patrick Moorhead: By the way, look who started it too. It wasn’t even under this administration.

Daniel Newman: No, it’s brutal how slow this stuff proliferates and progresses. The good news is, it happened. The bad news is, it’s not enough. And by the way, this was a big part of the Pat Gelsinger discussion. I think they gave them about 8.5 billion in grants, another 11 billion in loans. And based on the way we’re running up our global debt in the U.S., it’s a trillion a quarter now that we’re creating in debt, it feels like 52 billion is not enough money for the most important technological revolution on the planet and the U.S.’s ability to substantiate, legitimize, and protect its long-term interests. Now, having said that, this will go a long way to getting Intel superfabs off the ground. We will need somewhere in the U.S., somewhere in the West, to be able to produce all these XPUs and all these GPUs, and Intel is a viable option for this.

Pat, I’ve been on the record for a while. I’ve said the foundry business might be the most interesting business that Intel has. That’s not to say the other parts of the business aren’t interesting. I’m just saying that right now with this AI growth, TSMC cannot take 100% of this on. I’m sorry, it will not happen. Pat, I don’t feel like it was optimal though. I don’t feel like what they got was really what they had hoped for or what they wanted. I feel like this was a little bit, there were some concessions here. 11 billion of loans versus 20 billion including grants felt like a bit of a kick in the teeth for the company that has raised its hand, stepped up and said, we will be the company that will bring manufacturing at the leading edge from a U.S. domestic company back. And now you are also hearing about Samsung getting money, TSMC getting money.

Lots of non-U.S. based companies are seemingly lined up to get dollars here and that’s okay. Those are our partners and allies. But, having said that, I don’t know that we’re solving as much of the problem as we need. And again, I go back and say, there is no way in heck that 52 or 53, whatever it was, 52, $53 billion is enough if we want to maintain global leadership in technology. And the way we spend money on other things, wars and other things that do not apply to us, it just absolutely blows my mind that we are not spending more money to make sure we lead in the most important technological revolution that will drive national security, technology leadership and supply chain resilience around the world.

Patrick Moorhead: I’m going to hit, just do some really quick hits here. I was asked by the press, who’s the loser here? And the loser’s TSMC. TSMC will get money, but based on the fact that, A, TSMC isn’t going to bring their best to Arizona, and B, their chairman and senior executives-

Daniel Newman: That’s right. They’re only bringing seven.

Patrick Moorhead: Their chairman and chief executives are calling U.S. workers lazy, which by the way, even if it were true, you don’t extend one hand and then slap somebody across the face with the other. But, some even historical stuff is coming out from TSMC in the way that they’ve talked about other cultures and companies. And by the way, it’s driving U.S.-based semiconductor companies crazy; their executives pull me aside and tell me how disappointed they are in the way that TSMC is operating here. The other question I get is, is this enough? And the short answer is no. There will need to be a CHIPS Act two and a CHIPS Act three until we get to some form of automation and replication to build these foundries.

Congratulations to Intel. By the way, I was one of the only analysts three years ago that gave Intel a chance. If I got a dollar for every person who came along and said that they should divest, I said, you are completely freaking crazy because Intel’s not ready to do that. Maybe when they get lift off with 18A and they’ve got Columbus up and running, then you might be able to do that, but now, dumb.

Daniel Newman: Hey, I just want to say that there was another analyst that gave them a chance that actually has written the bull case, has taken a lot of shit, a lot of flack from people.

Patrick Moorhead: A lot of beep.

Daniel Newman: Should we beep that out? Nevermind. Let’s just keep going…I want this pod to be authentic, Pat Moorhead, beep. We said it, we stick by it. We get it right a lot more than we get it wrong. We aren’t going to say we never get it wrong. We’ll just never remind you that we got it wrong. You’re going to have to find that stuff yourself. Anyway, Pat, let’s move on to another topic you and I are passionate about. We are not Apple fanboys. We do use Apple stuff. And we have said for a long time that there needs to be some antitrust brought to Apple. What do you think about what ended up being brought to Apple this week?

Patrick Moorhead: So, we don’t do news, but I think we do need to do some background on this.

Daniel Newman: We do.

Patrick Moorhead: So, the Department of Justice and 16 states have accused Apple of being an illegal monopoly. And I don’t want to start even with the 88-page DOJ document, I want to talk about market definition, and that is the core of any type of antitrust case. And they’re accusing the company of having a durable monopoly, durable meaning going on for a long time, it’s not some one hit wonder, in two markets: performance smartphones, where they said they have a 70% market share, otherwise known as flagship and premium, and then a 60% market share in smartphones. I think the DOJ did Apple a favor. They could have done it on revenue share or profit share, which would be like 90%. And then, secondly, the durability, if you needed to drill down on that, it’s been going on for a decade.

So, what’s the harm? Sherman Act, Section 2, harm to competition, less choice, higher prices and fees, lower quality and less innovation. And it focused on five areas. Super apps, think of it like if you’re a game studio and you want to have everything in one app, cloud streaming apps like game streaming, messaging apps like iMessage, smartwatches, and digital wallets. So, essentially on the super app side, it harms developers by preventing them from innovating and selling products. Cloud streaming apps, it harms developers by artificially constraining the size of their user base, sorry, keeps them from selling games to such users. By the way, we saw this with Google and folks like Epic, and I believe that Microsoft, although they weren’t too loud on it, would like a crack at cloud streaming apps like gaming. Messaging apps, it’s iMessage, it’s locked down, and the DOJ says it harms developers by artificially constraining the size of their user bases. That makes sense.

Smartwatches, this was an interesting one. I didn’t expect this one, Dan. Essentially it’s saying, by limiting the ability to respond to notifications, Apple has denied users access to high performing smartwatches with preferred styling, better user interfaces and services or better batteries, and harmed smartwatch developers. Finally, you can only have one digital wallet on an iPhone and it’s Apple’s, and they have engorged profits on all of that. Maybe you want a Bank of America app that works everywhere. It’s not going to work. Anyways, that’s the upshot. I’m going to throw in just one more thing. I’ve left you a ton of oxygen. I did like what it’s saying, because a lot of what Apple does, it says it does for a better experience, and by the way, they do deliver a good experience, like a mainframe, like the NVIDIA full platform. But, they have smoking gun emails from Apple executives, by the way, going all the way back to Steve Jobs, that show there was planned lock-in of the users.

It also talks about Apple, I’m going to read this: “Apple selectively compromises privacy and security interests when it suits its financial and business interests,” but Apple imposes contractual restraints and restrictions that are more restrictive than necessary to protect user privacy and security. I think Apple is screwed. They’re going to lose. They will fight this. They’re going to get into so much trouble. I’m just trying to figure out, is this going to be the three arctic years like it was with Microsoft, or is this going to be a devastating blow like AT&T? Probably not AT&T, they’re not going to get broken up, but this is going to leave a wound.

Daniel Newman: Buddy, great assessment. I’m going to play the basics here, because you and I, by the way, are both very critical of Apple’s business model. I do want to say that I think the DOJ messed up. I think they messed up because I don’t know why they’re focused on the phone, and there’s so much of this centered on the phone. And the thing is, with the phone there’s choice, meaning nobody has to use an Apple phone. And what they do in the app store is the most anti-competitive, most aggressive behavior. They’ve jacked up the developer fees, they lock you into this ecosystem, they create so much business dependency, and then they keep raising fees and raising fees, very anti-competitive, and these businesses have no choice but to participate. Having said that, when you get to the phone, I feel like there is choice.

You can go buy a Samsung, go use Android. And so, this is why I’m conflicted: why can the U.S. and its antitrust regulators never get it right? Meaning, why can’t they focus on the right thing? The right thing is the app store. The app store is monopolistic, anti-competitive, and they’re absolutely abusive of their power, and it harms consumers because it drives up prices and it ends up meaning people that choose the platform cannot shop. They cannot have any opportunity to pick different options. I like that you pointed out the wallet. As we move to digital currency and people are going to be paying exclusively on devices, why can’t we have different payment options on the phone? Other platforms do allow that. And so, you could see, Pat, by the way, there’s a lot of likeness in this to the merchant services story.

Merchant services have been driven down to a much lower fee because it was seen as anti-competitive for these merchant services to drive up cost to all the merchants, which ultimately drives up cost to the consumer. The problem I’ve always had, Pat, with antitrust against Apple is that people do like Apple. The users like Apple. And so, while there is anti-competitive behavior, and there is consumer harm from driving up prices, what it doesn’t often do is actually discourage people from wanting to use the platform, because people use iOS every day, mostly for the messaging, which, by the way, is another sort of anti-competitive thing they’ve done, not allowing messaging to be easy between different platforms. Well, the people like it. And so, that tends to put Apple in a good light.

Having said all that, Pat, Apple’s a disaster right now, it really is. I was going to tweet something earlier, dude. I was like, remember Vision Pro? What happened to that? Is it over already? I haven’t seen anybody driving around town with a Vision Pro, but I do still see Cybertrucks, so I think Elon’s going to be okay. All right, we’ve got two more topics and we’re going to have to move fast, Pat.

So, let’s talk about Micron. I shared a chart. I’m going to keep this one pretty simple and easy, because I want to make sure we get to Synopsys, which was just chock-full of good stuff, but Micron crushed it. They absolutely crushed it, and nobody saw this coming, but maybe, I don’t know, maybe a couple of analysts that said, hey, at some point with all this AI silicon, you’re going to need some high bandwidth memory in order to do this stuff. You’re hearing rumors about high bandwidth memory advancements coming from SK Hynix, from Samsung. Well, Micron’s had a whole bunch of innovation with its 232-layer NAND and HBM upgrades, and they’re starting to see the momentum. They went from a billion plus dollar loss last quarter to nearly a billion dollar profit in one quarter. The stock jumped to its highest price since, I think it was like 2000, Pat. It literally saw a jump.

It looked a little bit like Dell after the earnings when everyone’s like, holy crap, this company has a play. Yes, people, if you are seeing growth in XPUs and GPUs and CPUs doing advanced AI stuff, we’re going to need a whole bunch of high bandwidth memory to get that done. Micron’s got a play, it’s got a big play. It saw its margin grow, it saw its revenue grow, it’s seeing cash flow being generated, Pat. And in a single candle, as I showed in my tweet, you saw the difference of people finally understanding that Micron is part of the AI story. So, I’m going to keep it fast so we can get to Synopsys, but I’ll pass this one over to you.

Patrick Moorhead: January 2nd, 2024, from Colorado, USA, Patrick Moorhead tweets, the memory boom is back. I put up P&Ls from Micron, SK Hynix, and threw in at least the semiconductor P&L from Samsung. Called it. This one’s easy, dude. I spent 10 years in infrastructure, 11 years at AMD, and it’s like, it’s boom and bust. There is no in-between, and we are now going into a boom. Everybody’s going to be complaining about lack of availability and all three of these companies are going to be printing-

Daniel Newman: Margin expansion, pricing power.

Patrick Moorhead: Totally. And then, what’s going to be really interesting is with the new storage and memory requirements from AI, smartphones and PCs, how’s that going to work out? But, in the end, this insatiable need, I like to call it the quadrangle of processing, storage, memory, and networking, and here we are. This one is going to have some interesting impacts on pricing. But, listen, these companies put in billions of investment and they should be rewarded.

Daniel Newman: Absolutely, Pat, good call. I can’t believe you found that tweet. You know how much this guy tweets. That was a little bit of self-reflection going on there. All right, so Pat, last one. You put out a really thoughtful post, hey, after this Broadcom day and all the complexity of PAM-4 and optics, this Synopsys EDA stuff is actually really easy to get. Why don’t you run us through what Synopsys had to say?

Patrick Moorhead: So, first and foremost, they had to get two things across. One, Synopsys without Ansys is doing great. And then, secondly, Synopsys plus Ansys makes it even better. But, one thing I really appreciated is that the company actually described what they do in a way that I think anybody related to chips could understand, all the way from design going all the way to the fab and foundry and everything in between. That’s really hard to do. Now, two years ago it dawned on me, with more transportable IP, chiplets, and by the way, IP that’s qualified all the way to the foundry, and the improvements in the tools. And you layer on top of that AI, literally copilots, and this is something Synopsys was the first one to work on. They’re not the only one doing it now, but they did come out with it first. You have something that’s just going to be incredible.

This is how AWS can do their own chips. This is how Tesla can do their own chips. Once we get farther on, I think you’re not going to need $100 to $200 million to do a full chip. And you might see Dell Technologies be able to afford to do their own chips given their scale and capability, or even a much smaller company. So, this is an exciting time. I’m going to end with Ansys, kind of a joke. It took me 15 seconds to get it once the announcement was made. Think of Synopsys as the electronics world, and think of Ansys as electromechanical designs, like cars, airplanes, jet engines, stuff like that. And with the need for time to market shrinking, having one company able to connect that, if you’re an end user: I can do the AWS chiplet, I can do the SoC, I can do the PCA, I can do the rack, I can do the fleet, I can do the fricking data center and all the flow of air going in there. Anyways, that’s a no-brainer to me. Did they pay too much? I don’t know. But, they won a bidding war with Cadence.

Daniel Newman: Well, I’ll tell you what, Pat, I thought IBM paid too much for Red Hat, and actually, they probably could have paid more, because had they not made that decision, I think there would’ve been some really long-term troubles for the company. Same with AMD and what they paid for Xilinx. You could look at that and say that was a lot, and I think in the end, they’re probably not going to regret what they paid for Xilinx. I think a lot of times, Pat, we’ve got to give some credit to these executives, founders, boards and visionaries, that they do understand what they’re looking at, and they do understand what’s at stake. You know how smart Jensen was to try to buy Arm. Just imagine had that gotten done, by the way. Just imagine the situation NVIDIA would be in had that actually happened. So, regulators might’ve got that one right, even more so than I would’ve thought.

So, anyways, Synopsys, Pat. You kind of hit the technical stuff. I want to say something that was very interesting to me. It drove a tweet and I wanted to share this. Over the last three years, there’s only one company in the Magnificent Seven that would’ve delivered you a better return than had you invested in Synopsys stock, and that company is NVIDIA. So, just take that into perspective. I have two things to say. One, silicon eats the world. So, everything you just said is because jet planes and cities and telcos and banks and everything run on silicon. Software just sits on top of it, people. You’ve got to develop silicon. Where are you going to go? You’ve got a couple of options, and Synopsys right now is the leader. And then, of course, the other side of it is that EDA is cool, because we are democratizing the development of chips.

All these companies that have gotten into “homegrown silicon,” that was never going to happen without a company like Synopsys. They’ve made it happen and the market’s rewarding it. And so, I think this tie-up is going to be great, Pat, and I think Synopsys had a really compelling day. Are we bullish on everything? No, but are we bullish on this? Gosh, it seems like a really good outlook. So, hey, we did it. 40 minutes, man. We did this pod. Usually with these topics and all the things that I would’ve wanted to say and you would’ve wanted to say, there was no way we could do this in under an hour. But, having said that, Pat, great show, man. I’m in such a good mood. I was in such a bad mood before we started, but this podcast is my favorite thing. And by the way, Daniel, Mike, Keith, we had a great chat going.

I was watching what you all were saying. Thank you so much for participating and kind of giving feedback. Can’t always get the comments into the show, but we love that our audience is in there, is engaged, is tuning in and sharing this stuff. So, hit that subscribe button, join us. Be with us every single week here on the pod. We’ve got lots here. We’ve got The Six Five Summit coming, Bill McDermott at the helm, lots of other exciting speakers to announce. Pat, we’re going to be at RSA, we’re at Enterprise Connect. The Six Five is everywhere. We appreciate all of you joining us, but we’ve got to go. We’ll see you all later.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
