
Talking NVIDIA, Microsoft, Qualcomm, OpenText, Meta

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. NVIDIA Q3FY25 Earnings
  2. Microsoft Ignite 2024
  3. OpenText World 2024
  4. Qualcomm Investor Day
  5. Clara Shih Joins Meta, Departs Salesforce
  6. Microsoft’s New Silicon

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey, everybody. It’s Friday. We are back. It is The Six Five Podcast in the chairs at home, what a surprise, long week, on the road, Patrick Moorhead, good morning.

Patrick Moorhead: Daniel, how you doing man? I’m doing great. Did a legs, arms, and core workout today. You know what? I’m pretty fricking tired. I got up to zone five on it. I know it’s not a cardio workout, but that’s a good indication that I’m pushing it. It’s good to be back in Austin. We were in Chicago for three days at Microsoft Ignite, and of course, at the same time, pontificating on NVIDIA earnings, weighing in on stuff like OpenText, some stuff going on at SuperComputing, blah diddy blah. You got Qualcomm Investor Day. But anyways, it’s good to be back in the chair. Looking forward to next week, getting the fam together at the ranch.

Daniel Newman: What the hell is zone five? Should I give you the arm? I’ve only been working out for the last 30 years straight. You talk all this garble. Is this like the chip design of workouts? Are you going to lay down patterns here? What does zone five get all of our audience, listeners?

Patrick Moorhead: Yeah, zone five gets you to max heart rate, and max heart rate's typically 75% before you'd have a heart attack. There's a lot of the VO2 max stuff that's based on it, but different zones do different things for you. Zone two is tilted to hardcore fat burning. Zone one isn't really a lot of exercise, not doing much, but then you hit three, four, and five, and it's really hitting glycogen and hitting your heart and lungs. A lot of cardio gets you into that. My only point about hitting zone five in a weight workout is that he's pushing me. My trainer is pushing me.

Daniel Newman: Sweet, sweet. I spend most of my time in zone two. I just walk up hills very slowly, just incinerating fat while swallowing avocado juice, eating olives, and I just buy it right into… You ever seen Fred Flintstone?

Patrick Moorhead: Of course.

Daniel Newman: You know when he pulls up to the drive-in and they drop that big-a** piece of meat?

Patrick Moorhead: Yes.

Daniel Newman: That’s me, man. I’m just- all day long, so just sitting there losing weight. No, I’m glad you had a good workout. My workout sucked, to be honest, but I at least got it in and I think that’s one of the things I always tell people after.

Patrick Moorhead: It matters.

Daniel Newman: Sometimes you just got to get it in. Pat, we had a busy week. You and I were on the road, we really didn't even have a chance to breathe. We got up Monday morning, it was straight to the airport, up in the air, to Microsoft Ignite, and of course, there were all kinds of things going on concurrently. Qualcomm's Investor Day, we had OpenText World, where a number of our team were up and about. We had Supercomputing, which you mentioned. We were there. Six Five was grinding away at Supercomputing. Check out all that, listeners.

Patrick Moorhead: Yeah, we were.

Daniel Newman: Six Five was at Microsoft too. Pat and I were there in our dual capacities. For all you that wanted the analysis, check out the Twitter, especially Pat is great at Twitter. Do we call it Twitter? I don’t know, we don’t have to call it Twitter.

Patrick Moorhead: We do.

Daniel Newman: Check out X posts. He’s bringing X-y back on Twitter, on X, whatever the heck it is. But we got a great show today just for everybody. Remember, this is for information and entertainment purposes only. While we will talk about the markets and the craziness of NVIDIA earnings, especially this week, don’t take anything we say as investment advice. Pat, let’s warm up the boats. Our buddy Dan Ives couldn’t join us today. He normally comes in for this, but you know what? I think you and I have plenty to say. We don’t need to share the stage with that guy. He gets enough TV, I’m pretty sure.

Patrick Moorhead: Just kidding Dan. We need to bring you on, bestie.

Daniel Newman: We’ll have you back, buddy. We’ll have you back. What happened with NVIDIA? Let’s go. Let’s start there.

Patrick Moorhead: Yeah. First and foremost, you and I were just going broadcast crazy. I did CNBC, I was in the chair for 45 minutes, and you show up and they ask you, “Hey, what do you think they’re going to do?” and then they see how NVIDIA does, and then they get your reaction. More times than not, I hit it. I’m not an equity analyst, but given their trajectory, when they’ve gone beat, beat, raise for the past five quarters, it’s pretty clear what they were going to do. The big question was, was it going to be good enough? Was it going to be big enough for the market? After hours, it ebbed and flowed. Yesterday afternoon it was down, then it peaked back up, and that was really the time that people were chewing through a couple things. One was the Q4 forecast and the other one was Blackwell supply. While NVIDIA handily beat forecast expectations, the market was concerned that they didn’t beat it by a larger margin. You add that to the gross margin discussion and the Blackwell supply conversation, and it spelled a little bit of lowered exuberance. We hit a couple things.

On Blackwell supply, when TSMC says that they’re sold out for the next 18 months on leading edge, it’s a pretty good sign that supply is capped. It’s not just wafers, it’s also packaging. Packaging used to be a second-class citizen, now it’s a first-class citizen, and Blackwell uses a packaging technology called CoWoS. With that said, I do believe that NVIDIA is going to meet its forecast. There were some questions on this overheating BS, and I always thought it was BS, particularly when Microsoft is announcing that it’s going into production for its customers. You have Dell, Michael Dell and Jeff Clarke, taking pictures of their racks being shipped off to Tier 2 CSPs. This was an easy one for me, and that The Information article just didn’t talk to people in the know like Pat and Dan. Typically, every design has different parameters, and each designer has the ability to put in some flex for cooling and for power, and it’s apparent that some people didn’t do that. It looks like Dell did, and whoever supplies Microsoft with its infrastructure did.

I had a nice conversation with CFO Colette Kress after the call, and we talked about a bunch of stuff. We talked about gross margins. She reiterated that Hopper and Blackwell were above the corporate margin. That didn’t even necessarily make people feel good. She affirmed the content statement that I always make, that Blackwell has more content than Hopper. You’ve got the full rack scale, you have networking, and then you have software adders, where they’ve built a multibillion-dollar business. The question you probably get and I get, Daniel, is how long does the growth continue? It’s interesting, she said decades. I liked the analogy she used of client-server computing, because that was the gift that paid off for literally 20 years. But as analysts, we need to peel back: is it possible? Yes. Particularly when you layer on a robot in everybody’s home and on the factory line, yeah, it’s possible. It is very possible.

Will NVIDIA have the same dominance in inference and everything else as it has in training? Probably not, but it can still make quite the business even if others come in. They used the word inference multiple times, really leaning into that. By the way, that was really a carryover from the prior days of machine learning, when it went from hardcore training into inference. I forget how many times they said it on the conference call, but they used it a lot. Daniel, I’m going to turn it over to you.

Daniel Newman: Yeah, I think you hit a lot of the high notes, Pat. This was a case where the whisper number was the real guide, not the official guide. The whisper number, and the highest end of the whisper number, was really what the market was looking for in order to have that exuberance of charging higher immediately. But I think once everybody digested it, listened to the call, looked at the numbers, looked at the results, looked at the stickiness of what was going on, you saw a bevy of upgrades come out the next day, which was indicative that the market still sees more upside. Remember, this company’s worth $3.6 trillion at this point. There’s a little bit of an ebb and flow, a little bit of a give and take there. You can be really excited, enthused, believe the company’s in a great place, but maybe not necessarily believe that it deserves to run to four or four and a half trillion dollars right away. There is an aspect of growing into this valuation.

Having said that, based on the 50% growth rate expected next year, there is a realistic case, on a forward earnings basis, that they’re actually trading cheaper than they did a couple years ago, which is crazy. You’re like, “Whoa,” but remember, this is a company that created more net income this quarter than it created in revenue a year ago, like $17, $18 billion. They’re sitting on an absolute war chest of cash right now. They’re in a great position. You’ve got a macroeconomic situation with Trump entering the White House, where I would bet that regulatory scrutiny will be lesser for M&A. That administration is going to be all about a really, really strong stock market and high returns, which requires the highly concentrated contributors to the S&P, like NVIDIA, to perform really well.

From a technological standpoint, the question is, can these hyperscalers keep buying at this rate? I mean, you’re talking about 10 or so companies, and really five that make up half of the company’s revenue, so all we can do is read the tea leaves. You and I spent some time this week with Satya Nadella. We can’t speak to what our conversation was specifically, but generally speaking, there was just no indication of a real, meaningful slowdown. What does seem to be well understood, having listened to Sundar, having listened to Andy Jassy, having listened to Satya, is that there is an expectation that CapEx might start to level off. That’s the big question mark for where NVIDIA’s growth comes from, because you can’t keep growing at 50% if they don’t keep spending 50% more each year within this small subset.

I mean, there were some bright spots outside of the data center. Automotive grew 70%. We’ve seen how big automotive has been for Qualcomm, which we’ll talk about later, but it’s actually a really big business opportunity for NVIDIA as well. I also think NVIDIA’s robotics technology is another interesting place. The TLDR maybe on this is that I think investors are also curious. Is there another bucket? Is there another moat that NVIDIA can take all this investment, all this capability, all this R&D, all this tech, and all this cash to create? Because we know that there are some limitations on training. You mentioned the inference upside, but inference is something where we would probably both agree there is more opportunity for competition. Whether that’s accelerator chips from hyperscalers, whether that’s Intel Gaudi, whether that’s AMD and Instinct, there is the opportunity for more competition on inference, especially on use case-specific workloads, and there may be better economics there.

The training is the moat. Inference is a big opportunity, and they’re certainly becoming one of the biggest, if not the biggest, inference company, but what if they could win humanoid robots, or some large part of that? We know that Tesla and NVIDIA have a strong relationship with what… Or Tesla and Musk with xAI, or actually both. But the net is, will Musk lean heavier into NVIDIA technology as he rolls out what he believes is a multitrillion-dollar opportunity for humanoid robots in the coming years? This is where I wonder, can it sustain the growth, because it can’t all just be data center forever. There will be some catch-up there. There will be a point where all these companies have to be able to show returns on these AI factories. That’s what’s not entirely evident yet, but I think there is an opportunity there. All right, we can talk about this one all day. I’m going to keep this one going, Pat. Let’s talk Microsoft Ignite. How about that? You want to talk a little bit about Ignite?

Patrick Moorhead: Absolutely. I mean, we spent three days there, so-

Daniel Newman: We sure did.

Patrick Moorhead: … let’s dive in baby.

Daniel Newman: Yeah. I mean, look, I mentioned that you and I had the chance, with a very small group, to spend some time with Satya. Definitely a highlight of the event for me, because I really always want to understand how all these things tie together. And it wasn’t really about the keynote, Pat, the three-hour-long keynote. Three hours, you and I looked at each other and went, “Is it possible to fill all this time?” But it was action-packed, and of course, Microsoft’s portfolio is massive. They moved between this big AI narrative, which Satya kicked off, and then evolved it into other areas like security, hardware, software, and services, and basically started to tie this thread together. For me, what was most notable was the company seems to have a really good understanding that the entire dynamics of how data, software, AI, and user experiences are created are evolving very, very quickly.

Satya showed this future UI layer, and this is what I’ve been talking about, so I really, really appreciate that. For those of you that haven’t heard me talk about it, I’ve talked a lot about the deprecation of a lot of the data services that are now considered our enterprise applications. As we see the future of a Copilot, we have Azure… Is it AI Foundry? What’s it called? Azure AI Foundry. But you’ve got the model factory, you’ve got all this stuff coming together, but in the end, it’s going to be all about enterprises being able to take the public data, the LLMs, the on-prem and private and custom data, bring it together, and be able to build applications that can point to those things and basically render unique experiences that are meant for a persona, a use case, an identity. He’s really talking about tying that together. That’s the big visionary piece of what I heard from Satya, and he’s doing it across the whole stack. They’re doing it with software, they’re doing it with models, they’re doing it with security, they’re doing it with hardware.

The second thing is really bringing together this Copilot for everyone, the Copilot agents. Because Copilot, I look at as the assistant, but copilots-to-agents becomes this workflow. That’s the workflow that becomes this central, singular pane-of-glass experience that is, for every company, what I would call the holy grail, but there’s a lot of complexity there. I’ll talk about how they’re addressing that. I mean, the first is putting copilots in on every different layer of experience as a way to really create familiarity. You have all these enterprises, all these users, this big install base; give them access to point, click, push a button, and get value from AI. This is a big part of winning over the customer and starting to prove value. This is also where all that inference, all the immediate revenue stream, comes from: “Hey, can you streamline what I’m doing in Excel?” I mean, Excel is hard. Can you ask it a contextual question to build a pivot table instead of actually having to know how to build a pivot table? That would make a huge difference in productivity for a lot of people. Of course, Pat, we have the infrastructure side. I mean, the company is committed to building chips.

Patrick Moorhead: Yeah. We have a special section on chips at the end, just a reminder.

Daniel Newman: Yeah, thanks for the reminder. I won’t dig too far into it, but they made a bunch of announcements on that. We’ll come back to that one later. Also, they were very focused on this trust layer. They call it something different. I think we actually met with their head of responsible AI, I think their chief product lead for responsible AI. I mean, they’re very aware that they’ve got this complexity of all this publicly available data, private data, and all this identity-related data. Something I’ve got a bit of a focus on right now is how we create a consistent experience across your different personal and work identities. Pat, I’ll save a little oxygen here and turn it over to you. I could talk about Windows, server, I could talk about PCs, but I think you can too.

Patrick Moorhead: Yeah. I am going to try to tell, from my point of view, my complete story takeaway. Enterprise AI is really complex, and I think that Microsoft did as good a job as it could do on simplifying it, particularly around the agents. Satya came out and there’s a three-layer cake. You’ve got Copilot, Copilot devices, and Copilot and the AI stack. That’s what this event was all about. A lot of the hyperscalers and even some of the on-prem folks are setting up their own AI factories. In this case, it’s Azure AI Foundry, a big statement on multi-model, 1,800 models and more. You can just imagine those horizontal and even vertical models. We saw some details on vertical models where you actually have non-technology-related companies building models and putting them out there for others to use, sometimes even for competitors, which I just find fascinating: a company who you maybe don’t look at as high-tech getting into the high-tech market. That’s really cool.

AI Foundry also puts together the struts per se. You can evaluate the model, you can customize the model, you can do governance on… Due to its connective tissue to GitHub, VS, and Copilot Studio, and Foundry SDK, you can RAG as well against it. I just love that RAG has become so popular. We saw, on the fine-tuning side, and this is going to be super important for enterprises, companies like Weights & Biases, Gretel, and Stasic. I think you did an interview with Weights & Biases as well, and I did an interview with a company that did governance for AI, one of those startups. One element, and Dan, we’ve been talking about this on the show forever before it was cool, data is everything. Data is the number one impediment to enterprise AI adoption, and the reason is very simple.

With machine learning, it was inside the stack. ERP, SCM, PLM, legal, HR, customer service, but… Sorry, the sun is driving me crazy here. But with generative AI, it’s across all that data. A while back, Microsoft brought out what it calls Fabric, and that is their unified data platform. All the hyperscalers have them. The on-prem version from companies like Cloudera, you have Snowflake and Databricks in the mix, but what they did here is they added the kicker. They put a direct connection to what is called Fabric databases. We’ve seen this at, I think, Google and Oracle. It’s autonomous database, simplified. It’s really optimized for AI. I look at this as the easy button and system of record, system of transaction, really simplifying this capability. The money slide for me on Copilot Studio and Agentic AI was the ability to have… Dan, you’ve brought this up a lot. It’s like, “Hey, in a world of generic copilots, you’re going to have seven of them and keep hitting these.”

But the money slide for me that Microsoft showed, in addition to their own pre-built agents, was connecting with the agents from Adobe, SAP, ServiceNow, Workday, and even Cohere. Therefore, you can have Copilot as your main, but it’s hitting all of these other companies’ agents out there. I think that is a potential solution to this crazy agents-all-over-the-place mentality. Final comment, they talked a little bit about hardware. Obviously, they talked about Copilot+ PCs and how enterprises get engaged and what can happen. They talked a lot about Windows 11 and the benefit of moving there. By the way, if you’re an enterprise listening to this and you’re not moving to Windows 11, you’re literally nuts. I mean, it’s crazy how easy it is to get there. Microsoft has literally removed every single objection out there, I think.

Final thing, a new piece of hardware called Windows 365 Link. It’s essentially a puck that does virtual Windows in the cloud. Windows 365, they’ve had that. You could run it on your iPad, your PC. The kicker here is absolutely no maintenance of the client, no management of the client. Microsoft literally does everything. Strategic comment here, I do believe that things like Windows 365 Link and Windows 365 are the future of Windows. Over the next decade, I do believe that, strategically, this is where Microsoft wants to… Gosh. This is where Microsoft wants to go on this. I’m actually pretty motivated. I’m probably going to go get a license for Windows 365 and kick the tires. The one thing that always breaks down is the latency, but I do know that inside of these Azure data centers, they’re putting a lot of edge capabilities, and you can even put an on-prem edge server in to do some of the caching. That’s it, baby.

Daniel Newman: Yeah, we could tie a lot of that stuff together. You need to take a breath real quick because you’re back up again.

Patrick Moorhead: I know, baby.

Daniel Newman: All right, let’s go. We’re going to talk about a company we haven’t talked a lot about this year, but one that’s all about information, data, RAG, architecture, path automations. The team was at OpenText World. We didn’t make it. We would’ve loved to have, it was just unfortunately a scheduling impossibility, but we wanted to put some thoughts out there on what we thought was a productive event.

Patrick Moorhead: Yeah. We went there last year, and from everything the team told us, the event was great. Just a little bit about OpenText, they’re all about information, enterprise information, and they’ve been doing this since literally 1991. There have been a lot of acquisitions, there have been divestitures, but they did content management. They created that whole category back in the ’90s. The event was pretty cool, as you’d expect, and by the way, I appreciate a “Here is our strategy.” A lot of the time, you think, “Oh, everybody knows our strategy,” maybe not. I love the layer cakes because I’m a visual learner, but essentially it’s knowledge management, experience cloud, business network, digital infrastructure, security cloud, and developer cloud, which sit off of Aviators, a combination of agents and generative AI capabilities, sitting on one of my favorite things, an open multi-AI cloud. Which means, Dan, there are multiple ways you can consume OpenText capabilities.

You can do it on the public cloud, you can do it on a sovereign cloud, you can do it on-prem. You can pretty much do it everywhere. It’s super unique, because typically you’ll have offerings where you can only run the software on-prem or only in the public cloud, and they absolutely run the gamut. A couple standouts for me were Titanium X and literally how… This is the version 24.4 Cloud Editions OpenText platform, and it gives you access to 15 of these Aviator agents, as I talked about, but the cool part is they have been building this roadmap of multiple agents, starting in 24.3, when they had 27 agents, all the way forecast into 25.3 with 100+ agents. These are the type of agents that you would expect if it’s sitting on a base of enterprise information…

Dan, we always talk about the future of having this big huge database and the ability to put agents around it and hit it. This is exactly what this capability provides, and it does it across the multi-cloud. I think the final thing I’ll say is multi-cloud, I talked about physically where this is sitting, but also being able to tap into enterprise applications from SAP, ServiceNow, Salesforce, and Microsoft, and also plugging into Oracle database. Pretty cool demos. I saw a few videos of those, basically agents at work, whether it was a CSR, an information worker. It’s pretty cool. It’s interesting. The company doesn’t talk a lot throughout the year, but they completely blow it out at their event.

Daniel Newman: Yeah. I mean, this is an information company, and there’s a pretty substantial argument to be made for playing in this space right now, making your information accessible. Pat, we talked about enterprise search and the importance of being able to essentially access your data, utilize it, use generative tools on top of it to summarize, to create presentations, and of course, doing this all safely and securely. You got to wear shades, dude. You hit the big announcements on the head, but look, they’re building “AI knowledge workers.” I think that’s pretty cool. They call them Aviators, because we like to stay away from pilots, for whatever reason, in AI. But the idea is these pre-canned, industry- or use case-specific workers that can act as agents. This is probably the most pragmatic way to think about the evolution from assistant to agent. An assistant is something that helps you with whatever you need and uses all the available information to get you there. The agent tends to be very domain-specific. It’s able to do a specific task to completion and understands the puts and takes, the handoffs.

The agents are like, “Hey, I’m your financial analysis agent. I’m the one that’s going to be able to basically go in, take your request, navigate across a number of different tools, and come back to you with the report, and then I can give you a recommended set of actions and then actually hand this off to a different agent that maybe specializes in talent acquisition.” They can go out, and find, and source personnel for a certain role that you might be looking to fill, et cetera. This is really interesting, Pat. I mean, I think doing it across the different clouds, making it accessible to, “Hey, we’re going to ingest all your data, work across all the different clouds.” I thought it was pretty cool that they are integrating with Copilot. I think that Microsoft Copilot, the aviators and the copilots are going to work together to make sure that companies are flying through their generative AI ambition seamlessly. It’s pretty cool stuff.

I’ve got to put in a little more time and get my hands on some of this to see how well it works, but the idea of ingesting all your enterprise data and putting it to work hasn’t actually gone as far in a year as I would’ve thought, and this is probably the big opportunity, as I see it, for a company like OpenText. All right. You and I love to take 45 minutes to do three topics, so we’re going to have to go a little bit faster or we’re just going to run a little bit later. But the fourth topic, Pat, is Qualcomm Investor Day. You and I were not there. It was very sad, but we both had boots on the ground. We had our teams there, the head of our testing and performance practice, Ryan Shrout, was there, Olivier Blanchard was there. Do you have anyone else on your team there?

Patrick Moorhead: I just said Anshel.

Daniel Newman: Anshel Sag was there. Yeah. Not just Anshel, it’s Anshel Anshel. We had a number of our team there. Look, this was an interesting moment for Qualcomm, because first of all, I think it was the victory lap moment from the last Investor Day back in ’21, where they really came out and said, “We’re going to diversify from handsets. We’re going to diversify from this licensing business,” after they had resolved all these different legal battles on what felt like a thousand different fronts. They were never going to be in a position where they were hyperdependent on that one part of the business again, and under Cristiano Amon’s leadership as CEO, that’s been something the company’s been very, very focused on. Now, what’s interesting is that I thought the presentation and what they said was like, “Yeah, we did it.” It was like George Bush on the ship, like, “Job done,” or whatever. George W.

But the market was sort of mixed, and responded somewhat sourly to it. I was trying to think through why that might be. Sometimes it’s just the market on any given day, sometimes it’s the presentation. But overall, I still think the company did the job. They were able to show that they have a huge addressable market, almost a trillion-dollar TAM. All things connected at the edge was confirmed, so there are 50 billion connected edge shipments that they’re expecting over the next six years. The financial targets, Pat: the revenue target for their diversified businesses of automotive and IoT combined is $22 billion. I want to be clear, IoT, the way Qualcomm describes it, is not really IoT as you and I would describe it, but basically all the different things that connect at the edge that aren’t handsets. You do have the industrial IoT business, which they anticipate could reach $14 billion by the end of the decade, and automotive, where they have this $45 to $50 billion pipeline that they’ve developed.

They’re estimating about $8 billion of revenue by the end of the decade. The PC number, Pat, was interesting. The PC number came out at $4 billion towards the end of the decade. I think both of us… you and me. Anyways, both of us did feel that was very conservative, and maybe that’s on purpose, knowing that there’s still a lot to be learned in terms of winning PC. But given they have had a very promising, quality product in market now, and we’ve seen some pretty good… I’m on one of their devices actually right now, it seems like that opportunity might’ve been bigger. Of course, XR still looks to be a fairly small business, but something that the company intends to really be in. I’m going to give a little bit of oxygen to you here, Pat, because there are a bunch of other things I could lay into.

The last thought I’ll basically leave you with is I do think data center is a bit of a mystery. They have that Cloud AI 100 part. It feels to me that, especially with edge data center, edge high-performing, low-power-consuming products that could scale the inference play out to the edge, this is an area, I believe, that Qualcomm very likely could get into more, and I think that’s a bigger opportunity given its provenance and its engineering prowess.

Patrick Moorhead: Dan, it’s hard for me to just fill in the blanks because I feel like I’ve got to give a… We started late. I mean, we’re on our… One, two, three. We’re on the fourth topic and we’re 34 minutes in, so I’m not going to rush this. First and foremost, I think the company did a really good job. “Here’s what we said we were going to do, and we did it,” and then they laid out a technology and IP portfolio that you have to be impressed with. I still have to get through the “Hey, we’re doing custom silicon for specific platforms, but we can afford to do that” part. I really want to get underneath the leverage, but they really did a great job in that. To me, the vision makes sense. The markets they’re getting into totally make sense given their history, their intellectual property prowess, and then you have this AI overlay.

About five years ago, I was really questioning what they could do with AI. To their credit, they stepped up, hired some pretty incredible people, leaned into it, and now essentially have an AI platform which people can leverage for phones, for PCs, and all the platforms and markets they’re in. That’s just really, really smart. On the flip side, I did get some feedback from some sell-side folks. Literally, the TAM number is ridiculous. It was large, it was $900 billion. I don’t know what is included in that. I do want to figure that out, just like you and I have picked apart Intel’s and AMD’s. I don’t mean that in a negative way, but it’s important that we know what is underneath that TAM. I think with the way the market reacted, you had a 2029 view, which is important, but a lot of the investors that we talked to only care about 18 months.

How do you make that connection? I think investors just weren’t confident in the run-up to it. I think they may have been looking for a bigger automotive number out there. Final comment is let’s talk about PCs for a second. Their number was $4 billion in 2029. Just to give you a sense, third quarter from AMD was around 2 billion and the last quarter inside of Intel, I think, was 7.4 billion. Yeah, it’s around 10% revenue share plus or minus a couple points, and I think that’s conservative. It’s a cutthroat market, but I do believe that that number is conservative and they can probably do more.

Daniel Newman: Yeah. A lot to unpack there, Pat, and we could. I’m glad you didn’t cut any corners there. Hey, welcome back. Thanks for-

Patrick Moorhead: Yeah, it’s flickering. I have no idea why my video.

Daniel Newman: Because the sun, you got a lot of light coming in. The camera’s getting jacked.

Patrick Moorhead: There we go.

Daniel Newman: It’s like you’re so good-looking, the camera’s like, “I don’t know what to do with this much hotness,” and so it got confused. It’s flickering. It’s wondering if it should be a Victoria’s Secret runway camera. All right, I’m digressing over here.

Patrick Moorhead: Stop. We need to stop at this point.

Daniel Newman: Buddy, I can’t help that you’ve just gone from… You’re like, what do they call it, Beauty and the Beast?

Patrick Moorhead: Something like that.

Daniel Newman: Become beauty, used to be the beast. All right, Pat. Interesting, you and I were on social and just happened to notice that the CEO of Salesforce AI is now the head of Meta’s business AI. By the way, is Meta getting into enterprise apps?

Patrick Moorhead: Wow, wow. Long story. Meta, aka Facebook, used to be in the workplace space. They pulled the plug on that, if nothing else because it was hard to correlate a social media brand with a business brand. With Meta though, Meta is the odds-on favorite model being used. I think they’re 90% of downloads on Hugging Face for businesses. The big brands that are integrating them are not only at what I’ll call the hardware OEM layer, with companies like Dell, but also across companies like AWS and Google, where you can actually integrate their models. People are pretty excited about this. I wrote an analysis of some of the big brands that they work with, and then recently, they’ve extended that to the U.S. military complex as well.

I always had the question in my mind, well, who are they bringing in that’s hardcore enterprise here? Salesforce, from an enterprise SaaS standpoint, to me, did a very good job building and articulating the value proposition. They had real science involved too, with their homegrown models and their data cloud. Clara, who ran all that for Salesforce, is now coming over to Meta. I’m just wondering, does this mean… I think you talked about applications; I don’t think that’s going to be the case. I think that this is going to be about models, model management, the value prop for horizontal but also going vertical. It leaves me with a question, which is, what’s going on with Salesforce AI?

Daniel Newman: I don’t have a lot to speculate on here. I mean, we’ve taken a pretty good look at Agentforce, and I think the pivot that Salesforce needed to make has been made. We’ll see how that goes. It’s the right pivot. You could hear it from Satya as well. These apps will change. I guess when you say they’re not going to get into apps, will there really be apps? Will apps be what we know? My point is, if Meta Llama becomes the… You think about that, plus a data fabric, plus an abstraction and rendering capability for UI. Why do you need an app? What I’m saying is you look at… Meta has a pretty compelling data case, that they have the right data, and whatever they don’t have, they can get. If you’re really just thinking about a multimodal interface that you talk to and interact with, that can help and handle and agentically work across things, there’s got to be a ton of consolidation here. I mean, there just does.

The data all needs to exist. Where the data exists is a question mark, and then what kind of super-agent front end do you point at the data? That’s kind of a question. Do you have that kind of super interface, or is it actually a bunch of distributed ones? Pat, you and I both talk about the client, the distributed-centralized accordion that’ll go on. Knowing our luck, it’ll start out as one and end up as the other over some period of time, and then go back. My last thought here is Meta has an enterprise thing. They do a lot of business for businesses. When I say this, don’t mistake it, their advertising business is really big, but they’ve kind of had these different entrees into trying to be more of an enterprise company and it’s never quite landed. Llama has been the first thing, I think, where it’s been really clear that they are a key component to the enterprise. Now, where they build from here is really interesting.

The one thing I know for sure is they may be called Meta, but the Metaverse is not going to be their big business, at least not anytime soon. Let’s get to the very final topic. Let me try to remember what it is. All right. Pat, let’s talk about Microsoft’s new silicon. I started this, so I teased this. By the way, anyone out there, not only is he handsome, but he’s also a celebrity around the parts of Microsoft. Apparently, he gave some good advice at some point about silicon, because we did this podcast at the beginning of the week and the silicon folks were in the room and I couldn’t get them to stop cornering Pat. There were like six of them and they were all around him. They were taking photos like Japanese paparazzi. They were all, “Yeah, you’re really good.” I was like, “Hi, I’m Dan,” and they were like, “Oh,” and then they just turned right back around and talked to Pat some more. I’m like, “Hey, I talk about chips too.” The other guy was like, “Yeah, I’ve never heard of you before.”

This has been a big transformational year for Microsoft. We have a debate, or I have a debate, about participating through buying versus participating through innovation and development. Google and AWS each entered silicon-making in a different way. Google’s really been kind of ground-floor building. They’ve done a lot of acquisitions, but we all know that AWS bought an Israeli company, Annapurna, that had some really efficient chip design and development capability, and you and I spent some time with that team. That’s become Inferentia. Of course, they’ve also spent a number of years in-house building Graviton. Microsoft was kind of later to the party. They bought into OpenAI, which was an amazingly smart move, and they basically started developing their own silicon much later. I mean, they’ve done stuff with Xbox. It’s not like they’ve never been in this space, but when it comes to general purpose computing, NIC and switching, offload DPU, and security silicon, having talked to Satya, the take that I got, and again, I can’t really say much on the record of what he said, but the very genericized take is they’re going to build silicon where silicon makes sense.

They’re going to think about it through acceleration, whether it’s the Boost DPU for storage and performance efficiency, or the HSM for on-hardware encryption and security. It’s always better to harden with both software and hardware where you can, which makes a lot of sense, along with working on the NIC business. Of course, we know they did Maia and Cobalt. Those were the general purpose parts. But the new stuff, the Boost DPU and the HSM, were the two big new exciting things. I don’t have the specs right off the top of my head, Pat, but I think it was something like four times more efficient, three times more performant on storage offload with the Boost DPU. Pat, you tweeted something to give a bit more detail when you talked to that. Like I said, the HSM was really about on-hardware encryption, which is something that, right now, with all the innovation and AI and how fast it’s moving, I think adding security for every server that’s hardware-driven, not just software-driven, is going to be of a ton of value.

But hardware where it’s needed, I expect this to be the beginning, not the end. Every company has to be thinking about how to augment the NVIDIA demand, augment the NVIDIA reliance. It’s not a displace-replace. This isn’t one of those great headlines, “They’re going to replace them with Maia.” No, no, no, they’re not. But they are going to look at where vertically integrating hardware and software makes more sense, creates more efficiency, and in certain places, like a customer service bot, they can do a lot of inference a lot cheaper if they build it on their own hardware. That seems to be where they’re going.

Patrick Moorhead: As an analyst, I have no favorites. I pick no favorites, but the level of competition is amazing, from the hyperscalers to the merchant silicon providers. I really like the way that Satya laid it out. Before he got into silicon, he essentially talked about the end-to-end optimization that they make everywhere. Throughout technology history, we have aggregation and disaggregation, and we’re at this aggregation point because the efficiencies required to pull this off at the lowest amount of power and the highest performance force you to get more integrated. I mean, that’s literally from the walls in. Microsoft makes their silicon bets based on everything from the outside walls of the data center in. It’s the data center cooling, and power, and the density. It’s the silicon, the networking, the storage, and the software.

I do love this approach. I feel like AWS told the first story related to this for a hyperscaler, but by adding the Boost DPU and HSM, they’ve filled it out now with their own first-party silicon. That’s not to say that they offer something for everything. They don’t, but I feel like HSM and Boost were two things that they had been buying in. By the way, I do believe that doesn’t mean that… Marvell, I do believe, is the company that they worked with on this HSM chip. Check out X or LinkedIn, there was a press release, I think in August, talking about them working with Microsoft on it, and I do think there’s a correlation there. Microsoft bought, in 2023, a company called Fungible, and I think that’s the team that cranked out this DPU.

I would be remiss in not saying that, from a merchant silicon standpoint, there were two very important announcements made. We mentioned this in the NVIDIA earnings piece, but NVIDIA, Blackwell, and Azure… They actually announced the instance of it, the GB200 V6 VM, and that was a big deal. They’re saying that they are first, and I’ll take them at their word. With AMD, it was Azure HBv5. This is an AMD-based instance for high performance computing. It’s great to see the company continuing to get even more serious about first-party silicon and bringing innovation out there. Yes, it was great when the designers, the developers of all this great technology, literally came in with a bag of chips and laid them out on the table, and we took pictures of them and we took selfies. It was fun.

Daniel Newman: Well, it’s great. By the way, great rounding out. I’m glad that you talked about the build out of the full data center and how that all comes together, because I think a lot of people don’t appreciate that these trillions of CapEx expected to be spent in the coming years isn’t just the chips. Sorry, it’s not. You got to buy real estate, you got to build buildings, you got to buy racks, you got to run cables, and optics, and heat, and thermals. All right, everybody. Great show this week. Pat, we did it. We appreciate everybody tuning in. Thanks for being part of The Six Five community. Hit that subscribe button. I’ve been practicing talking all night. Tell all your friends about us. We believe this is the best place to hear what’s going on. But for this show, for this episode, for Patrick Moorhead, for myself, it’s time to say goodbye. We’ll see you all later.

Patrick Moorhead: Bye-bye.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

