Talking Microsoft, Synopsys, Qualcomm, Supercomputing 2023, Lenovo, Cisco

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. Microsoft’s Custom Silicon at Microsoft Ignite 2023
  2. Synopsys.ai Copilot
  3. Qualcomm AI 100 Ultra Card
  4. Supercomputing 2023
  5. Cisco Q1 FY 2024 Earnings
  6. Lenovo Q2 FY 2024 Earnings

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey, everyone. Welcome back to another episode of The Six Five podcast, episode 193. Here with me, Patrick Moorhead. It’s another week, Pat. Is this a real background? Am I at home? Where am I? I’m going to tell everybody about that in a minute, but first and foremost, buddy, we’re a little late, but we never let our audience down, do we?

Patrick Moorhead: Never. Never. And it’s so great to be here. Listen, I wish I were in Vegas. I did get a very amazing offer to go, but I couldn’t break my … I got to get my haircut tomorrow.

Daniel Newman: Dude, nowadays, if you’re a Silicon Valley tech executive or a tech billionaire, you don’t go to the race because you’re probably at home microdosing ketamine or whatever these people are doing these days. I’m sorry, that’s just-

Patrick Moorhead: I know, those crazy kids.

Daniel Newman: I read something about that in The Atlantic or something and I just was looking for the opportunity to make the joke. But, yeah, I’m here in Las Vegas. I am staying in a beautiful Airbnb. I have a whole bunch of stories in my head of what actually goes on in this particular Airbnb when I’m not around, but none of those stories are safe for the Six Five audience, so we are going to talk about a great and very interesting week of technology here today, Pat.

Let’s get this thing started, man. First of all, we’re going to talk about Microsoft. We’re going to talk Synopsys, more Microsoft. We’re going to talk about Qualcomm. We’re going to hit supercomputing, which you and I were in and out of, and then we’re going to hit a little bit of earnings at the end. We’re going to save the ground truth for last, we’ll just put it that way.

And I got to do this before we start, the little disclaimer, first time watchers of the Six Five, this is mostly analysis. We’re going to give the depth of the news, so just know that’s what we do here. We break down the news, and the show is for information and entertainment purposes only. And while we will be talking about publicly-traded companies, please do not take anything we say, please do not take anything we say, please, I got to say it three times, as investment advice.

But Pat, it was a big week for tech and it was a big week if you’re in the semi space, Applied Materials shipping stuff to China they’re not supposed to be, other companies. You’re writing some interesting opinions about maybe or maybe not shipping things to China. We aren’t going to really talk about the NVIDIA stuff, but we might mention it somewhere along the way here. And then, of course, Microsoft, Project Athena is no longer Project Athena, we have a real name, and we have Microsoft Ignite. Let’s talk about Ignite and the custom silicon announcements.

Patrick Moorhead: Yeah, it’s a big week there. For probably four or five years, Dan, I was always questioning how could Microsoft keep moving from a performance standpoint, less of a performance standpoint and more of a cost standpoint, and compete with AWS, who has been doing first-party silicon for the data center for close to a decade. And we did get our answer there. Net-net, there’s an accelerator for AI inference and training called Maia, and then there is a CPU, an ARM-based CPU, called Cobalt. You can go and read all the details. I’ve had over 100,000 people read my stuff on X and LinkedIn. I’ll eventually get it up on my own website, maybe Forbes, I don’t know, but it’s been a very provocative type of thing.

I’m going to try to keep this strategic versus geek. Now there is some information that we don’t know yet, for instance, how does this compete performance-wise versus AMD and Intel on the merchant side, but I think, more importantly, versus AWS and Google on the custom side. I did see, under NDA, some performance figures from Microsoft and I did think they were compelling. Those aren’t public yet, but I’m looking forward to that. From a rollout standpoint, Microsoft is focusing Maia on Copilot, which I consider a SaaS product, and then Azure OpenAI Service. Interesting they said Microsoft Copilot or Azure OpenAI Service, which I’ll admit, I don’t completely understand the or versus the and there.

Daniel Newman: And?

Patrick Moorhead: Yeah. And one thing that Sam Altman said leads me to believe that current and maybe some future trillion or 10-trillion-parameter models might be done on it for OpenAI, as Sam Altman said it was a co-collaboration to produce more capable and cheaper models. I talked about performance, and I talked about cost. The CPU used with Maia is a fourth-gen Intel CPU, go Intel. Note that they did not use their own custom ARM or AMD there. Some details on Cobalt: it’s based on ARM Neoverse N2, and N2 is the higher-efficiency core, not the max-performance core that we see from other makers. And they said up to a 40% performance-per-core improvement versus the previous ARM server that they have in there right now.

Couple big things that were interesting: I always thought that if Microsoft were going to do this, they were going to partner with a custom silicon provider, maybe AMD or Marvell or Samsung Semi. They did not. They told me they went all the way from core IP design to SoC design to tape-out and validation on their own, all the way to TSMC. How they kept that in stealth, I have no idea, and I don’t consider one article in The Information a giant leak. As I said, what I want in the future is performance data and pricing versus the competitors. I will also want to know the dates for full IaaS support as well as SaaS support for Microsoft 365 and Dynamics 365, and official GA dates. Congratulations, Microsoft. You gave me more than I had expected. You did more than I expected as well, as opposed to partnering, and it is game on at this point.

Daniel Newman: Yeah, absolutely. That’s some good insight, Pat. Look, I’ve had a lot of media reach out to me. This was a big moment. They’re asking questions. Is this the take-on moment? Is Microsoft going after merchant silicon? I do think there’s very much an AWS playbook here, and like I said, each company is its own and Microsoft is doing some really innovative things with OpenAI. It’s not one good, one bad, or one copying the other, but I’m saying there are some very valuable lessons that could be learned from the past near-decade that AWS has spent building its own silicon in the hyperscale cloud and offering that to their customers. This is something that’s been a long time coming. People that are surprised, I am asking them why. There have literally been a thousand stories about this particular thing happening, and now we know what’s happening and at what pace it is happening.

This is an opportune moment. I mean, let’s be very clear about what’s going on here. Vertical integration is the winning formula, especially for Wall Street. Apple made this model very evidently successful in the Tim Cook era by verticalizing everything in its supply chain. Now again, we’ve seen there have been vulnerabilities there if you can’t match best in breed, but in things like an AI ASIC, this is very achievable. It’s been very achievable, there are many building them, and Microsoft is more than capable of doing this. You look at core compute like what they’re doing with Cobalt, very achievable. The refactoring of workloads for ARM, the optimizing of workloads for ARM, has been going on for some time now. There was a period of time where this was a much heavier lift. The last several years, AWS has really paved the way to make these workloads run more and more efficiently.

And let’s be very candid about what this is. This is economics. I basically chalk this up to three things. One, it’s performance and optimizing the performance of workloads on Azure. Two, it’s power and efficiency. It’s a sustainability story and an efficiency story. Every company’s looking to get more efficient. And three, it’s economics. Those first two things drive the economics. Microsoft spends less, and potentially it can offer customers a way to spend less, and that’s how they spend less net. The idea is customers keep raising their bill, but more and more of that bill is going a hundred percent to Microsoft.

Does this mean they’re going to run away from merchant silicon? Absolutely not. You actually saw at the event they made announcements of merchant silicon, AMD and NVIDIA were on display. They will continue to be a general store for consumption. So if you want Intel, you’ll get Intel. You want NVIDIA, you get NVIDIA. You want AMD, you’ll get AMD. This is Microsoft basically saying we have an answer. Choose your own adventure, silicon-wise, but if you want the highest performance optimized for Azure, optimized for the workloads that we’re building around OpenAI, and you want to do it potentially at a lower price while being more cost-efficient, they’re putting their hand up and saying, we’re going to play the game. It’s going to be a lot of fun, Pat, to watch where this goes next.

All right, let’s hit another silicon and semiconductor optimization topic. Let’s talk about Synopsys. Pat, dude, two weeks in a row, Synopsys.

Patrick Moorhead: I know, I know.

Daniel Newman: I know they’re a very cool company and they’re kind of on the rise, the second-largest provider of IP after ARM. Not a lot of people know that. So it’s important that the world is aware that this company’s doing a lot of things. One of the areas that Synopsys really leads is in EDA, which is all about tools for chip design, and in an era where you’re hearing exactly what we just talked about with Maia and Cobalt, companies like Synopsys tend to be very important in that process of building and designing and taking next-generation chips to market.

And what would be cooler than taking the generative AI trend and coupling it with some very powerful tools that have been built for designing silicon and then putting the two things together to solve the big problem, Pat? You know what the big problem is, not enough engineers. There are not enough people in the business right now that understand how to design semiconductors.

And as the demand is rising and as the need for next generation silicon is moving at a pace that’s much faster than any company can keep up, we need to be leveraging the same tools that we’re going to use as a byproduct of the silicon to help us design the next generation silicon.

So there’s already what they call Synopsys.ai, and that’s their AI-driven suite, and what they’ve added to it now is the Copilot. They’re basically partnering with Microsoft, and there were joint announcements that came out both ways. Engineers are basically using the Copilot now, and I would say, Pat, from my experience having had this demonstrated to me, this is not something that’s going to enable you and me, maybe people that roughly understand, but for someone that is a real engineer that’s part of designing circuits and designing silicon, this is going to allow them to use native natural language to get code sets, to get data sets, geometry, calculus and answers that they need in a quick timeframe, where they can use that data and use best practices. It’s almost like having Copilot in GitHub, but it’s for silicon designers, where they can basically get the information they need to be able to shorten the time to design and allow more design work to be done with fewer engineers.

I’m going to double down on that right now. There is a massive shortage in the market of engineers, and with the speed that we are innovating around silicon and the number of companies that are trying to build ASICs, FPGAs as well as larger integrated designs, Pat, there aren’t enough of these people. So Synopsys has raised its hand and said, hey, we’re going to be a company that’s going to help solve this problem. We’re going to use generative AI to do it. And Pat, I think it’s pretty cool.

Patrick Moorhead: Yeah, it’s a good breakdown there. And I was really struck, well, actually, let me just bracket it here. This tool is the first version from Synopsys. It helps designers and those doing validation. They’ve been doing this on the machine learning front and the UI was different, but what they have done is they haven’t just stuck an NLP, generative AI-based front end on to do the same things they were always doing. They’re hitting different data sets, they’re hitting new proprietary data sets to give answers more quickly and more accurately. I couldn’t help but be struck, as I went through the briefing and as I went through the demo with them and watched that video, that it looked very much like some of these tools, like GitHub Copilot, that are for software developers. And you might say, well, Pat, Synopsys is software. It is, but it’s designing hardware. And the fluidity, and how much of it I recognized even not being a deep-in-the-weeds tech nerd, I think is important.

And it even kind of pays off on what Microsoft had talked about, which is it’s a Copilot for everything. And the fact that these folks came out with that and aligned with Microsoft is a pretty big deal. And the way that I like to describe it is, hey, improving workflows in ways you couldn’t imagine before, but now that we’ve seen them, you’re like, oh, this makes absolute sense. A little kudos for Synopsys here is that the Microsoft silicon team uses this tool, or used this tool. The Microsoft team didn’t get a lot of announcements, but they seemed to give a lot of kudos. Now, had their silicon stuff not come out, you’d be like, well, who cares what Microsoft says about silicon? Well, we should care now. As we just talked about, Microsoft has a relevant portfolio of CPUs and accelerators and some stuff they’ve done on the networking side.

Yeah, I’m expecting to see a lot more here. And by the way, there are clients who don’t want to use Azure, who might want AWS or Google, and I would expect that over time we see Synopsys light up capabilities from those two cloud providers. Unclear to me if you can even run this stuff on-prem. I do know a lot of their AI stuff you can run in a hybrid model, which is good. So congrats to Synopsys.

Daniel Newman: Yeah, I think you’re spot on, by the way. This isn’t the last place Synopsys is going to put this, these EDA tools will need to exist in different clouds for sure. But you know what, Pat? I love the ingenuity. This is problem solving at its best. And so it’s a first iteration. That’s the other thing you said I really like, this is just the beginning, this isn’t the end of this kind of tool, and there will be some work to be done before it maximizes value, but I do like the direction, especially the use of generative AI to solve problems where there are real labor shortages in the world. This is an area that just can’t pump out enough qualified people, and it’s going to be so important because you know what silicon eats for breakfast, Pat? Silicon eats the world. Silicon eats the world. All right-

Patrick Moorhead: I think I heard some smart analysts say that once.

Daniel Newman: Yeah, real analysts.

Patrick Moorhead: Right. Right.

Daniel Newman: Real analysts.

Patrick Moorhead: Exactly.

Daniel Newman: So let’s move on. Let’s talk a little bit about some new Qualcomm updates on the AI 100. Did I say AI or IA? It’s AI 100 Ultra Card, Pat, what’s going on?

Patrick Moorhead: So most of you know Qualcomm as a technology provider for smartphones, and then, what we’ve learned if you’ve been paying attention is they’ve been increasing their portfolio into areas like automotive, where they have a $30 billion backlog and, for the first time last quarter, it actually made the highlight reel, the business was big enough that they said it helped the business. Also into PCs, and also expanding into IoT. But what a lot of people don’t know is that Qualcomm is leveraging their very scalable AI blocks, which they use in a lot of different implementations, into the data center or the data center edge.

First we saw the Qualcomm AI 100, and that’s currently inside of AWS and not just for automotive customers like BMW, who are going with the Qualcomm solutions for their cars, for self-driving and safety, but also open for anybody who would want to use them. And as you would expect, the solution is very efficient in terms of what it can do, for lack of a better term, TOPS per watt. I think a lot of people were wondering, hey, are they going to keep this going? Well, here we go. A couple of days ago they dropped the AI 100 Ultra, which cranks out even more performance, trillion-parameter models here, which is just shocking, and, excuse me, a hundred-billion-parameter model on a 150-watt card.

The crazy part about this was, it was more than kind of dog with a note here if any of you speak French out there, but this has actually showed up with two customers. The first one is HPE, and the second one is Cerebras. Everybody knows HPE, very successful on the edge, very successful in high performance computing, and they’re offering AI training as a service, which I’m still waiting on details for pricing and GA. They had acquired a company called Cray, which is a leader in the highest-performance supercomputers. We’re going to talk a little bit about that afterwards.

And then Cerebras is this wafer-scale play, literally the size of the chip is nearly the size of a wafer, and they don’t just sell the chip, they sell the entire system. That company has seen a lot of activity and interest from US departments of X, Y, Z, and you see the success that HPE has in those same circles, maybe the US military. Then you combine that with the trust that a company like Qualcomm has on the inference side, and it totally makes sense. HPE has already determined what they want to use for training. They didn’t talk about what they’re using for inference. I’m waiting on more details, for instance, is this the data center edge? Are we going to see this at Tesla or something like that? It’s a lot clearer cut for Cerebras, which doesn’t have an inference capability. They’re more of a training play. And now Cerebras can come in with an end-to-end solution leveraging Qualcomm, and I’m super interested to see what the future holds for this business unit.

Daniel Newman: Yeah, Pat, this is really interesting, for lack of a better word. This is not the part of the market that people think about with Qualcomm, but it’s a really useful piece of hardware based on my first assessment. Now, again, I’m reading over what Justin Hotard is saying at HPE, I’m looking at the Cerebras commentary. You’re talking about a very low power consumption but powerful accelerator for these AI workloads. And it really looks like they’re kind of wedging their way into the cloud. They’re wedging their way into on-prem data centers. They’re wedging their way into the hyperscale cloud, potentially here to be offering … the way ARM has squeaked its way into the PC and how lower power has found its way in.

Could this lower power trend, in which Qualcomm has quite a pedigree, be the beginning of a new business unit for the company? It sounds like that’s the direction it’s going. Now, it’s early days here, but from generation to generation you’re seeing some really good improvements. It looks like a pretty significant order of magnitude over that original AI 100 that they had put out. It’s starting to look quite compelling. They’re finding OEM partners now. You’re hearing from cloud companies that are building, as well as accelerator companies, what you’re seeing here from Cerebras, that are basically saying, we can partner with Qualcomm here and get the types of gains we need.

It is early, and for me, I got to get a briefing, to be candid, to learn a little bit more about this. But Pat, this could be the next IoT business for Qualcomm. This could be the next, where’s the next big growth come from? And you got to say, it would make Qualcomm more attractive. I know they’re kind of heads down, all in on the AI PC, but let’s face it, data center dollars and margins are just better. So if they can really find their way in here and show that there’s demand for lower power consumption and obviously high-output accelerators, this could be an interesting place. And it’s exactly where Qualcomm is known to be able to play well. So, early days. But let’s keep an eye on this, Pat, because you know what? From FLOPS to TOPS, it’s an AI world. So it’s time to talk a little bit about FLOPS to TOPS.

Let’s talk about our fourth topic, which is supercomputing. You and I headed out that way, Pat. I just want to talk a little bit about the evolution of supercomputing before we talk about themes and stuff we saw. I mean, look, there’s no way in one day that we could get around and hear everything, and there wasn’t one theme of supercomputing and there wasn’t one announcement of supercomputing, but Pat, very interesting event this year. First and foremost, absolutely jammed wall to wall. You and I got there the first night at the 7 o’clock PM Mountain time opening, and there were probably 10,000 people lined up at the doors. We were getting shoved over, you and I, the two wafer-thin guys we are, we were getting shoved over by propeller heads, geeks and AI fanatics everywhere that had suddenly returned.

Now again, I talked to some people that had said over the last two or three years … go back two, three years, obviously there was a period of time when that wasn’t going on because shows weren’t going on, but the supercomputing event had gotten thin, it had gotten sparse, it had become very out there and very geeky. In the era of AI, this was red-hot, and that’s where the FLOPS to TOPS joke came from. We used to measure FLOPS, now we talk about TOPS, in case anybody needed an explanation. And so we saw it wasn’t only the big companies. We had a bunch of Six Five videos with Lenovo. We talked to Lenovo, we talked to Lenovo’s partners, we talked to Imperial College and some of their big users, Pat, and we talked to some of their executive team. We talked about everything from liquid cooling to next generation architectures to exponential compute requirements and clusters that had more GPUs than an F1 car has horsepower. Sorry, I had to say that because it’s an F1 week. Yep, you’re wearing it, but I’m living it. I just want to point that out right now.

Patrick Moorhead: I got an invite, but my calendar was too full.

Daniel Newman: Yeah, no, I know something about a sofa, French Bulldogs and I don’t know, fun games.

Anyways, but it was a really, really interesting show. And then there was … so the big companies, the HPEs, the Dells, Lenovos, all very active there. And of course all the silicon, AMDs, NVIDIAs, Intel all on display. And then Pat, there was just this massive ecosystem of what I would call A series all the way to your C and D and E series companies there that were really on big display. And it wasn’t just an HPC show. I want to be very clear, this was an AI show.

There were companies that were in the AI space, GPU-powered storage like our friends at Nyriad that we talked to. You had companies like VAST Data, which are building new storage architectures powered by AI, that were there in a big way, on display. You and I both saw the Groq Llama demo, Groq being very focused on accelerated computing and language processing units. We were running around spending time. You introduced me to Gopi from Axiado, doing very interesting architectural and security-related hardware for building next generation compute network fabrics. So this was just a very interesting show, Pat.

But what I really took away from it, and I know this might be like typical Dan oversimplifying things, is that the world is really excited about AI, and supercomputing is kind of the front edge of it. This is where we’re seeing what all this AI can do at its maximum deployment, with the biggest systems and the most cores, when put to work to solve problems in healthcare, to solve problems in engineering, to solve problems even in things we talked about with design. This is where that kind of innovation starts, Pat. And so, as our friend Pat Gelsinger said, something like, the geek is back, this was like the geek-is-back moment. And by the way, you and I were the coolest guys there because we were bottom third by far of the IQ, but I would put us in the top third on the EQ. But all joking aside, Pat, love to get your takes on what you thought about the event.

Patrick Moorhead: You hit a bunch here, and I do want to give Dan the trademark for FLOPS to TOPS. For those of you who don’t know what a FLOP is but know what a TOP is, it’s floating point operations per second. And that was the way that performance was measured in this space for a long time, because you were doing visualizations, you were doing simulations, you were doing experiments, you were trying to recreate the physical world in a digital sense. And then for the past three or four years, AI has plopped in there. And on the machine learning front, HPC experts were using it to narrow down the data set in terms of what the FLOPS needed to work on. And now with generative AI, it’s very different in that they’re actually using generative AI to change the way that they try to solve these problems. And it is truly cool.

Dan, you and I talked with the leaders from Imperial College, the Flatiron Institute with the, I want to call it the Henri system, not the Henry system. We talked to LRZ, the Leibniz Supercomputing Centre leader as well. And they all said-

Daniel Newman: They all had cool names, by the way, I can’t remember.

Patrick Moorhead: They all had cool names, and they all said that the AI demands from their users are off the chain, whether that’s the users they serve internally, in the case of Imperial, or outside, like you have with LRZ. Props to HPE, we saw not a ribbon cutting, but a celebration and a toasting, Justin Hotard, who runs the HPC business unit, celebrating the Aurora computer coming in as the number two highest performance, not certified completely yet, and it was only using half of its capability. And if you want to know who’s first, it’s also HPE, a different supercomputer based on AMD technologies. And the one that they were celebrating is based on Intel technology.

It took a long time to get Aurora across the line. In fact, the silicon originally planned for it was Knights Landing, which was just a completely different architecture that has since been put to bed. And Intel had to create an entire GPU for that system. So it’s good to see it come online; the number one and number two are still HPE. I want to give the market share leader, Lenovo, some kudos in there from a volume standpoint in the top 100. And also thanks to them for letting us interview some of their very important customers and executives.

Yeah, the amount of folks there from national institutes, college institutes, classic supercomputing people, but also, like you said, the startups, the Groqs, the Nyriads, the Axiados of the world. I don’t know if it’s even fair to call VAST a startup given the revenue that they’re cranking out. But they’re part of the action too; they have a very interesting proprietary way of bundling some of the operating system and the file system actually into the unit itself. And you probably remember that VAST and HPE just did a tie-up, I forget, was it for block storage? One of the capabilities?

Daniel Newman: It was for GreenLake. Yeah.

Patrick Moorhead: Yeah. So interesting stuff.

Daniel Newman: Very, very cool, Pat. Ten minutes to do two topics. If we were The Six Five, that would be perfect, but we’re the Six Ten now, so never going to happen. All right, we got two earnings to do, Pat, and I do got to stop at the top. I got to stop at the top with all the FLOPS and the TOPS. All right, Pat, Cisco had a good quarter, but an interesting guide. What’s going on there?

Patrick Moorhead: Oh man, you just stole everything.

Daniel Newman: That was it. That was it. You want to go to the next topic?

Patrick Moorhead: No, no, no, we’re good. So Cisco had a phenomenal quarter. They beat by almost 8% on earnings. That’s a record. Highest gross margin in 17 years. They had a slight beat on the top line, almost 0.3%. By the way, that revenue number was a record, and you would’ve expected the stock to just go nuts. Well, it tanked about 10%, and it’s all about missing expectations on the Q2 guide, one of the first times that’s hit. On the revenue for the quarter, it was one of the first times in a long time that every single business unit was up, networking, security, heck, observability up 21%, that’s a great thing to see as we get into this Splunk acquisition here. That was all driven by ThousandEyes and AppD. Collaboration, it’s been a long time, up 4%.

Super nice to see, driven off the back of calling and the contact center re-architectures, right? If there’s anything that can be improved through AI, improving customer service, it’s the contact center. Great progress on software, ARR and RPOs. You can read my analysis on LinkedIn and X, but with the exception of ARR, which by the way is a huge number at $24.5 billion, everything else was double digits: software, subs, RPOs.

Now what happened with the guide? These are not Cisco words, these are Pat Moorhead experience words, but it’s digestion in the field, which means: I need all this hardware, I need more time to install all this hardware, and therefore what are you going to do? Are you going to keep shipping it to them? No, the customers aren’t able to install and crank through all of this. So they’re going to pull back their orders a little bit, not only direct but through the channel.

Final two interesting comments. An interesting nod to open AI networking, and I need to do a little bit more research on that, at least on what Cisco’s doing. But what we’re seeing is NVIDIA created its own, call it proprietary, Ethernet and also other variations, but really any type of NVIDIA networking that is connecting cards together or multiple clusters is not going to be Cisco networking, and it’s not going to be Marvell and it’s not going to be Broadcom. So it looks like Cisco is getting into the game of more open AI networking, and my hope is that it goes across multiple types of silicon. That it’s not only connecting cards, but it’s also connecting clusters, and it’s also connecting data centers in an open type of plane. There is a recent open, low-latency networking standard that has come out. A couple of people have come out with that, and I hope that this is in support of that. Final thing, the Splunk acquisition is on track. Net-net, great quarter, challenging guide.

Daniel Newman: So it goes to China to die. In all seriousness, we’ll have to see how that goes. We’re still sitting here, I’m at the edge of my seat on this couch, still waiting to see if that VMware deal goes through. But anyways, that’s a completely different topic for another day. Cisco, Pat, you had a lot of great comments there. I’m going to just maybe add a couple of things. One is people don’t care about the current quarter if the guide isn’t good, except if the current quarter is bad, then they really care about it. But this was a great start. A record start to the year. Record revenue, record profit, double-digit increase in software revenue, subscriptions up by double digits. You mentioned a lot of this stuff. Solid ARR, the RPO looks good. We’ve sold a lot of stuff in advance of all this AI, and that was kind of the moral of that story. A lot of stuff sold, now it has to be implemented.

Pat, what do I always say? If you want to sell the next project, you got to finish the current project. This is a problem for every single business, and this is the problem that Cisco is having. The AI story created a lot of buzz, drove a lot of revenue, now it’s a little bit of catch-up, and then forward you go. Bottom line, you adjust down once; they adjusted down somewhat precipitously here, and now, I would think that with this kind of precipitous hit they’ve taken, they’ll have a chance to beat the rest of the way, which is good despite the fact that obviously the name did see about a 5% drop or a little bit more than that. Pat, I’m overall positive. I mean look, even Webex grew, and you got to look at that as a win because that hadn’t happened in a long time.

Patrick Moorhead: That’s a good trough, yeah, that’s a good trough.

Daniel Newman: I’m strong on security. I’m strong in terms of my opinion of Cisco is strong on security, strong on observability. Network of course is core to the company, but it’s diversification in the software and the Splunk acquisition. I love the Splunk acquisition, I got to be candid. What a great deal for the company. It’s going to be good for Cisco. All right, we’re running out of time again, couldn’t do it in five minutes. I am going to do my best to do Lenovo-

Patrick Moorhead: Do it, here. Do it, do it.

Daniel Newman: … in just a couple of minutes.

Patrick Moorhead: Do it.

Daniel Newman: So here’s a long and short story, Pat. Lenovo, ISG, pretty good. SSG, really good. IDG, not so good. Okay, what’s the long take here? PCs are just starting to come back. Good news, we’ve heard from Intel, Pat, we’ve heard from Pat Gelsinger, we’ve heard from Lisa Su, we’ve heard from Huawei, we’ve heard from Enrique Lores.

The bottom for PCs is more or less in, that is the consensus. Is that factual? We’ll see. Shipments were way down. The Gartner numbers showed way down. Those other guys showed a number that was way down. I’ll see what the Futurum Intelligence data says when we start to record that later next year. But the numbers are still soft. Coming out of it, what’s the trend line for PCs? The trend line is the AI PC. This is the sort of inflection moment in which we’re going to see another supercycle that’s going to be based upon these new generative AI workloads that will not perform optimally on current hardware. This is going to force enterprises and consumers to look at upgrading, and it’s going to move upgrade cycles on PCs to something that could end up looking a little bit more like phones, at least on the cutting edge, where people are going to want to get more generation to generation.

The numbers vary across averages, but people don’t update their PCs as often as their phones, it’s just not as frequent. And so we’re going to see that more. Pat, I was very bullish on the 40% non-PC revenue. The company has moved away from having as much dependency there, and the more they move towards parity between the infrastructure, the services and the PC, the better. They’re seeing strong expansion of their SSG business. The managed services revenue mix is up and they’re seeing growth in the margins. They’re investing billions in AI. The AI investments, the solution building, Pat, are going to be really important, because if you’re just coming to sell hardware, if you’re just trying to sell GPU boxes, that’s going to be competitive. It’s a race to the bottom. But if you’re able to stand up solutions, deliver services, deploy software, you’ll see growth, you’ll see scale.

Overall, Pat, the company had record storage revenue, which was a good number. I don’t think people realize it’s the number three storage company in the world. That’s really impressive. So shout out to Kirk Skaugen and to Ken Wong, who run those two businesses. You’re on a really good trajectory over there. Moral of the story, Pat, as I walk out of this one, I couldn’t go as fast as I wanted to, but the moral of the story here on Lenovo is the world needs to realize that this isn’t just a PC company anymore. I read a whole bunch of media pieces, it was eight paragraphs about the PC business and then this much of a blurb about the rest of the company. Forty percent of the company is not PC anymore. It’s time that the world takes notice that this is an infrastructure company, a services company, and that is going to be what rescues it from these vicious highs and lows.

Patrick Moorhead: Yeah, so I like to gauge earnings in a couple of ways, and one of them, if revenue is down, is: is it self-inflicted or is it primarily market inflicted? The PC market is absolutely down. Lenovo is still number one in unit market share. And there’s a claim in here that I need to go and double-click on, that says they have the highest profit for a PC company, and I need to do the double-click on that one. The only reason I believe it is that I think they’re printing money with ThinkPad and commercial, but they have a very robust consumer business. So I think it’s mostly, again, the market as opposed to what Lenovo is doing.

I think you captured it pretty well on ISG, particularly when it comes to storage. I need to do the double-click with Kirk on the server part, but you can’t take anything away on the storage side, with record revenue, just an eye-watering 46% year over year and, like you said, Dan, number three. Software up 3%, not great, but I think that is inextricably tied to the amount of servers that they sell as well. High performance computing, they’re number one in market share, and the revenue is up 12% there.

I want to talk a little bit about the future, which is, I think the company does have the ability to take advantage of AI. They’re making some big investments, and in a way they’re Switzerland for software. So companies like VMware, companies like Red Hat, and data protection companies like Veeam and Cohesity and Commvault and folks like that. Lenovo offers a certain version of data protection, but I don’t feel like they’re going after them. I would like to see Lenovo amp up their partnerships with other types of data companies like the Clouderas, the Snowflakes, the Rubriks, and folks like that, because at the end of the day, when the enterprise fully realizes the value of AI, it’s going to be about commingling data. Even though the current POCs and the way they’re looking at it are very narrowly focused on a certain type of data, maybe on CRM as an example, that’s going to move outside, and this data conversation is going to be more important. So Lenovo, you did actually better than I thought you would. A couple of things I need to drill down on.

Daniel Newman: Yeah, I think they’ve got a good turn coming when the PC headwinds clear, because the other parts of the business should be a little less cyclical and will be healthy. So there we did it, Pat, we did it. We fit it in. We fit it in because we love all of you, the audience. We appreciate you. Pat, we’ve got a lot going on. We’re going to have a ton of content coming from AWS re:Invent. It is the holiday week, so if we don’t get on Friday, there may not be enough to cover, but we’ll be coming back from re:Invent. There will be a ton, and then there’s a few busy weeks before the holidays, Pat, but we covered it all today. We covered Microsoft, Synopsys, we covered Qualcomm, supercomputing, and then some earnings from Cisco and Lenovo. But for this episode of the show, Pat, it’s time to say goodbye. Right?

Patrick Moorhead: Take care.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
