Talking Micron, Microsoft, Google, Lenovo, Synopsys

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. Micron Q3FY24 Earnings
  2. Our First Personal Impressions On Copilot+ PC
  3. Google Kills Endless Scroll
  4. Lenovo AI Announcements
  5. Google Updates Vertex AI
  6. Synopsys At DAC

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.


Patrick Moorhead: The Six Five weekly podcast is back. It is Friday around 9:00 AM Central, it’s episode 222, and I’m just glad to be here. Dan, we’ve gone 222 episodes without being canceled, it’s a great feeling. You know what the other great feeling is? Sleeping in my own bed for this week. And we’re going into July 4th, I figure take a day off, I’m working up to the end, buddy.

Daniel Newman: Yeah, it’s the last day of a quarter. It depends on how you think about it. For me, last day of the quarter, been a banger, been a great first half of the year from a business standpoint. It’s been exhausting, Pat, I think you and I have talked a lot about managing health and wellness in an era of being on the road. I don’t know, it felt like… I think I was on the road every week except two the first half of this year, and so that has definitely taken its toll. I plan to have a little bit of downtime in July, we’ll see if the world allows for me to do this. But yeah, I mean, look, it was my son’s birthday this week, tomorrow is my anniversary, for those out there that want to maybe use it to see if they can hack anything I have. I don’t use that in my passwords, but you can try. And I’m going to celebrate my son’s birthday party today at five o’clock, I’m getting out of here, I’m going over to play some capture the flag. So good week, good week.

Patrick Moorhead: Dan, you’re not a robot, are you?

Daniel Newman: I am a soccer mom.

Patrick Moorhead: You really are, I love that. No, it’s great to be back, feeling really good. On the health front, it’s been a super week. I get back… With my doctor, get 50 or 60 blood tests back hopefully before I talk to him, and see the progress. July 5th is my one year, I’m getting… Try to get my health back. I’m going to be sharing some of the details on it, doing a victory lap because we love victory laps. Hey, if it’s the first time to this podcast, first of all, welcome, where have you been? We missed you. A couple of things here, we’re going to talk about publicly traded companies, but don’t take any of this talk as investment advice. The other thing I want to say, and we’re going to talk about it a little later, both Dan and I are using new Copilot+ PCs as part of this, you might have noticed that Dan’s camera is aligned a bit differently.

Daniel Newman: Haven’t hooked it up to the big system, but I’m also just playing with it.

Patrick Moorhead: Yeah, it’s good stuff, and we’re going to go into our personal impressions on that. So we’re going to talk about Micron earnings, we’re going to talk about, like I said, our first personal impressions on Copilot+ PCs. I’ve got the Microsoft Surface and I think Dan has a Lenovo Yoga. We are going to talk about Google killing the endless scroll in search, what does it mean? Why is it an actual podcast chapter here? Let’s find out. Lenovo had some AI announcements, not to be outdone by Dell and HPE. Lenovo actually had their big tent event first, so coming in it was a good thing, and we’re going to break those down. Google updated its Vertex AI, there are a couple of GA and new things in there, so important they rolled out Thomas Kurian to the analyst community. And we’re going to talk about some Synopsys announcements at the DAC conference, and I’m going to tell you what the DAC conference is if you don’t know it. So hey…

Daniel Newman: How could you not know? I mean, everybody knows what the DAC-

Patrick Moorhead: I mean, Design Automation Conference, I don’t know.

Daniel Newman: Even a soccer mom knows what this is.

Patrick Moorhead: I know DAC’s been around forever, sponsored by IEEE, ACM, blah, dee dee, blah. Can you believe we didn’t go? I mean, I don’t know, Dan, if chips are eating the world, which they are, the design tools that go into them… I don’t know, it’s a combo deal. But hey, let’s jump in.

Daniel Newman: Sorry, Pat, can you do that intro again? I got distracted, I was updating my completely made up NVIDIA price target that I’m about to put out this morning.

Patrick Moorhead: It’s the thing to do, just make it up. And even if you’re not a certified financial analyst, you can do that these days.

Daniel Newman: Yeah, it’s going to be $1 billion, Pat.

Patrick Moorhead: Yeah. Well, hey, let’s dive in. I mean, we’ve got the AI craze, we’ve got AI PCs, AI smartphones, is Micron performance actually reflecting this?

Daniel Newman: Yeah, Pat. So I think there was a lot of what I would call first network effects of the NVIDIA craze, the AI craze, and who are the companies that are immediately benefiting? That’s probably the question you and I are being asked more than anything by the media, and of course by our customers and by end customers, is, “Okay, where else is there value?” Not to say that there’s not enough NVIDIA to go around, but as that has rocketed, there’s a lot of questions. CapEx, is it all being pulled forward? Are people actually consuming this stuff? And if it is, where is it being consumed? But in the meantime, whether it’s being consumed or not, if these companies are going to stand up services around AI, around H100s and Blackwell, and in the future around Rubin, they’re going to need a whole bunch of what’s called high bandwidth memory. And who are the companies doing that?

Well, obviously we’re talking Micron here. You got companies like SK Hynix, you got Samsung and others that are all going to play in this space. And so people are really on the edge of their seat looking for Micron to have a banger of a quarter, because if NVIDIA had one, the expectations, especially because of the pairing of Micron with NVIDIA hardware, would be that this would’ve been huge. So the result, it was good. I wanted to be like, “It’s great.” But it was good, it was good, they beat on earnings at 62 cents. They had earnings, which by the way, for Micron wasn’t something they’d actually had for some time, they’d had losses. And on the revenue side, they came in at $6.81 billion. Look, the actual capacity for this particular type of memory is sold out. So probably one of the reasons that there was a 7% up to maybe even a 10% decline in the wake of this comes down to a couple of things.

Long and short, it came down to this: in the process of Micron selling this out, they did it through a vehicle called long-term agreements. And they set up a number of long-term agreements, where effectively they probably got 5 or 10% price premiums over where they were at. But I think the market really was hoping that they would have some capacity that they could take advantage of. Because anyone that knows the memory market knows it’s heavily volatile, heavily cyclical, and very commoditized historically. So this was that inflection for the first time in a long time, that memory had the opportunity to be so important and so unique to this secular trend that is AI and GPUs, that I think the market was hoping that Micron was going to be able to come out and say they were drafting some huge contractual growth in their LTAs or long-term agreements at a big, big premium.

And I think that’s really why the market soured a bit, was they did sell out, they’ve sold out for the longer term through 2024 and 2025, but not at the premium the market had hoped that they would sell out at. Having said that, I’d also attribute some of the selling just to these exuberant expectations. Meaning people wanted a big beat, they got a beat, it wasn’t a huge beat, the guide was okay, these LTAs, all this together. So people are like, “All right, it’s good. I made some money, I’m going to get out.” But thesis, for us, we’re not equities guys, I’m not actually adjusting a spreadsheet here or setting any price targets despite my joke earlier. But what I will say is, how important is this particular technology? It is very, very important. And this whole wave… And by the way, it’s not only about AI, and GPUs, and data center, it’s also about AI PCs. Which, Pat, you and I are playing on here right now doing our podcast.

It’s about next generation smartphones, super cycles of iPhones and Android devices that are going to be AI powered, all of which means more content from Micron, not just HBM, more memory content across the board to support all this additional computing. So it’s hard to not think Micron has a bit of a trajectory over the next few years, but it may not run as far as fast as an NVIDIA. First of all, memory never gets the same cool factor as logic. And second of all, these LTAs unfortunately sort of strapped them in. It was a hedge in the short term, but it might’ve been the wrong bet in the long term, and I think some investors are sort of putting Micron in the penalty box. But overall, what a run. I mean, look, they went from huge losses to huge profits, good turn, they made some good decisions, I’d give it a B plus.

Patrick Moorhead: Yeah, I mean, I look at year-on-year, and they’ve nearly doubled revenue. They were losing 668 million on the gross margin line, they’re now making 1.8 billion. I mean, non-GAAP EPS went from losing $1.43 to making 62 cents a share, and I just think it’s a phenomenal turnaround. The market had a lot to do with it. But also if you remember, Micron was very conservative. They pulled back on some CapEx, they pulled back big time on expenses. And like we’ve talked many times on here, this is the wheel of the memory and storage market, it is boom and bust. And the interesting thing is that I think there’s a lot more to come. What hasn’t hit yet is the effect of the AI PC and the AI smartphone, and that comes in two forms. Let’s just pretend for a second that the AI PC and the AI smartphone accelerate PC and smartphone run rates by one year. I mean, that would be phenomenal, that’s a 25 to 30% boost right there. We’re already seeing pricing going up.

And then the other element is the memory capacity and the storage capacity for AI PCs and AI smartphones are going up. We saw with Apple’s disclosure, you have to have at least eight gigs of RAM, and most Apple phones have four and six gigs of RAM. And then if you look at Copilot+ PCs, the minimum base is 16 gigs, whereas it’s very typical elsewhere to see an eight-gig RAM notebook. I never recommend it, but you see them out there. Heck, you even see four gigs of RAM. Well, let’s talk about the storage size. A good thing I like to look at: I downloaded LM Studio from the Snapdragon site, and the size of these small models is between four and six gigabytes. And if I need to put 40 of those on my system, and by the way, there are 40 of these on the new Copilot+ systems out there, you can do the addition yourself. So these are elements that I don’t think are factored into anything from Micron. So good job, Micron, keep it up. Really interested to dive into their hypothesis on AI PC and AI smartphone.
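The addition Pat leaves to the listener works out like this. A quick sketch using the figures from the conversation; the per-model sizes and the count of 40 are the hosts' estimates, not official specs:

```python
# Back-of-the-envelope storage math from the discussion: small on-device
# models downloaded via LM Studio run roughly 4-6 GB each, and the hosts
# cite about 40 of them on new Copilot+ systems. These figures are the
# speakers' estimates, not official specs.

MODEL_SIZE_GB_RANGE = (4, 6)  # rough size of one small model, in GB
NUM_MODELS = 40               # count mentioned for Copilot+ systems

low_gb = MODEL_SIZE_GB_RANGE[0] * NUM_MODELS
high_gb = MODEL_SIZE_GB_RANGE[1] * NUM_MODELS
print(f"Models alone: roughly {low_gb}-{high_gb} GB of storage")  # 160-240 GB
```

That 160 to 240 GB for local models alone, before the OS or any applications, is the point being made: storage capacity per device has to rise.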

Daniel Newman: Hey, Pat, just a quick one before we jump over. For everyone out there, are you showing us as live on LinkedIn? Because I don’t see us today, I’m wondering if we got disconnected somehow. We can share this up later, but just something I wanted to make sure.

Patrick Moorhead: Wow, yeah, I think you’re right. Yeah, we are not live, that’s a bummer. We’re going to have to talk to the control room about this.

Daniel Newman: We’ll hit them up later, we’ll get it posted on a delay for those in our LinkedIn community, because we’ll need to get this up and shared over there, buddy.

Patrick Moorhead: Yeah, totally. I mean, do you want me to just go live right now?

Daniel Newman: I mean, would it be just lagged live? Would the whole thing show up later for people? Or how does that work? I don’t even know.

Patrick Moorhead: I have no clue, but I’m looking…

Daniel Newman: Let’s just put it up later, and we can share it later.

Patrick Moorhead: That sounds good, thanks for the heads-up. But control room producers, you guys are fired. Hey, let’s jump into the next topic, and that is our first personal impressions on Copilot+ PCs. So Dan, you and I were both in Las Vegas last week as people were getting shipments of their systems. My system had shipped, and the problem was it was sitting in my office waiting to be unboxed. So here we go. So first off, we are doing this broadcast from our Copilot+ PCs, I’m on a Surface and Dan is on his Lenovo Yoga. First of all, if you want performance data, benchmarks, battery life, thermal scans, the data, hit the Signal65 website.

Signal65 was the first company to come out with any validated benchmarks that didn’t come from a manufacturer. I’m glad to say those benchmarks are holding up with the benchmarking community. So my use case is going to be office productivity, and it’s high performance. So next week or this weekend, I’m going to be diving into the battery life. But what I did is I had a four-year-old desktop, the highest performance, most expensive desktop that you could get. You can imagine how many threads that might’ve had, it had a $1,000 graphics card in it. But four years ago it was the highest performance desktop.

So what I did is I essentially changed it out, and I bought a powered USB-C adapter. I have four cameras, sorry, four displays, a combination of 3K and 4K displays, external USB camera, external USB mic, RJ45 network. Now, my use cases were very productivity-centric, I’m not doing gaming, I’m not doing workstation, and I’m not doing video editing, even though I think this would be a great video editing machine. It feels just as fast as my giant desktop. Now, people might be saying, “Well, yeah, but you didn’t load up the threads.” I get it, just the fact that I don’t feel any difference between the two to me is absolutely amazing. Now, look at the applications that I’m using and this whole “is it compatible?” question: pretty much every application, nearly every application, is ARM64 native, and that essentially means it’s going to be more efficient on an ARM-based platform.

It’s funny, the biggest power hog in my first couple days was iCloud, and that was an x64 or x86 app, and it was syncing my video library with Apple Photos, so it was kind of ironic. The only application I think that I’m using that should be ARM native but isn’t is Adobe Acrobat, any time I open up a PDF. But Edge, Outlook, Slack, OneNote, Teams, Zoom, Word, PowerPoint, Excel, WhatsApp and Microsoft Photos, all native ARM applications. So you might be asking, “Hey, Pat, how was your AI experience?” Well, to be honest, I want Recall, and we all know the story with Recall. But I used Image Creator, which definitely hits the NPU. I haven’t found the need to use transcriptions yet, because I haven’t been doing an international conference call or something like that.

And because I have an external camera, I’m not using Studio Effects. So I really haven’t hit the A… I did download LM Studio, and I’m hearing in a couple weeks you can download the best models from Microsoft, Meta, and you can do on-device AI. You can do it today, but it’s not hitting the NPU, I’m told in a couple of weeks it will support the NPU. So I’m pretty excited, I can’t believe the performance that I’m getting out of this. And what can I say? Qualcomm, Microsoft, HP and Lenovo are delivering here. Why am I excluding Dell? I have not received my unit yet, the XPS, so I can’t comment on that. Dan, how’s your experience been?

Daniel Newman: Yeah, I mean, you hit a lot of the things. Look, I got hit a lot in… I shared a picture of me unboxing my new Yoga, and got a lot of DMs, people asking me, “Hey, can’t wait. When are you going to review it?” By the way, I don’t write reviews, so I won’t be reviewing it, this is my review for everybody out there that was waiting for my review. I’ll be talking about it pervasively, because that’s the world we live in. I’ll be TikToking and Instagramming, go over and follow me on TikTok, I’m big over there.

That’s a joke, you won’t find me on TikTok. I use the China version of TikTok to get better at calculus, that’s the version I use. But anyways, look, I did get hit up a lot. I got hit up by some CEOs from some of the actual companies in this space asking for that initial impression. It was funny, because I started hitting them back with some of the data specs. And one of them came back to me and said, “Look, I don’t care about that stuff, we have that stuff.” He’s like, “Give me the layman’s response, how does this thing work for you?” And so let me tell you how I qualified it in that conversation.

I basically said, “I have an M4 iPad Pro.” I said, “It’s got the same sort of on-off feel to me, it’s got instant on, it fires right up, I can get right into my apps, the latency feels very low, the user experience feels very frictionless.” That’s my initial impression of the Snapdragon X Elite on this Lenovo Yoga. Battery life, I’m still playing with it, Pat. I don’t code, I don’t game, I’m really not any fun. I just sort of endlessly doom scroll, read, and then comment on X. By the way, I take a lot of selfies, apparently, I’ve been criticized for that. But it turns out that there’s more value in the selfie than the actual analysis, because what people engage with is the picture. And if anybody watched the debate last night and doesn’t believe that we’ve gotten stupider as a society, you’re not paying much attention, which would make sense based on what I just said.

But the overall impression that I have, Pat, is I haven’t charged it in three or four days since I opened it. So I’m literally trying to use it the way I am in an event. So I’m bouncing between the iPad, my phone, this device, opening it up, doing some emails, jumping on a Zoom call, closing it back down, throwing it in the bag, and again, not charging it. And three or four days now… And let me just pull this up right now of using it this way. I’m still at 72% battery life, that’s totally different. And I have a Surface device that runs on a different architecture, there’s no need to call it out here. But let’s just say if I close that thing down and put it to sleep in my bag and don’t charge, it’s not uncommon it’ll come back dead if I don’t truly power it down.

Patrick Moorhead: Yeah. And by the way, your Surface has the largest battery you can get on any Surface.

Daniel Newman: Yeah. And I mean, I like it, I’ve used it for a couple of years. And if you actually see the screen, it’s embarrassing how dirty it is. But that’s more of an artifact of me, I don’t carry a glass cleaner like my bestie does around with me and a little spray bottle, and a rubber band, and a container inside of –

Patrick Moorhead: I don’t do that either. Dude, you look like you vomited on your display, it literally makes me sick when I’m sitting next to you on an airplane.

Daniel Newman: There’s a lot of great work being done over there, and sometimes it doesn’t leave a lot of time for cleaning. But let me just say the early impressions are that this thing has met some of the objectives. If the objective was to create a better experience than what we’ve had historically, I think so far so good. I guess without Recall, the only thing I’ll say is I’m kind of sad, because that’s the one feature I was really sort of eager to use. Because I can’t tell you how often I can’t find stuff on my devices, because the traditional sort of file search systems do not work really well. Based on like, “Hey, remember that one doc we were working on where we were talking about this thing the other week and I was here?” And that’s how I remember things, I can’t remember how I named it or if I named it.

So let’s get that out there, okay people? Because that’s the one thing. The other thing we do need to do, Pat, is I need to get some models run locally on here. And basically what I’m trying to parse on that topic, and I know we got to move on here, but is, do I need to do that? When am I going to… Because like I said, I run Perplexity, Gemini, I’m running OpenAI, I’m doing them all. The latency, I have good connectivity almost everywhere now, so I’m trying to still figure out that running on device, is it a latency thing? Is it a power thing? Because obviously that’s been a sell point, but I think when you’re being realistic, the question I got about the practical everyday user, we got to make them care or they’re not going to buy it. So let’s get Recall out there. But so far, battery, instant on, frictionless, lightweight, and they look great. So I’d give it a… I’m into giving B pluses right now.

Patrick Moorhead: Yeah. Buddy, I’m on a search too, I want RAG, on-device RAG, I can’t believe it’s not integrated already into Windows Explorer, that’s where you would expect it. I mean, you can do stuff like… Well, actually, I haven’t seen any Copilot capability with RAG at all. Google just lit theirs up for me inside of Workspace yesterday, and I can’t do any RAG across all my files. Sure, I can pick this file, this file, this file, and because it’s web-based it’s difficult. But there’s got to be more to come, and this stuff has got to come out fast otherwise people are going to lose interest in it.

Daniel Newman: And maybe we can do it in Vertex. We’ll talk about that later, talk about that later.

Patrick Moorhead: No, sounds good. So hey, Google is killing endless scroll. Everybody knows how to do Google search, and you can just go down, down, down probably for 25 years you could keep scrolling down. Dan, why does this even matter? Why is this even a topic?

Daniel Newman: Yeah. Well, listen, I often use what the media’s inquiring about to understand what people care about, because they’ve got a really good gauge for that and they’re obviously trying to drive a ton of clicks and demand. And so NPR had reached out to me and asked me for some comments about this. And at first, by the way, first blush I was like you, I was like, “Who cares? Whatever, it’s a feature.” But here’s the interesting thing, it was only a couple of years ago that this endless scroll came. Because you remember you used to get to the end, unless you chose the endless scroll capability, you would search and you’d go to page two, and to page three, and to page four. And at first it was a mobile thing, on a mobile device, who wants to pick the next page? It’s hard to click next button, you just want to get the next results.

And obviously, as you know, Google uses an algorithm to try to determine the best results and put them at the top of the page. And if you remember, Pat, when we talk about client, and edge, and computing, we use the accordion analogy: everything gets centralized, everything gets decentralized, everything gets centralized. Well, if you remember in the web era there was what was called the fold, and it was always about what shows up for your site, or for whatever company, above the fold. Basically open up the site, what do you see? Well, the endless scroll was in an era where websites were scrolled, and searches were scrolled, and social media was scrolled. When you’re looking at your Twitter account, your Instagram account, you’re scrolling, you’re scrolling, you’re scrolling. So when you start to tie this back together to the behavior that we’ve been trained on over the last decade plus with social media and how we interact, can you imagine getting to the end of your page of Facebook posts and they say, “Click to the next page.”

I mean, that’s back to the MySpace era, maybe, if that even existed. So they’re going to sunset this feature. And so I instantly, as an analyst, when I was done setting my NVIDIA price targets, I started thinking about why this was going on. And Pat, the reason this is important is Google has hit the inflection where they’re ready to retrain our brains for the next era of search. This is the moment, people. The end of endless scroll is the beginning of the abstract period in which Google is going to dictate on the fold which results we care about through an algorithm powered by a large language model, where they’re going to be deterministically choosing sources, summarizing the inputs, and basically giving us less and less access to do self-determination, and more and more algorithmic and model driven search results to move people in this direction.

Second thing here of note and of importance, is we’re also about to enter a new era of monetization for search. The old way that Google made money, and by the way it makes most of its money this way, is through where people click, where you place ads and how you interact. Well, when you summarize stuff, it reduces the urgency for ads, it’s going to create a new sort of population model of how we decide what gets displayed, and then what people are going to click onto when they can book an entire trip and get all their insights instantly without having to go onto a Priceline or the like. And by the way, it’s going to also change an era of content creators, where we still have this whole content ownership debate that’s going to go on: who are the creators? Who owns content? Who has rights to content? How are they compensated?

Because all these large language models that are giving us all this value require people like Pat Moorhead writing amazing insights on X or on his website, that they need to be able to then scrape so that they can get an answer that is going to be considered the highest quality that then summarizes on a page. So the end of endless scroll is not that important in itself, but what it means about the future of how we’re going to interact with search and interact with data on the internet, Google is turning the page, the future is here.

Patrick Moorhead: Yeah, Google’s an interesting… And I want to separate the business side or the enterprise side of the business, just talking about the consumer side, it is an advertising beast. And if you think about that, the basis of that has been all about Google search, obviously YouTube is a huge driver. But you’ve got a $250 billion business here with super high profits, and generative AI comes in and potentially it could be a complete disruptor. I actually start most of my searches now on generative AI, whether it’s Perplexity or… And I’m not starting with Google that much. And if you think about that, you can’t just throw in a technology that is 10X more expensive and call it a day, ceteris paribus, all things equal, your expenses go up 10X, and that would be cataclysmic to Google’s profit. So they have to very carefully… They’re not a startup. Perplexity and OpenAI are losing billions, or OpenAI is losing billions, Perplexity is probably losing hundreds of millions.

And so Google has to look very carefully at how they integrate this in. LLMs are not the answer to everything, machine learning or standard index search could be the right way and is dramatically cheaper. So on the hardware side, you have the TPU, which is their own homegrown silicon, and infrastructure to be able to shoulder as many of these machine learning and generative AI capabilities at a much lower cost. And then they have to integrate generative AI into the search results in a way that doesn’t break the company. And they don’t want to over-provide or over-serve the market when something could be served a cheaper way. And then you got to figure out how you layer in the content providers, because you can’t piss them off, otherwise they’re going to block your scraper with robots.txt.

So yeah, a lot of moving things going on here. It has been nice to see on a classic Google search the… I think they’re called AI snapshots or something like that, and they’re doing reasonably well. I mean, I’ve been disappointed with Gemini on earnings data, it’s been wrong the last couple of quarters, it took about 12 hours for the system to catch up. And the bizarre part, again, that’s Google metering the way that it’s using it here. But anyways, good analysis, I feel like we have drained this topic for all its worth.

Daniel Newman: That’s what we do, we give the best analysis, Pat. And by the way, that content thing, I don’t know if you saw, but TIME and OpenAI did a deal yesterday, they announced. So we are going to see more of these content deals, because it has to happen if this search model is going to change. Sorry. But Pat, that was some pretty damn good analysis you gave there.

Patrick Moorhead: Listen, it beats selfie analysis I think.

Daniel Newman: Oh, no, let’s go.

Patrick Moorhead: Okay, let’s move to the next topic. So Lenovo came out with some new AI announcements, they’ve been coming out with announcements every quarter. So as you know, Dan and I go to these big tent vendor events, and I believe that Lenovo had their big tent event in December, Jensen got up on stage. But it’s been around six months, and again, I forget the exact month. And then we saw Jensen get on and do a group hug with Michael Dell and also Bill from ServiceNow. And then recently we saw Jensen get up on stage in the Sphere, gosh, was it… Yeah, last… No, week before last with HPE. So it’s always important for competitors and tech companies in the ecosystem to keep it top of mind. So Lenovo did make some affirmations, and they did make some announcements. And one of the affirmations was just a, “Yes, hybrid approach is the way to go.”

That’s music to my ears. I mean, I’ve been talking about that on standard computing for a decade, and a hybrid approach is leveraging the best of the public cloud and the best of the private cloud. The second thing that we saw is the services team is optimizing. And I got to tell you, I’m impressed with what the services team has done here. And they have some new AI advisory. They have fast track, they call it fast start, for NVIDIA AI Enterprise, for NVIDIA NIM, and for AI innovators. This is just a fast way to get up and running on these, which I think is music to everybody’s ears. I’m hopeful that part of that is there’s a data management portion in there. The other thing was liquid cooling.

I mean, Lenovo has been doing liquid cooling since it acquired the IBM assets of the X line I think a decade ago. But as we’ve seen from Dell and HPE, liquid cooling is cool now. And everybody is coming out, and this is just the fundamental trend that we need to bring in water cooling to be able to properly cool the AI infrastructure. There’s the sustainability element, there’s a power draw element, there’s a heat element. And Lenovo talked a lot about their sixth generation liquid cooling, they believe that they’re literally quote, unquote miles ahead of everybody else.

I’d love to do a Signal65 analysis to test these claims, they have cold plates, on-memory cooling, warm water cooling across pretty much all categories. And I had somebody from Lenovo share this with me, a pretty high point, the install for Digital Realty… And Digital Realty is a little bit similar to Equinix, a little bit not, but a place that enterprises can go to stand up their own infrastructure without having to have their own data centers on site. And it was a high-end liquid-cooled POC, about one and a half million dollars with Exalted. And I’m just interested to see, once this gets fully installed, I’d love to send the Signal65 team in there to see if there’s something there.

Daniel Newman: Look, Pat, you hit the topline. I like the campaign and the concept. We just got to spend some time with the leadership team, the ISG CMO, Flynn Maloy, leading the charge on this smarter AI. And they’re really focusing on the “for everyone” part. And by the way, this is a trend line of a theme that these companies are going to have to battle. You mentioned the Dell conversation, you mentioned the HPE keynote, you mentioned what Lenovo’s doing. They’re all battling for this idea that they can democratize and simplify the deployment of prem-based or hybrid cloud-based AI that can be integrated quickly for enterprises from 50 through 5,000. The biggest companies are going to figure this out, they don’t need help. They’ve got the teams, they’ve got the people, they’ve got the access. But as you move down to companies in the hundreds to the thousands, there’s a lot of work to be done, and this is really complicated.

And so you heard three clicks to launch, you’re hearing instant Fast Start opportunities to take a proof of concept using NIMs and AI Enterprise and putting this thing together: server, storage, software, services, consulting. How do we deliver this stuff fast? How do we make this stuff seamless? How do we drive more value? And how do we get it out super-duper quickly? So looks like I just lost Patrick, hopefully he’ll be coming back soon. But the overall impression that I have… Oh, hey, you’re back, I thought maybe I’d lost you forever. I thought that was so good, you’re like, “You can do this alone.” I don’t even need you here.

Patrick Moorhead: Did you miss me? Did you miss me, bestie?

Daniel Newman: I was getting nervous, because I was like, “I’ve almost said all I’m going to say, and we’re going to have to move to the next topic, and we don’t have a host. And I’ve barely ever done this before, first time ever on a podcast.” How am I doing, everybody? Am I doing okay? I appreciate you. But look, I mean, with Lenovo, they’re aggressive, they’re ambitious. What they’re trying to accomplish, they’ve done really well overcoming some of the international challenges, the domicile-based challenges they have; they’ve gotten the right sort of approvals at a security clearance level. They’re working with the biggest enterprises, they’re competing, and they’re selling into hyperscalers too, which is something that’s been pretty unique from the OEM perspective.

So look, this is going to be played out in the public over the next several quarters. But making it easy, making it accessible, and doing so in a way that helps companies get off the ground very fast with their AI ambitions is what the market is going to be looking for. So now it comes down to one thing, Pat, results, do they get the results?

Patrick Moorhead: Maybe that’s a great way to end that up. So let’s move to the next topic. Vertex AI is the AI platform for both machine learning and generative AI at Google, and the company rolled out Thomas Kurian for some updates.

Daniel Newman: Yes, we got some updates this week. And look, Pat, I’ve been impressed from sort of day one, day zero on Vertex. They really did set out… You heard my whole diatribe about making it easy, making it simple. Well, look, part of the challenge for the OEMs is that the cloud providers are really ambitiously and rapidly working to make these solutions easy to digest and consume in the cloud. Of course, the cloud providers are making it connected and accessible to hybrid and on-prem. But where the workloads start and end and where AI starts and ends, it’s really shifted the entire cloud space. I said, “The cloud world order has changed in the era of AI.” I’m still assessing exactly where everything lands, but it’s changed. And that’s because multi-cloud has proliferated really quickly in the era of AI, because different clouds have different capabilities. But we are starting to see companies trying to figure out which tools they want to standardize on, which environments they want to build on.

And Vertex was compelling coming out of the gate; it’s Google Cloud’s AI development platform. And really, what did they focus on this week? They focused a lot on grounding, but they’re also focusing a lot on these enterprise-ready experiences and creating higher fidelity, easier-to-connect outputs. And there’s two big things I took away from this week, Pat. One is moving beyond that broad internet LLM search, because Google of course has to keep working on grounding and the quality of outputs, and they’ve had some stops and starts there. But one of the things that they’re doing is bringing in these high-fidelity, important data sets that are outside of Google’s data, meaning Moody’s data from a financial services company being made available, Reuters data, ZoomInfo data, that can now be part of Vertex and the search experience so that enterprises can get more value, higher fidelity answers, grounded to the type of outputs that can be trusted.

The second thing that they’re working on is a high fidelity moat. And this, Pat, is where you talked about RAG: the ability for enterprises to start tapping into not just Google’s broad available internet search data and the third-party data that I just mentioned, but also to source your own information, your own corporate data sets, to tie to what Gemini is doing and to what those third parties are doing to create the highest quality outputs. Pat, you and I have talked a lot about this, but the winning formula for AI and generative AI has to be a combination of some proprietary data that no one else has, coupled with well-designed, accurate language models, and then, to what they just did, coupled with other maybe for-sale public and private data sets that can then complement to create the best outputs, what Google is calling highest fidelity.

You and I test this stuff regularly, you actually publish these tests from time to time when you’re asking about things like earnings, you’re asking about things like company product launches, trying to get to the right answer, you’re seeing these things are still not accurate enough for us to trust. Hopefully nobody’s writing articles with this crap. I’m kidding, it’s not crap, it’s good stuff. Because it needs accuracy layered on top, Pat, so these are ways to get us to that accuracy faster. Some good steps for Google, appreciated them putting Thomas Kurian forward and sharing some of this with us. Pat, over to you.

Patrick Moorhead: Yes, this was a great follow-up to Google Cloud Next, because a lot of the updates here were, “Hey, we’ve taken it to the next step.” It’s generally available, or it’s in public preview, or if something was teased, it’s in the beta category. But their announcements really were about, again, making the results better through bringing in different data sources. It was also about lowering cost, if you look at Flash as an example, and the SLA, which is more like provisioned throughput, and that also hits on capacity and price. And also teasing or reinforcing that, “Hey, Google DeepMind, we are keeping the cool stuff coming.” And that might be maybe… I hate to say this, maybe reactionary to what we saw from OpenAI, where we still haven’t seen all of what 4o can do. And I know we’re not talking about OpenAI here, but I do feel a little bit deceived by what OpenAI showed on stage and what is reality right now.

One thing that was not a part of this that I do think Google should consider a victory lap is, as I’ve said very publicly, the front end of my enterprise is Microsoft, like Word, and PowerPoint, and Excel, and even Outlook and OneNote, but the backend is Workspace. And I have multiple modalities. For instance in Gmail, when I want to get through something very quickly, boom, boom, boom, boom, boom, boom, I use the Gmail front end. By the way, of course my Outlook front end is interfacing with the Gmail backend, but with Workspace, Gemini hit all of my Workspace applications, and I’m very excited to put more thought into this. It’s also in Gmail, by the way. And if there’s something that could… What’s the right word? Kick me off of a Microsoft front end and move me to a Google front end, it could be if Microsoft takes too long to integrate this capability into, let’s say, Outlook. Where you can go into… I can go into Gmail right now and tell it to get… I literally went in and said, “What were all of the announcements that I received this week?”

And boom, it spit out most of our topics, Dan. Because I had received emails… By the way, it was after our conversation of, “What are we going to talk about this week?” It was a slow news week, and they all magically popped in there. I can now put an email in, “Hey, summarize any deliveries that I should have received.” And boom, it’s like, du, du, du, du, du. So man, there’s something there. And Google didn’t talk about that, but they should be doing a victory lap on at least getting out there. I’m a little underwhelmed with the RAG-based capability. And gosh, please just put it in Explorer where all my files are and let me do this on device.

Anyways, let’s move to our last topic. And that is some of the announcements Synopsys made at DAC. You may be like, “DAC?” DAC is the Design Automation Conference, it’s been out there forever, it’s essentially all about EDA related to chip design, platform design and end product design. It makes perfect sense for companies like Synopsys and Cadence to go to. And I would say… So first of all, it was a reaffirmation of what the company is doing well, and also, I would say, a reinforcement of why Ansys is a great acquisition. You can’t build a chip or a chip fab without it, and you can’t build a system without EDA or design tools. One of the things that popped out, it’s so funny, you see how companies say things without saying it because they really need to watch how they do that. And there was a little blurb in their blog or press release that talked about AMD and NVIDIA cranking out a new AI processor every year.

And the punch line, I think, is Synopsys. And again, I don’t know where Cadence fits in this. But it was really cool how nobody had spotted that when Jensen got up on stage and AMD made a little announcement on going to an annual cadence: it’s design tools. They’re not adding 10,000 designers; what they’re doing is leveraging the AI capabilities of what companies like Synopsys are doing with their AI-related design tools. So there was some news here, and that’s that Synopsys.ai is fully qualified for Intel Foundry Services’ EMIB packaging technology. And by the way, they already do Foveros, so that’s included, I think the company should have pointed that out. And that follows on getting Synopsys.ai certified for Samsung’s two-nanometer GAA process. So it’s just fascinating to see how quickly this is moving.

And it’s funny, Dan, you and I have gone to these enterprise SaaS companies, and we’re like, “What customers? How much money did you save?” I feel very confident saying I’ve seen the best examples from Synopsys. So again, I’m looking at a slide right now: 13x improvement in verification, 10x improvement in design, 15x advancement in simulation, 15x improvement in manufacturing through computational lithography. With examples: NVIDIA L40, NVIDIA Grace Hopper, NVIDIA cuLitho, NVIDIA DGX. And it’s really cool to see from an EDA company, and in this particular one it’s about silicon, because they haven’t closed the Ansys acquisition yet, but to see the benefits rolling up there. Now, I’d love to talk to some of these customers. In fact, I would love to do a Signal65 economics analysis across multiple customers to validate these claims. Anyways, great show for Synopsys. I’m getting the download from Cadence a little bit later, I just haven’t had the time to dive in there yet.

Daniel Newman: Yeah, you hit a lot of the high notes, Pat. I mean, I think the summary, the TLDR here, is that we’re not going to get yearly massive generational improvements without efficiencies coming from partners like Synopsys. So when you hear Jensen touting a roadmap at NTU, at Computex, and everybody gets excited, and when you see these market models that show annual upgrades and the hyperscalers making massive investments generation to generation, they need tools so they can get these things verified faster, get them simulated, tested, designed, and out the door. And Synopsys is building some of the most powerful tools in the market to be able to accomplish that, Pat. So that was really the focus that I saw here. This has been something that you and I have been talking about for a while.

If I could be a little silly for a minute, I really hate the name cuLitho, it looks like the Spanish word for something that’s in your rear. I don’t know what they were thinking with that one, that seemed like a miss to me. Sorry, I couldn’t help myself when you said that. I think we’re all pronouncing it C-U Litho, because we don’t even want to say it out loud. But look, I mean, these results can’t be argued with: double-digit efficiency and performance gains. And like I said, right now, doing this alongside NVIDIA, there’s just not really a better testimonial you can have in the market for what you are trying to accomplish. Of course, being certified across NVIDIA, and AMD, and Intel gives them the platform, and adding all these generative capabilities, documentation, etc., it’s very, very powerful, Pat.

So I think you hit on most of it here, so I’m going to just give you the wave and say good job. And obviously, Synopsys, there’s a reason you’re growing so fast, there’s a reason that the market’s been incredibly excited about you. And by the way, going back to how I started talking about first, second, and third network effects, this is definitely an upstream first network effect of the AI boom, because somebody needs to be expediting the design, and verification, and validation for these companies, and Synopsys is helping them do that.

Patrick Moorhead: Love it, great show. Dan, what are you doing for July 4th here?

Daniel Newman: I was thinking about taking a half day, maybe barbecuing. No, I’m going to get out of the country the week after. So my daughter from college is back with some friends, I’ll have the whole family together, which is going to be nice. Since I only eat one meal a day and I only take 500 calories in as I’m trying to get my weight down under 120 pounds, I’ll probably eat one bite of potato salad as a big cheat moment. Just kidding. I give Pat a lot of crud out there, people, because he’s gotten so skinny and so beautiful so quickly, and I just sit here and continue to think about pizza all day long.

Patrick Moorhead: I got to tell you, I don’t even remember the last time I had pizza, it’s pretty sad, pizza is so good, like Homer Simpson. I’m going to be spending… Hey, thanks for asking what I’m doing for July 4th, Dan.

Daniel Newman: You’re welcome.

Patrick Moorhead: I’m going to hang out at the lake with the family. The whole family’s coming out, and each one of them I think had one friend they were bringing, so it’s going to be interesting, get out on the boat, hang out, try not to get too much sun. I need one of those big brim… Are you done?

Daniel Newman: I was so bored, I can’t believe we’re talking about… All I was thinking about was, are you going to tell everybody about the yacht? Because when you say the boat, you really undersell it. I mean, what is that…

Patrick Moorhead: Are you kidding? It’s a freaking pontoon boat.

Daniel Newman: No, no, it’s 140 footer, and I think you had the bow extended out to put the chopper.

Patrick Moorhead: Yeah. Dan, don’t tell all of our secrets. I mean, it is a company asset, so it has a Six Five flag on it.

Daniel Newman: It does, it says, “Signal.” And it says, “Six Five.” But unfortunately I can’t seem to get any use, because Pat keeps it out on the harbor of one of his various mansions around the world. I’m kidding, everybody. Just know if you haven’t figured it out watching the show, I joke a lot.

Patrick Moorhead: It is a pontoon boat. Gosh, it does fit 13.

Daniel Newman: You know what? I’ve had a great time out on your pontoon, I appreciate that, Pat.

Patrick Moorhead: You got it. You didn’t get the invite?

Daniel Newman: I didn’t get the invite for 4th of July.

Patrick Moorhead: I’m sorry. Hey, I want to thank everybody for tuning in. We hope you have a good July 4th if you’re here in the United States, if you’re outside of the United States or if you’re in the UK, just be glad that we’re not part of the same thing. If you watched the debates last night, I saw some of the funniest memes where the UK wants to… They’re going to need to send over the king because we obviously can’t govern ourselves effectively.

Daniel Newman: I like the ones with Putin, and Xi Jinping, and Kim Jong Un all calling each other like, “Are you watching this? Bro.”

Patrick Moorhead: No, that was so good. I mean, I don’t know how to put a filter for memes in on X, because I filter so much content it’s not even funny. But anyways, y’all have a great weekend, thanks for tuning in, we do appreciate you, bye-bye.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
