
We are Live! Talking Intel, Broadcom, VMware, Luminar, Groq, Apple, AWS

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The six handpicked topics for this week are:

  1. Intel’s IFS and Arm Collaborate on 18A
  2. EU Issues Broadcom Statement of Objections Over VMware
  3. Luminar Completes Bring-up of Mexico Plant
  4. Groq Day
  5. AWS Shows Its Generative AI Hand
  6. Apple Mac Sales Decline

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Watch the episode here:

Listen to the episode on your favorite streaming platform:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Pat Moorhead: Hi, this is Pat Moorhead, and we are back for another action-packed Friday morning Six Five Live with my bestie, Daniel Newman, founder, CEO, chief poobah, no longer of Futurum Research, but of the Futurum Group, a very diversified array of services companies to bring us joy and happiness. Daniel, how are-

Daniel Newman: Do you want to read the whole press release? Let’s do this. Let’s make the show this week all about me. But because it’s not all about me, Pat, but you know what? Why not? So, hey, good morning. You know what? The only thing better than a self-driven victory lap is a you-driven victory lap on my behalf. So this is a good one. You can just tell a show’s going to be good when it starts off this way, then just seeing how much love we can swing back and forth, sling back and forth. But yeah, buddy-

Pat Moorhead: Dan, it’s your spry age. I mean, you’re kind of like a kid, one of my kids, and all the success you’ve had, I’m proud of you in everything you’ve done. But, hey, if you’re new to the Six Five, you know that Dan and I banter back and forth, but if you’re new too, I’m gonna have to wonder what your problem is.

Daniel Newman: What are you doing?

Pat Moorhead: Because we are the leading technology analysis video and audio source of truth. So anyways, we’re going to talk about six topics, 5 to 10 minutes each, maybe longer if we want, if it’s interesting, sometimes less. We’re also going to talk about public companies, so don’t take any of what we talk about as investment advice. And Dan, as you know, I like to say just do the opposite of anything I might even infer.

Daniel Newman: Yep, yep. I think that might be even too much guidance. It’s just not investment advice. It’s just really great analysis.

Pat Moorhead: If you knew how much money I lost in the last two years or in the last two weeks, you would definitely not be following out there. And if-

Daniel Newman: Nobody who was bullish made money in the last two years.

Pat Moorhead: I appreciate that. Dan, you know what? I feel better. So let’s jump in. We have a great show for you. We’re talking Intel and Arm playing kissy face. We’ve got the EU issuing a statement of objections to Broadcom over VMware; the saga continues. Luminar completed the bring-up of its Mexico plant, stunning the haters. Groq Day four was up, too, with some awesome AI goodness. AWS shows its generative AI hand, the complete hand, not three-card monte or anything like that, the full array of generative AI goodness. Then we’re going to end with Apple doing a face plant this quarter on MacBook.

So hey, I’m going to jump in and call my own number. Let’s talk Intel IFS, which is Intel Foundry Services. Intel and Arm announced “a multi-generational agreement to enable chip designers to build low-power SoCs on 18A,” mobile SoCs first, but one that opens up the capability for auto, IoT, the data center space, and government.

Daniel Newman: Hey, buddy.

Pat Moorhead: Yes?

Daniel Newman: Do everyone a solid, what is 18A?

Pat Moorhead: I’m going to go into that. 18A is 18 angstroms, a little bit less than two nanometers. But quite frankly, we used to have truth in microns, nanometers, and angstroms. The number is not actually the width between different transistors anymore. It is just kind of a made-up designation now, whether it’s five nanometer or anything else, and that’s why it was nice that Intel went to angstroms.

But here’s the backdrop to really understand what’s going on here. The majority of leading-edge wafers are manufactured in Taiwan. You have China circling gunboats around Taiwan in the spirit of reunification. There are rumors out of DC that there’s a plan to blow up TSMC wafer fabs if China attacks. If that happened, well, all the leading-edge chips that go into NVIDIA, AMD, Marvell, and Qualcomm products would come to a screeching halt and we would go into a giant recession, maybe even a depression. Yeah, I don’t know.

Enter Intel and its IDM 2.0 strategy. IDM 2.0 was a plan to boost its own internal manufacturing and create a foundry for the third time, and by the way, they’ve signed up NVIDIA, MediaTek, AWS for packaging, and a mysterious large CSP whose name will not be uttered yet. So that’s the Intel strategy: come in with perfect timing, shore up its own business, but also help solve some of the geopolitical challenges with the US and Europe.

So let’s break this down. First off, this is mobile first. Is this to not take on too much, to keep the risk lower? But then again, mobile is the most competitive sector right now, right? What about Arm-based PC processors too? Makes you think. So you have MediaTek, right? You have Apple, you have Qualcomm, but I’d love to see this narrow in to talk about Apple M-series PC processors and Qualcomm Snapdragon Oryon processors as well.

The other thing: I said it’s multi-generational. What that means is nothing more than multiple versions of Arm IP. Arm has different instruction sets and generations, but they also have different versions of IP, particularly in mobile, that they bring out on an annual cadence. Also note US and Europe, not just the US. You have Western Europe organizing and wanting its own factories in its own regions as well. One thing I noticed too, Daniel, is who was quoted, right? The level of the person being quoted is always important, and in this case it was Intel’s Pat Gelsinger and Arm’s Rene Haas.

Rene Haas actually called Intel “a critical foundry partner for our customers,” which I thought was huge. So I think this is great news for both Intel and Arm. Arm wouldn’t provide the resources if it didn’t see a chance of IFS success with 18A. That’s the good news. And while I don’t see Arm getting any more mobile market share, I think they have 99% of mobile, I could see this as an opportunity in the PC space, with Apple and Qualcomm, if they are in fact fabbing here, to gain PC market share. So that’s it. Big announcement.

Daniel Newman: Yeah, and while 99% sounds like a lot of market share, we can expect the market to get bigger, and that is another opportunity as mobile continues to expand. You mentioned PC, for instance. Well, mobile compute is changing, so Arm’s potential participation in the PC market is going to grow substantially.

And so I think it’s a good alignment for Intel too. Intel has to be thinking about hedging. For a long time it was deeply rooted in x86 or nothing. And we’ve seen what happens when you start to see shifts in behavior away from one particular processor or architecture to another, and the foundry strategy of Intel, being able to manufacture chips for Arm or RISC-V, is obviously something the company can hedge on. If market share dwindles in any part, or the x86 architecture loses momentum in certain key markets, they’re able to continue to monetize that strategy.

So I think it’s smart on Intel’s part to be aligned and be diversified. And with the company making huge bets and huge investments in manufacturing in the US and in other allied nations as the Asia risk continues to heighten, despite not much conversation about this, they are well positioned, if something were to happen, to be the de facto option, because there’s really no one else to pick up the slack there.

So I think there are a lot more partnerships with Arm to come. I think Arm will continue to be thinking about alignment. The company is in the middle of trying to go public. It’s going to be important that the market sees how they’re going to align and compete and how they’re going to participate in the ecosystems. And so I’ve been impressed with Rene Haas as the leader of Arm, and this was a partnership, like you said, that clearly is gaining attention at the very top of the organization, with both of the number one leaders of the companies participating. So while it’s a little technical in nature, it’s a big moment. Note it, pay attention to it, watch it. This should be something good.

Pat Moorhead: Yeah, so this fills in a lot of the blanks around the original announcement a couple of years ago about supporting different instruction sets, including x86, Arm, and RISC-V. Good stuff. So, next topic: the EU issues Broadcom a statement of objections over VMware. Is this new? Is this a retread? What’s going on here? It seems like Broadcom’s competitors got in earlier.

Daniel Newman: You and I are both tracking this very closely, and this is kind of what I would call a long tail of continued regulator interventions in this deal. This is not going to fly through. I think what we’ve realized now is that a deal of this size, in this particular economic moment, with so many implications, is going to receive significant scrutiny. I made a comment yesterday on Twitter along the lines of, “Well-informed regulators will realize that there is not a lot of antitrust risk here.” Our friend Ben Bajarin came back, put a laugh on it, sent a tweet back, and he said, “‘Well-informed,’ smirky face.”

This is the challenge. Regulators right now are almost looking for something to object to, looking for a deal that shows they’re taking some sort of proactive action, but this, to me, still isn’t the right deal. We’ve been over this on the show, Pat. This is just another regulator coming in with something very similar to what the UK did a few weeks back, with comments that suggest there could be some restriction of competition. I just don’t see it. It’s not even in the same stratosphere of risk as something like the App Store, which has been completely unregulated or had minimal regulation.

This is enterprise, and there’s choice in this space, Pat. Nobody has to use VMware. There’s not a single enterprise that is stuck on VMware and can’t use anything else. There are so many open source options with Kubernetes, you can go to Red Hat, you can use SUSE. There’s Docker, and there are lots of different ways to go multi-cloud.

And of course, with some of the legacy technology, this is the Broadcom strategy. The Broadcom strategy is that legacy technology gets less investment. They see it through to sunset, they price it optimally as a business, and if you don’t upgrade or optimize your IT architecture to the newest thing, you might pay more. But that’s not necessarily – they have to support it. They have to support something that was never really planned to be supported. If you’re updating and upgrading and you want to use the newest and greatest from VMware, then I think it’s going to be a perfectly good deal with minimal antitrust issues, if any, and in those instances, like I said, every enterprise has the controls to do something else. You can work with a different public cloud provider. You can work with an open source provider.

So Pat, I feel like this is just regulators looking for something to regulate. And of all the deals, the only thing that makes this one really prominent is the size. It’s a huge dollar amount. And like I said, it’s something regulators can beat their chests about and say, “Look, we’re doing something about this.” But in the end, Pat, I still stand by my belief that this does go through, but it’s going to take some real diligence and patience from Broadcom, because the regulators are going to keep it challenging.

Pat Moorhead: Yeah, so the EU had talked about having concerns, but it hadn’t laid out a formal statement of objections. I think the good news is that last December the commission identified five issues with the deal, but in this latest one it’s down to two. Now, interestingly enough, it’s using the same logic as its initial challenge. It took out NICs, but it talked about foreclosure on two kinds of products: Fibre Channel host bus adapters and storage adapters. So I wrote a piece on this, gosh, back in February. It all still applies. It would literally be economic business suicide for Broadcom to do this.

So Dan, a vSphere license per CPU is 10 grand, okay? Storage adapters and Fibre Channel HBAs, we’re talking about $500, okay? So essentially, the EU is saying there’s a risk of Broadcom favoring this $500 piece of hardware at the expense of $10,000-per-year vSphere licenses, which makes absolutely zero sense. And as you said, it is not, let’s say, easy to get off of VMware and onto either open source or another competitor, but there are literally handfuls of companies that do just that.

Therefore, the switching cost and time are not necessarily excessive. And net net, if we know anything from watching how Broadcom CEO Hock Tan operates, and I got the opportunity to talk to a lot of the leadership team, it’s clear to me after watching that company for 15 years that it’s all about the money. And if you follow the money and where they can make the most for their shareholders, it isn’t in any of these risk areas. It’s in the cloud, it’s in multi-cloud. And that’s not even part of the challenge here. We’re talking about Fibre Channel HBAs and storage adapters. It’s nuts, right, comparing $500 versus $10,000. So I think it’s silly. I think the EU is lost on this one, and I’m hopeful, and I do think, that Broadcom has the patience to wait this out.

Daniel Newman: Really quickly though, an analogy: what they’re saying here about the lock-in would be a little bit like telling a company that if you’re not happy with your Salesforce instance or your Oracle instance, you have no choice. I mean, look, it’s expensive to change providers if you don’t like what you’re getting, but if the technology’s not meeting your needs or if you feel the companies are… I mean, look at the lock-in you have with your CRM. Once you commit your company to running software like Salesforce and they come to you and say, “We’re going to raise prices 8%,” or whatever they do, and they do this, this is how SaaS companies work, are you going to be like… It’s not antitrust, but I mean, it’s sticky, right? And that’s good business.

So what I’m really wondering here is: is the stickiness of a product, and the pricing power of having a good product that an organization depends on, the same thing as using anti-competitive behaviors? If Microsoft Dynamics or Salesforce or SAP raise our prices, is that antitrust, or is that, hey, they’ve built a product, they need to keep investing in it and improving it to make it usable for you, and someone’s got to pay for that? These companies have a job to be profitable. I don’t know, it’s an interesting inflection, but I think it’s a bit of a stretch.

Pat Moorhead: Yeah, good adds there, Daniel. Let’s move to the next topic. Automotive lidar company Luminar announced this week the bring-up of its Mexico plant. And as I pointed out in the headline of my Forbes article, Luminar proves the pundits wrong with the successful bring-up of a Mexico plant. You might be like, “Well, what are you talking about, Pat?” Well, if you remember, I don’t know if it was last week or the week before on the pod, we talked about Luminar’s CFO actually writing an article to respond to some of the claims from two competitors, and also a very interesting call from a sell-side analyst that he directly responded to.

But there were a ton of people that said, “Hey, Luminar, there’s no possible way that you can get this to high volume manufacturing.” And then, boom. So Luminar has not only signed up Volvo and Mercedes and is actually shipping to them, but also some of the high-flyers out there like SAIC’s Rising Auto, which offers the R7 in China today with Luminar lidar, and holistically, the company has signed up 20 vehicle models. And to do that, you need to have space. So this space, which is run by Celestica, a very well-known ODM, can produce between 250,000 and half a million sensors a year. And I know that Luminar is also building some really big capacity in Asia as well.

But it’s funny, I wrote this at the end of my article, Daniel: hey, is this company going to get any credit for what they’ve been doing? They talk about these customers doubling down, their stock catapults, the haters come in, and then it goes back down to below where it was before they made the initial announcement. And the way that I net this out: it’s really tough to be a leader, right? When I was at AltaVista, there were people at Yahoo, Excite, and Ask Jeeves, and their investment bankers, trying to figure out how Google was so good at what it did.

It’s a very similar story, from “they ripped off the technology” to “these kids will never be able to run a business like that, there’s no way they can afford it.” Even investment banking firms that had aligned with different competitors lined up to take potshots at Google. So it really took me down memory lane here. But at the end, what I know is this: Luminar is in high volume manufacturing and their competitors are talking about manufacturing on PowerPoint slides. And the other thing I know is that when it comes to early types of technology, you do pay a premium. Surprise. Look at NVIDIA, look at Qualcomm and 5G. They’re ahead, they’re ahead, they’re ahead. They rake in very high prices in the beginning, and then the other folks like MediaTek come in and fill in the gaps and prices go down. And guess what? Qualcomm’s on to 6G or something after that.

The difference here, though, is that these contracts with these auto manufacturers could be up to 15 years long. Luminar doesn’t have to move to somebody, and whatever’s baked into these long-term agreements has certain pricing stipulations. And at this point it’s like, “Do Volvo and Mercedes want the cheapest tomorrow, or the best now, when it comes to the safety of their passengers?” So anyways, interesting stuff. A lot of intrigue and a lot of interesting stuff going on there.

Daniel Newman: The automotive space will continue to be a hot-button item with all the discussion of AI. And I think we have at least one, if not two, more AI-related topics here today. We’re quickly forgetting that we were working rapidly towards full self-driving. And this is really kind of the ethos of Luminar: to deliver the technology that can truly and safely enable vehicles to do full self-driving in the future. But even right now, just the L2+ to L4 range, the ability to do assisted driving successfully.

I mean, Pat, we’ve seen the dummy run, your son’s multimillion-view video from CES, where the Tesla just keeps running over the kid. And this isn’t us; we’re just passing along what we saw. And Tesla can show its evidence that it’s doing something differently, but it’s not going to be a single type of approach that enables the safest driving. It’s going to be a combination of multiple sensing technologies. It’s going to be radar, it’s going to be lidar, it’s going to be vision.

And I think that’s what Luminar understands. That’s what the vendors or the OEMs or the manufacturers that are signing up with Luminar understand. And so seeing them execute against their vision is great. Like you said, the market hasn’t appreciated growth for a long time. Maybe we’re getting to the end of the interest rate hike cycle. When that rate hike cycle ends, you will see tech growth bounce. That’s happened every time. So if it doesn’t happen this time, it’s an aberration. So overall, though, I like him, and I like his house in Succession, by the way… Looks good. I want to visit it.

Pat Moorhead: Hey, let’s talk about the next topic. We’re going to go into two AI topics and the first one is the fourth Groq Day. So Groq Day four. So what did Jonathan and the company have to say, Daniel?

Daniel Newman: Well, look, Pat, we’ve got this sort of “AI is hot” moment, and basically Microsoft, now AWS, we’ll talk about that, and Google are kind of dominating the discussion around AI right now. And then on the hardware side, basically it’s been all NVIDIA. I mean, NVIDIA rules the roost. But Groq’s story is all about bringing the cost of compute to zero. And it’s really about new ways to enable large language models to run on silicon that’s lower power, that costs less, that’s specialized.

And so CEO Jonathan Ross kicked it off and really was saying what you and I have been saying: unless you’ve been living under a rock, there’s no way you haven’t heard about what’s going on with generative AI. It’s going to change everything. It’s going to change the world. And of course, it’s sort of a moment for a company like Groq. This is their moment. That’s what I would say. This is their moment.

Up to this point, high performance computing, specialized workloads, compiling – these are the things they’ve been talking about. But with the sudden onset of OpenAI, Bard, or LLaMA, having different silicon is going to become really important, Pat. And through some very confidential conversations we’ve had, so we can’t reveal details, we’ve heard a lot from CEOs trying to deal with this problem: they need to figure out how to do it cheaper, because GPUs are expensive and they use a lot of power, and they need to be able to do it more sustainably, because no matter where you sit on the continuum of believing in ESG and those initiatives, the data centers use a lot of electricity and they use a lot of water.

And so that is a real problem. And if everybody at once were to start pivoting over to these large language models and using generative AI tools on a daily basis, it’s going to use a ton of horsepower. This is where Groq thinks it could really play a part. And I think it’s showing that both through its ASICs, the silicon it’s developing, as well as its compiler – and that was a big part of the Groq Day story, the company’s ability, and we talked about this on the pod once before, to really quickly enable a large language model to run on a specific piece of silicon that isn’t NVIDIA. Right now, most of these large language models are being built to run on NVIDIA, and if you want to move one off NVIDIA, that process can be somewhat cumbersome. So this is where Groq really focuses: their compiler, the kernel-less compiler that can adapt really, really quickly, so the company can deal with the fact that these language models are not going to be finite.

They’re going to be continuously changing. And in the current setup, the amount of work you have to do to compile and then continue to update, modify, and expand these models would be expensive, would be prohibitive. You’d need developer resources you don’t have. And they’re saying, “Well, we can make this really fast, we can make it really easy, and of course you can run it on lower-cost silicon.” So the company is leaning into MLAgility, which is kind of their measurement framework for making AI more accessible and lower cost. And so they’re leaning in here, Pat. There are several players in this particular space; we’ve talked about some of them. I’ve been pretty vocal. I think everybody not named NVIDIA right now has an interesting opportunity to grab market share, because, one, the market’s going to get bigger, and so it’s going to be very hard for NVIDIA to keep up with all the demand they’re going to drive.

But I really do think, if I double-click on what I said earlier, the cost and sustainability issues are going to be substantial right now. And GPUs are power hogs. They just are. There is no way around it. Now, they are getting more efficient, they are coming down in price for performance, but the volume and the number of racks of A100s, A800s, or H800s is palpable. So can we get these to be smaller, lower profile, lower cost, more efficient, and of course faster, so companies can move data and stand up models? That’s a really big story to watch, Pat. And so I think Groq’s at an interesting inflection point. They need to get more customers, they need to get more adoption, they need to get more case studies in market.

But I do think, especially given what I mentioned before, if we really are at the end of the interest rate hike cycle, we are going to see VC and investment come back into growth, and I would have to imagine this is an area they would want to invest in right now, because on top of all this great software and all these great generative apps that we’re all talking about, none of this stuff runs without silicon. So are we going to try to find more efficient ways to do that, or are we just going to go with the status quo? I’m not sure, Pat, but this is where I think Groq has a great opportunity.
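To make the porting problem Daniel describes a bit more concrete, here is a minimal, hypothetical sketch of the first step most of these flows share: exporting a trained PyTorch model into a hardware-neutral graph format (ONNX in this example) that a vendor’s compiler can then retarget. This is a generic illustration, not Groq’s actual toolchain; the model and file names are made up.

```python
import torch
import torch.nn as nn


# A tiny stand-in for a real model (a production LLM would be far larger).
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)


model = TinyModel().eval()
example_input = torch.randn(1, 16)  # example batch used to trace the graph

# Export to ONNX, a hardware-neutral format that non-NVIDIA compilers
# can take as a starting point for targeting their own silicon.
torch.onnx.export(
    model,
    example_input,
    "tiny_model.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)
```

The hard part Daniel is pointing at comes after this step: mapping that graph onto a specific chip without hand-written kernels, which is what Groq’s kernel-less compiler claims to automate.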

Pat Moorhead: Good analysis, Dan. There were a lot of things that I liked. So first off, let’s look at the approaches out there in the market. You have NVIDIA, particularly with its training, that really leans into using its GPUs. It does have small blocks of ASICs, like for transformer models, to give it acceleration, but NVIDIA is all about programmability, really based on the hardware itself, and then they build on top of that in an environment that can change. And then everybody else uses, I would say, more of an ASIC approach, but they put a different programming model on top.

Now, ASICs by design are always going to be more efficient at running a certain set of code. And the ASIC folks, and I put Groq in that camp, and also Tenstorrent, put different ways of programming in there. So everybody is headed in the same direction; they’re just doing it in a different way. And one thing that I really appreciated is that Jonathan brought up something very few people are talking about, we are, that people are losing money in the generative AI space. It is really expensive. Now, I remember on the plane-

Daniel Newman: Including Microsoft.

Pat Moorhead: …being on the plane, you and I talking, going out to Microsoft for their big event, and saying, “Hey, my number one question is affordability.” And I think it was the number two question at the Microsoft financial analyst day, but they really didn’t want to go in that direction. They did talk about, hey, if I can get 10% market share in the ad business, look at how much money is here. So in other words, yeah, it’s expensive, but if I can take money away from Google and Facebook for advertising, don’t even worry about it; it doesn’t matter what it costs. But it is expensive. And I think we’ve also seen Google’s very cautious wade into this, because they own search, and moving a search that costs 10 cents to one that costs a dollar does not make monetary sense.

So it is expensive. And the other thing that came out, which I appreciated too, is energy. We have yet to add up the amount of energy that generative AI with these foundational models is going to cost. And that is an area where I do believe Groq is focused. The other directional space I think they’re headed in that I like is that it’s all about the software. The hardware is important, and in fact the company talked about bringing out a new piece of hardware for generative AI, but this notion of a compiler that can automatically generate code for generative AI models is the direction. And in fact, Groq believes in its compiler and its software enough that you can actually use their software to spit out models to be used on other people’s hardware, which I think is important as well, because it takes away the objection by saying, “Hey, if I standardize on Groq, it doesn’t mean that I’m locked in.”

So I’m super interested to see customer stories, and by the way, not every AI company out there has customers. I know Groq does, but some of their competitors can’t cite a single customer. And some haven’t actually gone to production with their chips, so there are different levels of reality here. The final comment about Groq is that they’ve kept their cash burn low, which I think will help them long term. I do believe that some of the competitors out there are teetering, the ones that came out super-duper early and spent a ton of money on OpEx. So Daniel, let’s shift to the next topic, another AI topic, and I’m going to call my own number on this. You had Azure and you had Google Cloud come out with some of their answers to generative AI, and AWS laid out its generative AI play yesterday, in fact.

And I had a chance to talk to AWS Vice President and General Manager Bratin Saha, who runs AI/ML, and most importantly, he’s a Six Five alumnus. We had a conversation with him at their conference that had space and AI, re:MARS, I’d forgotten, apologies. So here’s the news. They brought out what’s called Amazon Bedrock, a limited preview of foundational models, best-in-breed foundational models from people you would know, like AI21 Labs, Anthropic, and Stability AI, the folks behind Stable Diffusion, and they brought out their own foundational model. It’s called Titan, and they talked about two of them being in preview. So they’re running a best-of-breed track and they’re running their own. I’m not saying that Amazon’s aren’t best in breed; we just don’t know enough about them at this point.

The company also said that it went GA on its Trainium-based instances, talking about delivering up to 50% savings on training costs over any other EC2 instance, which, by the way, includes NVIDIA and Intel. Amazon also went GA on EC2 Inf2 instances, which use the new Inferentia 2 chip, and they’re claiming up to 40% savings on, sorry, 40% better inference price performance than any comparable EC2 instance and the lowest cost for inference in the cloud. And the first statement, I think, is in comparison to NVIDIA, and the second, I think, is in comparison to Google’s TPU. So-

Daniel Newman: Read that off again. Read that off again. I just want to hear that one more time.

Pat Moorhead: Yeah, 40% better inference price performance than other comparable EC2 instances and the lowest cost for inference in the cloud.

Daniel Newman: Is this a swing, Pat?

Pat Moorhead: This is a huge swing, Daniel, and I’m going to get into more of that. But then they brought out CodeWhisperer, which is essentially a companion for programming, where they claim participants using CodeWhisperer completed tasks 50% faster on average and were 20% more likely to complete them successfully than those who didn’t use CodeWhisperer. Again, huge measured claims that the company is making here. So here’s my net net on this. First of all, this is big, big, big for foundational models. This is more detail and a more holistic offering than I’ve seen to date: the company bringing out a complete line of best-of-breed foundational models plus two of its own, and bringing out a complete line of homegrown training and inference services based on its own silicon, with huge claims on lowest cost. By the way, the company did claim highest performance based on a lot of its technologies, with the super clusters and its networking, and I believe that is likely NVIDIA-based.

And then finally, a coding tool that supports a freaky number of languages and IDEs. Literally, I don’t know how they did this, but it’s almost every language that I’m aware of and every modern IDE that’s out there, including, by the way, Visual Studio from Microsoft. So from what it looks like to me, the company is in a good place. Now, the Trainium and Inferentia 2 instances are GA, and Bedrock is in limited preview, but AWS did talk about some customers, so it’s not vapor. And by the way, I have never seen AWS bring out anything that ended up being vapor. And you can expect Bedrock to be GA in probably a year, based on how long it takes AWS to go from preview to GA. Good showing.
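For readers who want a feel for what “best-of-breed foundational models behind one managed service” looks like in practice, here is a minimal, hypothetical sketch of calling a Bedrock-hosted model from Python. Bedrock was only in limited preview at the time of this episode, so the client name, model ID, and request schema shown here are assumptions for illustration, not a confirmed API.

```python
import json

import boto3

# Hypothetical example: invoke an Amazon Titan text model through Bedrock.
# The "bedrock-runtime" client, model ID, and body schema are assumptions
# about how the managed service is exposed, not preview internals.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "inputText": "Summarize this week's cloud AI announcements in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
}

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative model identifier
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json",
)

# The service returns a JSON payload; print the generated text.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

In principle, swapping in a third-party model from the Bedrock catalog would mean changing the model ID and request body rather than standing up new infrastructure, which is the appeal Pat describes.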

Daniel Newman: Yeah, absolutely, Pat. It would’ve been a ridiculous notion for anybody to count out AWS and Amazon in this play. Remember the amount of data, just from Alexa, that Amazon has to play with for its business. And obviously I’m not trying to conflate AWS with data center large language models and open source, but what I’m trying to say is it’s been kind of interesting, because different companies have been rolling out their first iterations at different paces, and Google, with its market position, felt a little bit more pressure to show its AI leadership. I don’t think AWS feels exactly the same way about it; I think they’re running their own race a little bit more. I watched Andy Jassy’s interview yesterday. He said something really profound. He kind of said, “Look, we have about 1% penetration of retail right now and the rest is still brick and mortar.”

And then he said, “10% of IT spending right now is cloud,” and, “The other 90% is still on-prem.” And he said, “If you believe that those two markets, e-commerce and cloud, are going to expand in the future, then Amazon’s a pretty good bet.” I’m pointing that out because they did a shareholder letter and he went on CNBC; he doesn’t talk much. But what I guess I’m saying is Amazon has a lot of data, a lot of training data, and a lot of reasons to try to create an efficient offering for all of its enterprise clients to be able to utilize large language models and stay on the Amazon and AWS platform. Additionally, AWS, I think, has a little bit of a bone to pick, and we’ve talked quite a bit about NVIDIA today, after the DGX Cloud offering and AWS’s decision not to offer that.

And it’s sensible. AWS is the only one right now that has GA silicon for training and inference, and they’re more and more becoming competition. So yes, you can obviously run EC2 and you can use all the instances that are available with NVIDIA, with Gaudi from Habana, with the different offerings from the other silicon makers. But AWS plans to make its own hay in the silicon space, and I’ve been saying that pretty specifically. So when you look at companies that have massive sets of enterprise data in the cloud, proprietary data, I don’t think there is a public cloud provider that has more data than AWS. It’s just the largest public cloud provider by a distance right now. And so the ability to turn that into a product that can be utilized by enterprises, government entities, et cetera, is going to be material.

It’s palpable. So I like it. I love the competition, Pat. I’m having a lot of fun watching this. As analysts, this is the best: we opine, we put our thoughts out there on who’s winning. I think AWS was ruled out too soon; I think they’re going to be making a bigger impact. And by the way, watch out for every cloud provider. I mean, you heard Oracle and NVIDIA just put their thoughts in the market this week. Everybody’s going to play, everybody’s going to make a play, and it’s going to be a lot of fun to watch what happens in this space in the coming months.

Pat Moorhead: Yeah, I don’t think anybody with credibility ever had any doubts about AWS. There are a lot of people out there who can’t disconnect; they look at this as one homogeneous blob, when in fact you have the consumer market and you have the B2B market. First of all, they’re very different. And then you have the B2B market, which might be serving a B2C company. So it’s so much more complex. Doing a Bard search and a Bing chat and saying “who won the AI war?” is lazy. You might be able to make that statement from a B2C perspective and look at what Google and Microsoft are doing in the consumer space and say, “Where’s Apple?” Apple has not talked about anything. They are clearly behind in terms of at least announcing these things. And-

Daniel Newman: I asked Siri. I asked Siri which one’s better.

Pat Moorhead: Oh gosh, I don’t even use it anymore because it’s so bad. It’s such a waste.

Daniel Newman: Hey, you want to poop on Apple a little bit?

Pat Moorhead: No, I’d like to complete my thought if you don’t mind.

Daniel Newman: Oh, you’re not done?

Pat Moorhead: No, no.

Daniel Newman: Oh.

Pat Moorhead: So, to pick up on your comment about NVIDIA: not only did NVIDIA put out an IaaS service that they sell with their own salespeople, distributed through CSPs like Google Cloud and Azure and Oracle, but they also brought out NVIDIA AI Foundations. These are their own foundational models that you run on top of DGX Cloud. So IaaS and PaaS. So yeah, it doesn’t look like AWS was interested yet in that type of deal. Maybe we’ll see it, maybe we won’t, but it’s good stuff. It’s interesting. And like you said, it’s great analyst fodder. I mean, listen, for analysts, pandemonium is great, and we have pandemonium here in generative-AI-ville. But you know where else we have pandemonium, Daniel? In fricking PC sales, okay.

Daniel Newman: PC is dead, Pat.

Pat Moorhead: Which brings us to our next topic. What’s going on? Apple’s down, I mean, wait a second. M1, M2, but they’re down like 46%. What’s going on here, Dan?

Daniel Newman: Less channel to stuff maybe to keep their numbers strong?

Pat Moorhead: Possibly.

Daniel Newman: Yeah, so I don’t think it’s any secret, Pat, that the PC space has been hard hit. We also have to remember the PC space had explosive growth when everybody went home and everybody had one, two, three more PCs, tablets, smartphones, devices. So on one side of every boom is a bust. And in silicon, when you watch the silicon wave, the devices have to follow. When you see the shipments from TSMC and Intel go down, and AMD and everyone else, guess what? Those devices are going to follow. And so the most recent market share report, the Q1 2023 report from Canalys, basically said this market is in disarray. I mean it is, Pat, it’s kind of a disaster: 32.6% down across the board.

But Pat, I mean, nothing looked worse than Apple falling nearly 50%, a 45.5% drop. This is just incredible. What is the deal here? So we have M1, you have M2, like you said, you have all this demand for new devices, Pat, but is this an economic thing? I mean, a Mac is the most expensive alternative to a PC. Is this a lack of business spending, with companies having stopped spending? Was it that Apple had a huge wave of end-of-year and Christmas selling through the holidays and basically just zeroed out in the next quarter? Just as a comparative basis, they dragged the average down somewhat substantially. I mean, HP led the pack with only 24% negative annual growth, but we saw pretty consistent numbers among the rest; it was around 30%, meaning Asus, Dell, and Lenovo were all around 30%. So all the others sustained about a one-third drop or a little bit less, while Apple fell way, way harder.

Now, I gave all those different conditions. I would say I’m confident that it was a little bit of all of those factors. I think at the end of the year, Apple is a bit more of a retail phenomenon than other PC makers. They probably did see some pull-forward from people who bought Macs as holiday presents. I think enterprises are spending less and being more frugal. Meaning, where an enterprise maybe at one time was offering the option of a Mac for its employees, maybe it’s considering not offering that option right now, when a Mac might have an ASP of twice what the PCs it would otherwise offer employees cost.

But like I said, these are not hard truths; these are just my analysis of what likely happened, Pat. But here’s the one thing I will say. I think we’re going to have one or two more quarters of pretty difficult conditions, but I do think you’ll have a cycle turnaround again. New processors, faster applications, generative AI, Pat, AI on silicon, things that are going to need to be on these devices to optimize these workloads, are going to require another wave of PC purchases. So I do think we’ll come back from this. I think the long-standing thing that you and I like to say, that the PC is not really dead, is true, but we are in a gully. It’s the gully at the outset of an amazing run-up, and I think we’re going to be in good shape.

Pat Moorhead: Yeah, overall, the PC market is still above where it was pre-pandemic, given the number of PCs that were purchased to be able to work at home and to have one PC per person at home. So you stopped going to the movies, but you gained more; you watched more movies, more Netflix, on your notebook. And I think the market as a whole is going to be solid, particularly when you look at the enterprise market. I have some doubts about consumer. On consumer, my expectation is the growth is going to be driven by AI on the platform. We’ve seen chip makers, particularly Qualcomm, and you look at what AMD and Intel are doing, on the platform itself, with Windows, which has an 80, I don’t know, 93% market share versus macOS. A big thing is going to happen. And then on the enterprise side, I still believe that we have yet to come up with an optimal platform for this new hybrid work era.

It’s not just about what I can do at home that I did at work; it’s how I can effectively go from work to home, home to work, and being on the road, and have an optimal experience where it’s more like being there. I think the early moves on improving the experience are good, and they’ve really focused on improved cameras and improved audio capabilities. But I don’t think we’re nearly where we need to be to feel like we’re just there. And this is going to sound a little bit wackadoodle, but to be there, I think you need larger displays, you need higher-resolution displays. Dare I say there is a potential for XR. I know, sounds crazy. You can’t even talk about the Metaverse and XR without being laughed out of the room. But how am I going to feel like I’m in the same room with you, Daniel, on this tiny display, with this grainy video and this terrible audio?

It’s just not going to happen. Our brains don’t work like that. The other thing we need to do is have a lot of sensors on the system as well, to be able to secure this PC whether it’s inside the perimeter, outside the perimeter, not on a VPN, or on my home router. So I think there’s going to be a ton of growth in there. Let me hit the Apple thing. Apple absolutely fell on their face. And if you remember, two or three quarters ago they were the darlings that had the highest growth, or the ones that sucked less than the Windows folks, related to the decline. And everybody’s like, “This just shows, this just shows the resurgence, baby.” No, Apple stuffed the channel. Okay? No bones about it, black and white. I think their consumption through the channel is pretty good, particularly when you see deals, Daniel, of 900 bucks off a Mac, okay.

My entire career, I’ve never seen that, aside from a blowout at the very end when they didn’t do a good job on that. I think, net net, it shows that Apple is human, that Apple’s not going to run up the score on the industry when it comes to PCs. Note of caution: Apple consumes most or all of the profits in the consumer PC space. I think the only exception will be some of the high-end gaming rigs. The Windows folks still have the dominant share of profit dollars in the enterprise. But make no mistake, Apple is gunning for that area and they’re willing to spend, what, their 10th year of pounding their head against the wall? And they used M1 and M2 and the Arm architecture, which has a high degree of security, to be able to roll those numbers up. No longer is Apple an embarrassment in the enterprise. They’re actually a formidable opponent who’s willing to go another 10 years to slowly pick up market share. Dan, that’s all I got, baby. Great show.

Daniel Newman: We did it.

Pat Moorhead: You got anything else about this Mac stuff? Anything?

Daniel Newman: No, man, it’s a Mac attack. It’s an attack on the Mac. It’s going to be fun to see if they can recover here, Pat, but-

Pat Moorhead: Yep. Hey, great show. Great show.

Daniel Newman: Off the dome.

Pat Moorhead: Intel, Broadcom, VMware, Luminar, Groq, Arm, AWS. I mean, we talked about freaking everybody here. I want to thank everybody for coming to the show, tuning in. I don’t think we went over, I think we started really late, but-

Daniel Newman: We did. We did.

Pat Moorhead: I talked a lot. So here we go.

Daniel Newman: Hit that subscribe.

Pat Moorhead: Anyways, thanks for coming. We really appreciate you. Send all the compliments to me on Twitter and all the critical feedback.

Daniel Newman: Criticism, Pat. It’s criticism.

Pat Moorhead: Thank you. Appreciate that. Anyways, thanks for checking in, Episode 164. Have a great weekend wherever you are on the planet. Good morning, good afternoon, or good night. Take care.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A seven-time best-selling author, Daniel’s most recent book is “Human/Machine.” He is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

