
Talking Adobe, China’s Crackdown on U.S. Chips, Intel, Apple, NVIDIA, Databricks

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. Adobe Summit 2024
  2. China’s Crackdown on AMD & Intel Chips
  3. Intel Gaudi Performance – Beats NVIDIA?
  4. Apple Investigated Again by EU & Consumer Class Action Suits Start
  5. Databricks' New DBRX Model – Enterprise Worthy?
  6. Is NVIDIA a Monopoly?
  7. Lattice Leveraging the AI PC Wave?

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.

Watch the episode here:

Listen to the episode on your favorite streaming platform:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is back for our weekly show. We’ve gone 210 episodes and we have not been canceled yet. Dan, we have a G-rated program here. Sometimes we throw the F-bomb and we get the explicit part, but Dan, it’s great to be back sitting in the condo, just chilling. Didn’t make it to the office this morning. Must be my day off.

Daniel Newman: What is this, like you're starting late to make up for leaving early? I am not quite sure. Taking a light day, Pat? Is it a market's closed, Pat's closed kind of thing, like 9:30 to 4:00? What are you thinking today? What are you thinking, buddy?

Patrick Moorhead: Here’s what I’m thinking. I got about eight meetings set up back to back.

Daniel Newman: Nice. Late day.

Patrick Moorhead: No, I know, I know, but I’ve got the whole family coming into town. It’s Easter weekend. Pico’s coming in from college. Catherine and my wife Paula are coming back from the horse show. We get Lauren, Christian, everybody together.

Daniel Newman: I like that. I mean, look, it’s good to rest a bit. You’ve earned it. You’ve worked pretty darn hard this week. It’s been a busy tech week. Not as busy, maybe as last week, but we still ended up with more stories I think that we can possibly cover, which is really, really interesting. And it’s not bad to get a little time to decompress. I’ve got some thinking, and planning and scheming going on. It’s been a great month, been a great quarter. It’s been electric, I may say in many, many ways, but at the same time, these nice weekends, family weekends, getting everybody together. It’s always really, really nice. I had that last weekend. This weekend, not so much, but maybe I can just come over and have Easter breakfast, Easter brunch with you, huh? I’ll just swing on over?

Patrick Moorhead: No, I have been known to make a pretty awesome breakfast for Easter.

Daniel Newman: Yeah. Really?

Patrick Moorhead: Yeah.

Daniel Newman: What do you make? Tell me really quickly, I know we don’t have a lot of time, but tell me Easter breakfast.

Patrick Moorhead: Well, what is it? Eggs Benedict I've made. I've made steak and eggs, so I don't know what I'll do. I'm in temporary housing right now, so I don't think I'll do all my stuff to make hollandaise and all that. And these special egg cookers we have are all packed away, so it'll probably be steak and eggs. Dan, what do you think?

Daniel Newman: I think that’s probably a bit of a delicacy for you. Probably something you don’t have very often because normally you have steak and eggs.

Patrick Moorhead: Right. No, that's my thing. That's my jam and I'm losing weight on it, so I'm keeping it going.

Daniel Newman: You look good, dude. You look good. You look thin. I mean, you do have to splurge though. What I figure is, once a week you just need to overeat a lot.

Patrick Moorhead: No, I have these little Uncrustables in my freezer, you know, those little peanut butter and jelly sandwiches that have the crust taken off. That's my vice, dude.

Daniel Newman: All right, man. Well, listen, it's time to rock and roll. We've got a show to do, don't we? What's up?

Patrick Moorhead: Yeah, yeah. That’s Dan’s subtle way of saying-

Daniel Newman: That’s my way of saying, “Let’s go.”

Patrick Moorhead: "Stop trying to be human and let's dive in." No, we've got a great show for you. Dan and I spent a couple of days at Adobe Summit out in Las Vegas. We're going to be talking about their event. We're going to be talking about China's crackdown on AMD and Intel. Is it really a crackdown or is it not? We're going to talk about Intel Gaudi performance, Gaudi2. Does it actually beat NVIDIA? Wow.

Apple's being investigated again and sued again. Consumer class action lawsuits. We're going to talk about, interestingly, a lot of messages that I got on NVIDIA saying that NVIDIA is the real monopolist, not Apple. We're going to debate that. Databricks brought out a new model; it looks like it is doing well on the benchmarks, but is it any good for enterprises? We're going to be talking about that. And then finally, bonus topic: Lattice leveraging the AI PC wave. Wait a second, they do FPGAs. What's going on here? Anyways, let's dive in. Dan, Adobe Summit, we spent a lot of time there. We shot a Six Five video with CMO Eric Hall. Wait for it. It's coming. Let's go.

Daniel Newman: Listen, Pat, by the way, just on your intro, just remember: no investment advice, people. We're not doing that. Okay, I want to talk about the investment summit at Adobe, or the investor event is maybe the right way to explain it. So I call it the Money Slides. Look, there's a lot that got announced. So broadly speaking, Adobe is building this content supply chain. And this is really interesting because we're entering this generative AI era where the idea is saying, "Hey, I want to create a coffee mug that says I am an unbearably cool dad, and I want to figure out a way to shoot this in 2000 different rooms with 1000 different color variations, and I want to be able to test it on 500 different websites in 300 different languages in 15 different geographies. And I want to be able to do that in days, hours, minutes, not years, months, weeks." This is what's going on.

And so Adobe is really, really interestingly positioned for this, because you've got this, what I would call, super disaggregated, fractalized marketing technology stack, and we are entering the era where people want to streamline this. You've got a content supply chain, you've got companies that usually have many agencies and many marketing teams trying to do a single thing. And anyone that's ever tried to do marketing, whether it's in our companies, a smaller business, all the way up to a large enterprise, knows it just seems to take too long. And so, Adobe is out there to fix this with a generative content studio; they had their new Gen Studio, which is really their most prolific end-to-end product and solution. That's what they announced here.

Now, they didn't really address video yet, but it sounds like they're going to. They did address a lot about content rights and ownership, which is very interesting. And again, they had so many announcements, but I wanted to focus on what I called the Money Slides. So the Money Slides were what we got when Shantanu, their CEO, presented to us at the investor event inside of Adobe Summit. And basically there are three focal points that I leaned into, and you can follow my little tweet stream here, on how the company is going to meet its expectations. It is raising its $205 billion TAM to $293 billion between now and 2027. First of all, I love the fact that they used a shorter window of time to explain it to investors. People love to put 10 or 15 years into the future for TAM, but it's better to me when it's something that's digestible. Most people can't survive the week, the month, let alone thinking three years out; five and 10 is even worse. So where does their growth come from?

First of all, they've got three clouds, very clearly demarcated. This is what I think Gen Studio can help tie together, because I think their Creative and Experience clouds really are a left-to-right function of delivering the marketing experience. Having said that, they also have the Document Cloud. So people have done things like sign and PDFs that they're used to. That's another really big part of the business. What they're going to do, though, is, one, they're going to drive more revenue from their current customers, which is my favorite thing to do. And two, they're going to expand their offerings. Last year they had six and a half billion images generated with their Firefly product. That's their generative AI image generator, which is a super cool product. And then their Gen Studio, which I've talked about a couple of times.

The second focal point, though, is that they're going to expand the TAM, which I mentioned, to $293 billion. The third is they're going to build an ARR machine. This is really an interesting thing, Pat, that I don't think most people understood about Adobe, but over the last decade-plus, this is a company that went from about 25% of its revenue being recurring to 90% currently, a huge pivot for the company. In five years, the company has gone from $10 billion in revenue to $20 billion in revenue. It's growing almost as fast as The Futurum Group, which I really appreciated. This is a really, really important transformation for the company because, basically, they've made themselves more valuable, they've given themselves a more compelling multiple, they've created a stickier product and a consumption model, and now of course they are capitalizing on this powerful generative AI movement that is going to create more need for visual content, more need for video content, more need for distribution.

And of course, Pat, something you and I believe in very strongly: more need for analytics, data, and measurement. The Experience Platform with Gen Studio is really a compelling offering that gives end-to-end capabilities and leans into the fact that Adobe may just have all the technology required to play a role in the B2B space, which many people for a long time have questioned. There's so much more to cover, but I want to share some oxygen, Pat, and I'm going to pass this one back to you.

Patrick Moorhead: Yeah, so no, that was a great breakdown, Dan, and I went to the event with a few questions. So first of all, is AI just a "gee whiz" feature, or is it truly delivering business value? The other one I was looking at is: does this move the needle for Adobe in B2B? And that's the key here. This is not a consumer show. This is not a creator show; Summit is B2B. There's not a whole lot of competition on the creativity side, but there's a heck of a lot of competition on the experience side from companies like Salesforce and HubSpot. I also wanted to see if we could get a better picture of how much AI was providing financially to the company, and I've got to tell you, I walked away with some pretty clear answers.

So first of all, I can't tell you which one of Adobe's largest customers said this, but essentially they said that they cut 50% of the cost of this content supply chain. So think about any type of imagery, any type of alt formats that you wanted, and also emails. And this company said they've already cut 50% of those costs and are on track to cut 80%. Now, I can't give you the exact number, but it's nine figures here, which is gargantuan. Yes, hundreds of millions of dollars in savings already. Let that sink in, folks. That's a huge deal.

On the part about whether investors can really understand it, I do think the company did a good job, but I think they could have done a better job on this. If nothing else, the percentage of revenue that was AI last year and what it's going to be this year, carving out AI-only services. I know that's tough, particularly when you're in Creative Cloud and you are making things like Photoshop better with those tools, but the companies that are getting a lift on their stock price are the ones that are able to best articulate that.

Daniel Newman: I’m glad you said that, Pat.

Patrick Moorhead: Yeah, and by the way, one thing I'll call out: let's talk about Experience Cloud versus, let's say, HubSpot. And so, I'm taking this a lot more seriously at this point. It wasn't something that my company had covered, but we need to do the double click on that. What I really like about this too, and this really hit me when we were talking with Eric Hall, the CMO of Digital Experience, is the end-to-end capabilities of Adobe. Sam Altman talked about this 10-person billion-dollar company.

Daniel Newman: One person, by the way. Yes.

Patrick Moorhead: Oh, one person?

Daniel Newman: He said, “One person.”

Patrick Moorhead: Okay. I think that’s kind of a little crazy, but let’s just call-

Daniel Newman: I mean, I would like to try it though. I wouldn’t mind it.

Patrick Moorhead: No, I wouldn't mind it at all. But you can imagine a 10-person billion-dollar company where one person is in charge of marketing, right? And these are the types of tools when you put Gen Studio on top of it. Because right now, Digital Experience is 15 different modules, and that is a no-go for a small business or one person doing marketing in a company. But you put Gen Studio on it, and you could see that built out across all three of the clouds that they have, and you have something incredibly powerful.

Final comment, and this is a bonus comment. What were the first companies that were brought out in the first keynote? It wasn't OpenAI, it wasn't Microsoft, it wasn't even Adobe; it was NVIDIA and Qualcomm. And Dan, you and I have talked about semiconductors eating the world here, and we can debate who said it first, who wrote it first. I'll give you-

Daniel Newman: MarketWatch, 2019.

Patrick Moorhead: Pat Moorhead 2017.

Daniel Newman: Dan Newman before he was born, 1979.

Patrick Moorhead: I was in the womb, baby, when I said it, 1967. So.

Daniel Newman: Anyways, I lied about my birth year, by the way.

Patrick Moorhead: Anyways, just a total sign of the times, right? And it's why we cover semiconductors so much on this show. So hey, let's move to the next topic, and oh, surprise, it's two elements. We're talking chips, and we're talking AMD, and Intel, and China. So I got on CNBC to break this down. And it's interesting, I think the news reports, the FT, got a little ahead of themselves. The headline said "government," but the reality is it's defense. And defense is a large part of government, but it's not talking about universities, it's not talking about critical infrastructure like telco. That's the one thing. And there's a waiver that says you can go for 50%, and this is the rough translation, "non-compliant chip makers for defense." So I mean, it's like a fraction of a fraction of a fraction. And this is just a long line of tit for tat between the US and China.

10 years ago we had IBM and Cisco banned from critical infrastructure. The US banned Huawei and ZTE, and a bunch of camera-related properties; I think there were 75 Chinese companies on the Do Not Buy list. Then you had Micron that was punished. And then, here we are with AMD and Intel. So you might be asking, "Well, who the heck could fill the gap there?" Well, companies like Phytium, they're Arm-based; Loongson, MIPS, and I think they have RISC-V. Then you have HiSilicon, which is owned by Huawei, which is Arm and RISC-V. And then finally you have a JV, Zhaoxin, which is x86 out there. There we have it.

Daniel Newman: Yeah. So I mean, look, there's more to this story, Pat. I mean, China's not banning everything. It kind of comes across that way sometimes. You see a headline, and we know the decisions that have been made in the US and in the West to try to create friction and to make it a challenge for China to keep up with global technology leadership. It's a decision, I've said many times on the show, made out of three main tenets, right? National security, supply chain resiliency, and global technology leadership. AI is now the most coveted competency in the world. And of course, building advanced silicon, giving access to the most advanced design, and then being able to deliver these fabricated chips to China gives them a chance to catch up.

And we know that Xi Jinping and the leadership of the CCP will spend any amount, Pat, to get access, and they're building and they're seeing success. And whether it's been marginal successes with Huawei and HarmonyOS, and being able to rely less on Apple, that's not necessarily the only thing, but there's a big focus in China to derail the US' strength. And of course you're seeing them, in limited capacity, being able to build lower nanometer nodes without EUV, and they're nowhere near where the West is with three nanometer. And so it's going to take some time.

Having said all that, Pat, I mean, look, I look at this as ping pong, less about what the new specific rules are; there's a ping pong here. I mean, why Micron? Well, it's just kind of interesting. I say Micron because memory was something China was probably okay doing without if Micron got cut off. Why have we seen crackdowns on the most advanced NVIDIA chips? Because it's really obvious why we don't want to provide that capability and technology to China. What's the intent of cracking down on AMD and Intel? Well, I mean, look, these companies have legitimate double-digit percentages of revenue coming from China. This hurts, it stings, and although it's not every SKU and every part, the impact of the West knowing that China might cut it off, when we have so much supply going in, so much revenue, and our companies' strength is so dependent on China, creates consternation. It creates chaos, conflict, and uncertainty, which again is the specialty of China: trying to keep our markets a little bit unstable whenever possible.

But be very clear; this is all a traditional global trade war that is going on. And now we have a new kingmaker in AI and everybody wants to be the king. So China’s finding its own path, it’s going to do its best to find its own way, it’s going to continue to find ways to create chaos for our companies, the US-based companies. I’m looking at this as just one little microaggression of many more to come, won’t be settled anytime soon, Pat, but hey, at least Intel has Gaudi2.

Patrick Moorhead: Gosh, if there’s going to be microaggressions, I want to find my safe space now.

Daniel Newman: I’m feeling unsafe, but probably for reasons that are unrelated.

Patrick Moorhead: Exactly. I love you, bestie. Hey, let’s move to the next topic. And guess what we’re talking about chips again?

Daniel Newman: Well, I said, "Thank God Intel has Gaudi2." Did you see how I did that? Did you catch it?

Patrick Moorhead: I did, I did. I was looking at my dog, but I was paying attention.

Daniel Newman: Which one? Show everybody your dog. Can you pick it up?

Patrick Moorhead: No, no. He’s going to get all crazy and start pawing me and barking.

Daniel Newman: All right. All right. I mean, is it Bert or Ernie?

Patrick Moorhead: “Bert or Ernie?” I love that. I love that, man.

Daniel Newman: All right, man. Listen, I tweeted something out the other night. I think this is probably where this topic came from, and I kind of said something along the lines of, "We don't talk about Gaudi enough." The last several months there's been kind of this weird gap that's been created. We talk about NVIDIA, the H100, now the B series and the Grace Blackwell, and then we talk about homegrown silicon being provided by the cloud providers, and then it's NVIDIA and AMD; they're looking at each other and that's the competition. And then over here we're looking at… But we do talk a lot about accelerators. You actually had a great tweet this week about ASICs and the need to create standards so that we can scale the development in that particular area, Pat. But one of the things that we haven't talked a lot about is Intel and whether or not… I know we talk about 2025 and their potential GPU, but Pat, we've talked a lot on the show about how ASICs and even the XPU can be very competitive in certain cases with NVIDIA.

And this week Intel put out a newsroom post. This probably isn't a 20-minute discussion, but it's a few minutes here. They basically talked about MLCommons publishing new results for the industry-standard MLPerf benchmark for inference. And it basically noted that Gaudi2 and fifth-gen Intel Xeon with AMX, which is Advanced Matrix Extensions, can essentially be a very good alternative to H100s for generative performance as it relates to inference, Pat. And I guess I was just thinking to myself when I'm looking at this, "Gosh, why does nobody talk about Intel? Why is Intel being written off?"

Now, of course, I can give you a quick argument for that, because they haven't talked about enough big cloud wins yet. But the fact is that we've heard about Gaudi, and we're seeing its performance. By the way, this is really strong performance with their Gaudi2, and guess what's coming?

Patrick Moorhead: Oh gosh, Gaudi3.

Daniel Newman: Gaudi3.5. No, I'm kidding. That's GPT. Gaudi3. So the point is, with their almost-last generation, you know how we love to do the generations thing, Pat. We love to talk about, "Well gosh, NVIDIA's chip that isn't even shipping yet is kicking AMD's butt." Well, hold on a second. H100s were outperformed in many ways by the new AMDs. And now, yes, NVIDIA's answered that with a product that's going to ship in the future, but same thing here. So now we have an Intel product that's coming that's more performant in certain inference cases than the NVIDIA chip. Now, having said that, Pat, you and I have to be very, very clear, because we know a lot of people in the chip space listen to us: this is not a GPU, it does not have flexibility and programmability like a GPU, but in cases where inference in language is super important, this is a really efficient, performant alternative with strong specs, strong metrics. And they talked about it on LLaMA, on Stable Diffusion, on Hugging Face text generation, so on a number of different workloads, this particular chip performed.

So the moral of my story is the world loves to write off Intel, and I'm sure Pat Gelsinger loves what he calls the permabears. I just think between now, the Gaudi3, and then '25 when they start to deliver their GPUs, if there really is a $250 billion and upwards of potentially $400 billion TAM for GPUs over the next four years, five years, which is what we're hearing, I think there's a real shot Intel is going to get a piece of that business. And I know I'm a little too positive on Intel, I hear it sometimes from people. But people like to always tell me why they're right, and I like to mark this date, 3/29/2024, as when I told them I think they're wrong.

Patrick Moorhead: Wow, you left me a little oxygen. Let me take a little bit of a different angle. So first off, the claim was not that it was better performance with Gaudi2; it was that it was the best price performance, and it's 40% more. And when I stand back and say, "Hey, would I shift for 40%?" I probably wouldn't if I needed three years of different types of models, but if it's a steady-state workload, 40% is a ton. The one thing that got a little bit buried in the lead was that Intel Xeon was the only processor, or SOC, tested with, like you said, AMX extensions; think of AMX as a little accelerator that sits on the Xeon SOC. And I think that's a major accomplishment, in that we didn't see anything from AMD. Now, AMD does not have acceleration capability like AMX. It does have a massive FPU, and then a massive matrix engine that's leveraged by SSE2, but that's very different and less efficient for many workloads compared to AMX.

Dan, we have debated on this show that if only two people showed up for a gunfight, was there really a gunfight? And one thing I did appreciate from MLCommons, this is David Kanter, we've all been on briefing calls together, and he said, "Submitting to MLPerf is quite challenging and a real accomplishment. Due to the complex nature of ML workloads, each submitter must ensure that both their hardware and software stacks are capable, stable, and performant for running these types of ML workloads." And that message was directed at Dell, Fujitsu, NVIDIA, and Qualcomm, which submitted data center-focused power numbers, but those power numbers had to be run while you're running the ML inference out there.

So I think first of all, it’s good to acknowledge why others weren’t on there, but I still kind of question that if you only have two people show up for a certain benchmark, what’s the value of that? So, I mean we’ve already debated that I think on these MLCommons benchmarks, but I think it is a reflection of the difficulty of AI in totality. So Dan, let’s move to the next topic.

Daniel Newman: Can I say one thing?

Patrick Moorhead: Please.

Daniel Newman: I'm glad you called it out. I want to make sure I'm correct. When I said it, I said on par, not equal. But I said that it actually outperformed, and I believe that was A100s, and that against H100s it was near par. So I should say "near par," not "outperform." If I said outperform, I was wrong. I'm correcting myself.

Patrick Moorhead: That’s okay. We can do this. We’re slinging stuff pretty quickly. And by the way, we really don’t have any show notes aside from links. So we-

Daniel Newman: No, we know everything.

Patrick Moorhead: We have no producers that are giving us a script. We are literally riffing here, and we will make some mistakes. Hey, let's go into the next topic. It's not about chips, it's not about SaaS, but it's about Apple getting sued again and essentially the EU suing the company. So last podcast, and it's funny, I forget if it was a podcast or interviews that I've done or a tweet, I said, "Hey, here's what to expect next in the Apple saga because these things work like an orchestra here." I said, "Class action lawsuits will hit, and then investor lawsuits will hit, and then other regions will start suing, like Japan and Korea." And like clockwork, this week the US class action lawsuits came out that piggybacked on the US Department of Justice investigation, and the same week we had the EU essentially file a suit to investigate Apple for its lack of compliance with the DMA.

And I think a lot of this stems from their reaction to Epic. If you remember, Apple banned Epic's Sweden publishing arm from publishing applications on iOS. The EU said, "We're going to open up something against you for that," and then a day later, Apple reinstated Epic. But this is just a long line of bad things that are going to happen to Apple related to them using their monopoly power to stifle competition, raise prices, and limit innovation. So what's going to be the next step here? You are going to have Japan, Korea, Australia, who will eventually file suit or invoke their DMA-like policies. TBD on China. For China, it's probably going to be a negotiation, but I have to tell you, with Huawei taking market share away from Apple, I could very much imagine China going after Apple, and it becomes more like a nationalistic thing. But that, of course, is balanced against the million people that China has doing final assembly for iPhone at Foxconn in South China. So there'll be a lot of back and forth on this.

Daniel Newman: Yeah, there's a lot of push and pull there, Pat, right? I mean, there are some codependencies that nobody wants to really address. Now, again, we've probably created more dependency on China than China's created on us, but they sure do like their iPhones there, although it appears they liked them about 33% less last quarter than the quarter before that, which is very, very interesting.

Patrick Moorhead: It’s staggering.

Daniel Newman: Staggering.

Patrick Moorhead: Bloomberg reported that, right? That’s not official.

Daniel Newman: Actually, I think that was Counterpoint's data maybe that came out. It might be 30, it might even be a little higher. I think it was down 28% the first two months, and then the drop went up to 35 in the third month, and 33 net.

Patrick Moorhead: That’s ugly.

Daniel Newman: But don't quote me on that. You can quote me on that, but make sure you quote my quote that I'm not sure that I bought that quote. It's a lot. It's staggering. Look, this is the pile-on effect that we're getting right now. This is a pile-on effect. With Apple, it's like, "He's bleeding." It's like a player that's gone on tilt in poker, and everybody wants to come and take some money off them. Apple's got its App Store issues, it's got its DOJ issues, it's got China issues, it's got questionable interest in its newest product. It's got a lack of gen AI issue. I mean, are they really going to go to Google to solve their AI strategy? I mean, can you imagine what that would be indicative of? They've got a lack of growth problem. They just killed their car. I mean, it's really funny, but have you ever seen Apple reeling like this? I have not, in a long time, seen Apple reeling like this.

Patrick Moorhead: It has been a long time. They have reeled like this, but they have bounced back.

Daniel Newman: Yeah, back in the first time Jobs got his $100 million loan that saved the company. It’s been a while. I mean-

Patrick Moorhead: Well, even AntennaGate was the big one, where Steve Jobs notoriously got on stage and said, "You're holding it wrong." And they gave away free bumpers. And anyways, that was a-

Daniel Newman: Yeah, I mean, it's been imperfect. I think Steve would be rolling over in his grave right now, just even about the product and innovation. I think if you'd have told Steve, "You guys are going to spend a decade to build an XR product and this is what you're going to come up with, it's just this thing that looks…" You know what? I wore that thing. I'm pretty sure I wore something just like it when I was playing laser tag like 20 years ago. But anyways, I just think this is a pile-on. I think now, I always call the EU a taxing authority. They don't care about competition, they care about creating tax. Now, Apple did give a massive middle finger to the EU when it rolled out its response to the DMA and to the concessions it was supposed to make. It really didn't make them very well. But look, this is the classic bifurcation between consumers, who mostly love Apple, love their products, and love to use them and have very little to complain about, and competition, where you are trying to create more level playing fields, you're trying to allow innovators to innovate.

And so we talked a lot about the Apple suits. This is specific to Europe, but this is my opinion: it's a pile-on. I think you hit a good point. Once everyone sees them bleeding, if Europe wins and dollars start to come out, you can expect parts of Asia to follow. It's a little à la Qualcomm. When Qualcomm was at its worst, it was just kind of a "follow the sun, we're all getting a piece" situation. That seems to be where we're at with Apple right now. So they're going to have to keep their heads down. They're going to have to out-innovate and maybe try to piss a few less people off somewhere in the process.

Patrick Moorhead: Yeah, it's interesting. There's so much competition for talent in the Valley, and as Apple is trying to dig out of their generative AI hole here, you can imagine the competition for AI talent being fierce. And a lot of that is stock-based compensation. And if you look at the top AI numbers, Apple is just not doing well. Now at WWDC, they're going to come out with their uber strategy for that. I don't think that Google will be the on-device AI. They've published a paper on models, but Apple doesn't need to be first in many markets; they just need to do it better, and they need to do it better over time. I mean, look at Maps. Maps was a disaster. They didn't change the name, they just leaned into it. There were a couple of others; MobileMe was a disaster. They've cut very few products. I don't know. We will see. I mean, it took years for the decline of Microsoft, which started with the US government investigation into the company.

But hey, let’s move forward here. Databricks brought out a new model that, statistically from the benchmarks that are out there, looks pretty good, but is it enterprise worthy?

Daniel Newman: I think there are a couple of arguments you could make. Having an enterprise company that focuses on enterprise data building an enterprise LLM for enterprises, it could be pretty compelling. You have a bunch of companies that are building large models trained on a lot of openly available data, and you could argue this isn't like with the GPTs, and the Claudes, and the Grok, and these different models that have largely been trained on the open internet data.

This one's interesting though, Pat, so I'll start off talking a little bit about its numbers, which, by the way, are very compelling. The numbers are very good, and they did some comparisons with the LLaMA model, Mixtral, and Grok-1. It didn't talk so much about some of the more proprietary things like Google Gemini or OpenAI, but it focused on what most people consider the open, available models. It's got favorable results in three key categories. They call it programming, or HumanEval, math, as well as language understanding. And it actually outperformed in all of those categories. Now, again, I don't see comparisons to some of the others, but from where it starts, that's pretty compelling. And it does say in its post that it surpasses GPT-3.5 and is competitive with Gemini 1.0 Pro. So they did note that; they just didn't show the data in any sort of comparison because, well, those aren't fully open source, would be my guess.

Pat, interesting. Very opaque on training data here. Very opaque on where the data came from. So if you read the long form of what it is and how it was built, I think it said it was trained on 12 trillion tokens of text and code data. Okay, where'd it come from? It said it was, I believe, data that Databricks has access to, Pat, but I do have to ask: okay, 12 trillion tokens, carefully curated data, maximum context of 32,000 tokens. Is this basically being trained on Databricks' enterprise customer data in some anonymized fashion? Where else did the data come from? I can't fully pull that together. It said, "It used the full suite of Databricks tools, including Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management, MLflow for experiment tracking, and curriculum learning for pre-training." Interesting.

Remember the CTO of OpenAI being asked, "What did you train this on?" I am just kind of curious. Are you training this really, really good enterprise data model using a lot of enterprises' data? And is there some sort of use agreement that you have when you use Databricks that says they're allowed to do this, so long as they do what? Some type of anonymization? What are they doing with the data to make this work? I imagine they're also using some of the other data that other open source models are using, Pat.

I do think there's an interesting opportunity as we see this pivot from open models that are being built mostly on open internet training data to more proprietary models, to create better models that are going to be more business use case specific, industry specific. I do think that's where it's going. I actually don't think it's these mega large language models alone; I think it's going to be these big large language models coupled with very specific domain knowledge and data that are going to create useful insights. Pat, I mean, look, with the investments that have been made, Databricks is probably one of the most exciting IPOs yet to come. This is very interesting. I do think with Databricks plus Mosaic, they have the tools and the capabilities. How does this get adopted? How does this get used? Does this build popularity? I see it more as being inside the Databricks user community than being something that really drags them into this kind of broader OpenAI discussion. But I'll kick this over to you to see what you think.
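For readers who want to poke at the model itself, the DBRX weights were released openly. A minimal sketch of querying the instruct variant with the Hugging Face transformers library might look like the following, assuming the checkpoint is published as databricks/dbrx-instruct, that its license has been accepted on the Hub, and that you have enough GPU memory for a large mixture-of-experts model; the prompt and settings here are illustrative, not Databricks' recommended configuration:

```python
# Minimal sketch: querying DBRX-instruct via Hugging Face transformers.
# Assumes the checkpoint ID "databricks/dbrx-instruct" and that the model's
# license has been accepted on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "databricks/dbrx-instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # large MoE model; bf16 on multi-GPU assumed
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat-style prompt using the model's own chat template.
messages = [{"role": "user", "content": "Summarize last quarter's sales anomalies."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)

# Generate a short completion and strip the prompt tokens before decoding.
outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The harder question Daniel raises is less the API call than whether the data an enterprise routes into prompts like this, and the data the model was trained on, carry the provenance and licensing clarity that enterprises will demand.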

Patrick Moorhead: Yeah, so I'm going to do a Jon Fortt "on the other hand" style here and kind of debate this with myself. So the notion of having a data management company and a model together is compelling, okay? There's some efficiency there for sure, and that is what enterprises are looking for. But on the other hand, if you don't know where the data comes from, and if you're an enterprise and start using this model, you are likely going to get sued if the underlying training data was not licensed. And you had mentioned the Sora interview with the CTO who didn't know what the training data was, which I find an absolutely impossible scenario here.

Daniel Newman: I don't think she didn't know is the-

Patrick Moorhead: Right, she probably knew but didn't want to say. And anyways, it was just, gosh, the internet seized on that one. And the other thing is 75% of enterprise data is on-prem, and Databricks is a data management company in the public cloud. So I guess their TAM is the 25% of enterprise data that's up there. And oh, by the way, they compete even with Cloudera, who has extensions to AWS, and GCP, and Azure, and they also compete with Snowflake. In fact, I love to combine Snowflake and Databricks, and I've heard them referred to as SnowBricks out there. Here is kind of a middle stance, which says: let's say a company like IBM integrates DBRX in there, and then under NDA, they provide IBM the sources of data and how that data is pruned. And then IBM could make the decision to go in and indemnify people from lawsuits. So IBM brought Mistral in that doesn't have a… I don't know if they legally indemnify them, but that's some research that I'm actually doing now; I've got an inquiry into IBM. If nobody indemnifies the enterprise, it's a hard pass. They're not going to use this, folks.

So anyways, let's move into the next topic. I'm going to call my own number, and that is: is NVIDIA a monopoly? So through everything external that I've said about Apple, I had people say, "Pat, first of all, I hate you, but also Apple is not the monopolist. What about NVIDIA?" And I had somebody who showed me a matrix that said NVIDIA had 97.7% of the market, AMD had 1.8, and Intel had 0.6. Now, the poster never got back to me to tell me what market, what region, or what duration of time. But here's the point, folks: it's not illegal to have 97.7% market share. What is illegal is to use that monopolistic power to squash competition, raise prices, or reduce innovation. And as we talked about on this show, you have to have a well-defined market. What is the market that a company would be a monopolist in? And oh, by the way, even if you have 50% globally or 25% globally, if you have 75%, or actually 50%, the bar here in the United States, that could be the case.

So with NVIDIA, NVIDIA does a lot of things, right? They certainly don't have the AI monopoly, because AI goes across smartphones, PCs, and the data center. People could say, "Oh, they have a monopoly in data center training," which, by the way, statistically is correct, but remember, it's not against the law to have monopoly market share, right? You have to have the durability test too. The one that the Department of Justice used against Apple essentially said, "You have had this monopoly position for a decade," okay? And with NVIDIA, I don't know, has it been a decade? Probably more along the lines of five years. And then you have to show how they reduce competition.

Now with that said, out of the other side of my mouth, I just want to say that NVIDIA does need to be very careful to make sure it doesn't, let's say, restrict certain accelerator vendors' access to its software and APIs, right? And it's kind of like a similar Apple conversation, which, by the way, other posters about Apple said, "Pat, you're an idiot. The government can't force a company to open up its APIs to its competitors." Well, guess what? Newsflash: that's exactly what the US government did to Microsoft; it forced it to open up APIs to competitors. So when it comes to NIM, when it comes to CUDA: part of all the lawsuits against Intel, including AMD's, was that Intel was using CPU ID with ISVs so that only genuine Intel with the CPU ID would actually work, and for AMD, the software wouldn't work. So NVIDIA has got to watch it. To me, it doesn't pass the monopolist test, or the illegal monopolist test, at this point.

Daniel Newman: Yeah, you're always balancing the two. You're dealing with consumer harm and then you're dealing with competition. And those are always kind of the two different litmus tests for anti-competitive behavior. And then of course there's anti-competitive intent versus monopolistic market share; you can have monopolist market share but not be showing any anti-competitive behaviors. This is kind of interesting, because there really was no competition for a long time, because they've been basically doing this and building this. Jensen came out and basically said, "We create markets. We don't compete in markets." So when you create a market, you get enough size and scale. Like, they built the framework top to bottom. It really didn't exist. And so, for accelerated computing, there were some different efforts being made by different companies, but no one had really completed the stack, and that was where NVIDIA got a lot of its leverage early on.

I've said this for a long time, Pat, as we've sort of tried to debate this whole monopoly thing with different companies, whether it's been Apple, or Qualcomm, or others: should the FTC and these other regulators be looking at stopping monopolies before they become monopolies? And how much should they be getting involved in that? Is the burden of proof "you are already doing this"? Should the company have to commit the crime and then you fix it? Or, it was kind of like the Adobe-Figma deal: that really wasn't a monopoly, but they kind of stopped it because, well, it could become one.

Patrick Moorhead: It’s like PreCrime.

Daniel Newman: It's like PreCrime. So here I am, Tom Cruise, I'm sitting here, it's Tom Cruise, right? I'm better looking, of course, but I'm sitting here with the ball, I'm waiting, "Oh shoot, it's me." Jensen's going, "Oh crap, it's me." I do think directionally that is where it's going. I mean, look, the CUDA lock-in is a real thing. It's definitely being countered with some different layers of abstraction and frameworks. Everybody's trying to address it. There's not a lot of optionality for hardware. There are some different compilers out there that are enabling it, but the fact that it's very, very hard to move applications for accelerated compute from one piece of hardware to another could be seen as something that is somewhat monopolistic.

I think if anything, it's the systems approach. I think someone at the Broadcom Investor Day called it an AI mainframe; that's how they kind of described it. And they're getting to the point where it's almost impossible for anyone in the supply chain of NVIDIA to make money besides NVIDIA. And that's going to be an interesting inflection as that continues to proliferate through the market, right? They are uniquely outpacing every other company. No matter how much value is added throughout the supply chain, they do take up the majority of the BOM.

Again, that's not monopolistic in itself, but if there's no way for other people to build around it, this is where things like Ethernet come in, and the different efforts by different companies to build the networking part, which NVIDIA is trying to cram into the quote, "AI mainframe," along with the cabling, the box, the power, everything, as it becomes more consolidated. The consolidation of the hardware, added to the software, added to the immobility of the data and the applications, does create some level of concern that it could become a monopoly. What I say is: not a monopoly now. Could it become one? Yes. Is this going to be something we'll keep debating? Absolutely. I think we'll know a lot better in about a year, when we start to see some capacity, some indication of how much other hardware is being deployed in the market. Then we'll have a better idea if this portability of applications is real, and if in fact NVIDIA is on its way to a monopoly.

But to their credit, congratulations on winning 97.6% of the GPU market. There are a lot of competent companies that have not been able to develop competition in a meaningful and immediate fashion. It's not easy to do, and that's why they have this kind of power right now.

Patrick Moorhead: That's good, man. I don't know if it's 97.7 of the GPU market. It's probably 97.7-

Daniel Newman: I think it’s 96.

Patrick Moorhead: Of the data center. But anyway-

Daniel Newman: Yes, sorry.

Patrick Moorhead: Hey, ask your-

Daniel Newman: Data center GPUs. We’re not talking about gaming GPUs right now.

Patrick Moorhead: Hey, will you get your data practice going and start?

Daniel Newman: I have AI chipset data coming out at the end of May.

Patrick Moorhead: I’m so excited for that. And you know what? Even though I didn’t do it, I’m still going to talk about it.

Daniel Newman: Will you?

Patrick Moorhead: Were you okay with that?

Daniel Newman: Even if it’s… Well, yeah, of course.

Patrick Moorhead: Well, it’s got to be accurate. I need to-

Daniel Newman: Well, how do you know? I mean like, 100% of marketing data is bullshit.

Patrick Moorhead: But it's got to be pretty easy though. It's like anything above 90% for NVIDIA. But hey, let's move to our last topic here, and that is Lattice, a low-power and mid-range FPGA company. Did they jump into the AI PC? How? I mean, last time I checked, it was like 45 TOPS or something, or 11, that you needed to have to be in this market?

Daniel Newman: Yeah, so I believe it was Dell, Pat, one of the companies; there was a soft announcement. But there's some definite powerful connectivity, connective tissue, with Lattice SensAI and their low-power FPGAs to be able to offer some value in this AI PC space. And this past week, the company talked about on a blog, it announced on its page, that it's bolstering AI PC innovation with FPGA edge accelerators. And what are they doing on these edge accelerators? Well, there are a couple of things. I don't know if you've heard about this kind of SensAI, but one of the big problems with laptops is that an always-on, always-connected PC means always available for hackers to get data, so, security. So one of the things an FPGA can do is have a really good understanding of an environment and do things like knowing that I have now stepped away from my machine (I was going to stay away for a while just to make you nervous), and knowing that it needs to turn off the monitor and secure the machine so that someone can't visually hack or gain access to that machine. That's something an FPGA can do.

Another thing is, when you're not on the task or using it, it can adjust screen brightness in real time. So it knows you're not looking, or you're looking at a different screen, so it can turn down the screen, giving more power efficiency. And so, these are just a couple of the examples, but the idea is that an FPGA can also aggregate sensor data from a number of different sensors across a device and create greater efficiency and performance.
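To make that concrete, here is a purely hypothetical sketch of the kind of presence-and-attention policy a host could run on top of such a sensor hub. The SensorSample fields, thresholds, and function names are illustrative assumptions, not Lattice's actual SensAI interface:

```python
# Hypothetical sketch of a host-side policy driven by a low-power sensor hub
# (e.g., an FPGA aggregating camera, time-of-flight, and ambient-light data).
# All names, fields, and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    NONE = auto()
    DIM_DISPLAY = auto()      # user present but not looking: save power
    ENABLE_PRIVACY = auto()   # second face detected: narrow the viewing angle
    LOCK_SESSION = auto()     # user gone long enough: lock and blank the screen


@dataclass
class SensorSample:
    user_present: bool            # human detected in front of the device
    user_attentive: bool          # gaze roughly toward the screen
    onlooker_detected: bool       # additional face over the shoulder
    seconds_since_present: float  # time since the user was last detected


def decide(sample: SensorSample, lock_after_s: float = 30.0) -> Action:
    """Map one aggregated sensor reading to a display/security action."""
    if not sample.user_present:
        # User walked away: dim first, lock once the grace period expires.
        if sample.seconds_since_present >= lock_after_s:
            return Action.LOCK_SESSION
        return Action.DIM_DISPLAY
    if sample.onlooker_detected:
        return Action.ENABLE_PRIVACY
    if not sample.user_attentive:
        return Action.DIM_DISPLAY
    return Action.NONE


# Example: the user stepped away 45 seconds ago, so the session gets locked.
print(decide(SensorSample(False, False, False, 45.0)))  # Action.LOCK_SESSION
```

The reason to push this sensing loop down to a low-power device rather than the host CPU is the efficiency point being made here: the detection can stay always-on while the rest of the system sleeps or dims.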

So, Pat, it comes down to, one, having some level of flexibility. ASICs are great, for instance, we talk about those a lot, but there is a benefit to having programmability, and in the era of the AI PC there are lots of these sorts of security, connectivity, and experiential AI features that could benefit from Lattice's technology, Pat. And so Lattice has won some designs, and they seem to be setting themselves up well for this FPGA renaissance that's going to come along with this high volume. What did you call it, Pat? A super trend. What did you call it? Mega trend?

Patrick Moorhead: What I call it.

Daniel Newman: Hype cycle? Mega cycle? Super cycle? That was the word I was looking for. A super cycle of AI PCs that are coming. And with Lattice, it seems to be a great opportunity. Now, this is a company that sells to Dell, sells to Lenovo, works with LG and, I believe, Google. So a number of different device makers are already partners. I'm not saying they're all going to be using this straight away, but it seems to me that Lattice is finding a strong way to get the FPGA tied into the AI PC experience. And by the way, bonus topic: Seven Five is back.

Patrick Moorhead: I know. I know. So I'll call it situational awareness, which is the shoulder surfing. And it's not just are you there or not, because you could put a cheap sensor to do that; the question is, is there a human being behind you, which can turn on certain functions for security. It can take you to the login screen; it can initiate some novel screen technologies that only allow you to see the content while anybody off to the sides cannot. It also can invoke energy-saving techniques, which means when you walk away, you go to the bathroom, your screen dims. And there are a lot of ways you can do that. You can do that with an ASIC, you can do that with an FPGA, you can do that with a microcontroller. An FPGA is a nice balance because it's programmable if you change your mind. And of course there's also time to market.

And then you put the software stack on there, the lower level, and then mirror metrics, and you have a turnkey solution. And you rattled off the list of vendors that are currently supported. It's funny, Dell, Google, Lenovo, and LG; you might be like, "Well, wait a second, Google does PCs?" No, they do Chromebooks. You're pretty much running the table with the exception of HP. And Apple does their own thing, and they would probably never buy something like this; they'd probably do it themselves. But yeah, it kind of went under the radar during CES, but Lenovo Latitudes and a detachable AI PC.

So I'm going to call this here: Lattice is in the AI PC game. And while processor manufacturers are struggling a little bit as the market has dipped, because three years ago, during the P... I'm not going to say it, I don't want to be filtered... that demand went down. But I think I called it on stage at CES: the super cycle is going to begin, it's going to start in the middle of this year, and then for the next two years it's gradually going to get larger, to where you could have 75% to 80% of all notebooks being AI PCs, and half of desktops. So the PC market is going to cook in the future, for sure, and let's add in the Windows 10 support that goes away very soon. Those enterprises are going to have to move to Windows 11, and very rarely do they just do an OS upgrade; they do a complete system purchase.

So Dan, we made it on the Seven Five. It's been a great show. It's interesting, our viewership is down considerably on Twitter. Right now, I was popping off a couple of emails when you and I were on here, when you were talking and I was not paying attention, and I got "out of office" for almost everybody. Now it makes sense why I am expecting to have a bunch of people cancel meetings.

Daniel Newman: Yeah, I mean, it’s kind of a long weekend. I don’t know what else to really suggest there, Pat, but it seems that this is a Friday to Monday off kind of thing. My kids are off school-

Patrick Moorhead: Today? Today they are?

Daniel Newman: They’re off school today.

Patrick Moorhead: Oh, wow.

Daniel Newman: And we have a lot of people that are going to take Monday off, and yeah, I mean, look, we know it's not because of the quality; the show's amazing, right? I mean, just ask all our friends out there. I'm kidding. But we do appreciate everybody being out there. But yeah, I think last week we had close to 2,000, and 400 or 500 concurrent, and this week it was like 10% of that.

Patrick Moorhead: Yeah, and by the way, anybody who is questioning the screenshot versus what I put on Twitter: that's an aggregation of X, YouTube, and Facebook concurrently, so obviously, if you're on X and you see the screenshot, that's why there's a discrepancy there. So anyways, we love you all, thank you so much for joining us. This is not a fake background; the fog has actually burned off. I also realized that I'm getting a little bit darker, even though I've got this light bar right in front of my face. But thanks for tuning in, we appreciate you, and give your family a big hug this weekend. Love you, bestie. Take care.

Daniel Newman: See you all.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and on hundreds of other sites around the world.

A 7x best-selling author, most recently of "Human/Machine," Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
