Musk's Call to End AI

The Six Five team discusses Elon Musk's call to end AI.

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.


Patrick Moorhead: Elon Musk, a user of AI, calling for an end or a pause to AI. What the heck is going on here? Is this protectionism? Is this crazy Elon? What’s happening?

Daniel Newman: I think there's a little bit of all that, Pat. I think we should probably take a quick step back and just say the AI advancements in the last four to twelve weeks have been unprecedented. If you could go back to just before Sam Altman brought ChatGPT out to market with a little more availability to the broader user audience, AI was advancing at a pace where we were seeing things happen. It was the way we would render our video games. It would be maybe some natural language processing, or Google's email starting to fill in some sentences for us. You could talk to your Alexa and maybe it was a little bit smarter than the average bear. I don't think Siri ever did anything that really reflected AI – I'm kidding, I'm kidding. Calm down, all you Apple people. We were seeing smart vehicles, Elon Musk's included, using AI and its technology to make vehicles safer and more efficient, and a lot of other things.

But what we hadn't seen is this generative thing, and this generative thing just came on so fast. All of a sudden we went from basically Google search being the preeminent way to discover, to having a new way using something like Bing and generative AI that completely disrupted the entire business model of search. And then you had generative AI added to Dynamics 365, and then added to Teams, and then added to Office, where it's writing our articles, it's generating press releases, it's creating PowerPoint presentations.

And by the way, it's also opened a whole new avenue for the black hats to figure out ways to break these large language models. You've got some other side stories developing. You've got sustainability issues with all the additional compute capacity that's going to be required to run these models at any scale. And then you've got an upskilling issue when you have millions of white collar jobs where you could say, "Yeah, it's a co-pilot," but it does 99% of the work for you.

So just a quick backstory. What's going on with Elon Musk, I think, is a little bit of a who-should-control-the-narrative thing, and a little bit of what do we do when there is zero regulation. So Pat, we talked about the automobile. With the automobile, there's the National Highway Traffic Safety Administration that will actually prevent a company from putting driverless vehicles on the road without very strict guardrails for how these things go out in testing. With this stuff, it just gets launched. There's no way to prevent it. There's no stop. There's no, "Hey, what if it's wrong? What if it's usefully wrong? What if it's really just straight up wrong?" We don't know.

And like I said, as companies, we've already seen it with Musk himself at Twitter, basically saying, "I can cut 75% of my staff and still run a business at near the same efficiency." Well, what happens when all the work of those people who are creating your PowerPoints, writing your copy and content, designing your graphics, building out your websites, creating e-commerce experiences, doing customer service, has been completely automated? By the way, if there was a book called Human Machine, I would've written it about this very topic.

So I think what's going on is we're saying, "Hey, this is going at an unprecedented pace. We need to look at the risks. We need to look at the ethics. We need to look at social responsibility. We need to look at the sustainability impacts." And we also need to look at the potential black hat issues, the risk factors. I think Musk shared a joke on Twitter yesterday about all the advanced technologists building AI in order to figure out if there is indeed a God, with the punchline at the end being, "You do now" – we've built this machine that basically supersedes humanity. Is that true? Is it going to take over humanity? I don't know.

I actually shared a graphic on this, Pat, on Twitter that I found, showing that anywhere from about 10 to 25% of the broader population of the world thinks that AI will eventually take over the human race. We've seen I, Robot. I know we've seen these movies. Is that stuff real? I think that's the most extreme to the right, but just to the left is how fast we're moving, how much it impacts jobs, how much capacity we actually have to implement this technology, the sustainability issues. These are all things that nobody is talking about at any real level. And of course the big tech companies are going to win big by getting this out to market first, companies will immediately become more productive, blah, blah, blah. I've said a lot. I could run on this topic for another 20 minutes. I'm going to pass it to you because I'd love to get your-

Patrick Moorhead: I mean, you could just form your own podcast called the Five.

Daniel Newman: No, you do have one about enterprise data center and I think it’s called the Seven Six.

Patrick Moorhead: Yeah. I can't even swing a cat without hitting one of your 17 branded podcasts. Anyways, no, let's move forward. So my first reaction to all the spooky, spooky technology-is-going-to-destroy-the-world stuff always gets me. You know I like to read books, typically history books, and if you look over the last 150 years, whether it was the wheat thresher, the radio, the TV, or the car, it was going to destroy society. And so far it hasn't. We keep bouncing back. But I don't want to be lazy here and file it under that. I went in and actually read some of the material cited by the Future of Life Institute that Elon Musk is part of and on the board of, and that's where this letter came from. And it posed some, I thought, pretty good questions. Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? You hit on that one. Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?

Now here's where it gets to the punchline. It says, "Such decisions must not be delegated to unelected tech leaders." By the way, I agree with that. But I also think there are a lot of people who question their trust in administrators and government, so I don't know if a pause… I mean, I don't know what a six-month pause would do. Who would step up and fill that? Do we trust our federal, state, and local governments to do the right thing? What I've seen is that unfortunately our country is in a politicization nightmare where the government is making a lot of decisions based on politics, not necessarily what is good for the majority of citizens. Dan, what do you think is going to happen when we pause for six months?

Daniel Newman: China.

Patrick Moorhead: Exactly.

Daniel Newman: China.

Patrick Moorhead: Exactly.

Daniel Newman: It’s not a universal pause, is it? It’s not going to happen like that?

Patrick Moorhead: Right. I mean, that's just not going to happen. So I would hate – well, let's say clearly – in applied AI, China is better than the U.S. I think its ability to do video surveillance is next-level – I mean, they're the leaders at that. Some people would say that England is the best at that, with all the cameras in the city centers based on the troubles that country had back in the '70s. But TikTok – I think it is very well understood that TikTok has the best algorithm. So the Chinese are not the best at AI hardware, but I believe they're the best at the application. So I would hate for there to be a pause that could lead to either uncompetitiveness or some challenge to our defense. Now, out of the other side of their mouth, they'll say there might be a pause, but there's no way DARPA or any defense-related company is going to pause.

Daniel Newman: It’s not universal, Pat.

Patrick Moorhead: Exactly.

Daniel Newman: No, I was going to ask you though, because one of the things I was thinking about – besides the fact, obviously, that Musk has a lot of interest in China; it is a huge market for him – is the sales restrictions on all the advanced chips. Would China, in your opinion, be slowed down? Because I'm thinking, I know they'll gray-market stuff, but they won't be able to get everything as quickly right now because of all those limitations on some of these specific advanced chips.

Patrick Moorhead: It’s funny though. I mean, it’s not the H100 but it’s the H800. And guess what? You find the right way to network those together in a big enough data center and you can get the same type of performance in aggregate by stringing more of them together. So that was a clever, clever move by NVIDIA, but I highly doubt that China is having a problem creating these generative AI models.

Daniel Newman: I tend to agree, but I just felt it was appropriate to say, hey, I guess maybe if we don’t give them ASML, eventually they’ll run out.

Patrick Moorhead: Yeah, that's going to be an issue. They're at seven nanometers right now with SMIC, and it's not using EUV. So they have the right geometry, they just don't have the ability to do – they have to double-pattern to get there, which is less efficient and more expensive.

Daniel Newman: You’re going nerdy.

Patrick Moorhead: I know. So nerdy. Get me going.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC, Bloomberg, the Wall Street Journal, and hundreds of other outlets around the world.

A 7x best-selling author, his most recent book is “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

