On this episode of The View from Davos, The Futurum Group’s Daniel Newman is joined by IBM’s Chairman and CEO, Arvind Krishna, for a conversation on how the topic of ethical AI is being discussed at Davos, the role of government in AI, and the future of automation:
Their discussion covers:
- What IBM is focused on and discussing at the World Economic Forum
- The role of government in the ethical deployment of AI and IBM’s approach to governance in their solutions
- How AI can be leveraged towards increased sustainability
- What opportunities AI brings to the future of automation
Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Transcript:
Daniel Newman: Hi, everyone. Welcome to A View from Davos. I’m Daniel Newman, CEO of the Futurum Group. Excited for my conversation today. I have Arvind Krishna, chairman and CEO of IBM, sitting beside me. Arvind, thanks for making some time.
Arvind Krishna: Daniel, it’s great to be here and talk to you about all that’s going on here in Davos.
Daniel Newman: It is great to be here. It’s my first time here, but for IBM it’s the 50th or so time. You told me in the green room when we were talking that IBM is in fact one of the founding members.
Arvind Krishna: We are. Thomas Watson, Jr. joined up with Professor Schwab in helping WEF get going.
Daniel Newman: Yeah, and as you walk the streets and you see all the activations and the layers of this conference, you’ve seen how far it’s come and how important it’s become to the economy, to the discussion. Of course, it is the World Economic Forum, but the discussions that take place here are really important. And one of the super important themes of this year probably doesn’t surprise anybody, Arvind. It is something that actually has been really important to you for more than a few years now, and that’s AI. And I want to take some time here to talk to you about that, because you’ve been doing this a long time. You came out with a big bet at IBM and said, “Hybrid cloud AI, this will be our focus.” You’ve rejuvenated, reinvigorated, in fact, we’ve had you on the show a few times. You joined us on the Six Five, and here you are. Big progress has been made. But this conference, it’s all about trusted AI. Talk a little bit about this theme that you’re really introducing and focusing on at this week’s World Economic Forum.
Arvind Krishna: Yeah. So look, Daniel, so in the context of the World Economic Forum, you mentioned the multiple layers, but there is a layer of business. And there’s a layer of businesses talking to business and business also talking to the government. Now, with AI, if you just step back, why is there so much excitement? You talked about a theme of the conference. You walk down the promenade, which is where all the governments and businesses have their, I call it storefronts, they’re really fronts where they talk. At least half if not more have the word AI in it now. Why is there so much excitement? I think our estimate is a little over $4 trillion in global productivity per year by 2030. You think about how many things could create $4 trillion worth of productivity or GDP increase, and I can’t think of very many. So that’s why it’s so important.
So then you say everyone wants to play a role in it. Businesses want to take advantage of it to grow revenue or to increase the bottom line. Governments want to advantage their economies, create employment, or create new businesses. So that’s exciting. Now, as in anything that is that significant, if you can’t trust it, then you get into all the questions. Should a few people run away with it, or do only bad people work on it? Or do people who want to do bad things work on it? And so I think the word trust is really, really important. And so as you get into trust, we have to then get into, what are we going to do about it? How do we make sure that we build AI the right way? How do we make sure it remains open and accessible to everybody, not just the few? And how do you hold people accountable for what they discover here? That is why we are so excited to be here and bring the point of view into the conference.
Daniel Newman: Yeah, it’s a really important point of view. And our intelligence too, Arvind, shows that the investment this year … Last year, I sometimes say ’23 was a bit of the year of the GPU. It was the year of the investment in infrastructure. And our data says that ’24 is going to be the year of implementation, where all that investment is going to start to turn to spend. We’re seeing a three times increase in companies spending multiple millions on implementation. So we’re seeing the big spend. And so getting it right ethically, getting the responsible part right, is important. What about the ramifications, though, of doing ethical AI right? What happens if we don’t get it right?
Arvind Krishna: Look, you just touched on the deployment piece. Most technologies go through this phase. The first phase is scientific curiosity. Then you follow it by invention and innovation, where a few, not very many, begin to use it, and that gets everyone’s imagination going. Then you get into the long deployment. But deployment is where you need a lot more people using it. Now, these people aren’t the inventors, but they are wanting to use it. So if you’re not the inventor and you don’t want to go that deep into it, but you do want to take advantage of it for your business, if you get it wrong from the perspective of ethics, you don’t want an AI that’s going to be biased towards a particular ethnicity or a particular gender or against a particular gender, or maybe doesn’t like people with our names but likes every other name.
Because AI learns from the underlying data. So how do you make sure that it is fair? I’ll use that word fair. It has also got to be equitable. Meaning, how do you get access to it to every industry and small and large, and to every economy, whether you’re in the advanced economy or you’re in the Global South? So all of these implications are there because I think humans want things that are equitable. And if it’s not equitable, I think you tend to get a lot of pushback against not just technologies, but anything that we do that is that significant. And so it’s important to get it right. Otherwise I think we’ll face a bow wave of resentment.
Daniel Newman: Yeah, and I also think there are some layers and levels of importance about the enterprises and the builders, companies like IBM, getting it right. Companies, your peers in the industry, whether it’s the hyperscale cloud providers, the software providers, the ones that you partner with and compete against. And the reason is because of the speed. The speed at which you move is very fast. Now, you were one of the first companies with generally available generative AI solutions, and then one of the first, maybe the first, to come with governance.
And the reason I point that out is because the government of course, and policymakers, want to have a role. We see, and I think you’ve been to Capitol Hill, many of the executives from your peer companies have been to Capitol Hill, and they’re asking a lot of questions. But the government, even with whether it’s been social media and search, has taken a long time to keep up. So enterprises have to regulate themselves. But what about government? Of course Davos is full of policy makers. What’s the role that government plays in making sure that some of what you talked about, equitable, available, fair, responsible, comes to fruition?
Arvind Krishna: Look, this is where we are going to need a lot of nuance and balance. First, let me be up front. Both my company, IBM, and I endorse that the government has a role to play as AI gets deployed. I wouldn’t say every one of my tech peers feels that way, but we feel very strongly about it. Now the question is, what is that role? We are still in the early stages of this technology. So to say that we all can comprehend exactly how it’ll play out, where the technology will go, what all it can do, what its limitations are, I think we should be a little bit careful. We don’t know all of it. So that said, I think there should be regulation, but it’s probably a little bit lighter touch now, based more perhaps on the use case and the risk. Nobody should come around and try to regulate an actual algorithm, because you want innovation to keep happening.
I’ll talk about it maybe as an analogy to an industry, Daniel, I know you know very well, the semiconductor industry. Nobody there regulates how many angstroms or how many nanometers or what optics. They stay away from the technology. What they do regulate is a use case, meaning you cannot sell it for military purposes to certain countries. You should use it and get enough of a supply for automobiles. So you regulate the use case, but not the actual technology itself, if that makes sense. And we are very strong proponents that given this, where we are, that is what we should do.
Daniel Newman: Yeah, and I fear the overstepping, because we’ve seen it happen too often. Now, I agree with you and I love the example of the semiconductors. Of course we’re seeing, because AI is the next frontier for national defense, global technology leadership, and of course the economy, right? You mentioned $4 trillion at stake, and that’s probably just the beginning. It’s going to continue to grow. So we do have to manage it, but at the same time, we have to make sure we allow the enterprises that are building it to continue. It’s our economic and technological leadership at stake if they slow us down. So we can’t sit idly by, Arvind, and let it, but at the same time they’re going to have their say and they have to have their say. So where do you think they can help the most?
Arvind Krishna: Look, from the conversations I’ve had with different senators, congressmen, and the White House administration, I think people are trying to walk this fine line. They all talk about, “Look, we want the innovation to happen,” because looking at the examples of the internet and smartphones, people see and say, “We want the United States to be economically advantaged by leveraging these technologies.” So they at least do comprehend that. That said, there is a worry about misinformation, disinformation, misuse in elections, and I think that those are real fears. So when you try to balance those, I think that becomes the question. But that is why we are observing. They’re taking a bit of time, maybe, I don’t believe they’ll take five or 10 years, but they’re taking a few months to get this thing right. And I’m actually, I wouldn’t say we can guarantee, but I’m confident that we’ll end up with the right balance here between regulation and guardrails.
Daniel Newman: Arvind, another topic that is front of mind here at Davos is always sustainability. Now, we’ve seen it ebb and flow from the standpoint of how front and center it’s been, especially in corporate messaging. We saw it really rise over the last few years. I feel like IBM, and you’ve always had a very pragmatic approach, you’ve had a very pragmatic approach about diversity, inclusion. We’ve talked about that, you’ve had a very pragmatic approach about sustainability. AI should serve as an enabler, it should serve as an insight provider. How do you see the two coming together, sustainability and AI, as this seems to be the perfect setting for that conversation?
Arvind Krishna: Yeah. So given maybe the lack of any climate change behind us that we can see, a lot of snow.
Daniel Newman: Lot of snow here, a lot of snow.
Arvind Krishna: So when we just think about AI, there’s three use cases I’m really excited by even before we get to sustainability. I’m really excited about AI in customer service. I’m excited about AI in coding, as in programming languages, and about AI to help augment human labor with digital labor. So those are, I think, really big use cases. And just to touch on coding, if you can make every developer more productive, imagine what it can do for the outcome of your corporation if you are using those developers to either construct code for yourself or in turn for your clients. So just think about that as a huge, huge aid. 20%, 30% more is within the realm.
Now let’s go to sustainability. In sustainability, while we would love to get to a decarbonized society, I do think that’s multiple decades away. But can we use AI to make ourselves much more efficient, to get ourselves much more circular on the path to that decarbonized economy? So I believe that by watching what is going on in consumption of energy, is there 20%, 30% better use of the energy you’re already using? Can AI be used to turn the heat down at night? Can it be used to turn the AC up at night? Can we use it to turn lights off? Can we get trucks and containers better utilized? Don’t let the 30% of empty miles keep going. So all of those things will play into AI and sustainability.
I’ll give you a great example that we did in the United Arab Emirates, the UAE. We used AI to look at satellite imagery and drone imagery and come out and predict what are the urban heat spots in Abu Dhabi. If you know that’s going to be an urban heat spot, you can now begin to say, “I need to tone down how much energy I’m consuming there so that the heat signature begins to go down.” And two or three degrees can make a big difference to human life over there.
Daniel Newman: Yeah, absolutely. I like how you emphasized a sub-theme here, Arvind: practical, implementable, measurable. That is where I think AI is going to really help. I think the overall goal as a society is, yeah, turn off your lights. But the thing is, AI can help you know exactly when and exactly when not.
Arvind Krishna: And do it for you.
Daniel Newman: And do it for you. Automation, workflow. And by the way, automation’s been a very top of mind and focus point of IBM for a long time, IBM automation, and of course that really applies closely to AI. So you started to allude to my next question, so I’m going to go there now. You talked about digital labor, you talked about some upskilling. Now, I think you and I remember, when generative AI first hit, there were a lot of question marks around, “Well, what does this mean for people? Is this going to displace, replace, augment, upskill?” But I think it’s going to do a little of all of it.
You look back at every industrial revolution, and more jobs come out of it than before. In the beginning everyone goes, “Oh no, the assembly line is going to replace all the auto workers.” And then you realize, yeah, but then we increased manufacturing by 10 and 100 and 1,000 times. So what does this one mean? There are certain skills, if it can write for you, if it can draw for you, if it can create PowerPoints for you, and these are just some of the knowledge worker skills, what happens now?
Arvind Krishna: So I’m going to draw a long arc, but in a very quick minute or two, and then answer your question exactly. You used the auto analogy. So before the assembly line got invented, you were making 5,000, 10,000 automobiles a year, because it was a handcrafted art. That meant the price was so high that only the rich could afford to buy it. So the assembly line did two things. One, it began to lower the price so it became affordable to everybody. But then production, just for the very first one, the Ford Model T went from that 10,000 to millions. When you go to millions, you need way more people on the assembly line than the bespoke way of doing it before. That’s the internet. Who imagined we would need 5 million web designers back in 1995? But we do. So you create more opportunity.
AI is going to do exactly the same. So in AI, I look at it this way. Will there be job displacement? Tiny, and I’ll come back to that. The biggest thing it’s going to do is at least half, if not everybody, will use AI in their work. That doesn’t mean they need to invent it, but they need to know how to use it. But let’s go back to 1990. Very few people thought they would use computers all the time. You don’t even think about it now. Every school kid from middle school knows how to use a computer. Smartphones, in 2007 who thought that everybody uses a smartphone? We do today. That’s the level of upskilling we need in AI. Learn how to use it, know its limitations, learn how to make yourself more productive. If a programmer has something sitting over their shoulder saying, “Hey, I can type the next five lines for you,” great.
It can type it, the programmer can eyeball it. You get two, three times the speed in that piece of it. If somebody’s trying to compose an email response back in customer service, if the AI says, “Oh, let me compose that for you,” then you can spend your time making sure that it’s expressing the sentiment and the accuracy of what you want to tell them. So it’s all about that. I think at the end of it, just like the others, we are going to end up requiring more people, not less. Why? Simple economics. If something is more productive, by definition you’re lowering the cost. If you’re lowering the cost, you can gain market share. If you gain market share, you’re going to actually be able to provide more of that to everybody, which means, contrary to where you began, you actually need more people to provide that more, be it a service or be it a product.
Daniel Newman: And there’s more consumption. There’s still significant parts of the world’s population that don’t have smartphones or don’t … You know what I mean?
Arvind Krishna: Yes.
Daniel Newman: So there’s still a lot of the economy too.
Arvind Krishna: Global market can expand, not just the local.
Daniel Newman: Yeah. I always say that when new products come out, I say, “Well, if it can expand the overall TAM, companies can all grow. And yes, market share will be a fight, but you know what? You could still grow double digits and someone else grows more double digits, and you’re both doing well.” So all right, just one last question. We didn’t touch on this, but in the theme of responsible, ethical trust, cybersecurity. IBM is a really robust cybersecurity business, but sometimes in the scale of all your businesses, it doesn’t come front and center. Are we going to hear more? Is AI going to create more opportunities for the good, the bad? Or is it just going to be that continued cat and mouse game that we’ve seen over history?
Arvind Krishna: Look, I think that it’s going to be a bit of both. Let me give an example. I think we see like a trillion events a day on the things that we monitor on behalf of ourselves and our clients. A trillion. If you don’t use AI to triage it down and to say, “Here are the few hundred that you need to look at,” it’s an impossible task. So AI is already being used to help on cyber. Now, the bad guys are always going to try and come. Why? This goes back to that old Willie Sutton statement. “Why do you rob banks?” “That’s where the money is.” So today, if you look at corporations, where is their gold today? It’s in their internal data. So you’re going to expect the bad guys to come after your data. That’s not going to stop. As you improve your AI, you’re going to get better and better defenses against it. And if you are the harder one to come into, they’re going to go after somebody else hopefully and not you.
Daniel Newman: Arvind Krishna, chairman and CEO of IBM. Thanks for joining me for A View From Davos.
Arvind Krishna: Great being here with you, Daniel.
Daniel Newman: Thank you.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.