IBM Think

The Six Five team discusses IBM Think.

If you are interested in watching the full episode, you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.


Patrick Moorhead: So many announcements, so much to talk about, and I’m going to do it in three minutes. So a little bit of backstory. When Arvind came in, the leadership statement of the day was that we’re going to lead in hybrid cloud and AI; and I totally got the cloud part because, quite frankly, the company had bought Red Hat, right? It makes total sense.

They have a hybrid multi-cloud stack that connects with the AWSs, the GCPs… But I was always wondering about the AI. Was that a bolt-on? I got quantum, but AI was like, “Oh, are you talking about the old Watson you brought out a decade ago? I don’t get it.” Well, this event was all about the AI part. It was really the coming-out party for generative AI and foundation models. Not that the company hadn’t talked about foundation models before. Not that the company hadn’t actually opened up an entire data center that was optimized for AI. But this is really what it was all about. And if I step back, one of the big messages for me was that IBM offers a full stack for AI to clients, all the way from the applications at the top, all the way through to the actual silicon itself and everything in between, wrapped around with services that can help clients; if IBM wants to lead them to the water, it can do that as well.

So on the whole, some of the more important elements of their strategy, aside from obviously delivering real value to clients and their clients’ clients, are that it’s multi-cloud, multi-model and open-model. Those are the three characteristics. And I’m getting to the… I still have to do a lot of work on really understanding why this needs to be hybrid, because there are companies like AWS who have the end-to-end. Now, AWS is not putting these capabilities on Outposts on-prem yet, so I think you might argue that that’s not possible. So the company came out with watsonx, which is foundation models and generative AI; it has a studio, a data store and a governance toolkit. Watson infusions came in as well for code, AIOps, digital labor, security and sustainability, which are the key areas, like the partnership with Hugging Face. Who isn’t partnering with Hugging Face? I think everybody’s partnering with Hugging Face.

Daniel Newman: Hugging Face, what a dumb name. Did I just say that?

Patrick Moorhead: You can’t forget it. I had a group chat somewhere along the line where someone said that to me too. I was in a chat and they’re like, “So-and-so wants an introduction to so-and-so at Hugging Face.” And then the next person was like, “God, what a dumb name.” But at the same time, maybe that’s it. You can’t forget it.

Daniel Newman: Maybe both you and I are too old to appreciate this. Can you imagine? Maybe that’s something in your 20s or your 30s. I mean, I get what it is. Hugging Face, emoji, yada yada. But anyways, I think it’s a dumb name.

Patrick Moorhead: So back to IBM Think. So in the future, here’s what I’m going to be looking at, here’s what I’m going to be analyzing. First of all, speed. Okay, I would say that on the whole, IBM doesn’t stand for speed; it’s about safety and trust. Can IBM amp up the speed of this to move forward? Because that is important. But I do think that IBM won’t cross the line on trust. We did hear about the parallelization between research and products when we were talking to Dario and Kareem; I thought that was interesting. The second thing I’m going to be looking for is how it actually operates with the data layer.

You have watsonx.ai and watsonx.data, and my question is, are they… I never saw one AWS, GCP or Azure logo that actually showed that this solution was multi-cloud. I know I heard the word multi-cloud, but I just haven’t seen it yet. In the past, IBM has been a little bit afraid to show any of those logos, and then I saw them starting to be infused in. I didn’t see a single one of those multi-cloud logos at the show, aside from, obviously, sponsorships, and not in slideware that made it absolutely clear that this company is supporting multi-cloud. But again, a lot of research to do. Good time. We spent three and a half days there; good show.

Daniel Newman: It was a big moment, Pat, for IBM. I’ve been writing endlessly about the convergence of enterprise AI. And over the last two years, I think there’s been a lot of interest in AI in general, Pat. And generative AI has actually been something that we’ve been experiencing for some time; I don’t think a lot of people realize that. But when Google’s finishing your sentences in Google Workspace, that is a version of generative AI. The ability for multi-turn conversation that we’ve been having with Amazon, with our devices, is still in its early days. But we’ve been seeing large language models being deployed, whether it’s Jarvis from NVIDIA, that model they developed some time ago. But the truth is that a lot of the real value of generative AI is unlocked in enterprise data. The enterprise data that we hold, that’s in our systems of record, that’s in our CRM, ERP and HCM solutions, within our supply chains, ambient data that exists within our ecosystems; it’s video data, it’s customer interaction data, CX data.

Companies want to build workflows. And with this onset of generative AI that’s taken place in the last six months, companies want to build sophisticated, proprietary generative AI capabilities where they can add value to the products and services they’re bringing to market as well as deliver better customer and employee experiences within their organizations. IBM is basically standing up and saying, “We want to be to enterprise AI what ChatGPT or Google Bard or Facebook’s Llama, any of these sorts of large language models, has been to consumer AI.” And when I say consumer, I just mean user interactions with the open internet, okay? That’s basically what popularized generative AI. It’s the user interaction with search. It’s the user interaction with a chatbot that feels very human, or a little human, depending on which one you’re working with.

The bottom line is this: companies have to have the data to train, they have to have the system and fabric to build this on, they have to have the applications to deploy this, and they need some consulting in order to actually figure out how to build these workflows out. It’s not as easy as… I know we’ve heard things about speech-to-code, and yes, this is happening. The role of the developer in the future is going to change. The role of the data scientist in the future is going to change, because these are models that have the capability to be continually reinforced and to learn. And with a company like IBM offering their foundation models, which are basically validated models for different kinds of things, digital labor, IT observability, they have the potential to basically say you can plug right into this and then put a layer of your own proprietary data on top of it. A much smaller subset of data.

It can cost millions of dollars to train a large language model. So you can take a data set that might be 10% of the size of the data set traditionally required to train a large language model, and then you can deploy it on IBM watsonx, and you could therefore implement meaningful generative AI capabilities into your business at a much lower cost, with the architectural support of a hybrid fabric that is Red Hat. And then you can take that all the way to utilization. So that’s both my assessment and my question mark. The assessment is that IBM offers the toolbox. It’s literally the toolbox; it’s the actual data layer and fabric. And then, interestingly, Pat, you didn’t mention this much, but there’s the governance, and the governance is really an important thing because we’re deploying this so fast. We went from zero to a million in six months, and we actually don’t have very good policy frameworks and regulation around how we’re going to allow this to continue to proliferate into society.

So that’s super interesting. The watsonx governance piece is going to come later this year. This is not an entire framework for all governing of AI; it’s within the work you’re going to do with watsonx. It’s, hey, how do we make sure our model doesn’t drift and change and become something we don’t want it to be as new data is introduced to it? Governance is going to be able to help with that. So lots going on, Pat. And I think you hit the big one; this is the home run comment you made, and I don’t like agreeing with you because I want everyone to think I’m smarter. But in all seriousness, it’s speed.

Patrick Moorhead: But you know you still do.

Daniel Newman: It’s speed. Speed is the question mark. I think we both have… we were in many executive meetings, one-on-one conversations with the… And I kind of just kept saying, how fast can you get traction? The hyperscale cloud providers are the biggest threat. And while IBM certainly has partnerships with all of them, you can absolutely be certain that Microsoft is not going to limit its stuff to consumer or search. They’re already embedding it into plenty of applications. They’re going to make OpenAI trainable; you can train right on top of it with your own data. How does this, versus maybe using a foundation model plus your own smaller data set, which one develops and delivers better outcomes? The other thing is you’re going to see tons of stacking, and with Auto-GPT, you’re going to see models stacked. You’re going to use Watson plus OpenAI plus Bard plus Llama, because you’re going to take the best, just like we’ve seen with cloud, Pat.

And just like we’ve seen, you’re going to take the best of all these different models, and you’re going to start gluing them together, and you’re going to start inferencing against more and more and more, which is going to drive tons of compute, tons of interest and tons of excitement. So listen, I’m stoked. Let’s go, gen AI. Pat, in two weeks, three weeks, three years, maybe you and I can be having the Pat & Dan Show while we’re still in bed getting some sleep, some rest in by the poolside. That’s what I’m hoping for, because boy, we are moving quick.

Patrick Moorhead: That would be great. Based on some technologies from Google that we’re going to talk about a little bit later, there would be a sign that says machine-generated. And I wonder if that’s what people want, or do they want the real Pat and Dan? We will see, because I’m looking forward to trying out these technologies.

Daniel Newman: Hey, Pat.

Patrick Moorhead: Yes.

Daniel Newman: Oh no, I was saying we should talk more about generative AI on this show.

Patrick Moorhead: I know, we really should. Hey, one thing I wanted to sneak in here: it was all about AI at IBM Think, but they did bring out a pretty incredible set of quantum-safe technologies. You may or may not be aware, but there are bad actors who are harvesting data now to be used in the future, when they can apply crypto-breaking technology to get at that data. And IBM brought out a full array of quantum-safe technologies: Quantum Safe Explorer, which looks at source code and object code; Quantum Safe Advisor, essentially a view of cryptographic inventory; and Quantum Safe Remediator. And this, by the way, adds to the quantum service they already have that leverages quantum-safe cryptography, which IBM customers can use today. And I’m not pumping this just because I was in the press release or anything, but you can also read a lot about it in Paul Smith-Goodson’s Forbes article.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x Best-Selling Author including his most recent book “Human/Machine.” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and Former Graduate Adjunct Faculty, Daniel is an Austin Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.

