IBM Think 2024

The Six Five team discusses IBM Think 2024.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: IBM Think. You and I left Dell Tech World, got on a red eye, and landed at about 4:00 in the morning to get ready for our 7:00 appointments. Dan?

Daniel Newman: Yeah, look, I mean, IBM Think was another big inflection moment, kicked off by a keynote from Arvind Krishna. Arvind got on the stage and focused in on the company’s two main areas, the ones they’ve been focused on for, what, the better part of the last three or four years: hybrid cloud and AI. And you could kind of argue to give some credit to IBM. When it was hybrid cloud and AI about three, four years ago, when Arvind took the helm of the company, hybrid cloud made a lot of sense. You and I were wrapped up in that world, that was pretty much anything and everything at the time. But AI was a bit more curious, because it was like, “Well, we’ve been doing AI forever.” But at the same time, what are we talking about? Machine learning, deep learning, neural networks, advanced analytics, what AI are we really talking about?

And so with that in mind, I was kind of waiting, but you saw he hit the trendline on the head. First out of the gate about a year ago at their Think event, where they did IBM watsonx.governance, and IBM watsonx.data, and the whole platform, end-to-end enterprise AI, and they’ve nailed it. I mean, look, I’ll give you the contentious point of view on IBM. I really have admired, under Arvind’s leadership, how they’ve been able to launch and build this first-out-of-the-gate, first-to-GA platform. They’ve integrated a lot of what I call really good stuff with Red Hat this week. They came out talking about InstructLab, which is basically the ability to refine and tune models in a much more efficient way in an open source community.

They, of course, continued to open source Granite models, and they’re very focused right now on these kinds of smaller language models that can be utilized for vertical and industry solutions. And we’re seeing, coming out of consulting, a whole bunch of really capable industry use cases: financial services use cases, healthcare use cases, defense use cases. And that’s what I felt walking away from this thing. They also see that, a lot of the time, it’s what Dell kind of keeps saying, Pat, about bringing compute to your data. The same kind of thing is going on here: “We’re going to be doing a lot of it on-premises.”

The other thing with IBM that I thought was really prescient was what they’ve been able to show. And this was more Rob Thomas and Dario Gil, you did a video with them. They’ve been able to communicate the way that we can find economic efficiency in delivering more tuned models. We hear a lot about RAG, and we’ve heard a lot about RAG over the last several months. RAG is kind of the poor man’s approach to doing model… Yeah, Pat’s shaking his head at me. It’s the lowest common denominator of using your existing data. It’s not always the most up-to-date; it leans on lineage and older enterprise data. What they’re trying to promote here with their InstructLab approach is that the open source community can continually refine a model and get the outputs and the quality that you would get out of tuning. And forking models and tuning models is really hard, really expensive, and really difficult to track.

So I keep kind of using this analogy of RAG cost. RAG is efficient price-wise, because it’s data you have, it can access that data, and it can use it to create and generate outputs. Tuning is hard, ’cause you need synthetic data, you need to scale data, and you need to refine it in a way that’s really, really expensive. And so my big takeaway, and we can have a bit of a debate on is RAG the right model? Is tuning the right model? Is InstructLab the right way to go? But we need a bit of all of it, and IBM is helping proliferate it quite a bit faster, and that was my big takeaway from the event, and kind of the key highlight to me.
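To make the RAG-versus-tuning tradeoff Dan describes concrete, here is a minimal sketch of the RAG pattern: retrieve the most relevant existing enterprise document for a query, then prepend it to the prompt so the model grounds its answer in data you already have, with no expensive training run. All names and the word-overlap scoring here are illustrative assumptions; production systems use vector embeddings and a real LLM call rather than keyword matching.

```python
# Minimal RAG sketch (illustrative only): ground a prompt in existing
# enterprise documents instead of tuning a model on them.

def tokenize(text: str) -> set[str]:
    """Crude stand-in for an embedding model: bag of lowercased words."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = tokenize(query)
    return max(documents, key=lambda d: len(q & tokenize(d)))

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context; an LLM would complete this prompt."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical enterprise data already on hand.
docs = [
    "Q3 revenue grew 4 percent driven by software.",
    "The new mainframe ships with on-chip AI acceleration.",
]
prompt = build_prompt("How fast did revenue grow?", docs)
```

The cost asymmetry Dan points to is visible even in this toy: RAG only pays a retrieval cost per query over data you already own, whereas tuning (or InstructLab-style refinement) pays up front for synthetic data generation and training, in exchange for the knowledge living inside the model itself.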

Patrick Moorhead: Well, that’s a good breakdown, Dan. And it’s interesting, if I had to sum up the entire conference, it would be that IBM is making AI real for enterprises, and that, hey, they could be your starting point, a strong alternative to starting with a hyperscaler. And not that you wouldn’t deploy hyperscalers, but starting there with Red Hat, for instance, leveraging InstructLab. And by the way, the reason I say that, and I know it could be super controversial, is because, as we’ve said, 75-plus percent of enterprise data is on-premises or on the enterprise edge. It’s distributed, and therefore, you have to have a hybrid architecture to do what IBM calls AI at scale. I totally believe that. And it’s this idea of bringing the model to the data, versus the data to the model. You start with an open base model trained on public data, tune it on your own enterprise data, and also, I think IBM was all over smaller models.

So it was impressive. I finally understand what Concert was. It’s kind of funny, I love slides, and if you talk to me for an hour, I might not have any idea what you’re talking about, but Dario shows one slide on InstructLab, I see one slide on Concert, and boom, I understand exactly what it is. InstructLab, to me, is like you’ve got the three bears. You can take this large language model, blow it up, do RAG against it, maybe get you 90% there, or you can create your own smaller model, which has been super expensive and complex. And InstructLab just seems like a great middle ground, where you can actually create your own proprietary model, you can do RAG on it at the same time. So super impressive.

The automation stuff with HashiCorp, I’ve got to tell you, super compelling. I think there are so many different brands of IBM software out there, it’s getting a little challenging, and I can understand this… There are two ways to look at this. One side says, “Hey, best-of-breed software and brands is the way to go. There are category killers, and that’s what people want.” And on the other side it’s, “Hey, people want something that’s kind of pulled together comprehensively.” I don’t have an exact answer for that, but when I looked at all the different brands, and how they come together, it seemed a little complex to me. I’m going to end on, I think IBM needs to get more credit for its open models. I mean, they open sourced the Granite code models, and this is huge for programmers. I have no hesitancy in saying that these models perform the best versus other open source models on what I love to talk about, and that is benchmarks. Bob… Sorry, Bob. Dan, did I leave out anything?

Daniel Newman: Hey, hey, hey, hey. No, no. But at some point, we do need to come back to kind of talking about these different bears, and these different approaches, and the techniques, and why certain ones are going to be better than others, and costing and structure, and then of course, to your point, putting up the benchmarks, testing the performance. ‘Cause in the end, it’s all about getting the most accurate, on-point answers, and building the data ecosystem that enables us to create these frictionless experiences. So it was good stuff. It was a good event. I was pooped by the time I got home. Man, those red eyes just kill me. I think they say you die younger from those red eyes, and I’m pretty sure they’re right.

Author Information

Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.

From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.

A 7x best-selling author, including his most recent book “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.

An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
