
Driving Transformation, Innovation, and Value for Enterprise AI – Six Five On the Road at IBM Think

On this episode of Six Five On the Road, host Patrick Moorhead is joined by IBM's Dr. Darío Gil and Rob Thomas for a conversation on how IBM is leading the charge in leveraging AI to create transformation, innovation, and value for enterprises.

Their discussion covers:

  • Insights on how IBM is enabling clients to capitalize on their AI moment to transform their operations.
  • A deep dive into the latest developments in AI-powered automation and the expansive toolkit IBM offers to unlock significant value for clients.
  • Challenges enterprises face in moving from the exploration phase to generating tangible ROI and value with AI at scale, with expert advice from Dr. Gil.
  • The strategic advantages that position IBM as the go-to partner for enterprises looking to deploy, scale, and derive value from AI.
  • A look at IBM’s strategy to maintain its leadership position in a fast-evolving industry through rapid innovation in AI.

Learn more at IBM.

Click here to learn more about the announcements from IBM Think 2024.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On the Road at IBM Think 2024 in Boston. This event has been awesome, not only for the number of people here, but for the content. It’s amazing. What we’re seeing here at IBM Think is a pretty good representation of the conversations I’ve had with enterprises, and that is that in 2023, there was a lot of build-out and construction. That’s not to say there isn’t still build-out in infrastructure and software and putting down the place mat, but clients are seeing real benefits of AI, enterprises and also consumers. And to break this down, I have Rob and Dario, Six Five veterans. Great to see you guys.

Rob Thomas: Patrick, good to be with you again.

Patrick Moorhead: Yeah. Last time we chatted I think was in Austin. We chatted out in Yorktown together.

Rob Thomas: That’s right.

Patrick Moorhead: This is great.

Dr. Darío Gil: We’re regulars. We love it.

Patrick Moorhead: Exactly. No, and we appreciate that. So, in the run-up, I talked a little bit about this transition, and not that we’re not continuing to build, but we’re starting to see enterprises see benefits from their investments that they’ve been making ongoing. And I’m curious, Rob, and this is for you, generally speaking, how are you helping to shepherd clients along this path?

Rob Thomas: Let’s go back in time. We announced Watsonx in May, and it was probably the perfect moment, because everybody was excited about generative AI, but nothing had been delivered for B2B at that stage. It became generally available a couple of months later. We are working with clients and partners in every country in which IBM operates, and there are a lot of patterns in what we see. People are thinking about, “How do I get an ROI from generative AI?” That goes to use cases like customer service, digital labor, code. A lot of people also want to play with models. We’ve made announcements this week around open sourcing our Granite models. This is about making technology more accessible. We want to unleash a decade or more of innovation. I think we’re doing the right things at IBM to enable that.

Patrick Moorhead: Yeah, it’s been great to see. And again, as an analyst, I always need to be careful when I pick who was first, right? But congratulations, you had the first end-to-end generative AI platform with Watsonx. And I went through all the spreadsheets, the features, the countries, and you nailed it, so congratulations on that. The work’s not done. We’ve seen your assistants providing value to your clients already. The question is, how are you taking that to the next level with application-level advancements?

Rob Thomas: Let’s go back to, what is Watsonx? We have Watsonx AI, which is the builder studio. Train and tune models, work with our open source models, work with our partners like Meta and Mistral. We have Watsonx data, which is an open data store, a data lakehouse for AI, and we have Governance. Clients in the line of business wanted more. They said, “We need an assistant. We need something that’s packaged up with large language models.” That’s why we’ve delivered this whole line of Watsonx Assistants.

We’ve got Watsonx Assistant for customer service. In Watsonx Orchestrate, we have over 1,000 automated skills to do work for you. We announced the Assistant Builder this week, so now any company can build their own assistant. This has been an incredible time of innovation, and we can start to see how clients and partners are using it. It’s very good.

Patrick Moorhead: No, it is very good. And the speed of innovation is amazing here. Dario, I saw you on a couple of stages today. Like I said in the green room here, I appreciate the way that you make hard things easier to understand, so thank you for that. I think your clients appreciate that. One of the challenges that clients are having is moving from POCs and experiments to delivering enterprise value at scale. Can you talk us through how you are doing that, how you intend to do that?

Dr. Darío Gil: Typically, when you do a proof of concept, and we’ve seen that generative AI is quite capable, you get in situations where eventually, if you hone it properly, you find the right use case. But then, when you want to be able to scale, you encounter issues with trust. Okay, I proved it over here, but do I have all the rights? Do I have the right transparency, for example, of the models that I want to use? Is the cost right?

Maybe I did a proof of concept with a trillion-parameter model, and it looks fantastic, but if I want to scale it to 50,000 employees, it’s going to cost me a fortune. Can I do this more cost-efficiently? So, what we are addressing with our strategy is to go squarely at all of those constraints to allow clients to scale in ways they couldn’t before. And one part has to do, as Rob alluded to, with having base models that are open source and transparent about the data sources and how they were trained.

And then, a methodology with which you can incrementally add skills, knowledge, and data to the models at the right cost point. So, the Granite models that we have released are at the sweet spot in terms of size, ranging from 3 billion to 34 billion parameters, and that is the right design point that we’re seeing more and more of the industry gravitate toward, so that when you look at the cost of inferencing at scale, the performance-to-cost ratio is just right.

Patrick Moorhead: Yeah, it’s interesting. The questions and the learning shift almost month-to-month, and I remember a year ago, we were discussing the need for open models, when that wasn’t necessarily the discussion yet. And now, the discussions are about what types of models, small models, big models, vertical models, and what you’re doing with InstructLab is super interesting in that, at least from my point of view, it’s bridging the gap between these two worlds of investing massive amounts of money to create your own big super model versus maybe doing RAG on something. I know you can do RAG with InstructLab, but you’re getting maybe 90% of where you want to go. So, it’s really interesting. I don’t know if this is the most important question, but with all that said about where the industry is going, why IBM?

Dr. Darío Gil: Yes. I’ll give a slightly different framing now of how to understand what’s happening with AI, because there’s been so much focus right now on evaluating models and what is the right model for the right use case. But let me change the lens and look at it from a data perspective to answer the question of why IBM. It is interesting to see the contrast: almost all public data has made its way into a neural network, into a language model, and almost no enterprise data has.

So, why is that? The reason is that we had not yet given industry a way to safely bring their enterprise data into this new representation. The answer to why IBM is, one, because we are the only large enterprise company that can give you the openness and the transparency and the guarantees and indemnification of the vessel that you’re going to put your data in. That’s our Granite models. And two, we’re the only company that is giving you an incremental methodology, InstructLab, that allows you to add your knowledge and skills into it step-by-step in a well-engineered fashion.

Absent those two, the best you can do is sort of interact with this model. Look at RAG. RAG is a very useful pattern, but you have your data outside. The model is the only thing that has intelligence, and the data and the model are interacting at arm’s length, so to speak. It is because of what IBM is bringing to the table that we’re going to allow, for the first time ever in the industry, a path for our clients to add enterprise data safely, securely, and scalably.

Patrick Moorhead: Yeah, the data conversation that I’m having with enterprises is immense. We spend probably half our time… And by the way, it’s been the bottleneck to their scaling. We didn’t plan this, but I’m glad to know that you’re listening to your clients and you’re reacting, and reacting quickly. It’s funny, they say when you get the sale, the next thing the client is going to ask is, “Well, how can you continue to do that?”

And Rob, I’m going to hit you with that first. This is great, right? You started off strong, first to market. You kept adding capabilities. Again, as an analyst, we need to be careful. You were the first one to push the indemnity part, and then once you pushed it, everybody in the industry started to push it. How do you continue this? Can IBM continue this quarter-after-quarter innovation?

Rob Thomas: I think the key dimension we’ve announced this week goes beyond what I’ll call the tops-down sale, working with clients, doing POCs, doing pilots. By embedding our open source models into Red Hat Enterprise Linux AI, we’re meeting developers where they are. There are millions of kernel developers around the world that play in Linux every day. This developer-centric motion, we think, will define the next five years of AI.

Because tops-down things will happen for sure, but developers and builders are where everything actually gets to scale in an organization. This is a big shift in terms of investing, not just tops-down, but also at the developer level. We think we will see a lot of different innovation here. This is part of why, to Dario’s point, we’re bringing in InstructLab. That’s giving developers a toolkit, a capability, that didn’t exist until now, which is why we’ve also contributed it to open source.

Patrick Moorhead: Just so I understand this, is it that the churn below, meaning some of the core technologies, is going to slow down, or have you put an abstraction layer in that can deal with potential future churn? Is it one or the other?

Rob Thomas: I would maybe say it slightly differently. Give access to models where people want to build, and all building in companies occurs with developers. From there, it could go straight up to a use case that we talked about. It could actually become an assistant that somebody in that company decides to build using something like the Orchestrate Assistant Builder, but it’s really about unlocking the innovation and initiative that we see in every developer around the world.

Patrick Moorhead: Right.

Dr. Darío Gil: By the way, on that topic, you only have to go to any university or any company around the world, and the enthusiasm and passion that the technical community has to understand and contribute to AI is contagious. So, what we’re doing by tapping into this energy is actually giving them a vehicle with which to make contributions to even the core models themselves. If you look at how open innovation was happening with AI, models would get released sometimes, and what people would do is essentially just fork them, fork them, fork them, copies and copies and copies.

But there was not a mechanism to actually allow that community to make incremental daily contributions like happens in open source software to make the model better. That’s what InstructLab does. No matter whether the thing is big or small, you have a way to say, “Here is my little grain to make it better.” And now, tapping into that energy is going to give a continuous path to innovation.

Patrick Moorhead: Yeah, for what it’s worth, on InstructLab, gosh, whether it’s a financial institution that I meet with or an insurance company, they all wound up kind of in this weird middle ground, which was, “Hey, I’ve got this massive model that I was trying to do something with, and I’m not getting the results that I thought I would get.” And then, this daunting task of, “How on earth do I build my own model?”

Dr. Darío Gil: That’s right.

Patrick Moorhead: “I can’t afford to do this. I can’t make it better incrementally.” And as I’m piecing this together, this sounds like what InstructLab is intended to do.

Dr. Darío Gil: That’s right. That is exactly the intent. And so, the two pieces that Rob talked about are essential, right? Because I’m giving you base models that are transparent, with an Apache 2.0 license, so that you have full rights, and if you get them through RHEL AI or through Watsonx, they’re fully indemnified by us. So, you have your building block with all the rights.

And on top of that, I’m giving you InstructLab, which allows you to now extend it and specialize it. In the traditional fine-tuning world, you have your model, you have a use case, you specialize it, and you end up with a copy to do one thing. You have a second use case, now you need a second copy of the model. You have a third use case, a third copy. With this methodology, the same capability now meets multiple of those needs, because you are doing it in a way that makes the knowledge incremental.

Patrick Moorhead: Right. Guys, it’s been a great conversation. I got to tell you, I can’t believe it’s been a year since we were…

Dr. Darío Gil: It’s like you say, it’s an AI year, though.

Patrick Moorhead: Since we were in Orlando.

Rob Thomas: I do like this phrase, AI years, which is more like a week long.

Patrick Moorhead: It feels…

Rob Thomas: Because that’s how fast the innovation happens.

Patrick Moorhead: It feels like it’s been a lot longer than that. So, guys, thank you so much for coming on. I can’t wait to get a mid-year update with you guys and chat, and then hopefully at Think 2025.

Rob Thomas: We will be there.

Patrick Moorhead: Thank you.

Dr. Darío Gil: Thank you.

Patrick Moorhead: So, this is Pat, Rob, and Dario signing off here from IBM Think 2024. Tune in for more Six Five Media coverage for the entire event here. Take care, and thanks for tuning in.

Author Information

Six Five Media

Six Five Media is a joint venture of two top-ranked analyst firms, The Futurum Group and Moor Insights & Strategy. Six Five provides high-quality, insightful, and credible analyses of the tech landscape in video format. Our team of analysts sits with the world’s most respected leaders and professionals to discuss all things technology, with a focus on digital transformation and innovation.
