With a little imagination, the possibilities for monetizing data are endless. In this episode of The Main Scoop™, hosts Greg Lotko and Daniel Newman are joined by Doug Laney, fellow at West Monroe and author of the best-selling books Infonomics and Data Juice, to explore unique strategies for – and real examples of – collateralizing data assets.
It was a great conversation and one you don’t want to miss. Like what you’ve heard? Check out all our past episodes here, and be sure to subscribe so you never miss an episode of The Main Scoop™ series.
Watch the video below:
Listen to the audio here:
Or stream the episode from your favorite platform:
Disclaimer: The Main Scoop Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Greg Lotko: Hi, folks. Welcome to this episode of The Main Scoop. I’m joined here by my host, Daniel Newman. Dan, good to see you again.
Daniel Newman: Greg, it’s always good to be back on The Main Scoop. Sometimes we can do it from just feet away from each other and sometimes we’re just inches away in our little squares, but it’s always good to see you.
Greg Lotko: But our minds and our psyches are never all that far apart. We always have interesting things to talk about. And today we’re talking about the reality of a data-driven world. Data alone can be worth two to three times the value of the organizations themselves. But fully unlocking and monetizing data value means thinking in entirely new ways, so there’s a lot to unpack here. So, Daniel, do you buy that? Do you think there’s just as much or even more value in an organization’s data versus the brick-and-mortar or the online presence of the product or service that they’re selling?
Daniel Newman: Well, I think the data economy is explosive and we’ve seen the advent and now rapid onset of generative AI over the last year. And there’s a reason that there are the haves and have-nots, the companies that will immediately distance themselves from the pack, and in almost all cases where that’s happening, Greg, it’s happening because they have better data, they have better insights. They’re able to then leverage all that data. They’re able to implement technologies that can extract those nuggets, and then they can get to that customer. It doesn’t matter if it’s B2B, if it’s B2C, it doesn’t matter what industry it’s in.
Greg Lotko: And I think there’s two sides to this coin. I mean, you’re absolutely right. When you think about the business you’re in, it is all about the data, the information around products and how people are using it, whether or not they’re succeeding with it, laying down what are the positive patterns, the not-so-positive patterns, and how different companies are doing in the world.
The reality is, those that garner the data, those that reap the value out of it and can figure out what’s making them successful, how they can help their customers be more successful, or how they can target them, I think even that is kind of the next step of obviousness, of how do they differentiate. But then there’s the pattern where, wow, there’s this business that’s really successful and the data that they have is maybe a new or a different business, or a different value that they hadn’t realized.
And, look, we’ve got somebody joining us today, Doug Laney, who is a fellow at West Monroe, that data is his expertise. He’s the author of the best-selling book, Infonomics, and follow-on book, Data Juice. And he highlights key strategies and real examples of collateralizing those data assets. That’s a mouthful. Let me welcome Doug in. Doug, delighted to have you join us today.
Doug Laney: Thanks, Greg. Great to be with you. Hey, Daniel. You, too.
Daniel Newman: Yeah. So, is the data juice worth the squeeze, Doug?
Doug Laney: Yeah, that’s how the book got its title. A lot of questions around is it worth it to collect this data, is it worth it to find ways to generate value from it? And, while I was an analyst back at Gartner, they said, “Hey, Doug, you’re the big data guy. We want you to start advising clients on big data.” And the question started from, what is big data, to then what do we do with big data? And so I started collecting use cases and I’ve got a library now of about six or seven hundred use cases. And folks said, “Hey, why don’t you put those into a book, so you could either inspire or shame organizations into doing more with their data.” That’s what Data Juice is.
Greg Lotko: Really, really, really cool, right?
Doug Laney: Yeah.
Greg Lotko: When we think about unlocking that, and Daniel and I have talked about the breadth of IT here, 70% of the world’s business data sits on mainframes. So I’m curious to hear how you see the opportunities for these data-savvy leaders to monetize that. What are use case examples of businesses where we might not normally have thought to see the value in the data, and how do they unlock it?
Doug Laney: A lot of it starts with just imagination. Are they actually creating ideas? Are they generating ideas? We run, at West Monroe, workshops with clients, week in and week out, to help them conceive new, innovative ways to drive value from their data, beyond just building pretty pie charts and bouncy bar charts and dashing dashboards, which we’ve been doing for decades. The real high value comes in, and I think, Greg, you mentioned this, in doing more advanced analytics with data. How can we diagnose something or predict something or prescribe something or digitalize or automate something?
That’s where the real value comes in. It’s really limited by an organization’s own imagination as to what they can do with data. So we help them through the process a bit. But a lot of organizations are also sitting on a lot of dark data, data that was used for a single purpose and forgotten about, or archived. And there’s some great examples of companies that have realized they’re sitting on this goldmine of underutilized data.
Greg Lotko: Hey, Doug, I’m really curious about the process. I mean, you said you meet with these customers and you go through a workshop and you’re talking about data. Are there ever really surprises? You’re meeting with a particular industry and you find something in the data that was unanticipated or not related to the core business, or there’s value there that wouldn’t be obvious to somebody being in the industry?
Doug Laney: Yeah. I can’t speak to particular clients, due to confidentiality, but I’ll share a few examples that I’ve come across in my collection, or compilation, of stories. The first one is interesting. At the beginning of the pandemic, I think you hinted at this in the intro, the airlines, in order to stay aloft, pun intended, needed to take out loans and so they were like, “Well, what can we borrow against? What can we collateralize? Can we collateralize our aircraft? Well, no, because we lease those. Can we collateralize our gates? No, we lease those, also. Well, what do we have?”
And it turns out the only thing that they really own of value are their customer loyalty programs, i.e., all the data on their customers. And so American Airlines and United Airlines, in fact, got $20 billion and $30 billion valuations for their customer loyalty programs in order to secure loans. Which were not only huge, fricking numbers, but valuations that were two to three times the value of the companies themselves at the time. So I think it’s important for organizations to consider that the value of their data might be, as you said, greater than the value of the companies themselves if they apply it right. And there’s a variety-
Greg Lotko: And that puts a point on it, right? That puts a point on it. We all look at physical assets or the business that you’re in, and then all of a sudden realizing there’s this huge other treasure trove of value.
Doug Laney: Sometimes a lot of the data that’s of value comes from external sources. For example, Walmart had a great search engine. It was helping people find what they wanted online. And one week, though, there was a particular search term that was resulting in a really high degree of shopping cart abandonment. That search term was the word “house,” and so it was taking people to housing goods and housewares and doll houses and dog houses. It wasn’t at all what people were looking for. You may have guessed that the spike actually coincided with the week that a particular television series had its season premiere. And that was the medical drama, you got it, House, starring Hugh Laurie doing such a good American accent. I didn’t even know he was British.
Greg Lotko: Loved that. Another brilliant character named Gregory.
Doug Laney: So Walmart goes, “Hey, how did we not know that people were looking for the box of DVD set of this television series?” Well, why? Because they were staring at their own navel. They were only looking at their own data. They weren’t considering what was happening in the world, what was trending at the time. And when they upgraded their search engine to include social media trends, they ended up reducing shopping cart abandonment across the board by 10 to 15%, which, in Walmart terms, is on the order of a billion dollars a year.
So there’s some serious money to be made by incorporating external data assets. Other companies like Dollar General have a self-funding data warehouse. I think it ought to be aspirational for any organization, a self-funding data warehouse or data lake. If your organization isn’t self-funding its analytic infrastructure, then either, one, you’re not measuring, we’ll get to this in a bit, not measuring the value of your data well enough, or, two, you’re not ideating well enough, you’re not coming up with really high-value use cases.
Daniel Newman: So let’s pivot to the topic du jour, or du year. In all of 2023, generative AI was absolutely red-hot. You talked about the foundational, and I think we talk about that a lot here, Greg, kind of the picks, the axes, the plumbing. We talked about the mainframe. You talked about where all the data lives. You talked about all the infrastructure, the network to move the data, to secure the data, the privacy of the data. But, basically, the ability for it to now start to generate things, like generating text, generating images.
And we basically heard that this, what I call private-public partnership, meaning data that’s widely available on the internet, and then data that companies have that’s truly unique. You put those two things together, you ground it, you put it into a knowledge graph, you couple it with a vector, and now you’ve got something really, really powerful. What are you seeing in terms of how companies are taking what we talked about in the beginning and starting to add generative capabilities to it?
Doug Laney: Yeah. First of all, they know they want to do it but they’re struggling to figure out how do we do it and then what kind of value is it going to generate? What are the use cases for it? So pretty much every client that we talk to today is asking us, “All right, can you help us think through the art of the possible with generative AI? Where is it heading? How are jobs going to change? How are business processes going to change or be expedited?” And so it’s really very much an ideation process.
But then when it comes down to the nuts and bolts, considering what data you’re going to incorporate into your own custom gen AI application, you’ve got to identify the data sources, make sure that they’re clean, consider the bias in them. We’re also looking at nontraditional data sources, like books and websites and scientific papers and forums and even other forms of unstructured content like media.
Then you’ve got to be concerned with the licensing and ethics. Does this data comply with legal and ethical standards regarding things like copyright and privacy, which are becoming a big issue? We want to build models that are unbiased and well-rounded, so we have to have data from a variety of topics and perspectives. There’s often a cleansing issue, annotation, classification, tokenization of the data that has to happen first. Listen, there’s a lot of work that goes into this, architectural design, model training, evaluation testing, feedback, and then all sorts of compliance and governance considerations. I think organizations ought to start small and kind of wade into it, not jump in with both feet, but ensure that they’re able to deal with all of the cleansing and ethical considerations and how it’s going to affect jobs. Are people going to adopt it or not?
Greg Lotko: So, Doug, when you talked about cleansing, provenance, pulling out bias or being aware of it and all that, is that what you’re defining as data diligence, or is that a different concept here? Or is there more to it than that?
Doug Laney: That’s a lot of it. I’m trying to think what the overlap is. There’s definitely a lot of overlap. I think there’s probably more to populating the knowledge within a custom language model than just data diligence. There’s the whole data curation aspect, as well. It’s interesting. Let me show you. A lot of companies have an entire department dedicated to procuring office supplies but they don’t have a single person dedicated to procuring data supplies. Which I think in today’s world is really pretty ridiculous, especially given the value of third-party data, whether it’s from social media or open data sources or partners or syndicated data sources, et cetera. It’s ridiculous that companies don’t have someone who’s curating this data.
Greg Lotko: Don’t you think that’s a common pattern, that when something is easy to focus on or easy to address, we do it as a matter of course? So, buying the small stuff from Staples or wherever, your pens, your staplers, your tape and stuff, that’s easy. It’s a well laid down pattern, defined process. But this whole space of where all the data is coming from, combining it with what we have, it’s more difficult, or more challenging to get right, but often the things that are hard to do or take a lot of effort, those are the things that we should be focused on.
Doug Laney: Absolutely. When you look at the number of data sources that are out there, a trillion websites that can be harvested. There are thousands of data brokers out there selling all kinds of data. Any company has dozens or hundreds of partners that they could exchange data with. So getting your head around all those potential data sources that might provide some incremental value or alpha for your business is difficult to do, and a lot more difficult than purchasing, as you say, Greg, purchasing office supplies. So I think it’s really problematic that organizations don’t have somebody dedicated to that role. The Economist called data scientist the sexiest job of the 21st century. A data curation specialist is maybe not so sexy, but definitely as valuable.
Greg Lotko: Well, that’s certainly a phrase that must have come out of a technologist or data scientist’s mouth, right?
Doug Laney: Suggesting that somebody doing something with data or technology is sexy, yeah. But back to data diligence. Data diligence is this concept of understanding the value and the potential of your data assets. This is part of a larger issue, that PE firms and boards and CIOs discount the financial value and market potential of their corporate data assets. And we’re at the cusp of a massive global economic reconfiguration as, I think, Daniel, you were starting to discuss, driven especially by the improving power and availability of AI.
So, as a result, these corporate entities are going to change at a rate that we just haven’t experienced. So I think executives and boards really need to evaluate their data assets as part of the overall valuation of the business. And so we, at West Monroe, do a lot of what we call technology diligence work for private equity firms to help them understand their tech stack, but increasingly, we’re helping them understand their data stack, as well. What’s the value of data? Are there synergies in the data that’s being acquired? Are there data opportunities that are being missed? Are there risks in the consistency or quality or governance of data, and are there going to be challenges with integrating and leveraging data from the combined organizations? That’s what we refer to as data diligence.
Daniel Newman: And, I think, building on the data diligence, though, every organization is sort of challenged. You gave a great example on having dedicated resources in a company to buy pens, paper clips, notebooks, and yet most organizations really have little or no proportional investment into that data management infrastructure. And one of the things that I know we talk a lot about from the analyst lens is that, in order to really take advantage of LLMs, take advantage of generative AI, take advantage of AI in general, you have to have your data landscape, your data, well in order.
You talked back to your early days of being the big data guy, or the big data person. And I don’t think it’s changed that much, Doug. I think all that’s happened is new tools that sit on top and accelerate the viability, but the companies that saw big data a decade, 15 years ago, and started really doing it right, were all the ones really well positioned. The ones that, I don’t know, thought it was a fad or a trend or just didn’t make the investment, LLMs aren’t really fixing that for them, are they? I mean, in my opinion, all that’s doing is it’s just a new layer on top. They can now access more data, but if it’s not well organized, defined and managed… So how are you seeing companies catch up? How are you seeing-
Doug Laney: I would say, yes, but LLMs don’t really require data to be as organized as, say, a data warehouse or even a data lake. So there is some advantage of being able to-
Daniel Newman: But graph and vector, right, but you’ve still got grounding. You still need to trust. You see all these companies come out with indemnification offerings because it’s like, yeah, it’ll come up with something, but will it be right? Will it be accurate?
Greg Lotko: You’ve also still got to know where to look. Or know the type of information or data you want to get to. Or, even if you don’t know for sure, you have to have an inkling of the space of what you should ingest. So there does have to be some inspiration or some plan of the type of information you want to bring into this space. But I agree with you, Daniel. If you’re not thinking foundationally or fundamentally about it that way, and if you’re not opening your aperture to all the different types of things that could intersect with your business or the rest of your data, then you don’t pull it into the set of what you’re looking at and what you’re analyzing and trying to derive strategies and inferences and analysis from.
Daniel Newman: I want to give it back to Doug here. Can a company catch up now? Does generative AI give them some advantages? Are you seeing that in your research now? Is the process to get organized, get your data right, is that being consolidated and can these companies not only catch up but really become leaders with all the new technology available?
Doug Laney: Data management is definitely coming to the fore because of generative AI, companies need to understand what data assets they have, the potential value that they can derive, how to govern that data, how to prepare that data. And in particular, unstructured content is really ripe for generative AI. We’ve been doing data science and analytics against our structured data sources for a couple of decades now, at least, and generative AI opens up a real value proposition for unstructured content, yep.
But we’ve got to be careful. We need to be really careful about not only understanding what data we have and how it can be used but what kinds of questions people are going to ask or how they’re going to use the AI apps that we deploy. There was just a story about a company, a Chevy dealer in California that unleashed a chatbot for its customers and somebody just posted a tweet or a post that he just bought a 2024 Chevy Tahoe for $1. He convinced the gen AI to sell him, in a legally-binding way, the Chevy Tahoe for $1.
Yeah, so we need to be careful about how far we let our AI applications go in addressing customer questions and requests, and put some guardrails on that.
Daniel Newman: So let’s wrap this together with a little bit of a future-forward, let’s get some vision here, Doug, on what you see. How does the next 12 to 24 months play out? Are generative tools going to take over everything? Are we coming to the era of… is this HAL? Are we in the HAL era? Will the door open when Greg asks it to? I’ve listened to the tech pessimists in every revolution and they’ve been wrong every single time. But you do see some very highly acclaimed people that have concerns. So, as the tech keeps moving faster, what does the next year or two look like?
Greg Lotko: Yeah. Literally at that car dealership, that AI is not going to take over the salesman’s-
Doug Laney: Yeah, that one’s probably not. But let’s hope we’re able to build, at some point, sentient AIs that are more ethical than we are. That’s kind of my hope. Maybe at some point they’ll have no use for us but that’s kind of far out. I think in the near term, yes, employment is going to shift as some jobs are made easier or outright usurped by AI. Jobs, as Daniel, you’ve suggested, jobs have always shifted as technology has evolved, or new technology has been introduced.
Today, nobody mourns for the 19th century buggy whip manufacturers after the introduction of the automobile. Things change. But I’ll leave you with an interesting, fun quote. I was kind of an AI groupie at university and I used to watch Marvin Minsky, the father of AI, or one of the fathers of AI, speak in the ’80s, and I remember him saying once, he said, “Remember, we’re in the 1,000 years between no technology and all technology. And you can listen to what the experts say but remember that we’re all ignorant savages.”
Greg Lotko: Well, that’s a humbling note to end on.
Doug Laney: For sure.
Daniel Newman: Yeah, I think it’ll be a positive year ahead but, you know, Doug, first of all, thank you so much for joining us today. Let’s make sure we get the books into the show notes. It’s great reading on a very, very important topic, and no doubt you’ve come out with this at a timely point in our business and personal lives, where LLMs, data, generative AI, are changing our lives every single day. So, hope to have you back soon. Thanks for joining Greg and me here on The Main Scoop.
Doug Laney: Thanks, Greg. Thanks, Daniel.
Greg Lotko: Awesome having you here, and that’s another episode of The Main Scoop. Go ahead, Daniel, take us home.
Daniel Newman: Go and hit that subscribe button and join us for all the episodes of The Main Scoop. Greg and I break it down each and every time, talking to our guests, always challenging what’s going on in the market, but always bringing great insights to you, our audience. For now, take care.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people and tech that are required for companies to benefit most from their technology investments. Daniel is a top 5 globally ranked industry analyst and his ideas are regularly cited or shared in television appearances by CNBC, Bloomberg, Wall Street Journal and hundreds of other sites around the world.
A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.