Generative AI in the Enterprise: The Implications of Enterprises and Platform Vendors Using Generative AI Today – Enterprising Insights, Episode 1

In this episode of Enterprising Insights, Mark Beccue, Research Director, AI, with The Futurum Group, joins host Keith Kirkpatrick, Research Director, Enterprise Applications, at The Futurum Group, for a conversation about the use of generative AI within enterprise platforms and applications. We’ll also cover some recent news and newsmakers in the enterprise software market. Finally, we’ll close out the show with our “Rant or Rave” segment, where we pick one item in the market and either champion or criticize it.

You can watch the video of our conversation below, and be sure to visit our YouTube Channel and subscribe so you don’t miss an episode.



Disclaimer: The Enterprising Insights podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.


Keith Kirkpatrick: Hello, everybody. I’m Keith Kirkpatrick, Research Director with The Futurum Group. I’d like to welcome you to Enterprising Insights, our weekly podcast that explores the latest developments in the enterprise software market and the technologies that underpin these platforms, applications, and tools.

We’re going to begin today by taking a deep dive into everyone’s favorite topic: the use of generative AI within the enterprise. Next, we’ll cover some recent news and newsmakers in the enterprise software market. Finally, we’ll close out the show with our rant or rave segment, where we pick one item from the market or news around the market, and we’ll either champion it or we’ll criticize it.

Now, I’d like to introduce my co-host for Enterprising Insights this week, Mark Beccue. Mark is a Research Director for Artificial Intelligence at The Futurum Group. He spent more than a decade covering artificial intelligence and those underlying technology approaches, including natural language processing, machine learning, deep learning, computer vision, and most recently, of course, LLMs and foundation models, which are powering today’s generative AI offerings. Welcome, Mark.

Mark Beccue: Hey, Keith, how are you doing?

Keith Kirkpatrick: I’m doing well. And yourself?

Mark Beccue: Very good.

Keith Kirkpatrick: Well, let’s just get right into our deep dive of the week here. Of course, this is the growing trend of enterprises starting to use generative AI and rolling it out into their platforms. Now, Mark, you’ve been covering this a long time, at least a decade, probably more. If we can just jump back to, say, 2013, can you give us a sense of what the state of AI was back then in terms of its ability to provide meaningful, real-world benefits to companies? Were there any, or was it all just kind of smoke and mirrors?

Mark Beccue: Yeah, it’s a good question. There certainly was more than smoke and mirrors. Let’s take a real quick jump back and say what really pushed that forward. AI’s been around for a long time. People talk about AI being around for a long time, but what really changed in that time period, when it really took off, like you said, around 2013 to 2015, was that compute got cheaper. You had these data scientists playing with models and thinking about things, and they were like, “Well, now this becomes more pragmatic,” and it moved out of the research lab into companies actually operationalizing it.

If you look at the period we were at before we got to October, the jubilee date of October of 2022 when ChatGPT showed up… Before that, you had a lot of companies operationalizing AI. The most money was really spent in these use cases where you had some automation around customer service. I think we’re going to talk a little more about that in a bit, but this idea that you could use automated agents to assist and move folks forward without necessarily a live agent being present. That was one of them.

Lots of things around predictive analytics. Matter of fact, a lot of people refer to the AI that preceded generative AI as predictive AI; predictive analytics was really the dominant piece of it.
That’s kind of where, I think, we started. You had lots of investment in use cases going all the way up until October.

Keith Kirkpatrick: Right. It’s interesting. You’re mentioning October of 2022, which is really just about a year ago. What happened then and why was it so, pardon the phrase, game-changing for the market?

Mark Beccue: Right. We’re living in the middle of this such disruptive space. In October 2022, you had OpenAI introduced ChatGPT. What’s interesting to me was the way that people have talked about this differently. I’ve heard a wonderful explanation for this. This was the explanation, Keith. I love this. They said we’ve had LLMs for a while. GPT has been around. We’ve had it for a couple three years where people talk about these deep language, deep learning models, big thing. But what we didn’t have was this interface that was just a democratization of a deep learning model.

If you think about it, really, the difference was to interact with a deep learning model, you had to be a data scientist before ChatGPT. What ChatGPT envisioned, and it worked really well in concept, was this idea that anybody can talk to a language model or an AI model and it’ll do something. That was what really changed the game. You had a couple other AI foundation models come out around the same time, so it was… Stability had one that’s different. It’s a diffusion model, those kinds of things, but that really changed the whole thing. It changed the whole idea. All of a sudden there’s this access to these massively smart, intelligent AI models, and that was the difference.

Keith Kirkpatrick: So what you’re referring to is you or I being able to go online, type in basically any type of a query as a prompt, and then get some sort of result, and we can talk later about whether or not it was true, accurate, what have you.

Mark Beccue: Yeah. Think of it more as in the old days. Now I’m old so I can say this, but there was MS-DOS and all those kind of computer languages and you had to really learn. Before, if you think about the evolution of compute we had in the early days was somebody had to learn a specific language and programming to even get to talk to these very complex software programs. Then we moved into a much better interface that worked for all sorts of us, so we could all do that. It’s the same idea.

Keith Kirkpatrick: Well, it seems like… I’m glad you brought that up, because now if we think about where we are right now, October 2023, now you have all of these SaaS vendors, whether we’re talking about Salesforce, ServiceNow, Adobe, Oracle. Whoever you want to name, they probably have something in their platform where they’re going to say, “Hey, we are utilizing generative AI and we’re rolling these tools out so that everyday office productivity workers can now access this within their platform.” Are you surprised that they’re rolling out this technology now when, really, it hasn’t been that long?

Mark Beccue: Right. This is good because you and I have been messing around with this for a long time. I will say that I think people miss some very important… You mentioned a few names there that I would call AI pioneers. In my opinion, the few that you mentioned, specifically Adobe and Salesforce, have been invested in AI for almost 10 years. They were in that early stage. They were in on predictive AI. They’ve been thinking about it.

In their case, these are companies that were ready to take advantage of generative AI because they understood all of the parameters and guardrails that you need for AI in general. They were in a good place. They’d learned. What’s funny about that is when you operationalize AI, you got to think more like, “What’s the life cycle of this stuff?” It’s got a different life cycle than software in many ways. They learned all the ins and outs and were able to pounce on generative when it started to come out.

Some of those other companies you mentioned were, I guess I would call them… They were thinking more like vendors, like AI vendors. Oracle is an AI vendor in many ways. Some of the other guys, if we were to go back and look at those SaaS companies, I would say nine times out of 10, most of them have been invested in AI for more than three or four years easily. They’re in the right place. Now, what you have now is everybody’s starting to talk about it. We can talk about that, how that looks.

Keith Kirkpatrick: Well, it’s interesting because one of the things… When we started hearing about generative AI in the popular press, one of the big topics that came up was the issue of hallucination. Basically, where you put in a prompt and the model comes back, because it’s a mathematical model, with, quite honestly, garbage. I guess the question is, with these SaaS platforms incorporating generative AI technology, how do they deal with those types of issues, or toxicity, or model bias, all of those things that we’re really being told right now are a big issue?

Mark Beccue: Right. You and I have talked about this before, and we keep saying everybody has to remind themselves. You just said it a couple minutes ago. It’s not even been a year since ChatGPT exploded into everybody’s thinking about what it was. I will tell you this: before ChatGPT, I was talking to some companies that were really using GPTs and LLMs. An LLM was something that existed before the interface.

I was talking to a company that was looking at this and they went, “Well, there’s a lot of upside to an LLM.” And he said, “There’s a lot of downside to an LLM.” And this is where we go. So it’s like, “Well, what’s the downside?” It’s ingesting so much information. Most of these bigger, the biggest ones we know of, the most popular ones, scrape the web. They scrape everything. There’s all this stuff, garbage in, garbage out.

What happens is lots of problems. It’s eating information that’s not good, so it’s taking things that cause the model to think this is correct when it’s misinformation. Most of the LLMs can’t really discern and do well with bias, because that’s the data that they’re eating. They’re seeing this data. They don’t know. I had a friend that said, “It’s kind of like a Golden Retriever. An LLM is kind of like a Golden Retriever. It’ll go get anything. You tell it to go, it’s going to bring you an old sock and something that you wanted, but it’s going to bring everything.” Same idea.

The hallucination part is even… That’s really built into these models that just say… I think you actually said it right. The way I would explain it is mathematically this makes sense. It doesn’t have the grounding to say, “Well, I’m not fact-checking this. This is the data set, and according to the data set, it’s right.”

Keith Kirkpatrick: Mark, you just brought up a really great point about grounding. Maybe you could talk a little bit about what that is and why it’s important to figure out a way to make sure that these models don’t just, like a golden retriever, go out and bring back a dirty sock, an old tire, instead of what you want.

Mark Beccue: Yeah. There’s three or four different things that people are doing. It’s amazing because I’ve used the word mutation. We’ve had a mutation happen since October when you had these certain… They started coming out and people started going, “Well, maybe I should do this or maybe I should do that.” First would be you’re seeing more specialized models come out. The way you’d address that would be, “Well, I’m not going to train it on so much data. Let’s train it on a smaller data set that we’re more confident in what’s in there is good stuff.” That’s one way to do it. Those are… Let’s call them slightly smaller large language models. I don’t know. We don’t have a name for it yet.

The other way would be that can you use the best of both worlds? Take a model and point it at data that you’re confident in. Let’s say you’re a big enough enterprise that you have enough in a private domain that you could point it at that. Say, “Well, let’s look at that.” That’s grounding, to your point. It’s like, “Well, I know what’s in there is okay. It’s not going to be biased. It’s not going to hallucinate based on the data that’s in there.” That’s another way. Sometimes people call that fine-tuning. You and I talked about this. I’m not sure if there’s a really good name for it yet. It’s sort of grounding. I don’t know.

Keith Kirkpatrick: Retraining, lot of different names.

Mark Beccue: Retraining, yeah.

Keith Kirkpatrick: Just by way of an example, so that would be… Let’s say I’m Salesforce and I’m trying to use generative AI, and I’m saying, “Well, I don’t want my chatbot to go out and just grab any old answer. I want to make sure that it only uses our vetted company knowledge base for answers.” Is that kind of what you’re referring to?

Mark Beccue: Yeah. There’s a trick to that. I think that Salesforce was one of the first companies that said they were going to do that. There’s no owner’s manual for LLMs as far as I know like, “Oh, we can do this.” Well, they figured that out, so that was a good idea. I think they meet a lot of the criteria that would make sense for somebody to go ahead and do that right now is because they have so much data that they can protect it, they can make sure it’s normalized and anonymized, and those kinds of things. It made sense. That’s really something we talk about a lot with customers is, if you have a lot of data that’s readable, you should think about AI, because if you can do that, that brings value to the table and what you’re trying to do.
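The grounding pattern Mark and Keith are describing, pointing a model only at a vetted knowledge base instead of letting it fetch anything, is often implemented today as retrieval before generation. A minimal sketch follows; the knowledge base, the word-overlap scoring, and the function names are hypothetical stand-ins for what a real platform would do with embeddings and a production LLM:

```python
# Minimal sketch of grounding: answer only from a vetted knowledge base,
# rather than letting the model fetch anything like a Golden Retriever.
# The knowledge base and the scoring are illustrative stand-ins.

KNOWLEDGE_BASE = {
    "returns": "Customers may return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 5-7 business days.",
    "warranty": "All appliances carry a one-year limited warranty.",
}

def retrieve(query):
    """Pick the vetted passage whose words best overlap the query."""
    q_words = set(query.lower().split())
    best_key, best_score = None, 0
    for key, passage in KNOWLEDGE_BASE.items():
        score = len(q_words & set(passage.lower().split()))
        if score > best_score:
            best_key, best_score = key, score
    return KNOWLEDGE_BASE[best_key] if best_key else None

def grounded_answer(query):
    passage = retrieve(query)
    if passage is None:
        # Refuse rather than hallucinate when nothing vetted matches.
        return "I don't have vetted information on that."
    # In a real system, the passage would be inserted into the LLM prompt
    # as context; here we just return it to show the grounding step.
    return passage

print(grounded_answer("How long does standard shipping take?"))
```

The key design choice is the refusal branch: a grounded system that finds no matching vetted passage should say so, rather than let the model fall back on whatever its training data suggests.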

Keith Kirkpatrick: Right. Right. It’s interesting. You bring up Salesforce. They are a company that seems to be going about deploying generative AI the right way. I think you have a series of articles or research notes called Adults in the Generative AI Rumpus Room, which I think is a great title, by the way. What is it that they’re doing, and whether we’re talking vendors or even enterprises that are looking to deploy generative AI, what do they need to look out for?

Mark Beccue: Yeah, I love that. What’s funny is I’m finding more and more that our advice… You’ve been in on this. The advice to companies is make sure you’re paying attention to what you learned in business school and a business discipline. It’s really very constructive to think, “Do I want to do…” This is what Salesforce did. They said, “We have an issue we’re trying to solve. What would be a way to solve it?” They didn’t say, “Oh, we need some AI. I want to go do it.” That’s kind of flavor of the month right now. They said, “Well, we think that AI can help us with some of the things that we do.”

They went through a process and they said, “All right, so here’s something it can do. We’re going to figure out how to use it. Then we’re going to…” As you do that, you understand more and more what its pros and cons are and you’ll say, “Oh, that helps me understand processes and people.” These very soft things that you need to run AI, it’s not really technical. It’s got to do with these policies you might have. Better data management, better data governance, how do I… They have something that they’ve pioneered that we really push out to a lot of people, and it’s called the tenets of responsible AI.

You ask yourself these questions about the use case. You’d say, “Does this meet the criteria for privacy, accuracy, all of the bias?” All those things that you ask it to do, they figured that out. I think that, really, it goes back to a company with a lot of experience. These guys that have worked with AI for a while, they figured it out the hard way. You can’t shortcut it. It’s really not easy to shortcut it. What’s interesting is those same principles that we just talked about apply to, let’s call it, predictive AI before you get to generative AI. It doesn’t change. Matter of fact, it just amplifies it. You’re taking the principles that these companies already had in place, and they’re just applying them.

Keith Kirkpatrick: Right. Yeah, that makes a lot of sense. I guess the big issue here, it sounds like, it’s not necessarily about the technology. It’s about, as you mentioned, looking at your people, looking at your processes, and making sure that they’re set up properly before you even think about the technology as the solution.

Mark Beccue: What problem are you trying to solve? What problem are you trying to solve? It sounds simple, but that’s really where you start and think about, “Oh, okay, maybe that’ll help. Maybe it won’t.” One of the things, I think, Salesforce… And we talked about others like Adobe. They’ve actually said, “What problem do I want to solve?”

This really goes to innovation, how you think about technology innovation. What you’re dealing with these companies that are very bright that do this is they’re not saying it’s one thing or another. They’re saying, “I don’t care what technology solves a problem. I’m not going to be biased in that sense. Maybe it’s AI, maybe it isn’t, maybe it’s something else.”

Keith Kirkpatrick: Right. Well, that seems like a prudent approach, because again, getting back to the other point you made there about the selection of different LLMs, why use one over another? Well, you have to look at the problem that you’re trying to solve and then worry about which model you might use and how you might use it.

Mark Beccue: Yeah. That’s what we’re seeing in this mutation is people are not… It’s not a marriage. You’re kind of dating LLMs. I think there’s going to be a lot of dating of LLMs and no real long-term commitments to one or another, because exactly what you said: we’re getting these different flavors. One might make better sense for certain use cases that you have versus another. We’re seeing a lot of, “I’m going to use this one. I’m going to use this one.” It’s kind of like getting a cloud provider, too.

Keith Kirkpatrick: Right. Right. Well, thanks very much for your insights, Mark. There’s certainly a lot going on in the world right now and it’s only going to continue. But what I’d like to do right now is just take a look at some of the news in the market over the past week or so and get your thoughts on it. These are all related to generative AI, so just sit tight.

One thing that we saw recently is that Oracle announced on September 19th that they have integrated generative AI-powered capabilities within their Oracle Fusion Cloud customer experience offering. The interesting thing here is they mentioned several use cases, I think, assisted agent responses, assisted knowledge articles, customer engagement summaries, guidance for content creation.

But the thing that really was interesting to me was the use of generative AI to help out field service professionals in providing recommendations. What I think they’re getting at here is… Let’s say you have an appliance repair person coming out to your house and he is trying to figure out which way to put in a certain part. The goal here is to be able to make it easy for him to say, “Hey, which way should this part go in,” and not have to go through a 50-page manual.
Are use cases like that where you see real value, in that it actually saves people time, as opposed to a lot of what these other vendors are saying: “Hey, we can save you 10 minutes from writing an email,” or something like that?

Mark Beccue: Right. Well, I like what you said. Oracle is very bright in AI. I think they’re pragmatic. That’s another word. They’re an adult in the rumpus room. Interestingly, if you notice that list, they didn’t have an automated agent. They didn’t have a lot of that. You and I have talked about that so much. They were using it in an assisted way. You’re thinking about human-in-the-loop kinds of things.

I think that they’re understanding the low hanging fruit of generative AI is in all of those things you mentioned, kind of bring in the strengths of that. Whereas you avoid the “I’m going to handle it directly” customer service things. That’s something we’ve been working on for years and still not solved exactly, but we’re getting there.

The field agent thing is interesting because what most people want to add to that is computer vision ideas. I’ll be interested to see where that goes. There’s certainly generative AI in the computer vision space as well, but it’s like, “Well, if I could show you a picture of this and then pops up and shows me, here’s what that part looks like and where you should put it.” That’s been on the table for a while, so maybe that gets solved.

Keith Kirkpatrick: Right. Well, the other thing that I found really interesting is… I guess a couple months back, I saw a paper that had been published by researchers at MIT Sloan and Stanford. They basically looked at the use of a generative AI tool in the contact center. The really interesting thing was they found out that, yes, using generative AI can save time, it can make these agents’ life a little bit easier, but really the productivity gains are the highest among new workers.

I think there’s something to be said for that. Organizations are going to have to look and see where generative AI makes the most sense because, and this is something I think you alluded to before, compute isn’t free. You can’t just go out and deploy this everywhere, because it’s expensive.

Mark Beccue: Right. It is. We don’t know… Being in the early stages of this in my coverage, we’re looking at these investments, and what these LLM providers are going to spend, with training being one side of the equation and inference being the other, is substantial. While you have a lot of unicorn companies showing up and a lot of this positioning going on, what we really don’t have, and we’ll go back to when we talk about business disciplines and things like that, is what’s the cost? What’s the cost? We don’t really know.

Companies are pretty good about going, “Okay, I’m going to try and figure this out,” but all that has to line up pretty well. Whoever they’re getting the LLM from is going to say, “All right, well, here’s a contract, but it’s open-ended,” or “Here’s how much I’m going to charge,” or you have to go run this in your own data center. That kind of thing is still being worked out. It’s formative, right? We’re working on those.

Keith Kirkpatrick: Right. Is it your sense that this is something that’s going to be changing over time? We’re not going to see the same pricing?

Mark Beccue: It has to. There’s so much emphasis right now on how much pressure AI is putting on compute that there’s a ton… I mean, if I were… Well, I am an investor, but I mean, if you were investing in things, you’re thinking about all these different, again, mutations. Here’s the LLMs coming out. Now, we’re getting this cottage industry of, “Well, this runs the models more efficiently and here are our new chips,” because we don’t have chips that actually were designed to run AI workloads. It just so happened that GPUs work pretty well.

Now you have this race in chips and all of a sudden we’re going to… because you have to push the price down or the cost down of compute. It’ll be interesting to see how this all sorts out for what it costs people. I mean, we had Microsoft come out and say, “Copilot’s going to cost $30 a seat enterprise.” At least that’s a stake in the ground. We’ll see what happens, right?

Keith Kirkpatrick: Yeah. Right. Well, it’s interesting. Another piece of news that I wanted to bring up here is ServiceNow, which is, I believe, another one of the adults in the rumpus room there. They actually announced several major enhancements within their Now Assist family of solutions in their next release, which is the Now Platform Vancouver Release.

Now, the really interesting thing, aside from the fact that they’re deploying generative AI across all of these different workflows, is that they claim… It seems like they’re really claiming first-mover advantage in terms of making all of this technology generally available not just in pilot. It’s available for anyone to use. Is first mover advantage, is that even important given the fact that, like you said, we’re still in the early days and things are going to change?

Mark Beccue: No, I don’t think so. I think that people go to ServiceNow because they have a very specific thing they’re trying to do. This is what we do. It’s like, “Well, I have AI now.” It’s like, “That’s nice. Did it make what you do for me better? If it does, then fantastic. And if it doesn’t, it doesn’t help.” What you have there is a classic case of marketing people talking sometimes. At the end of the day, we encourage our clients and tell them, “Tell them what it’ll do. Why is this better? Why am I making your life better?”

Now, that said, let’s say some of these SaaS companies you’ve mentioned are kind of… I don’t want to say morphing into, let’s call it, low-code platforms. They’re kind of saying these things to their customers who tweak stuff. It’s not just embedded AI in the SaaS that I get and I’m just going to go do this. It’s kind of more like, “Well, I have something that you can make a little better if you’re…” I get the sense that most Fortune 500, Fortune 1000 companies do tweak a lot of these SaaS platform applications for their own use. That would be logical to say, “Okay, maybe that does help a little bit.”

Keith Kirkpatrick: Right. Okay. Interesting. Then the final piece of news I wanted to get to, and this is somewhat related: SugarCRM, which is a CRM vendor, just released the results of a survey that I found pretty interesting. They surveyed about 800 global sales folks, marketing people, IT leaders, and so forth, trying to understand the shifting use cases for CRM over the past five years, given everything we’ve gone through, from the pandemic to economic uncertainty and whatnot.

One of the really interesting things that came out of that is that they are saying that a lot of these folks still feel that CRM doesn’t have the capability or doesn’t have the features that it needs to really provide much more value, and they’re looking to generative AI to help that. My question to you is, isn’t that missing the point when we’re focusing on generative AI? Isn’t it really about, again, defining the problem and figuring out the solution, and then, “Hey, if generative AI is the way to go, great. But if you can do it another way, take it that way.”

Mark Beccue: Yeah, it seems a little vague. If you were to say, “My productivity is better or worse,” what were they actually saying about that CRM problem? They weren’t being very specific, but we do know that if you just took the generative AI use cases and said, “It seems that the argument for, let’s say, text summarization…” I heard one that actually was pretty cool. Let’s say you’re on a call, a sales call, or any call. We’re on all these different business calls. And the assistant takes the meeting notes. We’ve been doing this for years, but it’s transcribed immediately. Generative AI can do that, right?

Keith Kirkpatrick: Correct.

Mark Beccue: Now, you have a record immediately available that is an accurate record of that meeting. I’ve heard some instances where… This was actually last week. I heard this from an enterprise that was doing this. It’s like there’s no question when you’ve got four people on a call and the transcription says this. It’s not open to interpretation. You had some synergies. That’s CRM. That’s the synergy, to say, “Hey, we just saved some time because we’re not arguing about who heard what in a different way. This is what we said.” Take those notes and move them into something quicker. I can put this into other content very quickly.

You can see… I can put it this way, Keith. Think about where we are right now. We have a couple things going on. One would be a massive paradigm shift. I like to think of it as when we first got the iPhone and there was the App Store and when we have Tacobots and different stupid things that people came up with. It’s like, “This is what we can make with the…” Then you had people really thinking about how to apply it and it got sharp. We’ve got mobile apps now that just are beyond our comprehension.

I never even thought we’d be like, “Oh, well, we can…” I got on a plane last week and I’m like, “I don’t have paper.” I completely… Boom, phone, done. All of these things that we just… It takes time for all of those logical things to come out. The other part is the education process. That is, you’ve got people coming up with innovation.

Now you’re going to have people thinking about how they use these interfaces and all this… How they’re going to use an LLM. We are going to be prompt engineers, all of us. If we don’t ask it right, it’s not going to give us what we want. I know that’s kind of off the subject, but that’s really where these people think about those things.

Keith Kirkpatrick: Isn’t that also one of the goals of some of these platforms is to help folks construct platforms or prompts, excuse me, so they are asking the right thing without going in circles?

Mark Beccue: We’re just getting to that. We’re just getting to that. There’s an education process and it’s on… You think who’s got something to lose there is whoever’s putting that out to say, “Well, you need to use this.” It’s like, “Oh, well, I have a magic…” All those different things that Oracle said. Well, does it come with an instruction manual or how do I use this? What’s the best way for me to use this? It behooves Oracle to make sure they’re telling people, “By the way, you might want to ask it this way or you might want to do…”

Just bullet point things that teach people how to maximize what they’re doing because they’re not going to know. Then they may get frustrated and just put it aside. I like to think of one other part that you’re going to laugh about, but there’s all these bells and whistles people talk about and they’re like, “Well, you can do this and you can do this.” And I’m thinking VCR clock. Do you ever set a VCR clock? Does anybody know how to do that?

Keith Kirkpatrick: Never did.

Mark Beccue: Yeah, they never knew how to do it. I don’t care. It just went away. It was like, “It was a great feature.” No, I never used it. So we’re going to have to teach people.
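The idea Keith raised, platforms helping users construct prompts so they "ask it right," often takes the shape of structured prompt templates. A rough sketch, where the template fields, function names, and the ticket text are all hypothetical illustrations rather than any vendor's actual implementation:

```python
# Hypothetical sketch of a platform helping users "ask it right":
# a structured prompt builder fills in role, task, context, and output
# format so the user doesn't need to learn prompt engineering.

from string import Template

PROMPT_TEMPLATE = Template(
    "You are a $role.\n"
    "Task: $task\n"
    "Use only the following context: $context\n"
    "Respond in $format."
)

def build_prompt(role, task, context, format="plain text"):
    """Assemble a complete, well-structured prompt from the user's inputs."""
    return PROMPT_TEMPLATE.substitute(
        role=role, task=task, context=context, format=format
    )

prompt = build_prompt(
    role="customer service assistant",
    task="Summarize the customer's issue in two sentences.",
    context="Ticket #1234: customer reports the dishwasher leaks.",
)
print(prompt)
```

The platform's value is in the scaffolding: the user supplies only the task and context, and the template bakes in the framing and constraints that a frustrated first-time user would otherwise have to discover by trial and error.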

Keith Kirkpatrick: Yeah, that makes sense. All right. Well, thank you for your insights, Mark. Now I’d like to get to the last part of our show here, which is the rant or rave of the week. What I’d like to do is give you a topic and you can either choose to rant or rave about it. I’ll just throw this out there.

We talked about it a little bit before about SaaS platforms allowing enterprise customers to bring their own LLM into their platform and use that. Is that something that is a smart strategy? Is it something that is introducing a host of other issues or what’s your thought there?

Mark Beccue: Yeah. I’m not sure if it falls into rant or rave. I’m going to call Switzerland on that and say-

Keith Kirkpatrick: Sit on the fence.

Mark Beccue: Yeah, sit in the middle. I think the reason you’re seeing that is because we’re at this point where you’re at the beginning of how all this stuff’s coming out and nobody wants to be married. You’re not making a big, long commitment. You’re kind of like, “We’re dating. We’re just dating casually. We’re not even going to think about this for any other reason.” I think that that is smart on the behalf of the companies that say, “Bring your own model,” because there’s a lot of… I mean, it’s a mutation. It literally is a mutation. There’s a ton of really interesting stuff happening.

There’s some friction between private and open source as well. You have lots of enterprises that say, “I’m only doing open source,” or “I’m only doing it the other way,” or those kinds of things. It gives enterprises a lot of flexibility. Maybe I’d lean towards a rave, I guess, a little more than… It’s certainly not a rant, but it’s not a strong rave.

Keith Kirkpatrick: All right, fair enough. Fair enough. All right. Well, thank you very much, Mark, for joining me here on Enterprising Insights. Next time, we’re going to be discussing the growing use of low-code and no-code platforms, something we touched on a little bit today within the enterprise. I want to thank everybody for tuning in today, and be sure to subscribe, rate, and review the podcast on your preferred platform. We’ll see you next week.

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist, he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

