Consolidating Applications to Improve Efficiency, Leverage Technology Investments, and Tighten Security – Enterprising Insights, Episode 5

In this episode of Enterprising Insights, host Keith Kirkpatrick, Research Director, Enterprise Applications, at The Futurum Group, discusses the topic of tech stack sprawl, focusing specifically on the proliferation of multiple enterprise applications within an organization. He covers the conditions that lead to sprawl, the risks and drawbacks of acquiring and implementing a wide range of applications, and highlights the offerings from vendors that are designed to reduce or eliminate sprawl.

He also covers some recent news and newsmakers in the customer experience software market. Finally, he’ll close out the show with the “Rant or Rave” segment, where he picks one item in the market, and he’ll either champion or criticize it.

You can grab the video here and subscribe to our YouTube channel if you’ve not yet done so.


Disclaimer: The Enterprising Insights webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.


Keith Kirkpatrick: Hello, everyone. I’m Keith Kirkpatrick, Research Director with The Futurum Group, and I’d like to welcome you to Enterprising Insights. It’s our weekly podcast that explores the latest developments in the enterprise software market and the technologies that underpin these platforms, applications and tools. Now, I don’t have a special guest this week, so I’m going to use our time together to take a deeper dive into an issue that’s impacting the enterprise software market, technology stack sprawl. Try saying that five times quickly. So if you’re here and I’m here, it’s truly our time, so let’s dig right in.

So let’s begin by taking a deep dive into tech stack sprawl: identifying what it is, some of the challenges that a sprawling tech stack can create for an enterprise and its users, and how IT leaders can address and mitigate the issue. Now, tech stack sprawl, which sounds like the name of a band made up of a bunch of IT workers, has been defined in many ways. But it really refers to the growth of the application and technology stack to the point where it becomes hard to identify what each application does, because some applications do the same thing as others, some haven’t been used in months, years, or really ever, and some may only be used by a select few workers within the organization. The root cause is silos: tech sprawl builds over time because development teams add tools for specific functions, teams, or team members without assessing the tooling or software options that already exist. There are also cases where there just isn’t any communication with other team members about the available alternatives. This can result in the unplanned acquisition of different versions of applications or technologies that are used to solve the same problem. Further, there’s an accelerating demand to deliver new functions, features, or workflows very quickly, and this demand may overtake data-driven tech decision-making, resulting in a sprawl of redundant, overlapping, poorly managed technology that increases the overall risk to the organization. The biggest of these risks are, of course, security risks and compliance concerns, and then hiring issues, too, because you may have a legacy tech stack that nobody really wants to use or interact with.

Now, obviously, tech stack sprawl results in several issues, one of which, of course, is technical debt. You’ll have an organization making investments in software that nobody uses anymore and that no longer provides value, but you’re still paying license fees or paying for maintenance and updates. This is a really poor use of resources. Worse, if these applications are simply forgotten about or ignored, they can become security risks. If you have an application that’s been left by the wayside because of the acquisition of another similar app or tool, it might not get the security patches or updates that it needs. It might not be monitored to see what data is actually flowing in and out of the application. It might not be tested on a regular basis to ensure that it’s compliant with local or regional data privacy and security regulations. Essentially, you’ll have an asset that is a security vulnerability. Now, tech sprawl can also impact worker productivity. If we look at the average enterprise right now, I think the stats are pretty clear: workers use an average of between six and eight applications throughout their day to complete their typical workflows or processes. As a result, employees are spending about a quarter of their time simply trying to find and work with the information they need to do their jobs. This can lead to hours and hours of frustration from toggling between apps. This “toggle tax” can add up to around four hours of wasted time each week, and if you look at that over the span of an entire year, that’s 9% of your workers’ time at work each year. Multiply that by the number of employees you have, and that is a really, really large number.
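To see how quickly that number grows, here is a back-of-the-envelope sketch of the “toggle tax” math. The four-hours-per-week figure comes from the discussion above; the headcount, working weeks per year, and loaded hourly cost are illustrative assumptions, not figures from the episode.

```python
# Rough cost of "toggle tax": hours lost switching between applications.
# The 4 hours/week figure is from the discussion above; headcount, weeks
# worked per year, and loaded hourly cost are illustrative assumptions.

def toggle_tax_cost(employees, hours_lost_per_week=4, weeks_per_year=48,
                    loaded_hourly_cost=60.0):
    """Return (annual_hours_lost, annual_dollar_cost) for the whole workforce."""
    annual_hours = employees * hours_lost_per_week * weeks_per_year
    return annual_hours, annual_hours * loaded_hourly_cost

hours, cost = toggle_tax_cost(employees=1000)
print(f"{hours:,} hours lost per year, roughly ${cost:,.0f}")
```

For a hypothetical 1,000-person organization, that works out to 192,000 hours a year, which is exactly the kind of “really, really large number” the multiplication produces.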

Now, tech sprawl can also increase the distance between your developers, because they may be using different tools and speaking different languages when they’re asked to manage a variety of applications that aren’t really working together. It makes it more difficult to collaborate and support each other on projects when they’re doing things like customization or changing workflows. And as a result, developer productivity is really impacted by these redundant efforts: they’re wasting time re-implementing solutions that already exist in other teams and other applications, which means they’re spending less time developing new functionality and truly innovating. So if all these serious risks are brought on by tech sprawl, what needs to happen? Well, it is a challenge to really identify and combat it. The truth is that workers, and even developers, have their own opinions on their favorite tools and technologies. Sometimes it’s because it’s the application they’ve always used to do something, or they’re more familiar with it. And, of course, there’s the human issue that people are afraid of change. Even if an application doesn’t work well or there are bumps in the workflow, they know what they know. Ultimately, the other issue is that a lot of the time, decisions on which applications to use are made in silos by individual developers or teams, and the knowledge isn’t shared or evaluated on a regular, consistent basis.

So how do you really identify that you have a sprawling tech stack? Well, certainly, if you’re taking a look at the accounting of software in the enterprise and you’re seeing different applications that function similarly for different teams or different groups, that could be an indication of sprawl. And if your resource list of applications seems outdated or contains duplications, it might indicate that the tech actually being used is changing more frequently than it is being documented, and that could be a problem. The other problem, of course, is low visibility: if you don’t know which applications are being used by which people within the organization, and for which tasks, there’s a pretty good chance that the IT team isn’t really keeping tabs on what’s going on. You shouldn’t have several applications that do essentially the same thing and access the same data within the organization; that can be indicative of a tech stack starting to sprawl out of control. This will also lead to things like redundant effort and duplicative work between teams. If there are multiple major versions of the exact same libraries within the organization, multiple competing technologies that do the same thing, or even multiple implementations, different versions, of the same application, that’s a sign there’s sprawl and something needs to happen. If you think about it from a resource perspective, developers’ time is already at a premium. Tech sprawl will further burden them and pull them away from what they should be doing, which is writing new code or improving existing code across a visible and efficient technology stack.
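The “several applications doing the same thing against the same data” check can be sketched in a few lines. This is a minimal illustration, assuming you already have an application inventory; the record shape (name, capability, data source) is a hypothetical simplification of what a real tech stack intelligence platform would derive automatically from discovery and usage data.

```python
from collections import defaultdict

# Hypothetical inventory: in practice this would come from automated
# discovery, not a hand-maintained list.
inventory = [
    {"name": "AppA", "capability": "crm",       "data": "customer_db"},
    {"name": "AppB", "capability": "crm",       "data": "customer_db"},
    {"name": "AppC", "capability": "analytics", "data": "sales_dw"},
]

def find_overlaps(apps):
    """Group apps by (capability, data source); return groups with >1 app."""
    groups = defaultdict(list)
    for app in apps:
        groups[(app["capability"], app["data"])].append(app["name"])
    return {key: names for key, names in groups.items() if len(names) > 1}

print(find_overlaps(inventory))  # flags AppA and AppB as redundant CRM tools
```

Any group the check returns is a candidate for the kind of consolidation conversation described above.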

So what needs to happen here? Well, it’s not an easy challenge, particularly for larger organizations and ones that have been adding to their roster of applications and tools over the years, but you need a strategic approach to sprawl. There’s a process known as strategic portfolio management. What that really means is taking stock of every system, every process, and every tool within the organization, examining how they work together to accomplish very specific tasks, and then making decisions about which ones should stay and which ones should go, based on which ones actively support your business objectives. That’s a challenging process. It’s kind of like doing spring cleaning in your house. But instead of a pile of old winter clothes, the old toys your kids haven’t touched in 10 or 15 years, or old china that never gets used, within the IT organization you’re going to have a complex web of applications and tools being used in a number of different ways across departments. You may also have issues with shadow IT spending and usage data that is sliding underneath the radar. Tidying up this mess can be a real challenge, but it’s absolutely essential, particularly if you’re going to be infusing new technology across the entire organization. And, of course, I’m referring to the increasing use of artificial intelligence.

Business leaders need to take a look and really assess the real-world evidence of which tools are being used, to identify which ones are providing genuine value. One way to do this is to look at adoption and employee usage data, so you can see which employees are actually logging in to or using a particular application, how much they’re using it, and what tasks they’re using it for. This will help ensure that any purchase that was made is delivering ROI, and it should inform your future IT decisions. Now, to do this on a manual basis is insane, particularly for larger organizations. If you don’t have some sort of tech stack intelligence software platform, you tend to run into a few different issues. First, complexity and size: changes may be happening frequently in different areas of your organization, and it’s just too tedious to try to track that manually or by sending out a survey, because things may change. People also don’t always understand, or aren’t able to accurately quantify, their utilization of a specific tool, particularly if one application is pulling data from another application rather than from a data lake. The application may actually be in use, but you, or the user, just may not have that visibility. Individual developers may also have no way of knowing which teams are using which applications to solve similar problems within your organization. If that conversation isn’t occurring on a regular basis, which nobody really has time for, they’re just not going to have that insight and visibility.
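The adoption-and-usage analysis described above can be illustrated with a small sketch. The login record shape, the app names, and the 25% adoption threshold are all assumptions for the example; a real platform would pull this data from SSO or application telemetry rather than a hand-built list.

```python
from collections import defaultdict
from datetime import date

# Hypothetical login records (app, user, day) — in practice sourced from
# SSO logs or application telemetry, not maintained by hand.
logins = [
    {"app": "Planner",  "user": "ana", "day": date(2023, 11, 1)},
    {"app": "Planner",  "user": "bo",  "day": date(2023, 11, 2)},
    {"app": "LegacyBI", "user": "ana", "day": date(2023, 11, 3)},
]

def adoption_report(records, licensed_users):
    """Return {app: fraction of licensed users who logged in at least once}."""
    active = defaultdict(set)
    for r in records:
        active[r["app"]].add(r["user"])
    return {app: len(users) / licensed_users[app] for app, users in active.items()}

report = adoption_report(logins, {"Planner": 4, "LegacyBI": 10})
underused = [app for app, frac in report.items() if frac < 0.25]
print(report, underused)
```

An app that falls below the threshold is exactly the kind of purchase whose ROI deserves a second look before the next renewal.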

And then, of course, you have multiple versions of competing technologies. That can really create a problem, because one team may find an application absolutely critical, believing it’s the only application that can address that particular problem, whereas another team may find another one that does the same thing. There needs to be a harmonization around each application: what its strengths are, what its weaknesses are, and which features are deemed absolutely critical for the organization. Doing all of this manually can be an exercise in futility, given the size and scope of many organizations’ technology stacks. Now, in the past, version control systems have gained some traction in terms of making sure that everyone is on the same page when it comes to source code. Organizations are going to need to bring that same level of automated monitoring to the entire application technology stack so that visibility can be expanded across it. There are a bunch of tech stack intelligence platforms out there, StackShare, AllStack, WalkMe, and many others. They can really assist your IT teams, providing the support they need to get to that level of visibility on an automated basis and then help consolidate and streamline the tech stack.

Now, how does that happen? Hopefully, automation is being used to make these changes, and this utilization, visible in real time. It can be driven by AI, or it can simply be looking at basic data like logins and application utilization time. There also needs to be some sort of team empowerment. What I mean by that is you have to allow users and developers to make data-driven technology decisions. You need to be able to lay it out and say, “Look, here is your team. Here is how many people actually use this application, for how long, and for which tasks.” If they don’t know about their utilization, they’re never going to be able to step back and make an honest assessment of whether a particular application is really delivering the business benefit it promises. And then, ultimately, when you’re looking at software platforms, you need that visibility to look at overall workflows and processes, rather than looking at applications in a siloed manner. Because, as I mentioned before, you may have applications that are pulling data from others, and you need a plan for managing either the elimination of a particular application or a tighter integration between two applications. Ultimately, this is not easy work. You need a platform to help you make that decision by providing enhanced transparency and visibility. And the most important thing is to make sure it is capable of looking at all of the apps within your tech stack, whether on-prem or in the cloud. Unless you have that holistic view, it’s very, very hard to make a decision that will impact, connect, and elevate the entire organization.

So what do I see moving forward? Well, certainly, there is an increased focus on consolidation of tech stacks. We’ve heard a number of large vendors talk about that as an issue being raised in their meetings with prospects and customers. Why is that? Well, because some of these platforms are able to incorporate greater functionality within a single platform, rather than having to go out and pull together best-of-breed point solutions, so we’re starting to see that happen more now. As AI begins to infiltrate all types of software and processes, we’re going to see more of it, simply because more capability can be built into those applications. And, of course, generative AI can actually be used to help managers interact with these tech stack intelligence platforms more effectively and efficiently, so they can conduct more frequent reviews to make sure that workflows are operating as they should, that users are interacting with the software as they should, and that everything is as efficient as it can be. Now, certainly, we believe you’re not going to see a complete elimination of other applications. Integrations will survive, but organizations are going to focus on the ones that deliver the most value and can be easily monitored to make sure they are, in fact, working as they should. There’s a lot to consider in this topic, but I expect that enterprises will continue to address this challenge as they prepare their organizations to integrate AI and other advanced automation tools.
We’re at an inflection point where there needs to be a reckoning: as organizations undertake a digital transformation or an AI transformation, they’re going to have to make sure they’re aware of the tech they have and that it does what it says it does, and then make the hard decision to really streamline. Otherwise, this problem will continue to grow, and when you infuse AI into it, it can only become more confusing.

Okay. Well, now I’m sure I could talk for hours and hours on this, but we do need to move on here in the show. I want to take a look at a couple of companies making news this week in the world of enterprise software, because there are a couple of things that really caught my eye. One of the news items was from the FinOps Foundation. They recently announced the first preview of their foundational project, the FinOps Open Cost and Usage Specification, or FOCUS. The idea behind FOCUS is to create a framework that normalizes cost and usage data between SaaS and cloud providers. Essentially, the spec includes definitions for commonly used terms and the kinds of metrics providers should attach to them. Right now, if you’re dealing with multiple cloud vendors, each of them has distinct metrics, terminology, and usage dimensions. That can create a real problem in terms of leveling or harmonizing all of that information when trying to assess which provider you should go with. So it’s really great to see a consortium of industry participants coming together to simplify the representation of this cloud cost data. That should increase trust in the data, while enabling enterprises to expedite their cloud adoption and, most importantly, maximize the value of the investments they’re making today.
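The normalization problem FOCUS addresses can be pictured as a field-mapping exercise: each provider’s billing export uses its own names and shapes, and everything gets translated into one shared schema. The vendor field names below are invented for illustration, and the target column names are only loosely modeled on the kinds of terms a spec like FOCUS standardizes; consult the actual specification for its real column set.

```python
# Illustrative only: map two hypothetical vendors' billing rows into one
# common schema. Vendor field names are invented; target columns are loosely
# modeled on the kinds of terms FOCUS standardizes, not quoted from the spec.

VENDOR_FIELD_MAP = {
    "vendor_a": {"cost": "lineItemCost",  "service": "productName", "start": "usageStart"},
    "vendor_b": {"cost": "billed_amount", "service": "service_id",  "start": "period_begin"},
}

def normalize(vendor, row):
    """Translate one vendor-specific billing row into the common schema."""
    fields = VENDOR_FIELD_MAP[vendor]
    return {
        "BilledCost": float(row[fields["cost"]]),
        "ServiceName": row[fields["service"]],
        "ChargePeriodStart": row[fields["start"]],
    }

row_a = {"lineItemCost": "12.50", "productName": "Compute", "usageStart": "2023-11-01"}
row_b = {"billed_amount": "7.25", "service_id": "storage", "period_begin": "2023-11-01"}
print(normalize("vendor_a", row_a))
print(normalize("vendor_b", row_b))
```

Once every provider’s rows land in the same columns, comparing cost and usage across clouds becomes a straightforward query instead of a translation project.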

Now, the companies involved, many of which you would consider to be competitors, which is a good thing: AWS, Microsoft, Google, Oracle Cloud, IBM, Meta, the list goes on and on. The fact that they’re all working together on creating this spec goes to show that they, as vendors, realize how pervasive this problem is. Okay. Another really interesting thing going on this week: Microsoft held their Ignite event, and they announced several new AI tools. Microsoft really wants to bring AI to everyday technology experiences across the board, from collaboration to field service and, of course, within productivity applications. Microsoft announced three new Copilot offerings across its software and technology services portfolio: Copilot for Azure, Copilot for Service, and Copilot in Dynamics 365 Guides. They’ve also launched Copilot Studio, a platform that delivers tools for connecting Copilot for Microsoft 365, that’s the Copilot in apps like Excel, Word, and PowerPoint, as well as in the Edge browser and Windows, not only to internal data, but to third-party data. So that’s pretty interesting. Microsoft seems to be doing well in terms of pushing out its Copilot strategy. I believe CEO Satya Nadella said during the event that about 40% of Fortune 100 companies were testing Copilot as of this fall, and its aggressive rollout of Copilot features and functionality is really helping to position the company as a leader in the generative AI space. But, of course, Microsoft is far from alone when it comes to rolling out a wide range of generative AI tools and features this fall.

Certainly ServiceNow announced the availability of a major expansion to its Now Assist generative AI portfolio, with new capabilities such as Now Assist for Virtual Agent, flow generation, and Now Assist for Field Service Management. Salesforce is also looking to bring two new generative AI tools to market very soon, including Copilot for Service, which gives users the ability to ask Einstein questions about their service intelligence dashboards, metrics, and trends, using natural language directly within Service Cloud and Einstein Studio. This will help surface AI-powered insights, like propensity to escalate, which is the likelihood that a customer will elevate a complaint to the next level, and provide predictions on the time it takes to resolve a customer issue. All of these capabilities are designed to be productivity multipliers for end customers, essentially reducing the amount of time spent searching for, summarizing, and creating basic information, while allowing deeper insights to be surfaced automatically. So again, going back to the topic I raised earlier, you don’t have folks needing to toggle through a million different applications; all of the relevant information will be surfaced automatically. Now, it’s going to be interesting to see how well these new tools can be integrated within existing workflows, and it’s going to take a lot of foresight and planning, in addition to technical integration skills. But certainly, we feel there is a lot going on, a lot of competition in the marketplace, and that is always a good thing, because it continues to drive the bar upward in terms of functionality. And perhaps most importantly, it forces organizations to go beyond just talking about product, and to talk about how they, as vendors, are going to really help these enterprises move their AI strategy forward.

Now we come to our favorite part of the show, the rant or rave of the week. This is where I throw out a topic and take a couple of minutes to either rant or rave about it. Today my rant is really around enterprise software vendors and their messaging. As everyone knows, everyone is talking, and has been talking, about generative AI and the different features they have, and we all know the low-hanging fruit: content generation, summarization, those types of use cases. Now, everyone claims that their differentiation is around the amount of data they use to train and tune their models. And again, I’m talking about interaction data, not things like PII. Pretty much all the vendors of note have said they are not using PII to train their models; they’re taking steps to really strip that out, and that’s a good thing. And, of course, everyone’s talking about how they have the best and brightest working on their technology, and you would hope they would feel that way, otherwise they wouldn’t be hiring these people. The problem is, I think buyers are going to need more. They’re going to need more transparency around the important things. Like, how do you really ensure that the models are doing what they’re supposed to be doing? How are they grounded in specific datasets or corpora? All of that kind of stuff, to me, gets glossed over; at least in my conversations with vendors, they tend to focus on feature set, and a lot of times they say, “We’re the only ones who have this.” Well, guess what? You’re not. There are a lot of different companies and vendors out there, both large and small, offering very, very similar features or tools using generative AI to do very similar things. Will there be a difference in how well it’s done based on the data that’s being used? Absolutely.
But over time, enterprises are also going to learn how to tune that to work for them in their specific situations.

Ultimately, what I think is going to be most important, particularly as the market really evolves and matures, is how well organizations, vendors that is, are able to convey the more important things to the buyer. How are you working with customers to really identify the right use cases? And I’m talking about the more complex ones, not the low-hanging fruit. How are you helping them with the process of deploying generative AI: piloting, evaluation, model tuning and training, revisions to the interface, particularly when it comes to things like using generative AI to create composable interfaces? And then, of course, what processes do you have in place to ensure a reasonable time to market for some of these projects? It’s great that vendors are able to say, “Okay, it’s only been 6, 8, or 10 months since generative AI became the shiny new object in the room, and we’ve already rolled out products.” But what’s really going to make generative AI take off is vendors that are able to help organizations go from “not sure how we’re going to use this” to rolling it out in a way where ROI is actually being generated. These are the questions and areas I believe will help companies stand apart. My rant is that every conversation I have, it’s always leading with feature set, feature set, feature set, and ultimately, all of that is going to become commoditized over time. I’m not saying vendors shouldn’t talk a little bit about it and say, “Yes, our technology meets these certain benchmarks,” but really it comes down to bigger issues.

If we look at any kind of technology, I’ll go back to the spec wars of desktops and laptops in the mid-1990s, when I used to review them. You’d have vendors sending me these laptops, along with sell sheets that highlighted the performance and feature specifications, benchmarking scores from other magazines, all of that kind of stuff, and all of the messaging was around raw performance. “Our drives have access times that are lower than our competitors’. Our processor speeds are this, so on and so forth. We can download a video much quicker than another one.” But over time, vendors started to wake up and realize that for most enterprise buyers and users, the overall experience, which includes factors like ease of use, ease of integration into the existing environment, how well it supports the company’s specific workflows and processes and, of course, total cost of ownership, the level of support provided and, overall, ROI, those are the reasons a vendor would be selected, not simply because they check the boxes, or the most boxes, on a list of features. So with that, I’ll conclude my rant. We are out of time. I want to thank you all again for joining me here on Enterprising Insights. Next week, we’re going to be discussing enterprise software pricing and the factors that are driving it. So thank you all very much for tuning in, and be sure to subscribe, rate, and review the podcast on your preferred platform. Thanks, and we’ll see you soon.

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

