
Impacts of AI & GenAI to DevOps | DevOps Dialogues: Insights & Innovations

On this episode of DevOps Dialogues: Insights & Innovations, I am joined by The Futurum Group’s Stephen Foskett and Keith Townsend, for a conversation on the impact of AI and Generative AI on DevOps, application modernization, and innovative development methodologies.

The discussion covers:

  • How the integration of AI, including Generative AI (GenAI), has impacted the efficiency and effectiveness of DevOps practices within modern software development lifecycles
  • Specific examples where AI technologies have enhanced or optimized DevOps processes such as continuous integration, continuous deployment, and automated testing
  • The challenges of implementing AI/GenAI within DevOps frameworks, and how organizations can address issues related to transparency, accountability, and bias in AI-driven decision-making
  • Ways that AI contributes to the evolution of DevOps culture, collaboration, and cross-functional team dynamics, particularly in environments focused on agility and innovation
  • How AI-powered analytics and predictive capabilities influence decision-making processes in DevOps, including resource allocation, risk management, and performance optimization across development, testing, and deployment pipelines

These topics reflect ongoing discussions, challenges, and innovations within the DevOps community.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your favorite audio platform below:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this webcast. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Transcript:

Paul Nashawaty: Hello and welcome to this episode of DevOps Dialogues, where we talk about insights and innovations. Today I’m joined by The Futurum Group’s Stephen Foskett and Keith Townsend. Thank you both for being here today. We’re here to talk about the impact of AI and generative AI on DevOps, application modernization, and innovative DevOps methodologies. We’re coming off AI Field Day, and it was a really exciting time: lots of conversations about what was happening there, and lots of topics that really impact the DevOps community. When we look at DevOps in general, and at some of the conversations we’ve been having, there’s an impact on software modernization. Keith, why don’t I start with you? When we look at integration around AI and GenAI, what are your thoughts on some of the findings we heard around software modernization?

Keith Townsend: Yeah, so the data pipeline and the AI pipeline are not the same as the development pipeline. As you think about your CI/CD process, your development pipeline, there may be some incompatibilities. We’re not used to thinking about model revisions, retraining, and RAG updates; getting your data in, refining your data, your models, and your infrastructure is not going to follow the same process as rapid deployment of code. At the same time, we’re going to see AI improve our ability to get code out faster. We saw a lot of examples of this copilot concept: being able to code faster, better code reviews, et cetera. So there are these two pieces of it: AI impacting your ability to get data sets out faster, and your ability to actually create and review code faster.
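To make Keith’s distinction concrete, here is a minimal sketch, in Python, of the two lifecycles side by side. All stage names and triggers are hypothetical placeholders; the point is simply that a per-commit code pipeline and a drift- or schedule-driven model refresh pipeline do not share steps.

```python
# Hypothetical sketch: a code CI/CD pipeline vs. a model/RAG refresh pipeline.
# Stage names and triggers are illustrative assumptions, not a real tool's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Stage:
    name: str
    run: Callable[[], None]


def pipeline(name: str, stages: list[Stage]) -> None:
    print(f"== {name} ==")
    for stage in stages:
        print(f"  running: {stage.name}")
        stage.run()


# Code pipeline: triggered on every commit, optimized for fast turnaround.
code_pipeline = [
    Stage("lint-and-unit-test", lambda: None),
    Stage("build-artifact", lambda: None),
    Stage("integration-test", lambda: None),
    Stage("deploy-canary", lambda: None),
]

# Model pipeline: triggered on data drift or a schedule, not on commits.
# Steps like baseline evaluation and RAG index refresh have no analog above.
model_pipeline = [
    Stage("validate-new-data", lambda: None),
    Stage("retrain-or-finetune", lambda: None),
    Stage("evaluate-against-baseline", lambda: None),
    Stage("refresh-rag-index", lambda: None),
    Stage("promote-model-version", lambda: None),
]

if __name__ == "__main__":
    pipeline("code CI/CD (per commit)", code_pipeline)
    pipeline("model/RAG refresh (per drift event or schedule)", model_pipeline)
```

The incompatibility Keith points at lives in the triggers and the evaluation steps: wiring a retrain into a per-commit pipeline would either slow commits to a crawl or skip the evaluation a model change actually needs.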

Paul Nashawaty: Yeah, absolutely true. And when I talk to CIOs in this space, their number one challenge is application modernization: generating code and getting it out at a rapid cadence. I’m seeing in our research that 24% of organizations that responded want to release code on an hourly basis, yet only 8% of organizations can do so. Stephen, we heard all the conversations around modernization, and around AI and its impacts. What are your thoughts on the impacts to CIOs and what their challenges are?

Stephen Foskett: Well, I certainly think that the only way to get to that kind of cadence of hourly release cycles is with the integration of AI tools: essentially having AI handle a lot of the mechanical processes involved with preparing, testing, compiling, and rolling out. And I think it’s really inevitable that that’s going to happen. We’re seeing AI tools integrated into every area of the stack, every point of the stack. And to Keith’s point, although the AI development process is not the same as the modern DevOps process, I think what we’ve seen here is that there’s a lot of flexibility to AI, maybe more than people think.

I guess a lot of people who are coming into it and seeing the impact of GitHub AI features think that this is some ivory tower, some big, heavy, in-the-cloud kind of thing. That’s not, I think, what’s going to happen. What we’ve seen from all of the AI Field Day presentations is that this stuff is going to run locally, it’s going to be implemented everywhere, and it’s going to quickly appear at every stage. It’s not going to be something where you have to wait until the development platform you’re using integrates it. I think you’re going to have AI at every point, and that may actually lead to this pretty optimistic world where you’re doing that kind of software development cadence.

Paul Nashawaty: Yeah, I agree. And touching on that, double-clicking on that, it’s really taking those stages or steps you were talking about and moving them into the CI/CD pipeline, the continuous integration, continuous delivery pipeline. It’s an infinity loop: organizations build on constant agile software development methodologies and then push that code out the door faster. But to your point, Stephen, you really need that automation to help drive it, and that repository of where the information lives, to make it repeatable and faster to get out.

I mean, what we’re seeing in research is that organizations are doing two to three times more work now than they were just a few years ago, with half the resources. So automation is the only way to get there. Keith, I’m curious about your thoughts. When we look at the CI/CD pipeline and its stages, I look at it in the context of day zero, day one, day two: build, release, and operations. And when we look at build and release, there’s testing, there are pieces to it. What are your thoughts on how AI would help with that?

Keith Townsend: So actually, even before AI Field Day, I was having this conversation. If we think about why we can’t get to the point where we’re releasing code every hour, there’s an element of automation. We’ve done this for the past few years: automated the testing that we can automate. But the problem with automated testing is just that. It’s automated, it’s rigid. We can’t inject random parameters. I dealt with this when we were developing a software base for video broadcasting at Lockheed Martin. We could put bots at the end that would pull down the stream, but they didn’t act like humans.

So there wasn’t variability to it. And every time we pushed a new release, we crossed our fingers and hoped that the scale testing we did was enough. In a few cases it wasn’t. But what happens when we can get AI-enabled bots at the edge that will do the things humans do: rewind randomly, pause, break the system in ways we never thought of? That gets us to a more assured state of testing, so that we can, with confidence, get to this higher cadence.
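A minimal sketch of the kind of randomized edge bot Keith describes, assuming a purely hypothetical stream client. Seeding the random generator keeps failures reproducible, which scripted test plans get for free and randomized ones have to earn back.

```python
import random

# Hypothetical actions a human viewer might take against a video stream.
ACTIONS = ["play", "pause", "rewind", "seek", "change_bitrate", "disconnect"]


class StreamClientBot:
    """A toy bot that exercises a player with randomized, human-like behavior
    instead of a fixed scripted sequence. The "player" here is imaginary."""

    def __init__(self, seed: int):
        # Seeded RNG: a failing session can be replayed exactly from its seed.
        self.rng = random.Random(seed)

    def session(self, steps: int = 20) -> list[str]:
        trace = []
        for _ in range(steps):
            action = self.rng.choice(ACTIONS)
            trace.append(action)
            # Sometimes drop mid-stream, as real viewers do.
            if action == "disconnect" and self.rng.random() < 0.5:
                break
        return trace


if __name__ == "__main__":
    for seed in range(3):
        bot = StreamClientBot(seed)
        print(f"bot {seed}: {bot.session()}")
```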

Paul Nashawaty: No, I like where you were going with that. It makes a lot of sense, because of the trending data I’ve seen in industry research. In 2022, I ran a study that showed only 29% of organizations were doing continuous integration testing in the CI/CD pipeline, yet when I ran that same study in 2023, I found that 66% of organizations were doing it. So why is that? Because, exactly to your point, Keith, the DevOps business KPI is to push code out the door: release code very quickly, rapidly, hourly or daily. They’re not looking at it from the testing perspective. So when the testing team raises their hand and says, hey, we have a challenge here, there’s a problem, the code still gets pushed out the door, because that’s what the business KPI rewards. So Stephen, what are your thoughts on that?

Stephen Foskett: Yeah, it’s funny. I would actually zoom in on the same point: testing, and developing AI-assisted testing. I think this is going to be a game changer because, as you’re saying, a test plan that is very specific and strict, first we do this, then we do that, is almost necessarily going to miss things. If you have more AI-assisted testing, it’s going to be huge. And of course, operations. I’m more from the operations side myself, and I think AI is impacting that a lot too. But really, looking at how to make this cadence quicker, I think rigid test plans are the big thing that’s going to go out the window, and we’re going to have AI bots doing testing.
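One existing technique in this direction is property-based testing. The sketch below uses the Python `hypothesis` library to state a property and let the tool generate adversarial inputs, rather than enumerating a fixed test plan. The function under test is a made-up stand-in.

```python
# Property-based testing sketch with the `hypothesis` library: instead of a
# fixed list of cases, state an invariant and let the tool hunt for inputs
# that break it. `normalize_discount` is a hypothetical function under test.

from hypothesis import given, strategies as st


def normalize_discount(percent: float) -> float:
    """Hypothetical function under test: clamp a discount to [0, 100]."""
    return max(0.0, min(100.0, percent))


@given(st.floats(allow_nan=False, allow_infinity=False))
def test_discount_is_always_in_range(percent):
    # The property: whatever the input, the output stays in range.
    result = normalize_discount(percent)
    assert 0.0 <= result <= 100.0


if __name__ == "__main__":
    # Calling a @given-decorated test runs it across many generated inputs.
    test_discount_is_always_in_range()
    print("property held for all generated inputs")
```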

Paul Nashawaty: Yeah, I agree. I think it also comes down to a level of maturity within organizations, right? If you look at that bell curve, some organizations are going to be far to the right and very, very mature, and some are very much in the infancy of doing this, still a very manual process. But pivoting the conversation a little bit: when we look at the DevOps process and methodology, AI also introduces challenges. Some of those might be ethical challenges, legal challenges, copyright challenges, or whatever it may be. What are your thoughts on that, Keith?

Keith Townsend: We actually talked about this a lot during the field day. What does it mean to have a safe model? What does it mean to have safe inference? And this is something that’s different for every organization. What I learned specifically is that each organization needs to take this seriously. CIOs and CTOs need to think about standing up independent AI ethics offices to think through what it is they want to get out of AI and what they want to avoid. At the start of this, I remember a quote from one of the big vendors that we’re going to do a lot of stupid stuff. And quite frankly, you don’t get a pass in this climate to do stuff that violates your firm’s ESG policies or your legal policies.

As a consumer, I don’t care if AI was the thing that released my personal data. I don’t care if AI was the thing that said I can get a refund on an airfare when that’s against your legal policy. That’s irrelevant: that AI chatbot is you. And I think it’s coming more and more into view that AI is not just this thing we can pass off. If the driverless car hits a pedestrian, someone is responsible for that, and someone will see litigation as a result. These are things we have to take seriously, even in our CI/CD process. I think it was Intel, or it might have been Kamiwaza, that brought up the challenge of the gates you set up in your CI/CD process: if you’re going to use AI for those gates, you had better have some type of random human testing, some type of augmentation by humans.
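A minimal sketch of the gate pattern Keith describes: an AI verdict gates the change, but a random fraction of AI approvals is still routed to a human. The `ai_review` stub and the 10% audit rate are assumptions made for illustration, not a recommendation from the speakers.

```python
import random

# Sketch: human-sampled AI gate in a CI/CD pipeline. The AI verdict is a
# hypothetical stub; a random slice of its approvals still gets human eyes.

HUMAN_AUDIT_RATE = 0.10  # assumed: audit 10% of AI approvals


def ai_review(change_id: str) -> bool:
    """Placeholder: imagine a model scoring this change against policy."""
    return True


def gate(change_id: str, rng: random.Random) -> str:
    if not ai_review(change_id):
        return "blocked by AI gate"
    if rng.random() < HUMAN_AUDIT_RATE:
        # The human reviewer, not the model, has the final say here.
        return "AI pass, sampled for human review"
    return "AI pass, promoted automatically"


if __name__ == "__main__":
    rng = random.Random(42)  # seeded so audit sampling is reproducible
    for change in ["chg-101", "chg-102", "chg-103"]:
        print(change, "->", gate(change, rng))
```

The design choice worth noting is that the sampling happens on approvals: rejections already fail safe, while unaudited approvals are where an AI gate can silently drift.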

Paul Nashawaty: Absolutely. And the analogy I use is the screwdriver and the drill. AI is like the drill versus manually turning the screw with a screwdriver. It’s faster to use AI, but you can also do real damage if you don’t know what you’re doing with the drill, right? So it’s important to keep that in mind. But Stephen, when I look at it from an accountability perspective, what are your thoughts there?

Stephen Foskett: Well, it’s funny. I think the answer there is surprisingly straightforward, and that is that, like any tool, to Keith’s point, it comes down to this: somebody’s going to be held responsible for it, and it’s going to be the person who chose to use that tool for that process. If you have decided that you’re going to use an AI-assisted tool to do software development, well then you had better make sure that you thoroughly test and evaluate its outputs, and make sure it’s actually giving you what you want it to give you, and that it’s not plagiarized and not dangerous. And it’s the same with the rest of this too. If you’re going to apply AI to the problem, you made that decision; it didn’t happen in a vacuum. And if the AI is not the right tool, if it doesn’t have checkpoints and controls and it gives you the wrong answer or acts in a dangerous way, that’s on you as the decision maker, not on the AI for having made the wrong decision.

Paul Nashawaty: Right.

Stephen Foskett: I mean, if you decide that your toddler can drive your car, it’s on you when he hits something, not on him or the car maker.

Paul Nashawaty: No, absolutely. And again, it’s a tool, right? And you are responsible for the tool. So, looking at where this conversation’s going: there are a lot of different facets to AI, and a lot of different things happening with DevOps. What I found interesting about the push to make things faster is that it often changes the culture of organizations as well. During the recent AI Field Day event, I challenged a lot of the vendors that came up on academic approaches versus reality, because with AI you can now test different models or different cases much faster. I’m curious about both your thoughts. Stephen, I’ll start with you: when we look at testing out these academic models for what-if scenarios, what if this, what if that happens, how does AI change that culture within organizations, specifically DevOps?

Stephen Foskett: Well, I think one way the culture is going to be challenged by AI is the unique nature of rolling out these tools. As we’ve heard here, as companies implement AI, it is very likely that it’s not going to line up nicely with their organizational lines and their lines of accountability and responsibility. So I do think organizations are going to have to look very carefully at forming cross-functional teams to bring AI into their environment. They’re going to have to figure out how the company’s policies are going to be impacted by these new tools, and basically how they’re going to approve or deny the use of new tools, and also the use of the outputs from those tools.

One of the anecdotal stories a delegate at AI Field Day told me was about a situation with different software development teams, where one team decided they were going to outlaw any kind of large language model generative AI, and another team decided they were going to embrace it. Well, guess what? That’s an organizational problem, and one that has to be solved at a very high level of management. And that’s a reality I think we’re going to start seeing in the world: people making different decisions, and companies having to decide which direction they’re going to take.

Paul Nashawaty: For sure. For sure. And it was really interesting hearing you talk about deciding which direction to take. From the practitioner’s perspective, I often hear: it’s going to take my job, it’s going to take my job away. In fact, what we see in a recent study is that 67% of organizations indicated they’re looking to hire IT generalists over IT specialists. And that reads to me as roles changing more than jobs going away. But Keith, I’ll turn to you. What are your thoughts? Is the DevOps community going to go away now that AI is taking over? Is that what’s going to happen?

Keith Townsend: So we’ve heard this story before. We can go back to the industrial revolution and the fears about automation and jobs. We can talk about the introduction of spreadsheets and its impact on the accounting and bookkeeping industries, and now the modernization of IT operations. There’s this fear in the networking community that automation will take their jobs, that AI will take their jobs. And I think that community is just coming around to realizing that not only are AI and automation not taking my job, they’re bringing me higher up the stack, using my experience and knowledge around network operations and making it even more valuable.

I think my concern is more about the entry level, because for $20 a month I’m basically getting a junior developer or a junior network admin or a junior SRE, the people that lighten my load. So the entry path into these professions might change; we will have to figure out that part of it. But as for the person who is advanced in their career: I talked to one of the AI Field Day sponsors just the day before this recording, and he was saying how he hasn’t developed code in 20 years, but because he understands code development, he’s now writing some pretty complex routines to understand where in the world people are visiting his website from. This is not his business website; it’s just something he’s doing on the side. So AI is definitely an accelerator, not just for business opportunity, but for careers.

Paul Nashawaty: I like that. I like that comment, because the way I see it, it really helps organizations, DevOps teams, and developers. I know developers really hate the term developer velocity, but it really does help organizations increase the productivity of individuals. More importantly, it allows individuals to be more innovative and focus on high-value tasks versus the more mundane ones. So Stephen, one last topic point. How is AI going to impact purchasing decisions for solutions in the market? We talked about test harnesses, the CI/CD pipeline, and the build, release, and operations pieces. Where do you see predictive results and purchasing being impacted by AI?

Stephen Foskett: Well, there are a lot of different ways it could be impacted, but I’m going to bring up one that maybe people aren’t thinking of, and that is the infrastructure used by software developers. I think a lot of people assume the big thing a software developer needs is, what, a monitor and a keyboard? Going forward, we may see more demand for hardware, for silicon, deployed in the hands of software developers. I know that sounds strange, because it’s been a long time since they’ve had any real hardware needs, but maybe we’ll see that. Who knows?

Keith Townsend: Yeah. So I actually think this will impact areas outside of AI, because it gets back to your first question about the legal and ethical impacts of AI: it has unintended consequences. We’re now giving this super powerful inferencing tool not just to developers, but to end business users who have no formal training in ethics or data mining. So in theory, a developer with no data science capability can now create an application that redlines loan applicants without knowing that it’s redlining applicants.
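A first-pass screen for the redlining risk Keith describes might look like the sketch below: compare approval rates across groups and flag large disparities. The data is synthetic, and the four-fifths threshold is a common rule-of-thumb screen for disparate impact, not a legal determination.

```python
# Sketch of a basic disparate-impact screen over synthetic loan decisions.
# Groups, data, and threshold are illustrative assumptions only.

from collections import defaultdict

applications = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]


def approval_rates(apps):
    totals, approvals = defaultdict(int), defaultdict(int)
    for app in apps:
        totals[app["group"]] += 1
        approvals[app["group"]] += app["approved"]  # bool counts as 0/1
    return {g: approvals[g] / totals[g] for g in totals}


rates = approval_rates(applications)
print("approval rates:", rates)

# Four-fifths rule of thumb: flag if the lowest group's approval rate
# falls below 80% of the highest group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("warning: possible disparate impact; route to human review")
```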

So when we think about the procurement lifecycle, the procurement departments of these really large organizations have to think through the impact of a new tool, whether it’s an AI tool or something AI-adjacent that feeds it data. At The Futurum Group we have an intelligence platform with fabulous data points in it, and I hate to slow down anyone buying more intelligence; please go out and buy the intelligence platform, I would love to see more subscribers. But what happens when I give the wrong people, or at least people at the wrong point in their maturity, access to all of this data?

Paul Nashawaty: And a little insight on the intelligence platform: the data sets I’ve been citing during this presentation today are from the intelligence platform as well. But there’s a whole other aspect of FinOps and AIOps, which will lead into a whole different conversation, and unfortunately we are out of time today. So I want to thank you, Keith and Stephen, for your perspectives and insights. I really appreciate your time today, and I want to thank the audience for attending and watching the session. Please reach out to us if you have any questions, and we look forward to the next episode. Thank you and have a great day.

Other insights from The Futurum Group:

Application Development and Modernization

The Evolving Role of Developers in the AI Revolution

Docker Build Cloud Aims to Revolutionize DevOps – The Futurum Group

Author Information

Paul Nashawaty

At The Futurum Group, Paul Nashawaty, Practice Leader and Lead Principal Analyst, specializes in application modernization across build, release and operations. With a wealth of expertise in digital transformation initiatives spanning front-end and back-end systems, he also possesses comprehensive knowledge of the underlying infrastructure ecosystem crucial for supporting modernization endeavors. With over 25 years of experience, Paul has a proven track record in implementing effective go-to-market strategies, including the identification of new market channels, the growth and cultivation of partner ecosystems, and the successful execution of strategic plans resulting in positive business outcomes for his clients.
