In this episode of The Main Scoop™, hosts Greg Lotko and Daniel Newman discuss the importance of observability and generative AI strategies in operations with Cory Minton, Field CTO – Americas at Splunk.
It was a great conversation and one you don’t want to miss. Like what you’ve heard? Check out all our past episodes here, and be sure to subscribe so you never miss an episode of The Main Scoop™ series.
Disclaimer: The Main Scoop Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Transcript:
Greg Lotko: Hey, folks. Welcome to the next episode of The Main Scoop. I’m Greg Lotko and I’m joined here by my co-host, Daniel Newman.
Daniel Newman: Greg, it’s good to be back with you, bud.
Greg Lotko: Good to see you. We’re going to have a lot of fun. Today, we’re talking about operational intelligence and observability. That can mean a lot of different things to a lot of different people, but we all want to know what’s going on in our IT, don’t we?
Daniel Newman: Yeah. This is a space that’s piqued a lot of interest over the last couple of years. We’ve seen some big M&A, we’ve got some impending M&A. We’ve got a number of companies that have really turned their attention to this fire hose of data that we have to pay attention to. It’s operational data, it’s security operations data. And with this has been this massive rise, Greg, of interest in observability, which has become a bit of a buzzword. It’s not generative AI, but next to generative AI, it’s one of the biggest buzzwords in all of tech.
Greg Lotko: And it wouldn’t be an episode of The Main Scoop without Daniel referencing generative AI.
Daniel Newman: We’re not.
Greg Lotko: But let’s keep on operational intelligence here and observability.
Daniel Newman: Sorry. You’re right. It’s not very important. Nobody’s talking about it.
Greg Lotko: I didn’t say it wasn’t important. Let’s just stay focused here on what we’re talking about today. All right. All right. We’re going to get some help as we always do, and we’ve got Cory, the “Big Data Beard,” Minton with us from Splunk. And I don’t think that means he goes crawling around in caves. Tell us a little bit about yourself and your role, and then dive in. Join us in the conversation, Cory.
Cory Minton: For sure. Well, thanks for having me. And yes, as you can tell, I’m probably not built for cave diving. I’m a little too large for the crevices that you might find.
Greg Lotko: You want big caves.
Cory Minton: Very large. Claustrophobia is a real thing. But yeah, I’m a Field CTO for Splunk, and have been there about four years, but I’ve been in the general big data and AI space for a number of years. And operational resilience is something that’s near and dear to my heart, as I get to work with a lot of big organizations that are trying to make sure things stay securely up and running. And so, it’s an interesting place to be, because as you said, there’s lots going on in the market. I would actually say that, from an observability and operations perspective, you can’t have a conversation without talking about the regulations being handed down. They’re driving people; it’s no longer optional. And I think that’s maybe why observability is getting so much attention right now: executives are hearing from places like the SEC, which is saying, “Hey, if you have a material incident, something that affects your operations in a way the shareholders might care about, then you only have four days to respond.”
Greg Lotko: Tell us what happened.
Cory Minton: Exactly. And give us some material idea of what it was, what the thing was that happened, what the impact was. And so-
Greg Lotko: How can we avoid it the next time?
Cory Minton: Well, and that’s the second part of the legislation: not only do you have to report it, but when a thing happens, you also have to report what you’re doing as an organization, from the board level all the way down to operations, to strengthen your resilience. And they’re using that term, resilience. NIST has been using it as the encapsulating function: if you do operations well, you’ve built digital resilience. I think that’s why observability is very buzzy right now.
Daniel Newman: Yeah, we’ve definitely seen this pivot too in the boardroom, where cyber resiliency and cybersecurity have gone from “What is the least we could spend to get away with it?” to “We absolutely have to get ahead of this.” And so, I want to start with the basics, Cory. Observability. I’ve got to be candid with you. I’ve been covering Splunk as an analyst for almost half a decade. I’ve gotten to know your last two CEOs very, very well. And now I’ve been close to… We work closely with Chuck too at Cisco, and I’ve watched the investment and the growth over at Cisco. And now Gary comes in, and the acquisition. But I still get asked: ask 10 people what observability is, and you get 10 answers. What’s the Splunk answer? How do you guys introduce observability to the world?
Greg Lotko: Is there one answer, would you say?
Cory Minton: No, I think that’s part of the challenge: these terms get conflated, candidly, oftentimes by marketers that are trying to put their spin on something. But for those of us who have lived in this operational space, observability is how you make sure you understand, foundationally, what’s happening in your digital systems. The last decade has really been defined by organizations’ digital innovation, their digital transformation objectives. As you digitize more of the ways that you interact with your customers, the ways that you collect revenue, the ways you create value in the market, as that becomes more digital, it becomes more mission-critical. It’s no longer just a manufacturing facility we’re protecting now. It’s a digital interaction that we have to protect.
At the end of the day, I think observability is applying a thoughtful lens of: I need to understand what’s foundationally happening in our systems, which have now become very distributed in nature. From traditional mainframes running in the data centers, to VMware and distributed systems running in the data centers, to hyperscale public clouds, which are distributed systems consumed as a service. Most of the time, organizations have a little bit of all of that. And they just need to understand, when something goes bump in the night, an outage, a breach, an attack, how is it manifesting? What’s it actually affecting? I think that’s what observability does. It says, let’s instrument all the things in a way such that I can answer questions really rapidly about the interconnected digital services, and make sure that they’re secure and up and running.
Greg Lotko: And I love a couple of things in there that you talked about: the interconnected systems, across. There are different pieces going on on different platforms. Observability isn’t running down the horse track with your blinders on; it’s observing what’s going on across my workload. How are these things interacting? Can I pull that all together and tell you what’s going on across the whole? Because when something goes flooey, you may see the manifestation at the end, at the screen. But you want to understand what happened here, and then you want to understand how you can avoid it next time.
Cory Minton: Yeah. And we’ve had other terms for this in the last 25 years. Business process monitoring has been one. We had the emergence of the application performance monitoring and management space, and candidly, if you look at the market, most of the observability noise is really just APM providers saying APM is observability. They’ve made those two the same thing. I think it includes more than that, because applications at the end of the day have to have infrastructure. It has to include infrastructure monitoring; you have to understand what’s going on there. It’s APM: how does that application actually manifest in the end-user interaction? It’s not always an application that runs inside your data center. Oftentimes it’s a web-based app that’s got some sort of experience, whether it’s mobile or on the web. And you have to connect all of those. And I think that’s where we were going with business process monitoring: we need to understand the business impact of the digital system. Well, fast-forward, and that’s still the reality. Things just got more distributed and complex, so solving that problem has gotten harder. And I think that’s one of the reasons why observability gets a lot of attention. It’s because it’s getting hard.
Greg Lotko: I think words and terms drive behavior and perspective. A lot of the time, when people were thinking about business process monitoring, they broke it up and talked about this process or that sub-process. Observability makes me think of a much broader scope. And clearly there are a lot of technologies on each platform, from different providers, that give you the lens into individual things. But I know for some of our customers, a significant number of our customers, they want a bunch of that data going off to Splunk or other service providers so that they can have it coming from across the whole process. And then they can look at it holistically. And it makes a lot of sense. That’s why we’re investing in making the connection, and being able to drive and push data out about what’s going on in the mainframe environment to places like Splunk. That way you get the bigger understanding, not just of what’s going on with an individual piece; you can see it end-to-end.
Cory Minton: Yeah, I think one of the interesting things happening in the observability space is that a lot of the vendors are getting a lot of pressure from their customers and their users, who are saying, “Look, I don’t want your value, the thing I pay you to do, to be that you can instrument my app, or that you can instrument my cloud, or that you can instrument my infrastructure.” They want the value of the analytics. What is it that you’ve done to help me ensure that whatever that digital service is, the one that’s really important, that’s revenue-generating, customer-facing, how are you actually ensuring that we’re getting the outcomes we want? And so I think that’s one of the areas where this open source project called OpenTelemetry comes in, which is really around the idea that if we want a foundational view of all the things in our complex systems, we ought to have some sort of standard way to instrument all the things.
Greg Lotko: And common ways to talk about it and all the things.
Cory Minton: Exactly. And a normalization function for the data and its semantics. The data that comes off of a mainframe maybe looks different from what comes out of a server from Dell, which looks different from what comes out of CloudWatch, or CloudTrail, or Google Compute. But at the end of the day, some things are normalized and should be thought of as the same. So how do you instrument all of those very different and unique systems in ways where I’m no longer paying the vendor for the instrumentation? How do you give me value out of those top-level things? I think that’s what OpenTelemetry seeks to do: it’s an open source project seeking to standardize the instrumentation function, how you instrument modern technology in a way that doesn’t lock you into any one particular vendor.
What it does is put the onus back on the vendor. It says, “I don’t want to buy instrumentation. I want to buy outcomes from you.” And I think that’s a space where observability is getting more interesting, because people are getting less focused on, “Well, look, we have this agent, our proprietary thing, that we can deploy in this framework.” No, it’s about what the outcome is and how you connect it to the actual business. When I say business outcome, I mean simple things, like the revenue that’s being created. If you have an outage, how much money did you lose? If you have a breach, how much money are you losing? Those measurable impacts. At the end of the day, we still have to answer to those executives who are having to answer to those regulatory bodies, rapidly and quickly. I think it’s an interesting direction.
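To make that instrumentation idea concrete, here is a minimal sketch of what vendor-neutral tracing looks like with the OpenTelemetry Python SDK. It is illustrative only; the service name, span name, and attribute are placeholders rather than anything Splunk-specific, and a real deployment would export to an OTLP backend instead of the console:

```python
# Minimal, illustrative OpenTelemetry tracing sketch (pip install opentelemetry-sdk).
# The service name, span name, and attribute below are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Identify the service emitting telemetry; any OTLP-compatible backend,
# Splunk or otherwise, can consume the same data without re-instrumenting.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout.instrumentation")

# Wrap a unit of work in a span and attach business context to it, so an
# outage can later be tied back to the revenue impact Cory mentions.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.value_usd", 129.99)
    # ... the actual order-processing logic would run here ...
```

Because the instrumentation is standardized, swapping the backend becomes a configuration change rather than a rewrite, which is exactly the vendor lock-in point being made above.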
Greg Lotko: And I actually like that the regulatory bodies are driving consistency and responsibility across businesses. But I do believe that in today’s litigious world, there is a greater focus on this at the board level and in the C-suite: “Hey, look, if I have a problem and I can’t describe what happened, and can’t assure my customers that I know what happened and how to avoid it going forward, there’s severe reputational risk.” Yeah, there’s legislation driving from one side, but it’s a business imperative driving from the other side, which is a nice dovetail.
Cory Minton: Yeah, for sure. And I think one area we’d be remiss not to talk about is: let’s say we do a good job at observability. We instrument all the things, we have a foundational understanding of what’s happening, and we have a connection to the business outcomes. What happens when we don’t have enough talented people who really understand the complexities of these modern digital systems? How are we building capabilities to break down the barriers of knowledge and technology? And I think that’s where you can bring generative AI into this observability space. I’m just generally a big fan of two big macro technologies coming together, and gen AI and observability, I think, are a nice fit. I’ll give you an example. Say you have an outage, but you have a group of people that maybe don’t know how to craft a query in a particular product; it doesn’t matter what product. They don’t know how to go look for the data. What if a generative AI could actually recognize what has happened and give guidance to say, “Hey, I think it’s this. We should go investigate these three things”? You go do that with an assistive AI that’s actually helping drive you down that path, to where now you may not have to have those deeply technical skills, the “I know how to clicky-clacky and write a query that gives me a result” skills. I just have to have the business and domain expertise to thoughtfully understand impacts.
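As a rough sketch of the assistive pattern Cory describes, the snippet below turns a plain-English question into a draft search query with an LLM. Everything here is hypothetical: the model name, the prompt, and the choice of Splunk’s SPL as the target query language are assumptions for illustration, not a description of any shipping product:

```python
# Hypothetical sketch of a natural-language-to-query assistant.
# Model name, prompt, and SPL target are illustrative assumptions.
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

def suggest_query(question: str) -> str:
    """Ask an LLM to draft a search query for a plain-English question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": "Translate the user's question into a Splunk SPL "
                           "search. Return only the query, nothing else.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# An on-call responder who has never written SPL could still ask:
print(suggest_query("Which services logged the most 5xx errors in the last hour?"))
```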
Greg Lotko: Well, there are multiple levels to that, right? You want each of these platforms, with standards, to manifest themselves without somebody having to have expertise in the specifics, and then to be able to take action. And then you move the people who understand the deep technology of that platform to developing the things that will differentiate it and provide those capabilities. And I guess I keep jumping in on you. I’m sorry. Get a word in edgewise.
Daniel Newman: It’s okay. It’s been great to witness this conversation. You look at what’s going on with something like EDA tools, which are being used to develop next-generation silicon. Basically, you saw this week a company called Synopsys rolled out a generative tool using OpenAI, where you can have an engineer, so you need a basic level of skill and competency, but they can ask it for things like, “Give me the right calculus or trigonometry sets to basically understand how the architectural layouts need to be completed.” It can almost become like a GitHub Copilot for coders, but this is for engineers. And now we’re going to need the same thing for CISOs and security architects: a way to do what I would call human-machine interfacing. To create almost an empathic sort of interaction that can take place, one that allows people who understand philosophically what’s going on inside of the environment to be able to query it. Even if they can code, it’s not “can you code?” That won’t be the question. The most optimal user of this technology in the future will be someone who can actually code but doesn’t need to. That’s where the real genius comes from this. You will see generative AI layered right on top of observability for basically two purposes. One is going to be for interaction, being able to actually have meaningful, empathic exchanges with that system. The second will be for resolution. Because the resolution is going to need to be the machine giving human language back to us instead of a string of code: “Hey, this is where you’re disrupted.”
Greg Lotko: Give me more on the empathic because I’m thinking, I compute with empathy. I love all my computers.
Daniel Newman: No.
Greg Lotko: I’m trying to figure out the…
Daniel Newman: What I’m saying is human machine it’s-
Greg Lotko: Or am I going to think at it and it’s going to code? Give me the empathy.
Daniel Newman: Human machine. Okay. What we’ve basically built with generative AI is the human speaking to the machine in a language that we understand, and then the machine giving us back context that we understand. The machine doesn’t actually have empathy, but when you can have these interactions, the whole “Her” idea, this whole future era of a computer being able to have a very human-like relationship, it’s because it talks to us in a language we understand. It’s semantic, it’s contextual, it’s human in nature. We’ve learned to speak machine.
Greg Lotko: You want us all to feel like our computers love us.
Daniel Newman: We’ve learned to speak machine. All coders have learned to speak machine.
Greg Lotko: True, true.
Daniel Newman: Now machines are learning to speak human. And when you do that, what I’m saying is we solve problems in a way that works more congruently with the way our brains work. Period.
Cory Minton: Yeah. It’s the synergy between the two. It’s artificial intelligence because it’s intelligence that was ours to begin with. We were the ones that owned intelligence. Humans were. We’re mimicking many of the things that make us human, but we’re codifying them in digital systems. Which is why I think so many folks struggle with understanding neural networks, which are the basis for so much of this generative AI. It works like your brain does, and we don’t even know 10% of how our brain actually works. It’s an interesting space because it’s modeled after us. It’s modeled after data that was created by humans. It’s hard not to see an AI feeling very much like us, because it is us. It’s just… Yeah.
Greg Lotko: I’m trying to figure that out. If we’re working on artificial intelligence, are the computers working on developing human intelligence?
Daniel Newman: Yeah. Kind of. I mean, not actually. Yes.
Cory Minton: I mean at some point we output.
Greg Lotko: But that would be nirvana.
Daniel Newman: The output looks like it.
Greg Lotko: Well, no, that’s called Skynet.
Daniel Newman: The output kind of looks like it. But I really liked your idea, and I liked what you said about the two technologies…
Greg Lotko: They should.
Daniel Newman: Finding symmetry.
Cory Minton: Agreed.
Daniel Newman: And working side by side. Because here’s the one thing I think we can all agree on: everyone likes the hype about how much data is being created. Yes, there’s a lot of data, but the exponential volume of data being created will basically leave us in a position where no company will have enough eyes, enough software, enough ears, enough monitoring, and enough tools to manage it. The only way it works is really sophisticated algorithms, where the white hats move faster than the black hats, so that we’re able to identify, find…
Greg Lotko: I agree.
Daniel Newman: And cure at a rate that is greater than whatever those that are trying to be nefarious are able to attack.
Greg Lotko: I agree with that.
Cory Minton: I simplify that. I call those non-human-scale problems.
Greg Lotko: We’re there.
Cory Minton: They’re solving non-human-scale problems.
Daniel Newman: It’s like quantum-
Cory Minton: 100%.
Greg Lotko: We’re there and we’ve been there for years.
Cory Minton: And that’s one of the things machine learning has been doing. Again, we talk about terms that get conflated and weird, like AI. We have to remember, scientifically, AI is a collection of subdisciplines. Machine learning is a subdiscipline of AI. Within machine learning, you have other subdisciplines, like deep learning, that use different fundamental technology. But still, are they AI? Yes. Some would actually argue that things like robotic process automation are AI, a fundamental part of the field. When you bring all this together, there are these marketing terms that get conflated and put into weird spots. But at the end of the day, it’s about the outcome I’m trying to drive. And in machine learning, we’ve always solved non-human-scale problems, where the volume of data coming in is too high. I can’t, as a human, look at this data and extract a pattern, even though we’re great pattern-recognition engines. I just can’t consume that. So how do I apply a machine to it?
Greg Lotko: It’s the fire hose.
Cory Minton: Yeah. And so I think it goes back to this: when those two come together, making things like instrumentation trivial, to where everybody does it the same way, allows us to start working on the stuff that is transformative, that is interesting, that goes further. But I will challenge you on one thing you said, about the white hats having to work a little faster than the black hats. Black hats have, in my opinion, a distinct advantage with AI. And the simple reason is that we are going to have rules. The good guys are going to have rules and ethics. The bad guys don’t. You can even look at the executive order that came out from the White House about the use of AI. They’ve basically said, “We’re going to investigate the use of AI, and if it has an impact on our citizens’ privacy and wellbeing, there are going to be 12 different government agencies inspecting all of the AI.”
Daniel Newman: It’s been a problem for a while.
Greg Lotko: On the one hand, I think that’s fair, and I was going right where you’re going. It’s been a problem for a long time. I agree. It does concern me, but think about any technology, or anything societally. The nefarious folks have always had no rules, and the good folks have always had rules. And we’ve survived as a society. I want to put faith in humankind and AI-kind, for the good…
Cory Minton: That’s right.
Greg Lotko: That this will work out, and work out at the right pace. Now, I do find it very foundationally interesting, a thread that has run through all of our episodes: talking about generative AI with operational intelligence, talking about cloud, on-prem, off-prem, with mainframe or other technologies. If you think about it, the theme we’ve discovered through every conversation we’ve had is that it’s always been the “and.” It’s always been the combination of technologies that moves us forward faster and lets us accomplish much more. I mean, think about The Main Scoop. This might have a certain level of value if it was just me, or just you and I together. But with the guests that we bring on here, we create such a rich conversation.
Daniel Newman: There’s a word, it’s called synergy.
Cory Minton: You have to do this when you say that word.
Daniel Newman: One plus one equals three.
Greg Lotko: We’ll all do it.
Cory Minton: Synergy. Synergy.
Daniel Newman: And how do you like them apples?
Greg Lotko: Yeah, they’re a little light.
Daniel Newman: Take a bite out of that one. But Cory, we’re going to wrap it up there.
Cory Minton: It’s all good.
Daniel Newman: I want to say thank you so much for joining us here on The Main Scoop. Let’s have you back again sometime soon.
Cory Minton: Sounds good, man. Thanks, y’all.
Greg Lotko: Pleasure having you on.
Daniel Newman: All right, everybody. There you have it. Another episode of The Main Scoop in the bag. Cory Minton from Splunk, want to thank him a lot. We talked about what is going on in observability, and we talked about generative AI, which was really important today. But in all seriousness, Greg, it’s a big topic: operations meeting the future of an exponential data growth curve.
Greg Lotko: I felt very observable today.
Daniel Newman: And for you who are observing, I want to thank you so much for tuning in. Hit that subscribe button, join us for all of our shows, but for this episode of The Main Scoop…
Greg Lotko: See you next time on The Main Scoop.
Daniel Newman: Bye-bye.
Author Information
Daniel is the CEO of The Futurum Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring Digital Transformation and how it is influencing the enterprise.
From the leading edge of AI to global technology policy, Daniel makes the connections between business, people, and tech that are required for companies to benefit most from their technology investments. Daniel is a top-five globally ranked industry analyst, and his ideas are regularly cited or shared in television appearances on CNBC and Bloomberg, in the Wall Street Journal, and by hundreds of other outlets around the world.
A 7x best-selling author, most recently of “Human/Machine,” Daniel is also a Forbes and MarketWatch (Dow Jones) contributor.
An MBA and former graduate adjunct faculty member, Daniel is an Austin, Texas transplant after 40 years in Chicago. His speaking takes him around the world each year as he shares his vision of the role technology will play in our future.