Generative AI Observability, Policy Management | The AI Moment – Episode 14

On this episode of The AI Moment, we discuss an emerging generative AI trend – AI Observability and Policy Management.

Organizations want to develop in-house generative AI capabilities, making centralized generative AI command-and-control products very desirable. Cisco Outshift has dropped a comprehensive, product-agnostic solution in Motific. What will the impact of Motific and similar AI observability and policy management products be?

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Listen to the audio here:

Or grab the audio on your favorite podcast platform below:

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this webcast.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.


Mark Beccue: Hi, everybody. I’m Mark Beccue, Research Director for AI with The Futurum Group. Welcome to The AI Moment, our weekly podcast that explores the latest developments in enterprise AI. We are literally in a moment where the pace of change and innovation in AI is unprecedented. The world has never seen anything like what we experienced when ChatGPT launched in November of ’22 and kickstarted the generative AI era. With The AI Moment podcast, we distill the mountain of information, separate the real from the hype, and provide sure-handed AI market analysis, from the latest advancements in AI technology and the mutating vendor landscape to AI regulations, ethics, and risk management. Typically, the show runs somewhere between 20 and 30 minutes and features different types of segments.

Today, we’re going to cover a topic that I’m watching very closely right now. I’m calling it “Generative AI Observability and Policy Management.” What we’re going to do here today is talk about a new product launch that came out from Cisco, called Motific, and we’re going to talk about that and some similar products, and why they’re needed. And, really, what I’m going to try and dig into a little bit is a difference between some things that are going on in the marketplace. There are some terms we’re used to in IT, called observability, security, and control, and there are products being developed right now that will serve for observability, security, and control for generative AI. On the other hand, there are a couple of other things going on. One would be the idea around AI policy management, and whether that is something related to or different from AI governance.

So what we’re going to do first is I’m going to take you through a little bit of the Cisco information, and then we’ll circle back to some of these definitions. Here’s what happened on the sixth, so it was yesterday, at Cisco Live in Europe. Cisco announced the preview launch of a product they’re calling Motific, and they’re describing it as a SaaS product that will help organizations deploy trustworthy generative AI. It came out of Outshift, the incubator part of their company. What it does is provide a centralized view across an organization’s generative AI, and it empowers the IT and security teams to deliver generative AI across an organization with control over the data; it provides security for all of that, it looks into responsible AI, and it manages costs. So here are the key details.

One of the things that I think is important right away is to understand that it’s a vendor-agnostic solution, so foundation models don’t matter, and it doesn’t matter what kind of vendors the company has. It’s an overlay type of solution to all those things that are going on within a company. The cool parts, at least what Cisco’s saying it will do, are that it cuts deployment times with compliance controls over usage and spending, and with the integration of organization-specific data sources. What they mean by that is proprietary data from your own data sources. And then it adds some automation to configuring assistants, abstracted APIs, or RAG as a methodology. Going further, it’s got these built-in policy controls, which an organization can customize, and it allows those companies to provision based on their internal policies.

So the controls include things like controls for sensitive data, like PII, personally identifiable information. In terms of security, it’s got controls for prompt injection, and it also has controls for trust-related risks, which are the ones we’ve talked about a lot on this show: toxicity, hallucinations, and those kinds of things that language models tend to have. What the enterprise controls can do is detect and mitigate these issues and risks in any kind of LLM response, so we’ve seen that idea before from other companies as well. The final piece would be that Motific tracks business processes and prompt usage intelligence with ROI and cost analyses. That includes things like an audit trail and key metrics for tracking all the user requests, and Cisco is saying it’ll deter shadow AI usage in organizations, providing visibility into unapproved third-party gen AI capabilities and helping IT administrators provision all of this with compliance. That’s one of those things we’ll talk about a little later that Microsoft builds into some of their products that are specific to the Microsoft platform.
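To make the kinds of prompt-level controls described above concrete, here is a minimal, hypothetical sketch of a guardrail that screens user prompts for PII and prompt-injection markers before they reach a model. This is not Motific’s actual implementation; the pattern names, the injection markers, and the rules are illustrative assumptions only. Real products use far more sophisticated detection.

```python
import re

# Hypothetical, simplified policy rules -- illustrative only,
# not any vendor's real detection logic.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
]

def check_prompt(prompt: str) -> list[str]:
    """Return a list of policy violations found in a user prompt."""
    violations = []
    # Sensitive-data controls: flag PII before the prompt leaves the org.
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"pii:{name}")
    # Security controls: flag likely prompt-injection attempts.
    lowered = prompt.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            violations.append("prompt_injection")
    return violations
```

In practice, a platform like the ones discussed here would sit as an overlay, running checks of this general shape on every request, logging the result for the audit trail, and blocking or redacting prompts that violate policy.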

All of that said, the product’s going to be available in June. So when I was looking at this, here’s the challenge and the premise of our conversation today. Companies, or organizations, today are facing big questions. In-house development or outsource? When it comes to generative AI, the trend continues to be a strong desire by most organizations to develop in-house generative AI capabilities, which makes this idea of centralized generative AI command and control, policy control, a very logical and desirable product category. So here’s Cisco with Outshift: they’re leaning on their long experience with IT policy controls, they come in, and they put out a comprehensive and product-agnostic solution called Motific, right? So there are a few things I wanted to run through, and we’ll get into some of those definitions again and where I think this is going. It’s got some potential for what people really need to be looking at.

So first, you have to understand the difference between AI management and AI governance. There’s some overlap here, and there are some products out there that already do some of these things. There are products that are going to provide observability, security, and control, and they’re all growing; there are a few. There’s one from Dynatrace that’s very similar to Motific, and a startup called CalypsoAI offers similar products. They’re designed as overlays to any generative AI product and to other non-generative AI products that interact with generative AI within an organization. I think the approach leans into these legacy command and control, policy control systems that have been very effective and proven to be very trustworthy for organizations for a long time, and the interesting thing there is that organizations have a certain amount of trust in these legacy vendors, which puts Cisco in a good spot.

But on the other hand, you have this family of AI governance products, and even data governance products, that are related. In particular, we’ve talked in earlier episodes about products that came out: one was watsonx.governance from IBM, and another was AWS’s Guardrails for Amazon Bedrock. These are products that help organizations manage generative AI systems, but in both those examples, they’re not vendor- or product-agnostic, so they only work within the IBM or the AWS constructs. That would be the same as we just mentioned for Microsoft Copilot: it works within Microsoft applications with a lot of these controls, but not outside of them. So let’s go over here and talk about some definitions that might help us a little bit. There are terms being thrown around out there, and I’ll give you a few: observability or AI observability, AI policy management, and AI governance.

So I went around and looked at some definitions. For AI observability, I pulled up one from WhyLabs, which is a startup that does AI observability, and this is how they described it: “An AI observability system collects statistics, performance data, and metrics from every single step of your machine-learning lifecycle and delivers actionable insights to stakeholders. That’s a system that needs a view into each stage of the data pipeline, and thus should be relatively infrastructure-agnostic while providing scalability to your data size. By automating the insight extraction process, teams can collaborate and deliver models and, in turn, respond to issues more effectively.” They said the result of an end-to-end observability pipeline is that an organization will get timely insights about changes to data and model behavior in production, especially useful for surfacing common machine learning issues such as drift, stale models, and data quality changes, and these signals can be fed back into the ML process to accelerate the model development lifecycle.
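One of the drift signals that definition mentions can be sketched in a few lines. This is an illustrative example only, not WhyLabs’ or any vendor’s actual method: it uses the Population Stability Index (PSI), a common, simple way an observability system can compare a live feature distribution against a training-time baseline, where higher values suggest more drift.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample.

    Buckets both samples into shared histogram bins and sums the
    divergence between the two binned distributions.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def binned(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Add-one smoothing so empty bins don't produce log(0).
        return [(c + 1) / (len(xs) + bins) for c in counts]

    e, a = binned(expected), binned(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An observability pipeline would compute a signal like this periodically on production inputs or model outputs, alerting when it crosses a threshold, which is roughly the “surfacing drift” behavior described in the quote.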

So you get an idea of what people are talking about when they say “Observability.” On the other hand, you have policy management, so AI policy management. There was an article in eWeek that just put out a definition I thought was good, so we’ll read that for a second. It says, “An AI policy is a dynamic documented framework for AI governance that helps organizations set clear guidelines, rules, and principles for how AI technology should be used and developed within the organization.” It goes on to talk about what some of those are, “Vision for usage within the organization, mission statements, and clear objectives or KPIs that align with that mission,” and many of the things we’ve heard about that are part of AI governance frameworks, right? So when they talk about policy management, they’re really talking about AI governance.

So now you can see there might be some confusion in the marketplace about these terms, and there’s some overlap between what certain products do, but the general idea would be that AI governance is a family of things that might also be called AI policy management, and then there’s this other part, AI observability and security, which is a different idea as well. They’re slightly different, and both are interesting. Another piece that I wonder about with these products is that there are a lot of pieces to connect, particularly for the vendor-agnostic ones. So I was looking at the Motific piece, and I think for anything related to it that’s not vendor-specific, the biggest challenge is going to be these integrations, the complexity of the integration.

So in a generative AI case, that would mean you may have one or more foundation model integrations, you may have third-party AI development platforms to integrate, like Hugging Face or GitHub or Amazon Bedrock or Google Vertex, and on and on like those. You’d have third-party or internal data management systems, and third-party or internal computing platforms, so you can see there are a lot of different pieces, and I think it’ll be interesting to see if these new AI management systems can connect all the dots and how quickly they can do that. The good news is that legacy observability, security, and control products from companies like Cisco are used to these deep, wide-ranging integrations. So will it be a little different for AI? Sure, but when you’re looking at vendors that have done this for other types of integrations, maybe it won’t be such a big issue. The big question will be how long it will take to do those kinds of things and get you up and running.

The second thing, or the last piece I’m going to talk about here, is this idea that most organizations are in the process of deciding where their generative AI operations are going to live. Will it be public cloud? Will it be private or on-prem clouds? And for these products we’re talking about, Motific particularly, is one better suited than the other? The Outshift executives I spoke to told us that Motific works in either of those types of environments and that Cisco, since we’re at the front end of these things, is really flexible about where to go with that. Whether it’s on-prem or in public cloud, they don’t see a real difference. So that’s good news there, but it could be interesting to see how these things roll out and work in a public cloud setting versus an on-premises setting.

So, in conclusion, I think there are some challenges to these ideas, whether for the AI management products in particular or for the governance ones we talked about that were more application-specific, but I think the demand and the need for them is going to grow dramatically over the next year. These are the types of systems, along with other types of governance tools, that will help organizations really tame generative AI risk, and they will push us toward that idea that organizations will use AI responsibly. When we think about it, there’s a precedent: companies have been thinking about these kinds of controls outside of AI, and about how policy controls and observability were important to them outside of AI, and I think we’re going to see a lot of companies looking to implement these kinds of controls over the next year.

And I believe that that’s really going to help settle the industry down and settle people’s fears about what AI could do in a rogue situation. I think we’re going to see a lot less of that because of these types of controls. Okay, that’s really all I had today, a quick view of those kinds of things. Keep looking out; we’re going to watch and see what other companies end up in the space, but I think it’s a theme we’ll be looking at through the year. So I want to thank you for joining me here today on The AI Moment. Be sure to subscribe, rate, and review the podcast on your preferred platform, and we’ll see you next week.

Other insights from The Futurum Group:

Gen AI Case Study: Amazon Pharmacy | The AI Moment – Episode 13

Microsoft Copilot Forecast, Fairly Trained, Google ASPIRE | The AI Moment – Episode 12

Lawsuits and Probes, How OpenAI & Microsoft Are Impacting the Trajectory of AI | The AI Moment – Episode 11

Author Information

Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.

Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.

