Analyst(s): Fernando Montenegro
Publication Date: December 19, 2025
Futurum’s latest report addresses the prevailing confusion within the industry regarding the definition and scope of “AI security,” arguing that current terminology often conflates distinct defensive and adversarial concepts. Futurum examines the necessary strategic pivot from treating AI merely as a productivity tool to managing it as a fundamental architectural shift toward autonomous agents. The analysis provides a framework for understanding AI as a tool, a target, and a weapon, while highlighting the governance challenges posed by rapid technological evolution.
Key Points:
- Security leaders must urgently distinguish between protecting workforce productivity tools and securing the autonomous decision-making logic of enterprise workloads; drawing that line resolves much of the current strategic confusion.
- The industry faces a significant evolutionary mismatch where static, deterministic controls appear increasingly inadequate for governing stochastic, probabilistic AI systems.
- Organizations should approach agentic AI as a horizontal architectural layer requiring new identity protocols rather than a vertical, siloed product category.
Overview:
We are roughly three years into the generative AI revolution, and what began as consumer fascination with chatbots has evolved into a structural transformation of the global economy. However, this rapid ascent has generated a “fog of war” for cybersecurity teams, who often struggle to distinguish between the risks of using these tools and the risks inherent in building systems upon them. Futurum categorizes this landscape into three distinct buckets to cut through the noise: AI for Security (the tool), Security for AI (the target), and Security from AI (the weapon). This categorization is critical because the controls required for a coding assistant differ vastly from those needed for an autonomous customer service agent.
A central tension identified in the analysis is the “evolutionary mismatch” between the speed of AI development and the human capacity for governance. Security teams are attempting to apply static, perimeter-based controls to systems that are probabilistic by nature. Large Language Models (LLMs) do not “know” facts; they manipulate token sequences in high-dimensional probability spaces. Consequently, a firewall rule that is merely “99% probable” to enforce its policy represents a failure in a security context. This suggests that the industry must move beyond “black box” acceptance and embrace “AI mechanics” literacy. Future security architectures will likely need to reintroduce formal logic, such as neurosymbolic AI or formal verification, to validate the decisions of autonomous agents before they execute transactions.
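To make the gap between probabilistic outputs and deterministic controls concrete, the short sketch below places a rule-based policy gate between an agent’s proposed action and its execution. It is an illustrative sketch only: the class names, action format, and thresholds are hypothetical and are not drawn from the report or any vendor product.

```python
# Hypothetical sketch: a deterministic policy gate in front of a probabilistic agent.
# The model may be "99% confident"; the gate either approves an action or it does not.
from dataclasses import dataclass


@dataclass(frozen=True)
class ProposedAction:
    """An action an LLM-based agent wants to execute, e.g. a funds transfer."""
    action_type: str
    amount: float
    destination: str


class PolicyGate:
    """Deterministic, auditable checks applied before any agent action runs."""

    def __init__(self, max_amount: float, allowed_destinations: set[str]) -> None:
        self.max_amount = max_amount
        self.allowed_destinations = allowed_destinations

    def validate(self, action: ProposedAction) -> tuple[bool, str]:
        # Unlike the model's output, these rules are exact: the action either
        # satisfies every predicate or it is rejected.
        if action.action_type != "transfer":
            return False, f"action type '{action.action_type}' is not permitted"
        if action.amount > self.max_amount:
            return False, f"amount {action.amount} exceeds limit {self.max_amount}"
        if action.destination not in self.allowed_destinations:
            return False, f"destination '{action.destination}' is not on the allow-list"
        return True, "approved"


# The agent proposes; the gate disposes.
gate = PolicyGate(max_amount=10_000.0, allowed_destinations={"acct-001", "acct-002"})
proposal = ProposedAction(action_type="transfer", amount=2_500.0, destination="acct-001")
approved, reason = gate.validate(proposal)
print("executing" if approved else "blocked", "-", reason)
```

A production system would back such a gate with richer policy tooling or formal verification rather than a handful of if-statements, which is the direction the report points toward.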
As we look toward 2026, the conversation is expected to shift heavily toward “agentic AI.” Rather than being a niche feature, agency represents a horizontal capability where software executes multi-step goals without direct human oversight. This transition demands a rethink of identity and authorization; if an agent can move funds or alter permissions, it requires its own identity lifecycle. We are witnessing the emergence of new protocols, such as the Model Context Protocol (MCP), which aim to standardize how these agents interact with data and tools. Major platform providers such as Microsoft and Palo Alto Networks are leveraging their “data gravity” to consolidate the market, potentially crowding out smaller players. For practitioners, the path forward involves ignoring the “shiny objects” and using AI as a forcing function to fix foundational debt in data classification and identity management.
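To make “its own identity lifecycle” tangible, the hedged sketch below models an agent as a first-class principal with narrowly scoped, short-lived, revocable credentials. The class, field names, and scope strings are hypothetical illustrations and are not drawn from MCP or any identity vendor’s API.

```python
# Hypothetical sketch: an autonomous agent treated as a distinct identity with
# granular, time-bounded, revocable permissions. All names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                   # the human or service accountable for the agent
    scopes: frozenset[str]       # narrowly granted capabilities, e.g. "read:invoices"
    expires_at: datetime         # credentials are short-lived and must be reissued
    revoked: bool = False

    def can(self, scope: str) -> bool:
        """Authorize only if the identity is not revoked, not expired, and in scope."""
        now = datetime.now(timezone.utc)
        return (not self.revoked) and now < self.expires_at and scope in self.scopes


# Lifecycle: issue a tightly scoped, short-lived identity for a billing agent...
agent = AgentIdentity(
    agent_id="agent-billing-007",
    owner="finance-team@example.com",
    scopes=frozenset({"read:invoices", "create:payment-draft"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(agent.can("read:invoices"))    # True: explicitly granted
print(agent.can("approve:payment"))  # False: approval stays with a human

# ...and revoke it the moment its task ends or its behavior drifts.
agent.revoked = True
print(agent.can("read:invoices"))    # False: revocation overrides every other check
```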
What to Watch:
- Will the volatility in AI model performance and vendor valuations trigger an “AI winter” that constrains security budgets and roadmaps?
- How will identity frameworks evolve to treat autonomous software agents as distinct entities with granular permissions and lifecycle management?
- Can organizations successfully pivot from measuring vanity metrics such as “time saved” to quantifying actual risk reduction in their AI deployments?
The full report is available via subscription to Futurum Intelligence’s Cybersecurity & Resilience IQ service—click here for inquiry and access.
Futurum clients can read more in the Futurum Intelligence Platform, and non-clients can learn more here: Cybersecurity & Resilience Practice.
About the Futurum Cybersecurity & Resilience Practice
The Futurum Cybersecurity & Resilience Practice provides actionable, objective insights for market leaders and their teams so they can respond to emerging opportunities and innovate. Public access to our coverage can be seen here. Follow news and updates from the Futurum Practice on LinkedIn and X. Visit the Futurum Newsroom for more information and insights.
Declaration of Generative AI and AI-assisted Technologies in the Writing Process. While preparing this work, the author used Google Gemini to summarize the original report. After using this service, the author reviewed and edited the content as needed. The author takes full responsibility for the publication’s content.
Author Information
Fernando Montenegro serves as the Vice President & Practice Lead for Cybersecurity & Resilience at The Futurum Group. In this role, he leads the development and execution of the Cybersecurity research agenda, working closely with the team to drive the practice’s growth. His research focuses on addressing critical topics in modern cybersecurity. These include the multifaceted role of AI in cybersecurity, strategies for managing an ever-expanding attack surface, and the evolution of cybersecurity architectures toward more platform-oriented solutions.
Before joining The Futurum Group, Fernando held senior industry analyst roles at Omdia, S&P Global, and 451 Research. His career also includes diverse roles in customer support, security, IT operations, professional services, and sales engineering. He has worked with pioneering Internet Service Providers, established security vendors, and startups across North and South America.
Fernando holds a Bachelor’s degree in Computer Science from Universidade Federal do Rio Grande do Sul in Brazil and various industry certifications. Although he is originally from Brazil, he has been based in Toronto, Canada, for many years.
