At RSA 2026, Varonis CEO Yaki Faitelson spotlighted how AI is fundamentally altering both enterprise defenses and attacker tactics [1]. The stakes are high: 62.1% of cybersecurity leaders now say AI-powered tools are a necessity, not a luxury, as attackers weaponize generative AI. Understanding how AI-powered security can defend against these threats is critical, especially as 62.0% of organizations have observed a significant increase in sophisticated AI-driven social engineering attacks, according to Futurum Group’s 2H 2025 Cybersecurity Decision Maker Survey (n=1,008).
What is Covered in this Article
- Varonis’s AI security vision and implications for enterprise buyers
- Escalating arms race between AI-powered defense and AI-driven threats
- Market impact on security vendor strategies and reference architectures
- Key execution risks and what enterprise CISOs must prioritize next
The News
At RSA 2026, Varonis CEO Yaki Faitelson delivered a keynote focused on the disruptive impact of AI on enterprise security [1]. He argued that AI is not just a new tool for defenders but a force multiplier for attackers, accelerating the speed and sophistication of threats. Varonis is positioning its platform to show how AI-powered security enables real-time threat detection, automated response, and deep data-context analysis. The company claims these capabilities are essential as generative AI enables attackers to launch highly targeted, automated campaigns at scale. Faitelson also emphasized the need for security architectures that can adapt quickly as both threat vectors and enterprise data environments evolve.
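Varonis has not published implementation details, but the data-context idea can be illustrated with a minimal, hypothetical sketch: baseline each user's normal file-access volume, then flag activity that deviates sharply from that baseline. Everything here (the names `build_baseline` and `is_anomalous`, the z-score threshold) is an illustrative assumption, not Varonis's actual method.

```python
from statistics import mean, stdev

def build_baseline(access_counts):
    """Per-user baseline: mean and std-dev of daily file-access counts.

    access_counts: dict mapping user -> list of historical daily counts.
    """
    return {
        user: (mean(counts), stdev(counts))
        for user, counts in access_counts.items()
        if len(counts) >= 2  # stdev needs at least two samples
    }

def is_anomalous(baseline, user, todays_count, z_threshold=3.0):
    """Flag a user whose activity deviates > z_threshold std-devs from baseline."""
    if user not in baseline:
        return True  # no history: treat unknown users as worth a look
    mu, sigma = baseline[user]
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold

# Hypothetical history: alice averages ~20 daily file accesses.
history = {"alice": [18, 22, 20, 19, 21]}
bl = build_baseline(history)
print(is_anomalous(bl, "alice", 21))   # → False: a normal day
print(is_anomalous(bl, "alice", 500))  # → True: mass-access spike, e.g. exfiltration
```

A production system would obviously weigh far richer context (data sensitivity, time of day, peer-group behavior), but the pattern of baseline-plus-deviation is the same.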
According to Futurum Group’s 2H 2025 Cybersecurity Decision Maker Survey (n=1,008), 62.1% of leaders agree AI-powered defensive tools are now a necessity. The data shows that 62.0% have observed a significant increase in sophisticated AI-driven social engineering attacks, reinforcing why understanding how AI-powered security responds to these threats matters for enterprise resilience.
Analyst Take
AI is now the defining force in enterprise security, but it’s a double-edged sword. Defenders must use AI to keep up with attackers who are automating and personalizing campaigns at scale. The market is shifting from incremental improvements to existential questions about trust, control, and execution risk.
How AI-Powered Security Can Address an Arms Race Escalating Faster Than Budgets
Varonis’s message at RSA is clear: AI is not just a feature, it’s a battleground. Attackers are already using generative AI to automate phishing, deepfake voice calls, and lateral movement. According to Futurum Group’s 2H 2025 Cybersecurity Decision Maker Survey (n=1,008), 62.0% of organizations have seen a significant rise in AI-driven social engineering attacks, and 82.3% experienced at least one major incident in the past year. Meanwhile, 73.2% expect budgets to rise, but only 23.1% plan increases above 15%, so the gap between threat velocity and resource growth is widening. Demonstrating how AI-powered security measurably reduces risk is now essential; vendors that cannot show AI-driven risk reduction will be left behind.
How AI-Powered Security Can Navigate Reference Architectures and the GPU Blind Spot
As AI factories proliferate, traditional endpoint detection and response tools are losing visibility. According to Futurum Group’s February 2026 report ‘Do AI Factories Signal a New Mandate for Certified Security?’, a ‘GPU Blind Spot’ is emerging because most security tools monitor only CPU and OS activity, not GPU workloads. Security vendors are racing to certify on NVIDIA’s reference architectures using BlueField DPUs. Varonis’s push for adaptive, AI-driven security must address this visibility gap or risk irrelevance in the AI data center era. Securing both CPU and GPU workloads is crucial as agentic AI scales, and enterprises should scrutinize whether a vendor’s AI-powered security truly operates across heterogeneous infrastructure before trusting it to protect modern environments.
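As a hedged illustration of what closing the GPU blind spot might involve, the sketch below ingests GPU-utilization samples and flags sustained anomalies that CPU/OS-level tooling would never see, such as an unauthorized workload saturating supposedly idle GPUs. The sample format, threshold, and window size are assumptions for illustration; real deployments would pull telemetry from NVML/DCGM or DPU-based collectors rather than synthetic values.

```python
from collections import deque

class GpuUtilizationWatch:
    """Flag a GPU whose utilization stays above a threshold for N consecutive samples.

    Illustrative only: real telemetry would come from NVML/DCGM or a
    BlueField-style DPU collector, not hand-fed samples.
    """
    def __init__(self, threshold_pct=90, window=5):
        self.threshold = threshold_pct
        self.window = window
        self.recent = {}  # gpu_id -> deque of recent utilization samples

    def observe(self, gpu_id, utilization_pct):
        buf = self.recent.setdefault(gpu_id, deque(maxlen=self.window))
        buf.append(utilization_pct)
        # Alert once the window is full and every sample exceeds the threshold.
        return len(buf) == self.window and all(u > self.threshold for u in buf)

watch = GpuUtilizationWatch(threshold_pct=90, window=3)
samples = [10, 95, 97, 99, 98]  # idle GPU suddenly pegged near 100%
alerts = [watch.observe("gpu0", u) for u in samples]
print(alerts)  # → [False, False, False, True, True]
```

The point is not the specific heuristic but the telemetry source: without a collector that sees GPU activity at all, no amount of CPU-side analytics closes the blind spot.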
Execution Risk: Operationalizing AI-Powered Security
AI-powered defense is not plug-and-play. According to Futurum Group’s 2H 2025 Cybersecurity Decision Maker Survey (n=1,008), talent scarcity is a top challenge, and 62.1% agree that relying solely on human analysts is no longer viable. But operationalizing AI-powered security requires new skills, governance, and trust in automated response. Varonis and peers such as CrowdStrike and Palo Alto Networks must invest in explainability, granular controls, and integration with enterprise data governance. Maintaining human oversight while automating response is critical; otherwise, CISOs will hesitate to cede control, and AI will remain underutilized in practice.
What to Watch
- GPU Security Gap: Will vendors close the monitoring blind spot in AI data centers by 2027?
- AI Budget Reality Check: Can CISOs justify rising AI security spend as attack sophistication outpaces budget growth?
- Operational Trust: Will security teams trust automated AI-driven responses, or will human-in-the-loop remain mandatory?
- Reference Architecture Lock-In: Will NVIDIA and Cisco’s certified designs become the new security baseline, squeezing out niche vendors?
Sources
1. At RSA, Varonis CEO lays out how AI is reshaping enterprise security
Declaration of generative AI and AI-assisted technologies in the writing process: This content has been generated with the support of artificial intelligence technologies. Due to the fast pace of content creation and the continuous evolution of data and information, The Futurum Group and its analysts strive to ensure the accuracy and factual integrity of the information presented. The Futurum Group makes no guarantees regarding the completeness, accuracy, or reliability of any information contained herein. Readers are encouraged to verify facts independently and consult relevant sources for further clarification.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Read the full Futurum Group Disclosure.
Author Information

FuturumAI
This content is written by a commercial general-purpose language model (LLM) along with the Futurum Intelligence Platform, and has not been curated or reviewed by editors. Due to the inherent limitations in using AI tools, please consider the probability of error. The accuracy, completeness, or timeliness of this content cannot be guaranteed. It is generated on the date indicated at the top of the page, based on the content available, and it may be automatically updated as new content becomes available. The content does not consider any other information or perform any independent analysis.