Menu

Automation Anywhere Outlines Generative AI Guardrail Strategy

Automation Anywhere Outlines Generative AI Guardrail Strategy

The News: Automation Anywhere held a briefing with industry analysts to discuss its ongoing initiatives around AI and automation, and provided additional insight into the way it is deploying generative AI guardrails. Steve Shah, SVP Product at Automation Anywhere, laid out the specific steps the company is taking to safely deploy generative AI within its platform.

You can read the original Press Releases focused on Automation Anywhere’s recent generative AI announcement with Google Cloud and Amazon Bedrock, and the company’s announcement of Automation Co-Pilot + Generative AI for Business Users and Automators, and Document Automation + Generative AI.

Automation Anywhere Outlines Generative AI Guardrail Strategy

Analyst Take: Automation Anywhere has recently made several generative AI announcements, including those announcing partnerships with Google Cloud and Amazon Bedrock, as well as the company’s announcement of its core AI offerings, including Automation Co-Pilot + Generative AI for Business Users, Automation Co-Pilot + Generative AI for Automators, and Document Automation + Generative AI. In addition to discussing the enhanced functionality and efficiency afforded by incorporating generative AI into the platform, Steve Shah, Automation Anywhere’s SVP of Product took time to discuss the specific steps the company is taking to enact generative AI guardrails. He notes that data governance and the proper application of controls are at the forefront of Automation Anywhere’s AI strategy.

Securing Partnerships With LLMs That Respect Data Governance and Customers

At the heart of Automation Anywhere’s strategy for implementing generative AI guardrails is the decision to only partner with LLMs that respect data governance and privacy of Automation Anywhere’s customers. Further, Shah says that the company puts a premium on the use of secure models, and its selection of partners such as OpenAI, Google, Amazon, and Vertex AI underscore the commitment to only using models that handle data properly and securely.

Implementation of Prompt Guardrails to Limit Abuse or Misuse

Shah described several steps the company is taking to eliminate and lessen the likelihood of prompt abuse. Automation Anywhere is doing a significant amount of prompt engineering and controls development to ensure that users cannot submit prompts that are designed to elicit inappropriate or inaccurate results. Shah says that the company uses a lot of checks and balances to ensure that prompts are not able to go off the rails, and has established best practices for both prompt engineering and secure development with AI.

This is a sound and proper approach to implementing generative AI guardrails. A key best practice revolves around ensuring that inadvertent or intentional instances of prompt abuse or manipulation have been considered and addressed, prior to the tool being rolled out. However, it is a process that needs to be revised and updated, particularly as the models change, and the number and types of users and use cases increase.

Restricting the Use of Generative AI

Generative AI tools are new and shiny, and everyone wants to experiment and play with them. However, Shah says another key strategy for enacting generative AI guardrails is to provide enterprises with the ability to restrict its use. Shah adds that not everyone needs nor should have access to generative AI tools, and Automation Anywhere has implemented a control level that can be administered by the organization’s Center of Excellence (CoE) Manager, which has the ability to turn features and users on and off.

As the old sports adage goes, not everyone gets to touch the ball. While there will be some grumbling, it is far more prudent to limit the use of newly-minted generative AI tools to the employees and roles that truly will see and drive benefits from its use.

Deploying a Human in the Loop

Like other vendors deploying generative AI, Automation Anywhere also promotes the use of a human in the loop (HITL) to review generated answers or content. This ensures that no matter what the AI does, there is a path for human to double check the result before the content is executed or acted upon. Shah says that due to efficiency and productivity benefits, the slight slowdown in workflow due to a HITL worth it, in terms of implementing generative AI guardrails that reduce the risk of blindly relying on generative AI output.

Continuous Performance Monitoring

The final step deployed by Automation Anywhere is the use of continuous monitoring of generative AI tools to ensure they are delivering value and are being used responsibly. Shah says this message has been resonating with enterprises, which are rightly focused on making sure that investments in generative AI are providing ROI, which will continue to rise in importance as the costs of rolling out generative AI across dozens or hundreds of use cases continues.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

UK AI Regulations Criticized: A Cautionary Tale for AI Safety

Generative AI Investment Accelerating: $1.3 Billion for LLM Inflection

Google Cloud Engineering Exec: Welcome to Generative Engineering

Author Information

Keith Kirkpatrick is Research Director, Enterprise Software & Digital Workflows for The Futurum Group. Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.

Related Insights
NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy
December 16, 2025

NVIDIA Bolsters AI/HPC Ecosystem with Nemotron 3 Models and SchedMD Buy

Nick Patience, AI Platforms Practice Lead at Futurum, shares his insights on NVIDIA's release of its Nemotron 3 family of open-source models and the acquisition of SchedMD, the developer of...
Will a Digital Adoption Platform Become a Must-Have App in 2026?
December 15, 2025

Will a DAP Become the Must-Have Software App in 2026?

Keith Kirkpatrick, Research Director with Futurum, covers WalkMe’s 2025 Analyst Day, and discusses the company’s key pillars for driving success with enterprise software in an AI- and agentic-dominated world heading...
Broadcom Q4 FY 2025 Earnings AI And Software Drive Beat
December 15, 2025

Broadcom Q4 FY 2025 Earnings: AI And Software Drive Beat

Futurum Research analyzes Broadcom’s Q4 FY 2025 results, highlighting accelerating AI semiconductor momentum, Ethernet AI switching backlog, and VMware Cloud Foundation gains, alongside system-level deliveries....
Oracle Q2 FY 2026 Cloud Grows; Capex Rises for AI Buildout
December 12, 2025

Oracle Q2 FY 2026: Cloud Grows; Capex Rises for AI Buildout

Futurum Research analyzes Oracle’s Q2 FY 2026 earnings, highlighting cloud infrastructure momentum, record RPO, rising AI-focused capex, and multicloud database traction driving workload growth across OCI and partner clouds....
Adobe Q4 FY 2025 Record Revenue, AI Adoption, ARR Targets
December 12, 2025

Adobe Q4 FY 2025: Record Revenue, AI Adoption, ARR Targets

Futurum Research analyzes Adobe’s Q4 FY 2025 results, emphasizing AI distribution via LLMs, enterprise adoption of Firefly Foundry, and a credit-based monetization model aligned to FY 2026 ARR growth and...
Five Key Reasons Why Confluent Is Strategic To IBM
December 9, 2025

Five Key Reasons Why Confluent Is Strategic To IBM

Brad Shimmin and Mitch Ashley at Futurum, share their insights on IBM’s $11B acquisition of Confluent. This bold move signals a strategic pivot, betting that real-time "data in motion" is...

Book a Demo

Newsletter Sign-up Form

Get important insights straight to your inbox, receive first looks at eBooks, exclusive event invitations, custom content, and more. We promise not to spam you or sell your name to anyone. You can always unsubscribe at any time.

All fields are required






Thank you, we received your request, a member of our team will be in contact with you.