Automation Anywhere Launches Generative AI Tools

New Capabilities Permit More Natural Interactions with Applications and Automations Across the Enterprise


The News:

Automation Anywhere, a cloud-native intelligent automation vendor, announced on June 1 that it had infused generative AI across its Automation Success Platform. The company highlighted three product announcements that incorporate generative AI technology to enable more natural interactions with applications across the enterprise.

  • Automation Co-Pilot + Generative AI for Business Users: Available now, Automation Co-Pilot can leverage any generative AI technology across any system and use case, from creating and summarizing content to sending emails and providing recommendations.
  • Automation Co-Pilot + Generative AI for Automators: Available for preview in July, this capability uses generative AI to create automations through a natural language conversation with Automation Co-Pilot, opening up automation development to virtually anyone in an organization.
  • Document Automation + Generative AI: Available in Q3 2023, Document Automation incorporates generative AI for fast understanding, extraction, and summarization of data from a variety of unstructured document types, in addition to structured and semi-structured documents.
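Automation Anywhere has not published implementation details for Document Automation, but the general pattern it describes — prompting an LLM to pull named fields out of unstructured text — can be sketched as follows. All names here (the prompt template, `extract_fields`, the canned `fake_llm`) are illustrative assumptions, not the product's actual API:

```python
import json

# Hypothetical sketch of LLM-based field extraction from an unstructured
# document. The prompt asks for the fields back as JSON so the reply can
# be parsed mechanically.
EXTRACTION_PROMPT = (
    "Extract the following fields from the document as JSON: {fields}.\n"
    "Use null for any field that is not present.\n\n"
    "Document:\n{document}"
)

def extract_fields(llm, document: str, fields: list[str]) -> dict:
    """Ask an LLM (any callable mapping prompt -> completion) for the
    named fields and parse its JSON reply."""
    prompt = EXTRACTION_PROMPT.format(fields=", ".join(fields), document=document)
    return json.loads(llm(prompt))

# A canned stand-in LLM so the sketch runs without any API key; a real
# connector would call a hosted model here instead.
def fake_llm(prompt: str) -> str:
    return json.dumps({"invoice_number": "INV-1001", "total": "$250.00"})

result = extract_fields(
    fake_llm,
    "Invoice INV-1001 ... Total due: $250.00",
    ["invoice_number", "total"],
)
```

Because the heavy lifting is in the model rather than hand-written parsing rules, the same loop can cover invoices, contracts, or free-form correspondence by changing only the field list.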

Perhaps most interesting is that Automation Anywhere has taken an agnostic approach to incorporating large language model (LLM) technology with Automation Co-Pilot; it allows customers to connect the platform with the generative AI provider of their choosing, including OpenAI's ChatGPT, Microsoft's Azure OpenAI Service, Google Vertex AI, and, in the future, Amazon Bedrock.

You can view the press release highlighting the announcements at this link.


Analyst Take:

Automation Anywhere announced the launch of three products that incorporate generative AI technology to support more natural interaction with automations: Automation Co-Pilot + Generative AI for Business Users; Automation Co-Pilot + Generative AI for Automators; and Document Automation + Generative AI. The first two products are designed around a bring-your-own-license option, so customers can choose the generative AI services they want to use and connect them to Automation Anywhere's platform via connectors. The Document Automation product, however, will embed generative AI technology and may include multiple LLMs, based on their relative strengths in handling specific types of documents.

Enabling More Natural Interactions with Automations and Applications

The key value proposition behind the Automation Co-Pilot + Generative AI product is the ability to use generative AI to handle routine tasks that eat up time and take focus away from more important work. Automation Anywhere points out that generative AI can be applied across a wide range of use cases and applications, such as summarizing content for emails, product briefs, or other outputs, as well as providing specific next-best-action recommendations. Because the generative AI is deployed as part of an automation platform, these tasks can be integrated directly into an employee's workflow, enabling more natural interactions with various applications and reducing the friction and time required to complete them.

One interesting aspect, however, is that many of the major productivity and customer experience platforms are also incorporating similar generative AI capabilities into their product feature sets. As such, workflow managers will need to assess the relative ease of integration of Automation Anywhere’s cross-application generative AI implementation, versus the platform-specific implementations of generative AI.

Bring-Your-Own Generative AI Approach

Automation Anywhere is incorporating a bring-your-own-model approach to its Automation Co-Pilot + Generative AI product, which appears to be a shrewd strategy. Indeed, each type of commercialized generative AI technology is unique, in terms of the scale of data that has been used to train the model, the number of parameters that can be adjusted when pre-training or tuning the model, and the different embeddings used to estimate semantic proximity of words. All these elements impact the overall accuracy and effectiveness of the model, and will impact the ability to support natural interactions between workers and their applications.
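A bring-your-own-model design of this kind typically routes each generative AI step through a thin connector layer, so the customer's chosen provider can be swapped without changing the automation itself. The sketch below is a minimal illustration under that assumption; the registry, function names, and the `echo-demo` provider are hypothetical, not Automation Anywhere's actual connector API:

```python
from typing import Callable, Dict

# Hypothetical registry mapping a provider name to a completion function
# (prompt -> completion text). Each real connector would wrap a vendor
# SDK such as OpenAI, Azure OpenAI Service, or Vertex AI.
_PROVIDERS: Dict[str, Callable[[str], str]] = {}

def register_provider(name: str, complete: Callable[[str], str]) -> None:
    """Register a connector under a provider name."""
    _PROVIDERS[name] = complete

def run_automation_step(provider: str, prompt: str) -> str:
    """Route a generative AI step to whichever provider the customer chose."""
    try:
        complete = _PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"No connector registered for {provider!r}")
    return complete(prompt)

# A trivial stand-in connector so the sketch runs with no external service.
register_provider("echo-demo", lambda p: f"[echo-demo] {p}")
print(run_automation_step("echo-demo", "Summarize this email"))
```

The point of the indirection is that model-specific concerns (authentication, token limits, embeddings) stay inside each connector, while the automation only names a provider and supplies a prompt.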

Further, development teams may have specific preferences or greater familiarity with, say, OpenAI's ChatGPT versus Google Vertex AI, and giving developers the opportunity to integrate their choice of model is simply good CX. Of course, Automation Anywhere has indicated that guardrails are in place around how the AI is deployed on its platform, to ensure responsible usage.

Freeing Developers to Handle More Advanced and Complex Tasks

The forthcoming launch of Automation Co-Pilot + Generative AI for Automators is a clear win for organizations that are IT-resource constrained. Using generative AI to create automations through a conversational interface, while streamlining the building and testing phases through full automation, will likely result in faster deployment of automations. It also permits developers to remain focused on more complex, higher-value development tasks and ensures that IT can quickly and regularly roll out newly requested automations.

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, CNBC.com, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.



