More Than 50% of Workers Admit to Using Unapproved Generative AI Tools

The News: Salesforce released the results of a survey finding that many workplace users of generative AI are leveraging the technology without training, guidance, or approval from their employer. According to the most recent iteration of its Generative AI Snapshot Research Series, The Promises and Pitfalls of AI at Work, 55% of workers have used unapproved generative AI tools at work, and 40% of workplace generative AI users have used banned tools, even though many workers recognize that the ethical and safe use of generative AI means adopting company-approved programs.

Salesforce conducted a double-anonymous online survey in partnership with YouGov from October 18-31, 2023, which included more than 14,000 full-time employees representing companies of a variety of sizes and sectors in 14 countries. More information on this survey and research can be found on the Salesforce website.

Analyst Take: Salesforce’s survey of more than 14,000 workers from a variety of companies located in North America, Europe, Latin America, and the Middle East found that despite the promise generative AI offers workers and employers, a lack of clearly defined policies and enforcement mechanisms designed to control its use may be exposing companies to operational, legal, and regulatory risks.

The survey surfaced several key insights, each pointing to specific actions company leadership should take to ensure that generative AI can be deployed safely and responsibly.

Workers Are Using Unapproved Generative AI Tools at Work

According to the survey, 55% of workers are using unapproved generative AI tools at work. This is a significant issue for enterprises. First, using unvetted generative AI tools can introduce significant IT and cybersecurity risks, such as the possibility of inadvertently exposing private company data, customer information, or trade secrets by incorporating work information into a prompt. Second, some unvetted generative AI tools might not be secure themselves and could be compromised with malware or other malicious code that can infect unsuspecting users’ devices.

The Salesforce survey found that many users were still using unapproved generative AI tools, even though they recognize that ethical and safe use of generative AI at work is best achieved by using company-approved technology. However, this unapproved use might be the result of a lack of clear policies or training around generative AI in the workplace.

According to the survey, 69% of workers globally have never received or completed training on using generative AI, and the same share report no training on using it safely. Meanwhile, 71% of workers say they have not received or completed training on using generative AI ethically.

Workers Are Using Banned Generative AI Tools at Work

Similarly, 40% of workers admit to using banned generative AI tools at work. While only 21% of respondents say their organization has clearly defined policies for using generative AI at work, the remaining 79% say there are either no defined policies (37%), only loosely defined policies (15%), or they do not know whether any policies exist (27%).

[Image: Salesforce survey results on workplace generative AI policies. Image Source: Salesforce]

In the end, it is clear that organizations need to establish clear policies for generative AI use in the workplace, spelling out which tools are approved, which tools are banned, and, perhaps most importantly, how generative AI should and should not be used.

One way to address the use of so-called “shadow generative AI” is to deploy tools designed to monitor the use of external generative AI services. These tools can detect when an employee’s device accesses websites that host generative AI models. Rather than simply blocking access, they can also be configured to educate the user on why using unvetted tools is unsafe or undesired, and then redirect the user to an approved tool or platform.
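The block-educate-redirect pattern such tools follow can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation; the domain names, the approved-tool URL, and the education message are all hypothetical placeholders.

```python
# Minimal sketch of a "shadow generative AI" request filter.
# All domains and URLs below are hypothetical placeholders.
from urllib.parse import urlparse

# Hypothetical policy data: unapproved AI sites and the vetted alternative.
UNAPPROVED_AI_DOMAINS = {"chat.example-ai.com", "free-llm.example.net"}
APPROVED_TOOL_URL = "https://ai.internal.example.com"

EDUCATION_NOTE = (
    "This generative AI tool has not been vetted by IT. Unvetted tools can "
    "expose company or customer data. Please use the approved platform."
)

def check_request(url: str) -> dict:
    """Classify an outbound request: allow it, or educate and redirect."""
    host = urlparse(url).hostname or ""
    if host in UNAPPROVED_AI_DOMAINS:
        # Instead of a hard block, explain the policy and point to the
        # company-approved tool.
        return {
            "action": "redirect",
            "message": EDUCATION_NOTE,
            "redirect_to": APPROVED_TOOL_URL,
        }
    return {"action": "allow"}
```

In practice this logic would live in a secure web gateway or endpoint agent rather than application code, but the decision flow is the same: match the destination against policy, educate, and steer the user to a sanctioned tool.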

Users Are Engaging in Questionable Activities When Using Generative AI at Work

Based on the results of the survey, clear generative AI usage guidelines are required, as nearly two-thirds (64%) of workers say they have passed off generative AI work as their own. This type of behavior can introduce significant legal risks to the company around copyright and plagiarism, and if the output of generative AI is not vetted or checked for veracity and accuracy, companies can also suffer reputational harm.

From an operational perspective, companies might want to implement a system to review and approve all content that is generated via AI to ensure that the content meets the company’s legal and ethical standards for public (or private) use.
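A review-and-approve system of the kind described above amounts to a simple gate: AI-generated drafts are held for human review, while other content flows through. The sketch below illustrates that workflow under stated assumptions; the `Draft` structure, status names, and functions are illustrative, not a real product’s API.

```python
# Minimal sketch of an AI-content approval gate. Field names and
# statuses are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    ai_generated: bool
    status: str = "pending"  # pending -> approved / rejected

def submit(draft: Draft, review_queue: list) -> str:
    """AI-generated drafts go to human review; others publish directly."""
    if draft.ai_generated:
        review_queue.append(draft)
        return "queued_for_review"
    draft.status = "approved"
    return "published"

def review(draft: Draft, meets_standards: bool) -> None:
    """A human reviewer checks accuracy, copyright, and company standards."""
    draft.status = "approved" if meets_standards else "rejected"
```

The key design choice is that nothing AI-generated reaches publication without a human decision recorded against it, which is what mitigates the copyright, accuracy, and reputational risks the survey highlights.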

Steps Organizations Should Take to Ensure Safe and Responsible Use of Generative AI

Generative AI is, and will continue to be, a powerful tool that workers increasingly seek to use to improve their workflow and productivity. But to ward off the risks inherent in the technology, organizations should:

  • Develop a continuous generative AI education and training plan and make it mandatory for all workers
  • Vet all generative AI tools that are under consideration for use by employees, including those incorporated within commercial SaaS platforms and external providers, to ensure they meet the organization’s requirements for safety, ethical use, data privacy and protection, and data security
  • Implement shadow generative AI monitoring and control technology to ensure that workers using company-issued or managed devices are not accessing banned or unvetted generative AI tools or resources
  • Clearly lay out all generative AI guidelines, rules, and best practices, and ensure that these guidelines are enforced

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Salesforce Announces Einstein Copilot and Einstein Copilot Studio

Salesforce Q2 FY 2024 Reaches $8.6 Billion, Driven by Data Cloud

Salesforce Einstein 1 Platform Provides Native AI Integration

Author Information

Keith has over 25 years of experience in research, marketing, and consulting-based fields.

He has authored in-depth reports and market forecast studies covering artificial intelligence, biometrics, data analytics, robotics, high performance computing, and quantum computing, with a specific focus on the use of these technologies within large enterprise organizations and SMBs. He has also established strong working relationships with the international technology vendor community and is a frequent speaker at industry conferences and events.

In his career as a financial and technology journalist he has written for national and trade publications, including BusinessWeek, Investment Dealers’ Digest, The Red Herring, The Communications of the ACM, and Mobile Computing & Communications, among others.

He is a member of the Association of Independent Information Professionals (AIIP).

Keith holds dual Bachelor of Arts degrees in Magazine Journalism and Sociology from Syracuse University.
