Tech Giants and White House Join Forces on Safe AI Usage

The News: Several major tech companies, including Amazon, Google (Alphabet), Meta Platforms, and Microsoft, have made voluntary commitments to the White House regarding the safe use of AI. These commitments include internal and third-party security testing of AI systems before release, using digital watermarking to differentiate between real and AI-generated content, reporting vulnerabilities in their systems, and publicly disclosing their AI systems’ capabilities and limitations. The companies also pledged to prioritize research on societal risks related to AI, such as bias, discrimination, and privacy, and to share information with various stakeholders to manage AI risks effectively. Additionally, the US government has consulted with several other countries on these commitments to align AI governance efforts worldwide.

Analyst Take: Spurred in part by the introduction of ChatGPT in November 2022, generative AI and natural language processing (NLP) are rapidly emerging as major disruptors to how business across all industries is conducted. A slew of models with great potential, including Google’s Bard, have been introduced. The challenge becomes utilizing these models in a way that does not compromise the organization’s data or customers’ data, or introduce unfair and potentially malicious bias.

The collaboration of these major names in the technology industry, and their coordination with the White House, will contribute to the development and communication of standards and best practices for using AI. Additionally, these industry juggernauts are collectively holding themselves accountable for ensuring the safety of the infrastructure on which their models are delivered. Regular testing and reporting will help to reduce overall vulnerabilities and, as a result, shrink the threat landscape. Meanwhile, communicating these vulnerabilities, along with guidelines for using AI models appropriately, will contribute to safer usage.

This bodes positively for IT Operations teams, which will ultimately be responsible for the security and privacy of the data fed into the AI models their organizations use. Simply put, end users will be more aware of risks, the technologies underpinning AI models will be more secure, and more information about potential threats will be shared across governmental agencies and private industry.

One such potential threat is the use of AI to create fake data, which is why this industry collaboration has been encouraged to home in on building competencies around using digital watermarking to differentiate between real and AI-generated content. Typically, digital watermarks are embedded in media files to verify their authenticity. They could be used to earmark specific content as AI-generated, and potentially even to trace that content back to the tool used to create it. The Futurum Group notes that the growing usage of AI means that additional research in this area, such as into how easily a digital watermark could be forged or stripped, would be beneficial.
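To make the concept concrete, the sketch below illustrates one of the simplest embedding ideas sometimes used to teach watermarking: hiding an identifier in the least-significant bits of raw media bytes. This is a toy, not any vendor's actual scheme; production AI-content watermarks are statistical and designed to survive compression and editing. The `tool-x` identifier and the synthetic "pixel" carrier are made-up illustrations.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bit of each carrier byte."""
    bits = []
    for byte in mark:
        # Unpack each watermark byte into 8 bits, most-significant first.
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, mark_len: int) -> bytes:
    """Read `mark_len` bytes back out of the carrier's least-significant bits."""
    mark = bytearray()
    for i in range(mark_len):
        byte = 0
        for carrier_byte in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (carrier_byte & 1)
        mark.append(byte)
    return bytes(mark)

# Example: tag synthetic "pixel" data as AI-generated.
carrier = bytes(range(200))   # stand-in for raw image bytes
tag = b"AI-GEN:tool-x"        # hypothetical generator identifier
marked = embed_watermark(carrier, tag)
assert extract_watermark(marked, len(tag)) == tag
```

Notably, a mark like this is trivially destroyed by re-encoding or cropping, which underscores the article's point: research into how easily watermarks can be forged or removed is as important as the embedding itself.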

The Biden Administration’s collaboration on a global scale is also worth noting, especially considering the growing number of state-sponsored cybersecurity attacks. For example, this week’s announcement comes on the heels of the U.N. Security Council’s first official meeting to discuss the potential impacts of AI on international economies and security. Clearly, we are still in the early days of the regulations and principles that will ultimately guide and govern the use of AI, as well as the resulting impact on enterprise IT infrastructure and the teams responsible for its procurement, day-to-day oversight, and protection.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

The Cost of The Next Big Thing – Artificial Intelligence

The EU’s AI Act: Q3 2023 Update

Google Cloud Unveils Cutting-Edge AI Tools for Accelerating Drug Discovery and Precision Medicine

Author Information

Krista Case

Krista Case brings over 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget, and The Register.

