Tech Giants and White House Join Forces on Safe AI Usage

The News: Several major tech companies, including Amazon, Google (Alphabet), Meta Platforms, and Microsoft, have made voluntary commitments to the White House regarding the safe use of AI. These commitments include internal and third-party security testing of AI systems before their release, using digital watermarking to differentiate between real and AI-generated content, reporting vulnerabilities in their systems, and publicly disclosing their AI systems’ capabilities and limitations. The companies also pledged to prioritize research on societal risks related to AI, such as bias, discrimination, and privacy, and to share information with various stakeholders to manage AI risks effectively. Additionally, the U.S. government has consulted with several other countries on these commitments to align efforts in AI governance worldwide.

Analyst Take: Spurred in part by the introduction of ChatGPT in November 2022, generative AI and natural language processing (NLP) are rapidly emerging as major disruptors to how business is conducted across all industries. A slew of models with great potential, including Google’s Bard, has been introduced. The challenge lies in utilizing these models in a way that does not compromise the organization’s or its customers’ data, or introduce unfair and potentially malicious bias.

The collaboration of these major names in the technology industry, and their coordination with the White House, will contribute to the development and communication of standards and best practices for using AI. Additionally, these industry juggernauts are collectively holding themselves accountable for ensuring the safety of the infrastructure on which their models are delivered. Regular testing and reporting will help reduce overall vulnerabilities and, as a result, shrink the threat landscape. Meanwhile, communication of these vulnerabilities, along with guidelines for using AI models appropriately, will contribute to safer usage.

This bodes well for IT operations teams, which will ultimately be responsible for the security and privacy of the data fed into the AI models their organizations use. Simply put, end users will be more aware of risks, the technologies underpinning AI models will be more secure, and more information about potential threats will be available and shared across government agencies and private industry.

One such potential threat is the use of AI to create fake data, which is why this industry collaboration has been encouraged to home in on building competencies around using digital watermarking to differentiate between real and AI-generated content. Typically, digital watermarks are embedded in media files to verify their authenticity. They could be used to earmark specific content as AI-generated, and potentially even to trace the content back to the tool used to create it. The Futurum Group notes that the growing usage of AI means that additional research in this area, such as around how easy it would be to forge a digital watermark, would be beneficial.
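To make the embedding idea concrete, the sketch below is a toy illustration only: it hides a short provenance tag in the least-significant bits of raw media bytes and then recovers it. The function names and the LSB scheme are assumptions for illustration; the watermarking techniques the companies actually committed to are far more robust and resistant to forgery than this.

```python
# Toy sketch of digital watermarking: embed a provenance tag ("this is
# AI-generated") into the least-significant bits of raw media bytes.
# Illustrative only -- real AI-content watermarks are designed to survive
# compression, cropping, and deliberate removal attempts; this one is not.

def embed_watermark(media: bytes, tag: bytes) -> bytes:
    # Flatten the tag into individual bits, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(media):
        raise ValueError("media too small to carry the tag")
    out = bytearray(media)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def extract_watermark(media: bytes, tag_len: int) -> bytes:
    # Read the lowest bit of each carrier byte and reassemble the tag.
    bits = [media[i] & 1 for i in range(tag_len * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i : i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

stamped = embed_watermark(b"\x80" * 64, b"AI-gen")
print(extract_watermark(stamped, 6))  # the tag survives the round trip
```

Because only the lowest bit of each byte changes, the carrier is perceptually near-identical to the original; the ease of stripping or forging such a mark is exactly the research gap noted above.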

The Biden Administration’s collaboration on a global scale is worth noting, especially considering the growing number of state-sponsored cybersecurity attacks. For example, this week’s announcement comes on the heels of the U.N. Security Council’s first official meeting to discuss the potential impacts of AI on international economies and security. Clearly, we are still in the early days when it comes to the regulations and principles that will ultimately guide and govern the use of AI, as well as the resulting impact on enterprise IT infrastructure and the teams responsible for its procurement and day-to-day oversight and protection.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

The Cost of The Next Big Thing – Artificial Intelligence

The EU’s AI Act: Q3 2023 Update

Google Cloud Unveils Cutting-Edge AI Tools for Accelerating Drug Discovery and Precision Medicine

Author Information

Krista Case

Krista Case brings over 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget and The Register.
