Tech Giants and White House Join Forces on Safe AI Usage


The News: Several major tech companies, including Amazon, Google (Alphabet), Meta Platforms, and Microsoft, have made voluntary commitments to the White House regarding the safe use of AI. These commitments include internal and third-party security testing of AI systems before their release, using digital watermarking to differentiate between real and AI-generated content, reporting vulnerabilities in their systems, and publicly disclosing their AI systems’ capabilities and limitations. The companies also pledged to prioritize research on societal risks related to AI, such as bias, discrimination, and privacy, and share information with various stakeholders to manage AI risks effectively. Additionally, the US government has consulted with several other countries on these commitments to align efforts in AI governance worldwide.


Analyst Take: Spurred in part by the introduction of ChatGPT in November 2022, generative AI and natural language processing (NLP) are rapidly emerging as major disruptors to how business across all industries is conducted. A slew of models with great potential, including Google’s Bard, have been introduced. The challenge is using these models in a way that does not compromise the organization’s or its customers’ data, or introduce unfair and potentially malicious bias.

The collaboration of these major names in the technology industry, and their coordination with the White House, will contribute to the development and communication of standards and best practices for using AI. Additionally, these industry juggernauts are collectively holding themselves accountable for ensuring the safety of the infrastructure on which their models are delivered. Regular testing and reporting will help to reduce overall vulnerabilities and, as a result, shrink the threat landscape. Meanwhile, communication of these vulnerabilities, along with guidelines for using AI models appropriately, will contribute to safer usage.

This bodes well for IT operations teams that will ultimately be responsible for the security and privacy of the data that will be fed into the AI models their organizations will be using. Simply put, it means that end users will be more aware of risks, that the technologies underpinning AI models will be more secure, and that more information regarding potential threats will be available and shared across governmental agencies and private industries.

One such potential threat is the usage of AI to create fake data, which is why this industry collaboration has been encouraged to home in on building competencies around using digital watermarking to differentiate between real and AI-generated content. Typically, digital watermarks are embedded in media files to verify their authenticity. They could be used to earmark specific content as being AI-generated, and potentially even to trace the content back to the tool that was used to create it. Futurum Group notes that the growing usage of AI means that additional research in this area, such as around how easy it would be to fake a digital watermark, would be beneficial.
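To make the concept concrete, the sketch below shows one of the simplest watermarking techniques: hiding a short provenance tag in the least-significant bits of an image's pixel values. This is an illustrative toy, not any vendor's actual scheme; the tag string and function names are invented for the example, and production provenance systems use far more robust, standards-based approaches (such as C2PA metadata or perceptual watermarks).

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, tag: str) -> np.ndarray:
    """Hide a short provenance tag (e.g. the generating tool's name)
    in the least-significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    # Clear each target pixel's lowest bit, then write one tag bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, tag_length: int) -> str:
    """Read tag_length bytes back out of the least-significant bits."""
    bits = pixels.flatten()[: tag_length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Example: tag a synthetic 64x64 grayscale image as AI-generated.
# "ai-gen:model-x" is a placeholder label, not a real standard.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
tagged = embed_watermark(image, "ai-gen:model-x")
print(extract_watermark(tagged, len("ai-gen:model-x")))  # ai-gen:model-x
```

Notably, this kind of least-significant-bit mark is destroyed by simple re-encoding or resizing, which illustrates exactly why the research the article calls for, into how easily watermarks can be removed or forged, matters.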

The Biden Administration’s collaboration on a global scale is worth noting, especially considering the growing number of state-sponsored cybersecurity attacks. For example, this week’s announcement comes on the heels of the U.N. Security Council’s first official meeting to discuss the potential impacts of AI on international economies and security. Clearly, we are still in the early days when it comes to the regulations and principles that will ultimately guide and govern the use of AI, as well as the resulting impact on enterprise IT infrastructure and the teams responsible for its procurement and day-to-day oversight and protection.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other insights from The Futurum Group:

The Cost of The Next Big Thing – Artificial Intelligence

The EU’s AI Act: Q3 2023 Update

Google Cloud Unveils Cutting-Edge AI Tools for Accelerating Drug Discovery and Precision Medicine

Author Information

Krista Case

Krista focuses on data security, protection, and management, with particular attention to how these strategies play out in multi-cloud environments. She brings approximately 15 years of experience providing research and advisory services and creating thought leadership content. Her vantage point spans technology and vendor portfolio developments; customer buying behavior trends; and vendor ecosystems, go-to-market positioning, and business models. Her work has appeared in major publications including eWeek, TechTarget, and The Register.

Prior to joining The Futurum Group, Krista led the data protection practice for Evaluator Group and the data center practice of analyst firm Technology Business Research. She also created articles, product analyses, and blogs on all things storage and data protection and management for analyst firm Storage Switzerland and led market intelligence initiatives for media company TechTarget.
