The News: On October 30, 2023, the White House issued an Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence. The EO is very broad, covering a wide range of issues and actions. Key points include:
- Share AI safety test results with the US government. In accordance with the Defense Production Act, the Order requires that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
- Develop standards for safe, secure, and trustworthy AI. The National Institute of Standards and Technology (NIST) will set standards for red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
- Develop watermarking standards for generative AI for government communications. The Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
- The president calls on Congress to write and pass data privacy legislation.
- Take actions to ensure the responsible government use of AI, including the creation of standards.
Read the full White House Executive Order on AI fact sheet here.
Executive Order on AI: It Is Not Law
Analyst Take: The White House Executive Order on AI will have minimal impact on the use of AI in the US. Further, it will take Congress three to four years to create and pass actual AI-focused law. The European Union (EU) AI Act will have more impact as an AI regulation for US enterprises over the next few years. Here is my rationale for each of these points.
Limited Power of Executive Orders
US presidential Executive Orders are not law; they have limited power, typically with immediate impact only on government organizations under the executive branch. You can read a quick summary of the power of executive orders here: What Are Executive Orders? What Are Their Limits?
Note from the key points section how each element of the Executive Order is either directed at government agencies under the control of the executive branch or is simply a call for Congress to make a law:
- Share AI safety test results with the US government. In accordance with the Defense Production Act, the Order requires that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests.
- This one aligns with an existing law, the Defense Production Act.
- Develop standards for safe, secure, and trustworthy AI. NIST will set standards for red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
- This one is not new. It calls for standards from the government body (NIST) that has been working on AI standards in global cooperation for several years. Note that the Executive Order directs the standards at government agencies only.
- Develop watermarking standards for generative AI for government communications. The Department of Commerce will develop guidance for content authentication and watermarking to label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
- This one is specifically about government communications, not any other forms of consumer or enterprise communications. Note also that it directs the Department of Commerce to “develop guidance,” which is not very strong language or action. (For a sense of what content authentication can mean in practice, see the illustrative sketch after this list.)
- The president calls on Congress to write and pass data privacy legislation.
- Because the president cannot write laws.
- Take actions to ensure the responsible government use of AI, including the creation of standards.
- This one covers government use, not consumer or enterprise use, and it calls for standards, not laws or legislation.
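To make the “content authentication” idea concrete, here is a minimal, illustrative sketch of how an agency could cryptographically sign a communication so recipients can verify it actually came from the government. Nothing here is prescribed by the Executive Order or by any forthcoming Department of Commerce guidance; the library choice (Python’s cryptography package), the signature scheme (Ed25519), and all of the names below are my own assumptions for illustration.

```python
# Illustrative only: a minimal content-authentication flow using digital
# signatures. Nothing in this sketch is specified by the Executive Order;
# the eventual Commerce guidance may look entirely different.
# Requires the third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agency generates a key pair and publishes the public key.
# (In practice, public-key distribution would need trusted infrastructure.)
agency_private_key = Ed25519PrivateKey.generate()
agency_public_key = agency_private_key.public_key()

# The agency signs each outgoing communication with its private key.
message = b"Official notice: your benefits statement is ready."
signature = agency_private_key.sign(message)

# A recipient (or their mail client) verifies the signature against the
# published public key before trusting the message's origin.
try:
    agency_public_key.verify(signature, message)
    print("Authentic: signature matches the published agency key.")
except InvalidSignature:
    print("Not authentic: content was altered or not signed by the agency.")
```

A real deployment would also have to solve trusted distribution of agency public keys and, for AI-generated media, watermarks that survive editing and re-encoding, which is presumably part of what the Commerce guidance will have to address.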
Near-Term Impacts But Easily Rescinded
Executive Orders can be rescinded by the current president or any president thereafter, and Congress can pass laws that overturn EOs. In this case, most of the actions are focused on laying the groundwork for impactful AI regulations. Most important in that regard is the ongoing and much broader work on AI standards being championed by NIST.
Congressional Malaise
In terms of AI laws, Congress is moving very slowly. From an article in Time in September: “Legislators put forward a series of overlapping legislative proposals for everything from an independent federal office to oversee AI and requirements for the licensing of these technologies, to liability for civil rights and privacy violations and a ban on deceptive AI-generated content in elections.
“So far, however, most proposals for legislation have been light on details, laying out rules for transparency and legal liability in very broad strokes. While there may be general agreement on a high-level framework that checks all the boxes–AI should be safe, effective, trustworthy, privacy-preserving, and non-discriminatory–‘what that really means is that regulatory agencies will have to figure out how to give content to such principles, which will involve tough judgment calls and complex tradeoffs,’ says Daniel Ho, a professor who oversees an artificial intelligence lab at Stanford University and is a member of the White House’s National AI Advisory Committee.”
To an extent, moving slowly makes sense. It is early days for AI, and the laws that govern it are best designed as framework laws that are future-proofed to a degree. Congress, and other lawmakers such as the EU, have to build laws that are flexible enough to offer guardrails but not so narrow that they create loopholes.
EU AI Act Is the New General Data Protection Regulation
Like the General Data Protection Regulation (GDPR), the EU AI Act will require any company doing business in the EU to comply with its provisions. With GDPR, this requirement meant that most US and other non-EU companies complied with GDPR not only in the EU but in their own markets as well, so GDPR became the de facto standard privacy legislation for the world.
The EU legislative bodies are light years ahead of the US and any other significant government in crafting meaningful AI regulations. That said, if the current drafts pass muster this year, it will be early 2025 before any EU laws go into effect.
Conclusions
The White House EO gives enterprises leveraging AI something to think about, but because of its limited power, it might not have much impact. AI innovation will not slow because of it. Enterprises will do well to study the EO but move ahead with building AI risk management best practices and frameworks. Those best practices will enable enterprises to follow responsible AI principles, addressing bias, misinformation and disinformation, transparency, privacy, and security.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
Two Trends in AI Regulations and a Look at Microsoft Copilot – The AI Moment, Episode 2
Mr. Benioff Goes to Washington
The EU’s AI Act: Q3 2023 Update
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis with an emphasis on mobile technology and identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.