The News: On December 8, the European Parliament and the Council of the EU reached a provisional agreement on the AI Act. Here are the key details:
- Banned applications:
  - Biometric categorization systems that use sensitive characteristics (political, religious, philosophical beliefs, sexual orientation, race)
  - Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  - Emotion recognition in the workplace and educational institutions
  - Social scoring based on social behavior or personal characteristics
  - AI systems that manipulate human behavior to circumvent people's free will
  - AI used to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation)
- Law enforcement exemptions: Safeguards and narrow exceptions for the use of biometric identification systems for law enforcement purposes.
- Obligations for high-risk systems: AI systems classified as high risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law) face mandatory fundamental rights impact assessments.
- Guardrails for general-purpose AI (GPAI) systems: These systems and models will have to meet transparency requirements, including producing technical documentation, complying with EU copyright law, and disseminating summaries of the content used for training. Systems that pose systemic risk face stricter obligations: model evaluations, risk assessments and mitigation, adversarial testing, cybersecurity requirements, and reporting on energy efficiency.
Read the press release on the AI Act on the European Parliament website.
Analyst Take: The AI Act continues to move toward formal ratification and is on track to be fully implemented by 2025. What will the impact be? Here are our thoughts.
Consumer and Human Rights Protections Too Broad
While the language of the Act reflects a noble and responsible intent to protect technology users from intended and unintended negative impacts of AI technologies, it is far too broad and fails on two fundamental fronts. First, the Act does not clearly separate malicious, nefarious, and deliberate intent to cause harm from the accidental, unintended harms that may result from certain uses of some AI technologies. EU lawmakers need to do a much better job here: intent and use, rather than the technology itself, are central to this regulatory framework, to say nothing of the guidelines directing how its broadly worded bans on AI applications would be enforced.
Second, the Act does not appear to allow space for voluntary, opt-in, beneficial versions of the potentially harmful use cases it seems eager to ban. For instance, the use of AI to identify a user’s emotional state could prove vital in a remote healthcare management setting, where allowing AI-powered apps to provide more context in human-assisted or automated care for individuals with unique medical, emotional, and personality needs could be a critical, even life-saving feature. Students with special needs would also benefit from this type of device intelligence, particularly in remote learning settings, where learning applications and even interactions with educators and other students could be guided by additional context about their emotional state. In another set of use cases, it is not difficult to imagine how AI applications could be used to cater to users’ individual needs based on an analysis of specific attributes, such as disabilities, age, ethnicity, faith, and social or economic status. In these instances, the same technology that could be used to exploit vulnerabilities can be used to benefit users.
Here too, the EU's wholesale ban of AI-powered applications could limit the positive impacts of AI and prevent intelligent solutions from creating value for tens of millions of seniors, people with disabilities, and countless users with unique personal needs. In its haste to deliver a regulatory package designed to protect the public from AI applications, the EU appears to be ignoring the ways in which the same technologies can benefit the public, and risks proverbially throwing the baby out with the bathwater.
Guardrails for General AI Systems and Models Have Some Good and Some Bad
Creating laws right now to govern generative AI models and the systems that run them feels premature, as the technology is too new for anyone to accurately predict all the possible implications of generative AI outcomes. How is it possible to build protections for such a thing?
The EU sort of admits as much. From the press release: “MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.” Standards work takes years, not months, to codify. As outlined in the news section, obligations for AI model providers include risk assessment and mitigation. The “transparency” requirements designed by the framers are aimed at things like “disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.” Transparency with real impact would force model makers to reveal their datasets and how their models were trained. That does not sound like it is in the AI Act, and it would be interesting to see how that requirement would sit with proprietary AI model makers versus open source AI model makers. The EU has already said it will regulate fully open source models more lightly than closed AI models. While companies scramble to comply, much will remain open to interpretation.
Conclusions
As the AI Act continues its journey toward becoming EU law, it will be interesting to see whether the EU or member states backpedal on the sections focused on general AI systems and AI models. The likelihood is high that they will, simply because there are too many unknowns right now about the outputs of AI models.
It is too early to tell if the law as it stands will quell AI innovation; it certainly could for EU-based innovation. Unlike with the General Data Protection Regulation (GDPR), companies from outside the EU may hesitate to comply with the model requirements needed to do business in the EU if it is unclear whether the regulation will unfairly penalize system outcomes that no one can predict in 2025. The US and other legislative bodies should note this and take their time deciding how to regulate AI models and the AI model ecosystem. The consumer and human rights protections for AI need work, but something is better than nothing.
Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.
Other insights from The Futurum Group:
- Executive Order on AI: It Is Not Law
- Mr. Benioff Goes to Washington
- The EU’s AI Act: Q3 2023 Update
Author Information
Mark comes to The Futurum Group from Omdia’s Artificial Intelligence practice, where his focus was on natural language and AI use cases.
Previously, Mark worked as a consultant and analyst providing custom and syndicated qualitative market analysis, with an emphasis on mobile technology and on identifying trends and opportunities for companies like Syniverse and ABI Research. He has been cited by international media outlets including CNBC, The Wall Street Journal, Bloomberg Businessweek, and CNET. Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting the technology business, and he holds a Bachelor of Science from the University of Florida.
Research Director Olivier Blanchard covers edge semiconductors and intelligent AI-capable devices for Futurum. In addition to having co-authored several books about digital transformation and AI with Futurum Group CEO Daniel Newman, Blanchard brings considerable experience demystifying new and emerging technologies, advising clients on how best to future-proof their organizations, and helping maximize the positive impacts of technology disruption while mitigating its potentially negative effects. Follow his extended analysis on X and LinkedIn.