Quick Take: What you need to know about the EU’s newly proposed guidelines for Artificial Intelligence

The News: Brussels, Belgium – Earlier this week, the European Union released a new set of guidelines and operating principles regarding the future use of Artificial Intelligence in Europe. The guidelines, which fall within the scope of Europe’s comprehensive regulatory regime for digital technologies, have been defined by the EU’s executive Commission as a “framework” to develop and manage what it calls “trustworthy artificial intelligence.”

Newly elected European Commission President Ursula von der Leyen has prioritized the development of what she describes as “a coordinated European approach to artificial intelligence and data strategy.”

Per the European Commission:

“Today, the Commission unveils its ideas and actions for a digital transformation that works for all, reflecting the best of Europe: open, fair, diverse, democratic and confident. It presents a European society powered by digital solutions that put people first, opens up new opportunities for businesses, and boosts the development of trustworthy technology to foster an open and democratic society and a vibrant and sustainable economy. Digital is a key enabler to fighting climate change and achieving the green transition. The European data strategy and the policy options to ensure the human-centric development of Artificial Intelligence (AI) presented today are the first steps towards achieving these goals.”

These guidelines reflect Europe’s aim to create a regulatory environment in which technology works for people and promotes both a fair, competitive economy and an open, democratic, and sustainable society.

Per the European Commission:

Economic and Competitive Objectives: “In partnership with the private and the public sector, the aim is to mobilise resources along the entire value chain and to create the right incentives to accelerate deployment of AI, including by smaller and medium-sized enterprises. This includes working with Member States and the research community, to attract and keep talent. […] All AI applications are welcome in the European market as long as they comply with EU rules.”

Risk Management Objectives: “As AI systems can be complex and bear significant risks in certain contexts, building trust is essential. Clear rules need to address high-risk AI systems without putting too much burden on less risky ones. Strict EU rules for consumer protection, to address unfair commercial practices and to protect personal data and privacy, continue to apply.”

High-Risk uses of AI defined: “For high-risk cases, such as in health, policing, or transport, AI systems should be transparent, traceable and guarantee human oversight. Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys.

“Unbiased data is needed to train high-risk systems to perform properly, and to ensure respect of fundamental rights, in particular non-discrimination. While today, the use of facial recognition for remote biometric identification is generally prohibited and can only be used in exceptional, duly justified and proportionate cases, subject to safeguards and based on EU or national law, the Commission wants to launch a broad debate about which circumstances, if any, might justify such exceptions.”

A Separate Regime for Low Risk uses of AI: “For lower risk AI applications, the Commission envisages a voluntary labeling scheme if they apply higher standards.”

The European Commission is likely to begin focusing on next steps in May or June of this year.

Europe – not China or the US – is taking the lead in creating a working AI-focused legal and ethical framework for the rest of the world

Analyst Take: This effort to better understand the risks and opportunities that come with the evolution of AI technologies is a welcome move by the European Commission. In my view, all governments and regulatory bodies have a duty to immerse themselves in new and emerging technologies in order to better plan for their potential impact. I am also encouraged to note that the Commission’s statements about the guidelines prioritize economic development, promote potential partnerships between the private sector and the public sector, create a level playing field for competition regardless of company size, and foster not only innovation but job creation. This signals to me that the EC’s stance is less about the imposition of regulatory restrictions than it is about creating the ideal framework for growth and leadership in AI across EU member states.

Regarding the general tone of what appears to be the EC’s regulatory intent, I am also encouraged to see an effort to compartmentalize AI uses, particularly in terms of low versus high risk to the public.

“For high-risk cases, such as in health, policing, or transport, AI systems should be transparent, traceable and guarantee human oversight.” This type of complementary human-machine partnership is something that Principal Analyst Daniel Newman and I have written about in detail in our latest book Human/Machine, and in our opinion, helps create a safety net for the public in cases where AI applications may not always be sophisticated enough (or free) to make ethical choices on their own.

Such applications may not be limited, as the EC’s document states, to health, policing, and transport, however. They might also touch on employment, access to bank loans and economic resources, civil liberties, enrollment in schools and universities, access to job training, individual privacy, data collection and security, public surveillance, infrastructure management and security, and so on. The EC’s intent here seems to be, first and foremost, to identify the full range of AI applications that may, by law, require some degree of human oversight in order to make decisions, and second, to establish adequate guidelines regulating their use.

For low-risk applications in which no (or very little) human oversight is needed, the EC nonetheless appears to want to establish clear rules and guidelines which should, at the very least, fall in line with Europe’s broader regulatory framework for digital technologies. At the heart of these rules, I expect to find consumer protections, data privacy protections, data security protocols, opt-in/opt-out transparency guidelines, and some degree of antitrust oversight.

While I do not believe that the EU will outlaw the use of AI-powered facial recognition technologies (or adjacent identification technologies like gait recognition), I do expect that rules governing their use by law enforcement agencies will try to establish an adequate balance between the individual rights of citizens and legitimate public safety applications, as well as create disclosure, transparency, and opt-in/opt-out regimes for commercial applications (from home security use cases to more public environments like retail stores and public transportation access points).

As the EU appears to have its own AI leadership ambitions, I expect the EC to give non-EU technology companies very little wiggle room when it comes to compliance. Historically, the relationship between the EC and Silicon Valley giants has been somewhat adversarial, and I don’t anticipate that changing in the next few years.

One of the more interesting aspects of the EU’s plan for AI regulations is the AI labeling scheme it seems to be considering. On the one hand, notifying users and consumers that they are interacting with an AI or an AI-powered product/service will make for an interesting disclosure structure. On the other, any kind of labeling would be difficult to enact without an accompanying certification framework, meaning that different types of AI products (and AI training products and models) may soon be readily identifiable for technology buyers and users.

And if a certification scheme is enacted by the EU, it suggests the establishment of standards – standards that will likely relate to anti-bias AI training, ethical AI frameworks, and so on. In other words, the regulatory framework envisaged by the EC for the EU may help build the runway that ultimately helps AI developers steer more of their products towards benevolent, non-discriminatory, transparent “Big Butler” type AI products and applications, as opposed to the other kind. This seems like a good direction for the technology – a thought recently echoed by Elon Musk. As AI becomes more powerful and embedded in every aspect of our lives, it is important to start establishing ethical frameworks, human-machine partnership models, and structural guardrails to ensure that the technology will yield high economic and societal returns while minimizing harm.

It is still too early to gauge how well the EC will negotiate this challenge, but at the very least, this effort will advance the dialogue about how to manage artificial intelligence more responsibly in the future, and that is a good thing.

To be continued.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.

