Artificial Intelligence Act: New EU Rules to Shape Global AI Standards

By Marek Kysela

Originally posted on SEMI Blog

On April 21, 2021, the European Commission put forward the long-awaited Proposal for a Regulation on a European approach for Artificial Intelligence, introducing for the first time harmonized rules for the development, placement on the market, and use of secure and ethical artificial intelligence (AI) in Europe.

The proposed regulation’s wide scope subjects providers, importers, distributors, and users of AI systems to regulatory scrutiny. It also includes providers and users outside of the European Union (EU) that deploy AI systems or use AI system outputs in the EU. With extraterritorial scope, the proposed regulation aims to further strengthen the EU’s leadership in shaping global standards and norms for new technologies.

A European Path to Safe AI

Following a risk-based approach, the proposed regulation classifies AI systems according to the level of risk they pose, using the following categories:

Unacceptable risk: AI systems or applications that can manipulate human behavior or exploit the vulnerabilities of specific groups of people to cause psychological or physical harm pose an unacceptable risk and are prohibited. Examples of prohibited AI systems include social scoring by public authorities and real-time remote biometric identification in publicly accessible spaces.

High risk: The proposal identifies two main categories: AI systems used as safety components of products (or that are products themselves), and other stand-alone AI systems with fundamental-rights implications.

Considering their intended purpose, the proposal sets out specific conformity assessment procedures for both groups. AI systems intended to be used as safety components will require a conformity assessment by an independent third party and will be subject to the same ex-ante and ex-post compliance and enforcement mechanisms as the products of which they are part. Stand-alone AI systems, by contrast, would be assessed through internal checks, requiring ex-ante compliance with all requirements of the regulation as well as robust quality and risk management systems and post-market monitoring.

The proposed regulation identifies eight high-risk areas, including AI systems used in safety components of products (e.g., machinery, radio equipment, robot-assisted surgery), critical infrastructure (e.g., transport), education and vocational training (e.g., exam scoring), and employment (e.g., monitoring or evaluating persons in work-related contractual relationships).

Prior to their introduction to market, high-risk AI systems will be subject to strict requirements, including the use of high-quality datasets; adequate risk assessment and mitigation systems; high levels of robustness, accuracy, and security; clear and adequate user information; detailed system documentation and logging of activities to ensure traceability of results; and human oversight and control.

Limited and minimal risk: For limited-risk AI systems (e.g., chatbots), the regulation proposes only minimum transparency obligations, while AI systems posing little to no risk (e.g., AI in spam filters) will not be regulated.

Boosting AI Excellence from Lab to Market

Continuous innovation in AI requires a secure environment that supports responsible validation of AI technologies. To that end, the proposal encourages the establishment of testing and experimentation facilities, so-called AI regulatory sandboxes. Established by one or more Member States, the sandboxes will provide a controlled environment for testing innovative technologies under strict oversight before their market introduction. These facilities could play an instrumental role in connecting Europe's R&D ecosystem and creating new partnerships among its many stakeholders.

In addition to regulatory sandboxes, the European Commission intends to set up:

  • A Public-Private Partnership (PPP) on AI, data and robotics, designed to implement and invest in a strategic research, innovation, and deployment agenda for Europe
  • Additional Networks of AI Excellence Centers to foster exchange of knowledge and advance collaboration with the industry
  • Testing and experimentation facilities to test state-of-the-art technology
  • Digital Innovation Hubs, one-stop shops to provide access to technical expertise and experimentation
  • An AI-on-demand platform as a central European toolbox of AI resources (e.g., expertise, algorithms, software frameworks and development tools).

Next Steps

The proposed regulation is at the start of a lengthy legislative process and will be debated by the European Parliament and the Council of the European Union in the coming months. Given the importance of AI and the number of stakeholders involved, the proposal is likely to undergo changes before it applies across the EU.

For its part, SEMI Europe will continue its dialogue with key public and private stakeholders on the proposed regulation, closely monitoring related policy developments as they unfold.

Marek Kysela is senior coordinator of Advocacy at SEMI Europe.
