Artificial Intelligence: New EU Proposals
The EU Commission has put forward a series of new proposals, both legislative and non-legislative, related to Artificial Intelligence. Chief among them is a proposed regulation establishing rules on AI (the Artificial Intelligence Act, or AIA). With it, the EC seeks to create the first comprehensive legal framework for this rapidly developing family of technologies. The following is a summary of the groundbreaking proposal, covering four main topics:
- the main policy considerations;
- the proposed new restrictions;
- the proposed risk classifications; and
- the related obligations for providers and users of AI arising from this proposal.
The EC also plans to review the AI coordination plan together with the Member States. This review aims to address developments since the plan's adoption and to align the Member States' AI policies with each other.
This will enable Europe to become a global leader in the development of human-centered, sustainable, safe, inclusive and reliable AI. The plan also provides an overview of both existing and planned AI-related projects at the European level, as well as a picture of the various funding opportunities. These include the new Recovery and Resilience Facility, which provides for a 20% digital spending target at Member State level.
Furthermore, the EC has made a proposal to revise the regulatory framework for machinery. AI technologies, built into both consumer and professional products, must now be taken into account. The new machinery regulation will replace the current Machinery Directive (2006/42/EC). The aim is to ensure safety and to introduce an EU-wide conformity assessment for placing such AI-enhanced products on the EU market.
Status quo of the EU policy debate
The AIA proposal follows the EU’s standard legislative procedure (i.e., the “ordinary legislative procedure”). It is now being debated in parallel in the European Parliament (EP) and the Council. The complex and far-reaching nature of the AIA proposal will undoubtedly lead to lengthy negotiations. These are likely to last well into 2022. The European Parliament sees the need to create a regulatory framework that defines the restrictions on AI technologies and robotics. In particular, there will be a well-defined legal liability regime for products and services that use AI.
In the EP, the Committee on the Internal Market and Consumer Protection (IMCO) maintains competence over the legislative file, with the Committee on Industry, Research and Energy (ITRE) as associated committee.
IMCO Committee rapporteur Brando Benifei (S&D Group, Italy) hopes that the EU Parliament can approve its negotiating mandate by the end of 2021. That seems very ambitious given the complexity of the AIA proposal.
The proposal contains provisions that could affect citizens' fundamental rights; consider, for example, remote biometric recognition. On these points, the European Parliament is expected to take an aggressive stance. It is also possible that the EP will propose to ban even more AI practices in the EU and, in some cases, will challenge the ability of high-risk AI providers to self-certify.
The EU Council has been discussing the proposed AIA at a technical level. Last June, the Council of Ministers exchanged views on the AIA proposal for the first time. The Slovenian Presidency has made the file a top priority and aims to reach a general approach by the end of its mandate (in December 2021). However, these complex deliberations can be expected to last well into 2022.
In that case, it will be up to the French Presidency of the Council to pursue that general approach; it is even possible that the file will only be concluded under the Czech Presidency. Some EU Member States share industry concerns about the potential impact of the AIA on competitiveness and innovation. A number of other Member States are particularly concerned about the potential of AI systems to support law enforcement, counterterrorism, and the like.
The new artificial intelligence act (AIA)
From a policy perspective, the EC recognizes the benefits that AI can bring to society, ranging from better medical care to better education. However, the EC also believes that some AI systems carry risks. A new regulatory framework is needed to protect users without limiting technological development. The EC also hopes to create more clarity and legal certainty around AI, establishing trust in and excellence of AI solutions. This will not only encourage their adoption and expansion in the EU but will also prevent regulatory fragmentation around AI systems.
There are already several AI-enhanced products and services on the EU market. So now the EC has proposed the world’s first comprehensive regulatory regime for AI. That proposal incorporates feedback from external stakeholders and expert groups.
The scope of the AIA, new EC proposals
The scope of the proposed AIA (Title I) includes providers that place AI systems on the market or put them into service in the EU, regardless of whether these providers are located inside or outside the EU. Users of AI systems are covered by the new rules only if they are located in the EU.
The AIA will apply even if the provider and user are located outside the EU, as long as the output produced by those systems is used in the EU. Some examples are:
- providers of AI systems (e.g., a developer of a CV screening tool);
- users of such AI systems (e.g., a large employer who purchases this CV screening tool).
The AIA will not apply to non-professional use. Also exempt are systems developed and used solely for military purposes.
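Under stated assumptions, the scope rules above can be restated as a small decision function. The sketch below is one illustrative reading of Title I, not legal advice; the flag names are invented for the example and the real scope provisions are more nuanced.

```python
def aia_applies(*, places_on_eu_market: bool, user_in_eu: bool,
                output_used_in_eu: bool, professional_use: bool = True,
                military_purpose: bool = False) -> bool:
    """Illustrative, simplified reading of the AIA's scope (Title I).

    Not legal advice: the flag names are invented for this sketch.
    """
    # Exemptions: non-professional use and purely military systems.
    if not professional_use or military_purpose:
        return False
    # Providers are covered when they place systems on the EU market,
    # wherever they are located; users only when located in the EU; and
    # the AIA also applies when the system's output is used in the EU.
    return places_on_eu_market or user_in_eu or output_used_in_eu
```

On this reading, a non-EU developer selling a CV screening tool into the EU falls in scope, while a purely military system does not.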
The EC proposes a risk-based categorization of AI systems, using four levels of risk, each with corresponding legal obligations and restrictions:
AIA, new EC proposals – Risk level No. 1 – Unacceptable risk (Title II)
A small number of particularly harmful AI systems that contravene EU values are prohibited because they violate fundamental rights. These include:
- “social scoring” by governments;
- the exploitation of vulnerabilities of children or persons with disabilities;
- the use of subliminal techniques that can cause physical or psychological harm;
- live remote biometric identification systems in publicly accessible areas used for law enforcement purposes. However, some exceptions do apply to this category.
AIA, new EC proposals – Risk level No. 2 – High risk (Title III, Annex III)
Article 6 defines “high-risk” AI systems as systems
- where the AI system is intended to be used as a safety component of a product, or is itself a product,
- and that product is subject to an existing third-party conformity assessment (e.g., motor vehicles, trains and aircraft).
In addition, the EC has the power to directly designate an AI system as a high-risk system by adding it to Annex III of the AIA. However, certain criteria must then be met. The EC proposes to review the list of covered systems annually. This is in view of the rapid evolution of high-risk AI use scenarios.
The EC would adopt delegated acts to amend the list of high-risk AI systems in Annex III.
Annex III lists a number of AI use cases that can (potentially) have a negative impact on people’s health, safety or fundamental rights. These cases are therefore designated as “high risk”. They include AI systems that, for example:
- use biometric identification;
- manage or operate critical infrastructure;
- are used for education or vocational training;
- are used for recruitment or personnel or work-related tasks;
- determine access to essential private and government services, including benefits;
- are used in a law enforcement context;
- are used in a migration, asylum or border management context; or
- are used in the administration of justice or democratic processes.
High-risk AI systems can only be placed on the EU market or put into operation if they meet certain minimum requirements. In doing so, providers must subject the system to a prior conformity assessment. Also, that assessment must be repeated if there are significant changes to the AI system. In certain cases, an independent notified body must be involved in that assessment.
Even after the product is placed on the market, providers of AI systems must implement quality and risk management systems. This is to ensure that they
- comply with the new requirements; and
- minimize the risks to users and affected persons.
Market surveillance authorities will support post-market monitoring through audits.
For high-risk AI systems, the EC proposes a set of new mandatory requirements (Title III), including:
- The establishment of a risk management system (Art. 9), which in a continuous and iterative process manages the risks associated with the AI system;
- quality criteria for the sets of training, validation and testing data used (Art. 10);
- technical documentation describing, among other things, the AI system’s compliance with the applicable requirements, including for law enforcement purposes (Art. 11);
- record-keeping requirements to ensure an appropriate level of traceability of the AI system’s operation (Art. 12);
- transparency and provision of information to enable users to interpret and appropriately use the output of the system (Art. 13);
- effective human oversight of the systems (Art. 14);
- an appropriate level of accuracy, robustness and cybersecurity throughout the systems’ life cycle (Art. 15).
The following new obligations apply, among others, to providers and users of high-risk AI systems:
- a general obligation to comply with the above list of requirements;
- maintain a quality management system (Art. 17);
- ensure that their systems go through the relevant conformity assessment procedure (Art. 19);
- maintain automatically generated logs (Art. 20);
- take corrective action when the AI system does not comply with the AIA (Art. 21);
- report serious incidents or malfunctions to the competent national authorities (Art. 22) and cooperate with those authorities (Art. 23).
Specific requirements apply to importers of high-risk AI systems (Art. 26), as well as to distributors (Art. 27) and users (Art. 29). The EC will establish a publicly accessible register of high-risk AI applications and systems (Art. 60).
AIA, new EC proposals – Risk level No. 3 – Limited risk (Title IV)
Certain AI systems will only be subject to new transparency requirements (Title IV). This is for example when there is a risk of manipulation (e.g. chatbots) or deception (e.g. deep fakes). Natural persons must be aware that they are interacting with an AI system, unless this is clear from the circumstances and context of use.
Exceptions exist for law enforcement purposes.
AIA, new EC proposals – Risk level No. 4 – Minimal risk
All other AI systems can be developed and used in compliance with existing law without new legal obligations through the AIA. According to the EC, a large number of AI systems currently in use in the EU fall into this category. The EC recommends voluntary codes of conduct for providers of such AI systems.
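Taken together, the four tiers above map to four distinct regulatory treatments. As a quick reference, the sketch below restates that mapping as a lookup table; the tier labels and one-line summaries are my own paraphrase of the proposal, not official wording.

```python
# Paraphrased summary of the AIA's four risk tiers; not official wording.
RISK_TIERS = {
    "unacceptable": "prohibited outright (Title II), with narrow exceptions",
    "high": "permitted only after a conformity assessment, subject to the "
            "mandatory requirements of Title III",
    "limited": "permitted, subject to transparency obligations (Title IV), "
               "e.g. chatbots and deep fakes",
    "minimal": "permitted under existing law, with voluntary codes of "
               "conduct recommended",
}

def regulatory_treatment(risk_level: str) -> str:
    """Return the paraphrased treatment for a given risk level."""
    return RISK_TIERS[risk_level]
```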
Monitoring/Enforcement (Title VI)
The proposed enforcement measures provide for penalties of up to €30 million or 6% of global revenue (whichever is higher) for the most serious violations of the new regime, namely the use of prohibited AI systems and the violation of the data management provisions for high-risk AI systems. This makes the penalty regime even more draconian than that for violations of the General Data Protection Regulation (GDPR).
All other cases of non-compliance with the AIA are subject to a fine of up to 20 million euros or 4% of global turnover (whichever is higher). Merely providing false, incomplete or misleading information to the competent authorities is already liable to a fine of up to 10 million euros or 2% of global revenue.
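All three penalty tiers follow the same pattern: a fixed ceiling or a percentage of worldwide annual revenue, whichever is higher. A minimal sketch of that calculation (the tier labels are informal shorthand, not taken from the AIA text):

```python
# Maximum fines under the proposed AIA: the higher of a fixed amount or a
# share of worldwide annual revenue. Tier labels are informal shorthand.
PENALTY_TIERS = {
    "most_serious": (30_000_000, 6),          # prohibited AI / data provisions
    "other_non_compliance": (20_000_000, 4),
    "misleading_information": (10_000_000, 2),
}

def max_fine(tier: str, global_revenue_eur: int) -> int:
    """Return the maximum possible fine in euros for the given tier."""
    fixed_cap, percent = PENALTY_TIERS[tier]
    return max(fixed_cap, global_revenue_eur * percent // 100)
```

For a company with €2 billion in worldwide revenue, the most serious tier therefore caps out at €120 million (6% of turnover), while for a small company the €30 million floor applies instead.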
Member State authorities will play a key role in the application and enforcement of the new AI regulatory regime. Newly designated national AI supervisory authorities will oversee the application of the AIA and conduct market surveillance activities. A new European Artificial Intelligence Board will be established at the EU level, which will support and guide the EC and national authorities in their relevant activities.
Enforcement rests with the Member States, as is also the case for the GDPR. It can be expected that sanctions will be introduced gradually, with enforcement efforts initially focusing on those who make no effort to comply with the regulation. A great deal of guidance material on how to comply with the regulation, as well as interpretative notes, can also be expected.
AIA, new EC proposals: the potential impact on the industry
Somewhat similar to the GDPR, the AIA proposal in its current form will have extraterritorial reach. As such, it could potentially affect a large number of companies with customers located in the EU. The regulation is in the midst of the EU legislative process, so it is not yet set in stone. However, the direction suggested by the EC is clear.
The proposed AI regulatory regime is the first of its kind worldwide. Many experts therefore expect it to have a major impact on other regions of the world, the so-called “Brussels effect” (similar to what was experienced after the adoption of the GDPR). It is therefore all the more important that all stakeholders get involved in the debate now in order to create an adequate EU regime for AI.
It is critical to participate in the current regulatory and policy debate. Yet many organizations will also need to start preparing for the new AIA and the risks associated with the new AI rules.
What is the next step?
The debate surrounding the legislative process on the AIA proposal will last well into 2022. In addition to the current legislative debates, the EC will come up with additional legislative measures related to AI in 2022, aimed at adapting the liability framework to emerging technologies. Most likely, these will include:
- a revision of the Product Liability Directive;
- a legislative proposal related to the liability of AI systems;
- adjustments to existing sectoral safety legislation (including the General Product Safety Directive or the Radio Equipment Directive).
Importantly, in 2021 the EC will publish a policy program to implement Europe’s Digital Compass. This will cover a wider range of policies relevant to achieving the EU’s digital ambitions for 2030. It will include a roadmap setting out:
- the general principles and commitments that Member States will be advised to follow;
- concrete actions needed to achieve the policy goals.
The use of AI systems is cited as one of the key areas to be developed in order to achieve the EU’s 2030 digital ambitions: by 2030, three quarters of European businesses should use cloud computing services, big data and AI solutions.
The EU plans to set the standards that will pave the way for ethical technology worldwide. At the same time, the EU needs to remain competitive. The industry has expressed deep concern about the far-reaching impact the future law will have on their businesses, as is evident from the public consultation on the proposed AIA. In the eyes of much of the industry, the AIA will not only create unnecessary and burdensome compliance obligations but will also be very costly.
The opposite view is widely shared by civil society, trade unions and data protection authorities: in their conviction, the proposed law does not go far enough, and there should be tighter restrictions on high-risk AI use and stricter compliance obligations for AI systems. Either way, for now it remains an open question whether the proposed AIA will become a de facto global standard.
The EU legislative process will lead to the first significant regulation of AI systems worldwide and any company that wants to do business in or with the EU will have to comply. That much seems certain by now. The legislative proposal is the basis on which the EC will continue to formulate its future policies around the various facets of AI. AI systems are constantly evolving. Therefore, it is difficult to guarantee that the legislative framework will be future-proof. However, this stage of the legislative process is the ideal time to understand the direction in which the negotiations will go. After all, now the technical elements of the bill are being discussed. It provides ample opportunities to engage with policymakers and thus influence this critically important new AI legislation.
AIA, new EC proposals: the current legislative phase
Organizations will need to familiarize themselves with what the new AI regulatory regime may mean for them and consider what steps can be taken now to anticipate and manage it. Such steps may include the following:
- An inventory of all AI systems used by the organization;
- A risk classification system;
- Risk mitigation measures;
- Independent audits;
- Data risk management processes;
- An AI governance structure;
Notes
- The Slovenian Council Presidency will host a high-level conference on AI on September 13-14, 2021.
- E.g., the High-Level Expert Group on Artificial Intelligence; the High-Level Expert Group on the Impact of Digital Transformation on EU Labour Markets; or the Expert Group on Liability and New Technologies.
- A regulation is an EU legal instrument that is directly applicable in all EU Member States and thus requires no further step at the national level to become applicable law.
- E.g., the Charter of Fundamental Rights of the EU.
- See AIA, Annex III, for details.
- “The Brussels Effect – How the European Union Rules the World”, by Anu Bradford, Oxford University Press.
Source: The National Law Review