AI Act: The EU Parliament adopts its position


Authors: Davide Baldini, Francesca Tugnoli, Eleonora Margherita Auletta


Legislative process and entry into force

On 14 June 2023, the European Parliament approved its position[1] on the so-called “AI Act”, the draft regulation proposed by the European Commission in 2021[2], which aims to establish rules for the Artificial Intelligence[3] sector that will apply across the EU.

The text approved by the European Parliament is not yet the final one.[4] However, the AI Act is expected to be formally adopted by the end of 2023 and would then become applicable by 2025[5].

 

“Artificial Intelligence”: Definition and scope of application of the AI Act

The objective of the Regulation is to promote the development of Artificial Intelligence (“AI”) systems capable of guaranteeing a high level of protection for health, safety, fundamental rights, democracy and the environment[6]. The Regulation adopts a rather broad definition of “AI system”, namely: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”[7]. This definition is in line with the one recently adopted by the OECD[8], with a view to establishing a more widely shared definition of AI[9]. By virtue of its particularly broad scope, the definition encompasses not only approaches based on machine learning[10] – to date, the most well-known and widely used branch of AI – but also technologies that have been around for decades, such as expert systems[11].

As regards the parties involved, the AI Act places most of the obligations on the “Provider”, i.e., the entity that develops and/or markets the AI System[12]. The Provider is distinct from the “Deployer”, the entity that actually uses the AI System and which, consequently, has fewer obligations.

 

A risk-based approach: The categorization of AI Systems

The AI Act stands out for its risk-based approach, which is reflected in the categorization of AI into three types of systems:

        • Prohibited systems which are considered excessively risky;
        • High-risk systems which are subject to stringent obligations and requirements; and
        • Low and limited risk systems, which are subject to less stringent obligations, mainly relating to transparency towards the end user.

The category of prohibited systems[13] includes AI Systems which are used for purposes of:

        • Social scoring by public administrations;
        • Manipulation of free will through subliminal techniques;
        • Remote biometric identification in real time in publicly accessible spaces[14];
        • Biometric categorization of people on the basis of the special categories of personal data pursuant to Art. 9 of Regulation (EU) 2016/679 (“GDPR”);
        • Emotion recognition, except when used for medical applications and with the informed consent of the patient.

The list of high-risk systems[15] is much more extensive, and includes the AI Systems used, for example, for purposes of:

        • Credit risk scoring;
        • Assessment of migration, asylum, or international protection status;
        • Evaluation of student performances;
        • Justice, for example in the assessment of the risk of recidivism;
        • Use in the workplace, including the evaluation of CVs for recruitment purposes, of employee performance, etc.

So-called “recommender systems” are also considered high-risk, but only when used by the “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs) recently designated by the European Commission[16] under the Digital Services Act[17].

Providers of high-risk systems are subject to multiple obligations. They will have to:

        1. Register their systems in a database;
        2. Subject their systems to a prior conformity assessment, conducted before placing the system on the market, which, once completed, allows the Provider to affix the “CE” marking;
        3. Meet a number of technical requirements regarding risk management;
        4. Implement testing mechanisms;
        5. Ensure the technical robustness of the system;
        6. Meet a variety of data training and data governance requirements;
        7. Ensure transparency;
        8. Ensure human oversight of the system’s operation and its security;
        9. Provide detailed instructions on the use of the AI System to the Deployer and, more generally, design the AI Systems so that the Deployers can understand their functioning[18]; and
        10. If established outside the EU, designate an authorized representative within the EU.

In order to fulfill the above obligations, Providers will be able to rely on specific EU standards, currently in the process of adoption, and, in doing so, will benefit from a presumption of conformity with the AI Act[19].

Finally, a Deployer using a high-risk AI System is required to carry out a fundamental rights impact assessment[20] prior to using the system.

 

Focus on the regulation of “Generative AI”

Compared to the previous version of the AI Act, the European Parliament has introduced new obligations applicable to Providers and Deployers of “general-purpose AI systems”[21], a category which includes tools such as ChatGPT, thereby establishing a “layered” regulation of this technology that also distinguishes the further category of “foundation models”. Providers of such systems are required to ensure a high level of protection of fundamental rights, health, safety, the environment, democracy and the rule of law, to assess and mitigate the risks deriving from such systems, and to register them in a database.

Systems used to generate art, music and other content are, moreover, subject to strict transparency obligations: the relevant Providers are required to disclose that the content was generated by an AI System, to train and design their models so as to prevent the generation of illegal content, and to publish information on the use of copyright-protected data where such data is used to train the models.

 

Conclusion and Practical take-aways: New obligations and compliance opportunities

There are multiple points of contact between the new Regulation and the GDPR. Where an AI System carries out personal data processing activities, the GDPR and the AI Act will both apply. Consequently, the sanctions that may be imposed for a violation of the AI Act, i.e., up to EUR 40 million or 7% of total worldwide annual turnover, whichever is higher, may be combined with those provided for in case of a GDPR violation.
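
For illustration only, the following minimal sketch in Python (the function name is ours, and the figures reflect the Parliament’s position, which may still change during the trilogues) shows how the maximum fine operates as the higher of the two amounts:

        # Purely illustrative: maximum fine for the most serious AI Act violations
        # under the Parliament's position (EUR 40 million or 7% of total worldwide
        # annual turnover, whichever is higher).
        def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
            return max(40_000_000.0, 0.07 * worldwide_annual_turnover_eur)

        # Example: with EUR 1 billion in turnover, 7% (EUR 70 million) exceeds the
        # fixed amount, so the applicable cap is EUR 70 million.
        print(max_ai_act_fine(1_000_000_000))  # 70000000.0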

In particular, the role of the “Deployer” will, in practice, overlap with that of the “data controller” under the GDPR. The Deployer will therefore be able to leverage the obligations that the new Regulation places on the Provider, in particular the obligation to provide the instructions and technical specifications relating to the AI System, in order to fulfill its own obligation to carry out a data protection impact assessment (DPIA)[22] and, more generally, to enhance its accountability.

During the subsequent legislative process, the expected changes will mostly affect the list of prohibited and high-risk systems, the definition of AI, and the regulation of generative AI, while the general structure and content of the Regulation should not undergo any significant changes[23].

Finally, in light of the foregoing, organizations that use AI Systems are advised to start mapping such systems now, while simultaneously verifying the risk category to which each belongs. To this end, GDPR compliance activities already in place, such as the records of processing activities, can be leveraged. Providers of high-risk AI Systems can, for their part, already carry out preliminary assessments of the technical compliance of their solutions with the requirements of the AI Act.


[1] The text of the version approved by the Parliament is available at the following URL: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html. Within this contribution, the terms “AI Act” and “Regulation” refer to the version approved by the Parliament.

[2] COM(2021) 206 final https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.

[3] The expression “Artificial Intelligence” has no globally accepted definition (see S.J. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Harlow, 2021, pp. 19 et seq.). Within this contribution, the expression refers to the definition proposed by the European Parliament, examined below.

[4] The text approved by the Parliament will now undergo the so-called “trilogues”. In simple terms, this is a phase of the EU’s ordinary legislative procedure (governed by Arts. 289 et seq. TFEU), consisting of interinstitutional negotiations between the European Parliament (an expression of popular sovereignty), the Council of the EU (an expression of the governments of the Member States) and the European Commission (an expression of the Union’s executive power).

[5] https://www.euractiv.com/section/politics/news/eu-commission-expects-first-bloc-wide-ai-law-to-be-adopted-this-year/.

[6] See art. 1 of the AI Act.

[7] See art. 3(1)(1) of the AI Act.

[8] In particular, see the document “Recommendation of the Council on Artificial Intelligence”, available at the following address: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

[9] Recital 6 of the AI Act establishes in this regard that: “The notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field”.

[10] According to the well-known definition of T. Mitchell, machine learning can be defined as “the study of computer algorithms that automatically improve through experience”. See T. Mitchell, Machine Learning, McGraw Hill, 1997.

[11] These are software programs, developed since the 1970s, designed to solve complex problems by reasoning over “bodies of knowledge”, mainly represented as “if-then” rules. See S.J. Russell and P. Norvig, note 3 above, pp. 40-42.

[12] The definition can be found in art. 3(2) of the AI Act: “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge”.

[13] The category is laid down in art. 5 of the AI Act.

[14] The use of these systems is possible only when mandated by the judicial authority in the context of criminal investigations.

[15] The category is laid down in art. 6 of the AI Act.

[16] https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2413.

[17] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).

[18] Art. 13(1) of the AI Act.

[19] For an examination of the role of standards within the AI Act, see M. Veale, F. Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, Computer Law Review International, 4/2021, available at the following URL: https://arxiv.org/abs/2107.03721.

[20] Art. 29a of the AI Act.

[21] The relevant definition can be found in art. 3(1d) of the AI Act: “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”.

[22] As reiterated in art. 29(5) of the AI Act.

[23] See note 5, supra.

ICTLC Italy
italy@ictlegalconsulting.com