EU Parliament adopts final version of AI Act: our key takeaways


The first comprehensive AI regulation in the world aims for strong protection against harmful impacts.

1. Context

After much anticipation, the EU Parliament formally adopted the final version of the proposal for an EU regulation laying down harmonised rules on artificial intelligence (AI Act) [1] on 13 March 2024. The text is still subject to a final lawyer-linguist check and needs to be formally endorsed by the Council of the EU.

Initially proposed by the EU Commission in April 2021, the proposal has been amended over the years to take account of technological developments, particularly generative AI and the launch of ChatGPT in November 2022.

The AI Act is the first comprehensive regulation for the artificial intelligence (AI) industry worldwide. It aims to ensure a high level of protection from harmful effects of AI systems in the EU, while supporting innovation and improving the functioning of the internal market.

The AI Act seeks to provide all market players with clear requirements and obligations regarding specific uses of AI. Accordingly, it sets out directly applicable rules on the development, marketing, commissioning and use of AI systems within the EU. It also lays down requirements for the datasets used to train, test and validate machine-learning models.

 

2. Key features of the AI Act

Here are our main takeaways about the AI Act:

 

1. Definitions

The AI Act defines an “AI system” as:

“A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

This definition is intentionally broad to remain “technologically neutral” and thus ensure that the text will not become outdated given this ever-evolving technology.

 

An AI system will be considered high-risk if it satisfies either of the following two conditions:

- it is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex I of the AI Act and is required to undergo a third-party conformity assessment under that legislation; or
- it falls within one of the use cases listed in Annex III of the AI Act, such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, or the administration of justice.

High-risk AI systems are subject to more stringent regulation than those that are lower risk.

 

A “general purpose AI model” is defined as:

“An AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities”.

General purpose AI (GPAI) models are treated differently, given the potentially significant risks associated with their development and deployment (see point 3 below).

 

2. Risk-based approach

The AI Act follows a risk-based approach: the greater the risk posed by an AI system, the stricter the applicable rules. It distinguishes between prohibited AI practices posing unacceptable risks (such as social scoring or manipulative techniques), high-risk AI systems, AI systems subject only to specific transparency obligations, and minimal-risk AI systems, which remain largely unregulated.

3. General purpose AI models

Under the AI Act, GPAI models are regulated separately from AI systems.

GPAI models are classified depending on their risk. A GPAI model will be characterised as a model “with systemic risk” if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks, or if the EU Commission decides that it has equivalent capabilities or impact. A GPAI model is presumed to have high-impact capabilities where the cumulative amount of computation used for its training exceeds 10^25 floating-point operations.

Providers of GPAI models are subject to specific obligations such as:

- drawing up and keeping up to date technical documentation of the model;
- making information and documentation available to providers that intend to integrate the GPAI model into their own AI systems;
- putting in place a policy to comply with EU copyright law; and
- publishing a sufficiently detailed summary of the content used to train the model.

Additional obligations apply to providers of GPAI models with systemic risk, including assessment and mitigation of the systemic risk and ensuring an adequate level of cybersecurity protection.

4. Obligations throughout the value chain, particularly for deployers (users)

The AI Act imposes obligations throughout the entire value chain, from providers, importers and distributors to deployers of AI solutions within the EU. It also protects persons adversely impacted by the use of an AI system placed on the market or put into service in the EU.

A deployer is “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”.

All Luxembourg companies, whether regulated or not, may come within the scope of the AI Act’s requirements to some extent as soon as they use an AI system (i.e. act as deployer of that AI system). The relevant requirements (e.g. transparency, policies and procedures, notification to authorities) will differ depending both on their own characterisation within the meaning of the AI Act and on the level of risk of the AI system(s) concerned.

In particular, specific transparency obligations will apply when using:

- AI systems intended to interact directly with natural persons (e.g. chatbots), which must be designed so that the persons concerned are informed that they are interacting with an AI system;
- AI systems generating synthetic audio, image, video or text content, whose outputs must be marked as artificially generated or manipulated;
- emotion recognition or biometric categorisation systems, whose operation must be disclosed to the persons exposed to them; and
- AI systems generating or manipulating “deep fake” content, which must be disclosed as artificially generated or manipulated.

Certain entities (banks [2], insurers [3] and public services) will have to carry out a fundamental rights impact assessment prior to deploying a high-risk AI system.

 

5. AI regulatory sandboxes

“AI regulatory sandboxes… provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority.”

The AI Act requires national competent authorities to establish at least one AI regulatory sandbox at national level, which must be operational 24 months after the AI Act enters into force. The sandbox may also be established jointly with one or several other Member States’ competent authorities. The EU Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes.

Competent authorities will have to provide, as appropriate, guidance, supervision and support within the sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, as well as testing and mitigation measures and their effectiveness in relation to the obligations and requirements of the AI Act and, where relevant, of other EU and Member State legislation supervised within the sandbox.

 

6. Governance

Each Member State must designate one or more national competent authorities responsible for supervising the application and implementation of the AI Act. Their identity and tasks must be notified to the EU Commission. The European Data Protection Supervisor (EDPS) has been designated as the competent authority for the supervision of EU institutions, agencies and bodies falling within the scope of the AI Act.

7. Sanctions

Market surveillance authorities will be able to impose sanctions in cases of non-compliance with the AI Act. These will include fines of up to 35 million euros or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher, in cases of prohibited AI practices.

For providers of GPAI models, the EU Commission may impose fines of up to 15 million euros or 3% of total worldwide annual turnover in the preceding financial year, whichever is higher.

The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request will also be subject to administrative fines of up to 7.5 million euros or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

 

8. Other requirements

Beyond the AI Act itself, it is essential to address issues such as intellectual property, data protection, regulatory requirements, cybersecurity, contracts and liability to ensure responsible AI deployment.

As AI technologies continue to evolve, alignment with legal standards becomes increasingly important to mitigate risks and promote trust in AI-driven systems.

 

What’s next?

The adoption of the AI Act marks a significant milestone in shaping the future of AI governance. Once published in the Official Journal of the EU, the AI Act will enter into force on the 20th day following its publication. It will generally become fully applicable 24 months after its entry into force.

However, certain specific timelines will apply from the entry into force of the text:

- 6 months after entry into force: the prohibitions on AI practices posing unacceptable risks will apply;
- 12 months after entry into force: the obligations for providers of GPAI models, as well as the provisions on governance and penalties, will apply;
- 36 months after entry into force: the requirements for high-risk AI systems covered by the EU harmonisation legislation listed in Annex I will apply.
 

How we can help

Contact our experts in the IP, Communication & Technology Team (ip_it_dataprotectionteam@arendt.com) and the Regulatory & Consulting team for further assistance on how to comply with the AI Act. They will be delighted to support you in scoping the AI Act’s applicability to your specific business context, and to assist with the required compliance actions.

 

 

[1] Interinstitutional File: 2021/0106(COD).

[2] For “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud”.

[3] For “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance”.
