AI Act

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act)

Promoting the development of AI while addressing potential risks

2024

2 November 2024

Powers of authorities protecting fundamental rights

Member States to identify the public authorities or bodies referred to in Article 77(1) (Powers of authorities protecting fundamental rights) and make a list of them publicly available.

2025

2 February 2025

Chapter I (General provisions) and Chapter II (Prohibited AI practices) start applying.

2 May 2025

Codes of practice

Codes of practice must be ready at the latest by 2 May 2025. The EU Commission may, by way of an implementing act, approve a code of practice and give it general validity within the EU.

Update 10 July 2025: The General-Purpose AI (GPAI) Code of Practice was published. Member States and the EU Commission are assessing its adequacy.

No specific timeline

Guidelines from the EU Commission on implementation of AI Act

EU Commission to develop guidelines on the practical implementation of the AI Act, and in particular on:

  • the application of the requirements and obligations referred to in Articles 8 to 15 (Requirements for high-risk AI systems) and in Article 25 (Responsibilities along the AI value chain);
  • the prohibited practices referred to in Article 5 (Prohibited AI practices);

Update 4 February 2025: EU Commission published Guidelines on prohibited artificial intelligence practices.

  • the practical implementation of the provisions related to substantial modification;
  • the practical implementation of transparency obligations laid down in Article 50 (Transparency obligations for providers and deployers of certain AI systems);
  • detailed information on the relationship of the AI Act with the EU harmonisation legislation listed in Annex I (List of Union harmonisation legislation), as well as with other relevant EU law, including as regards consistency in their enforcement;
  • the application of the definition of an AI system as set out in Article 3(1) (Definitions).

Update 6 February 2025: EU Commission published Guidelines on AI system definition.

Update 10 July 2025: EU Commission published the General-Purpose AI Code of Practice.

2 August 2025

Reporting of serious incidents

EU Commission to develop guidance to facilitate compliance with the obligations set out in Article 73(1) (Reporting of serious incidents).

2 August 2025

Chapter III, Section 4 (High-risk AI systems – Notifying authorities and notified bodies), Chapter V (General-purpose AI models), Chapter VII (Governance), Chapter XII (Penalties) and Article 78 (Confidentiality) start applying, with the exception of Article 101 (Fines for providers of general-purpose AI models).

2 August 2025

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must take the necessary steps in order to comply with the obligations laid down in the AI Act by 2 August 2027.

2 August 2025

Designation of national competent authorities and single points of contact

Member States to communicate to the EU Commission the identity of national competent authorities (notifying authority and market surveillance authority) and the tasks of those authorities. Member States to make publicly available information on how competent authorities and single points of contact can be contacted.

2 August 2025

Evaluation and review

EU Commission to assess the need for amendment of the list set out in Annex III (High-risk AI systems referred to in Article 6(2)) and of the list of prohibited AI practices laid down in Article 5 (Prohibited AI practices) and submit the findings of that assessment to the EU Parliament and the Council of the EU. EU Commission to assess these lists on an annual basis.

2026

2 February 2026

Classification rules for high-risk AI systems

EU Commission, after consulting the European Artificial Intelligence Board, to provide guidelines specifying the practical implementation of Article 6 (Classification rules for high-risk AI systems), together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.

Update 6 June 2025: EU Commission launched a consultation on guidelines on the classification of AI systems as high-risk. The consultation closed on 18 July 2025.

2 February 2026

Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

EU Commission to adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan.

No specific timeline

Common specifications – high-risk AI systems and general-purpose AI models

EU Commission may adopt implementing acts establishing common specifications for the requirements set out in Chapter III, Section 2 (High-risk AI systems – Requirements for high-risk AI systems) or, as applicable, for the obligations set out in Chapter V, Sections 2 (General-purpose AI models – Obligations for providers of general-purpose AI models) and 3 (General-purpose AI models – Obligations of providers of general-purpose AI models with systemic risk) where the following conditions have been fulfilled:

  • the EU Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012 on European standardisation, one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Chapter III, Section 2, or, as applicable, for the obligations set out in Chapter V, Sections 2 and 3, and:
    • the request has not been accepted by any of the European standardisation organisations; or
    • the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012; or
    • the relevant harmonised standards do not sufficiently address fundamental rights concerns; or
    • the harmonised standards do not comply with the request; and
  • no reference to harmonised standards covering the requirements referred to in Chapter III, Section 2, or, as applicable, the obligations referred to in Chapter V, Sections 2 and 3, has been published in the Official Journal of the EU in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period.

No specific timeline

Classification of general-purpose AI models as general-purpose AI models with systemic risk

EU Commission to adopt delegated acts to amend the thresholds listed in Article 51(1) and (2) (Classification of general-purpose AI models as general-purpose AI models with systemic risk), as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art.

No specific timeline

General-purpose AI models: amendments to Annex XIII

EU Commission empowered to adopt delegated acts in order to amend Annex XIII (Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51) by specifying and updating the criteria set out in that Annex.

No specific timeline

Obligations for providers of general-purpose AI models

EU Commission empowered, for the purpose of facilitating compliance with Annex XI (Technical documentation referred to in Article 53(1)(a) – technical documentation for providers of general-purpose AI models), in particular points (2)(d) and (e) thereof, to adopt delegated acts to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.

No specific timeline

General-purpose AI models: amendments to Annexes XI and XII

EU Commission empowered to adopt delegated acts to amend Annex XI (Technical documentation referred to in Article 53(1)(a) – technical documentation for providers of general-purpose AI models) and Annex XII (Transparency information referred to in Article 53(1)(b) – technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system) in light of evolving technological developments.

2 August 2026

The AI Act starts applying in full, with the exception of Article 6(1) (Classification rules for high-risk AI systems) and the corresponding obligations, which will apply from 2 August 2027.

2 August 2026

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Without prejudice to the application of Article 5 (Prohibited AI practices), the AI Act applies to operators of high-risk AI systems, other than the systems referred to in Article 111(1) (i.e. AI systems which are components of large-scale IT systems listed in Annex X), that have been placed on the market or put into service before 2 August 2026, but only if, as from that date, those systems are subject to significant changes in their designs.

2 August 2026

AI regulatory sandboxes

Member States to ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which must be operational by 2 August 2026.

No specific timeline

AI regulatory sandboxes

EU Commission to adopt implementing acts specifying the detailed arrangements for the establishment, development, implementation, operation and supervision of the AI regulatory sandboxes. The implementing acts must include common principles on the following issues:

  • eligibility and selection criteria for participation in the AI regulatory sandbox;
  • procedures for the application, participation, monitoring, exiting from and termination of the AI regulatory sandbox, including the sandbox plan and the exit report;
  • the terms and conditions applicable to the participants.

No specific timeline

Scientific panel of independent experts

EU Commission to adopt an implementing act making provision for the establishment of a scientific panel of independent experts intended to support the enforcement activities under the AI Act.

No specific timeline

Power to conduct evaluations

EU Commission to adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof.

2027

2 August 2027

Article 6(1) (Classification rules for high-risk AI systems) and the corresponding obligations in the AI Act start applying.

2 August 2027

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Without prejudice to the application of Article 5 (Prohibited AI practices), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X (EU legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the AI Act by 31 December 2030.

2 August 2027

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must have taken the necessary steps in order to comply with the obligations laid down in the AI Act.

No specific timeline

Classification rules for high-risk AI systems

EU Commission empowered to adopt delegated acts in order to amend Article 6(3), second subparagraph, (Classification rules for high-risk AI systems) by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III (High-risk AI systems referred to in Article 6(2)), but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. EU Commission to adopt delegated acts in order to amend Article 6(3), second subparagraph, by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by the AI Act.

No specific timeline

Amendments to Annex III (High-risk AI systems)

EU Commission empowered to adopt delegated acts to amend Annex III (High-risk AI systems referred to in Article 6(2)) by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled:

  • the AI systems are intended to be used in any of the areas listed in Annex III;
  • the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

EU Commission empowered to adopt delegated acts to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

  • the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking into account the criteria listed in Article 7(2) (Amendments to Annex III);
  • the deletion does not decrease the overall level of protection of health, safety and fundamental rights under EU law.

No specific timeline

Amendments to Annex IV (Technical documentation for high-risk AI systems)

EU Commission empowered to adopt delegated acts in order to amend Annex IV (Technical documentation referred to in Article 11(1)), where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in Section 2 (Requirements for high-risk AI systems) of Chapter III (High-risk AI systems).

No specific timeline

Amendments to Annexes VI and VII (Conformity assessment)

EU Commission empowered to adopt delegated acts in order to amend Annex VI (Conformity assessment procedure based on internal control) and Annex VII (Conformity based on an assessment of the quality management system and an assessment of the technical documentation) by updating them in light of technical progress.

No specific timeline

Conformity assessment

EU Commission empowered to adopt delegated acts in order to amend Article 43(1) and (2) (Conformity assessment), in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III (High-risk AI systems referred to in Article 6(2)) to the conformity assessment procedure referred to in Annex VII (Conformity based on an assessment of the quality management system and an assessment of the technical documentation) or parts thereof.

No specific timeline

EU declaration of conformity

EU Commission empowered to adopt delegated acts in order to amend Annex V (EU declaration of conformity) by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress.

2028

2 August 2028

Evaluation and review

EU Commission to evaluate and report to the EU Parliament and the Council of the EU on the following:

  • the need for amendments extending existing area headings or adding new area headings in Annex III (High-risk AI systems referred to in Article 6(2));
  • amendments to the list of AI systems requiring additional transparency measures in Article 50 (Transparency obligations for providers and deployers of certain AI systems);
  • amendments enhancing the effectiveness of the supervision and governance system.

EU Commission to report every four years.

2 August 2028

Evaluation and review

EU Commission to evaluate and report to the EU Parliament and the Council of the EU on the functioning of the AI Office, whether the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of the AI Act to upgrade the AI Office and its enforcement competences and to increase its resources.

2 August 2028

Evaluation and review

EU Commission to report to the EU Parliament and the Council of the EU on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and to assess the need for further measures or actions, including binding measures or actions. EU Commission to report every four years.

2 August 2028

Evaluation and review

EU Commission to evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 (High-risk AI systems – Requirements for high-risk AI systems) for AI systems other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, including as regards environmental sustainability. EU Commission to evaluate the impact every three years.

2029

2 August 2029

Evaluation and review

EU Commission to report on the evaluation and review of the AI Act to the EU Parliament and to the Council of the EU. The report must include an assessment with regard to the structure of enforcement and the possible need for an EU agency to resolve any identified shortcomings. On the basis of the findings, that report must, where appropriate, be accompanied by a proposal for amendment of the AI Act. EU Commission to report every four years.

2030

2 August 2030

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Providers and deployers of high-risk AI systems intended to be used by public authorities must comply with the requirements and obligations of the AI Act.

31 December 2030

AI systems already placed on the market or put into service and general-purpose AI models already placed on the market

Without prejudice to the application of Article 5 (Prohibited AI practices), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X (EU legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) that have been placed on the market or put into service before 2 August 2027 must comply with the AI Act.

2031

2 August 2031

Evaluation and review

EU Commission to carry out an assessment of the enforcement of the AI Act and report on it to the EU Parliament, the Council of the EU and the European Economic and Social Committee, taking into account the first years of application of the AI Act. On the basis of the findings, that report must, where appropriate, be accompanied by a proposal for amendment of the AI Act with regard to the structure of enforcement and the need for an EU agency to resolve any identified shortcomings.