Promoting the development of AI while addressing potential risks
1 August 2024
The AI Act (Regulation (EU) 2024/1689) enters into force.
2 November 2024
Member States to identify the public authorities or bodies referred to in Article 77(1) (Powers of authorities protecting fundamental rights) and make a list of them publicly available.
2 February 2025
Chapter I (General provisions) and
Chapter II (Prohibited AI practices) start applying.
2 May 2025
EU Commission may, by way of an implementing act, approve a code of practice and give it general validity within the EU.
Update 10 July 2025: The General-Purpose AI (GPAI) Code of Practice was published. Member States and the EU Commission are assessing its adequacy.
No specific timeline
EU Commission to develop guidelines on the practical implementation of the AI Act, and in particular on:
Update 4 February 2025: EU Commission published Guidelines on prohibited artificial intelligence practices.
Update 6 February 2025: EU Commission published Guidelines on AI system definition.
Update 10 July 2025: EU Commission published the General-Purpose AI Code of Practice.
2 August 2025
EU Commission to develop guidance to facilitate compliance with the obligations set out in Article 73(1) (Reporting of serious incidents).
2 August 2025
Chapter III, Section 4 (High-risk AI systems – Notifying authorities and notified bodies);
Chapter V (General-purpose AI models);
Chapter VII (Governance), Chapter XII (Penalties) and
Article 78 (Confidentiality) start applying, with the exception of Article 101 (Fines for providers of general-purpose AI models).
2 August 2025
Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must take the necessary steps in order to comply with the obligations laid down in the AI Act by 2 August 2027.
2 August 2025
Member States to communicate to the EU Commission the identity of national competent authorities (notifying authority and market surveillance authority) and the tasks of those authorities. Member States to make publicly available information on how competent authorities and single points of contact can be contacted.
2 August 2025
EU Commission to assess the need for amendment of the list set out in Annex III (High-risk AI systems referred to in Article 6(2)) and of the list of prohibited AI practices laid down in Article 5 (Prohibited AI practices) and submit the findings of that assessment to the EU Parliament and the Council of the EU. EU Commission to assess these lists on an annual basis.
2 February 2026
EU Commission, after consulting the European Artificial Intelligence Board, to provide guidelines specifying the practical implementation of Article 6 (Classification rules for high-risk AI systems), together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
Update 6 June 2025: EU Commission launched a consultation on guidelines on the classification of AI systems as high-risk. The consultation closed on 18 July 2025.
2 February 2026
EU Commission to adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan.
No specific timeline
EU Commission may adopt implementing acts establishing common specifications for the requirements set out in Chapter III, Section 2 (High-risk AI systems – Requirements for high-risk AI systems) or, as applicable, for the obligations set out in Chapter V, Sections 2 (General-purpose AI models – Obligations for providers of general-purpose AI models) and 3 (General-purpose AI models – Obligations of providers of general-purpose AI models with systemic risk) where the following conditions have been fulfilled:
No specific timeline
EU Commission to adopt delegated acts to amend the thresholds listed in Article 51(1) and (2) (Classification of general-purpose AI models as general-purpose AI models with systemic risk), as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, where necessary for these thresholds to reflect the state of the art.
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Annex XIII (Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51) by specifying and updating the criteria set out in that Annex.
No specific timeline
EU Commission empowered, for the purpose of facilitating compliance with Annex XI (Technical documentation referred to in Article 53(1)(a) – technical documentation for providers of general-purpose AI models), in particular points (2)(d) and (e) thereof, to adopt delegated acts to detail measurement and calculation methodologies with a view to allowing for comparable and verifiable documentation.
No specific timeline
EU Commission empowered to adopt delegated acts to amend Annex XI (Technical documentation referred to in Article 53(1)(a) – technical documentation for providers of general-purpose AI models) and Annex XII (Transparency information referred to in Article 53(1)(b) – technical documentation for providers of general-purpose AI models to downstream providers that integrate the model into their AI system) in light of evolving technological developments.
2 August 2026
The AI Act starts applying in full, with the exception of Article 6(1) (Classification rules for high-risk AI systems) and the corresponding obligations which will apply from 2 August 2027.
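The staggered application dates above can be captured in a small lookup. The sketch below is illustrative only: it uses the dates from this timeline, but the milestone labels are shorthand summaries rather than official terms from the Act.

```python
from datetime import date

# Staggered application milestones of the AI Act, as listed in this
# timeline. Labels are informal summaries, not the Act's own wording.
MILESTONES = {
    date(2024, 8, 1): "AI Act enters into force",
    date(2025, 2, 2): "Chapters I and II (general provisions, prohibited AI practices) apply",
    date(2025, 8, 2): "Notified-body, GPAI, governance, penalty and confidentiality provisions apply",
    date(2026, 8, 2): "AI Act applies in full, except Article 6(1)",
    date(2027, 8, 2): "Article 6(1) classification rules and corresponding obligations apply",
}

def milestones_reached(on: date) -> list[str]:
    """Return the milestones already applicable on a given date, oldest first."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]
```

For example, `milestones_reached(date(2025, 9, 1))` returns the first three milestones, reflecting that the prohibited-practices and GPAI/governance provisions already apply at that point while full application is still pending.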
2 August 2026
Without prejudice to the application of Article 5 (Prohibited AI practices), the AI Act applies to operators of high-risk AI systems, other than the systems referred to in Article 111(1) (i.e. AI systems which are components of large-scale IT systems listed in Annex X), that have been placed on the market or put into service before 2 August 2026 only if, as from that date, those systems are subject to significant changes in their design.
2 August 2026
Member States to ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which must be operational by 2 August 2026.
No specific timeline
EU Commission to adopt implementing acts specifying the detailed arrangements for the establishment, development, implementation, operation and supervision of the AI regulatory sandboxes. The implementing acts must include common principles on the following issues:
No specific timeline
EU Commission to adopt an implementing act making provision for the establishment of a scientific panel of independent experts intended to support the enforcement activities under the AI Act.
No specific timeline
EU Commission to adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof.
2 August 2027
Article 6(1) (Classification rules for high-risk AI systems) and the corresponding obligations in the AI Act start applying.
2 August 2027
Without prejudice to the application of Article 5 (Prohibited AI practices), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X (EU legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) that have been placed on the market or put into service before 2 August 2027 must be brought into compliance with the AI Act by 31 December 2030.
2 August 2027
Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must have taken the necessary steps in order to comply with the obligations laid down in the AI Act.
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Article 6(3), second subparagraph, (Classification rules for high-risk AI systems) by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III (High-risk AI systems referred to in Article 6(2)), but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. EU Commission to adopt delegated acts in order to amend Article 6(3), second subparagraph, by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by the AI Act.
No specific timeline
EU Commission empowered to adopt delegated acts to amend Annex III (High-risk AI systems referred to in Article 6(2)) by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled:
EU Commission empowered to adopt delegated acts to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Annex IV (Technical documentation referred to in Article 11(1)), where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in Section 2 (Requirements for high-risk AI systems) of Chapter III (High-risk AI systems).
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Annex VI (Conformity assessment procedure based on internal control) and Annex VII (Conformity based on an assessment of the quality management system and an assessment of the technical documentation) by updating them in light of technical progress.
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Article 43(1) and (2) (Conformity assessment), in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III (High-risk AI systems referred to in Article 6(2)) to the conformity assessment procedure referred to in Annex VII (Conformity based on an assessment of the quality management system and an assessment of the technical documentation) or parts thereof.
No specific timeline
EU Commission empowered to adopt delegated acts in order to amend Annex V (EU declaration of conformity) by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress.
2 August 2028
EU Commission to evaluate and report to the EU Parliament and the Council of the EU on the following:
EU Commission to report every four years.
2 August 2028
EU Commission to evaluate and report to the EU Parliament and the Council of the EU on the functioning of the AI Office, whether the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of the AI Act to upgrade the AI Office and its enforcement competences and to increase its resources.
2 August 2028
EU Commission to report to the EU Parliament and the Council of the EU on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and to assess the need for further measures or actions, including binding measures or actions. EU Commission to report every four years.
2 August 2028
EU Commission to evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 (High-risk AI systems – Requirements for high-risk AI systems) to AI systems other than high-risk AI systems, and possibly of other additional requirements for such systems, including as regards environmental sustainability. EU Commission to repeat this evaluation every three years.
2 August 2029
EU Commission to report on the evaluation and review of the AI Act to the EU Parliament and to the Council of the EU. The report must include an assessment with regard to the structure of enforcement and the possible need for an EU agency to resolve any identified shortcomings. On the basis of the findings, that report must, where appropriate, be accompanied by a proposal for amendment of the AI Act. EU Commission to report every four years.
2 August 2030
Providers and deployers of high-risk AI systems intended to be used by public authorities must comply with the requirements and obligations of the AI Act.
31 December 2030
Without prejudice to the application of Article 5 (Prohibited AI practices), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X (EU legislative acts on large-scale IT systems in the area of Freedom, Security and Justice) that have been placed on the market or put into service before 2 August 2027 must comply with the AI Act.
2 August 2031
EU Commission to carry out an assessment of the enforcement of the AI Act and report on it to the EU Parliament, the Council of the EU and the European Economic and Social Committee, taking into account the first years of application of the AI Act. On the basis of the findings, that report must, where appropriate, be accompanied by a proposal for amendment of the AI Act with regard to the structure of enforcement and the need for an EU agency to resolve any identified shortcomings.