Key point: The European Parliament voted to approve the EU’s Artificial Intelligence Act, which will enter into force 20 days after its publication in the EU Official Journal.

On March 13, 2024, the European Parliament voted to approve the European Union’s Artificial Intelligence Act (commonly referred to as the EU AI Act). The vote, originally scheduled for April 2024, was moved up by one month after all relevant parties reached agreement on the text of the Act. Following this final vote, the EU AI Act will “enter into force” 20 days after publication in the EU Official Journal. Publication typically occurs within a few days of an affirmative vote, and entry into force starts the compliance timelines set out in the Act.

In terms of compliance deadlines, the law takes “full effect” two years from the date it enters into force, but, practically, many aspects of the law will apply before that two-year mark. The exception is the 36-month deadline for AI systems classified as high risk, which gives companies deploying high-risk systems more time to comply with their obligations. The timeline for complying with the various aspects of the EU AI Act, measured from the date the law enters into force, will proceed as follows (a short illustrative date calculation appears after the list):

  • 6 months: Bans on AI applications posing unacceptable risk take effect.
  • 9 months: Regulators must establish “Codes of Practice” to guide AI model compliance.
  • 12 months: Obligations for general-purpose AI models apply (a category regulated separately from “high-risk” AI systems).
  • 36 months: Obligations for “high-risk” AI systems apply.
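
To make the timeline concrete, the short Python sketch below computes each milestone from the entry-into-force date. It is purely illustrative: the publication date used is a placeholder assumption, since the actual deadlines depend on when publication in the EU Official Journal occurs.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Advance a date by a whole number of calendar months, clamping the
    day for shorter target months (e.g., Jan 31 -> Feb 28/29)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

# Placeholder assumption: the actual Official Journal publication date
# was not yet known at the time of writing.
publication_date = date(2024, 6, 1)

# The Act enters into force 20 days after publication.
entry_into_force = publication_date + timedelta(days=20)

milestones = {
    "Bans on unacceptable-risk AI applications": add_months(entry_into_force, 6),
    "Codes of Practice established": add_months(entry_into_force, 9),
    "Obligations for general-purpose AI models": add_months(entry_into_force, 12),
    "Law takes \u201cfull effect\u201d": add_months(entry_into_force, 24),
    "Obligations for high-risk AI systems": add_months(entry_into_force, 36),
}

print(f"Entry into force: {entry_into_force.isoformat()}")
for label, deadline in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{deadline.isoformat()}: {label}")
```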

The legislation aims to strike a balance between fostering innovation and ensuring that AI is trustworthy, safe, and respects the fundamental rights of EU citizens. The EU AI Act establishes obligations for AI systems based on their potential risks and the impact these systems can have on individuals in the EU. The key points of the EU AI Act include:

  • Risk-Based Approach: The EU AI Act classifies AI systems according to the level of risk they pose. The framework is built around four categories: unacceptable risk, high risk, limited risk, and minimal risk.
  • Prohibited Practices: AI systems that pose an unacceptable risk are banned. This includes AI systems that manipulate human behavior to circumvent users’ free will (e.g., subliminal techniques), biometric categorization systems that infer sensitive characteristics, systems that allow ‘social scoring’ by governments, and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Law Enforcement Exceptions: The law carves out exceptions for certain practices that would otherwise be prohibited, though it attempts to narrowly define the circumstances in which such technology can be deployed (e.g., law enforcement use of real-time biometric identification systems).
  • High-Risk AI Systems: Systems classified as high risk include AI technology used in areas such as critical infrastructure, education and vocational training, employment, essential private and public services (e.g., healthcare, banking), certain law enforcement systems, migration and border management, and justice and democratic processes (e.g., elections). These systems must undergo rigorous assessments of their safety, transparency, and data governance before being deployed.
  • Conformity Assessments: Before placing a high-risk AI system on the market, the provider must perform a conformity assessment to ensure compliance with the Act’s requirements. If the AI system conforms to the required standards, the provider affixes a CE marking, indicating conformity with EU regulations.
  • Transparency Rules: For certain AI systems that interact with people (e.g., chatbots), users must be informed that they are interacting with artificial intelligence, ensuring transparency and allowing users to make informed decisions about those interactions.
  • Data Governance: Providers of AI systems must ensure data governance measures are in place. The data used to train, validate, and test AI systems should be managed carefully to avoid risks and unintended biases.
  • Monitoring and Reporting: AI system providers and users are expected to continuously monitor the performance of high-risk systems and report any serious incidents or malfunctions to the to-be-established European Artificial Intelligence Board.
  • Measures to Support Innovation: The Act introduces regulatory sandboxes and opportunities for structured real-world testing, to be established at the national level, so that innovation can continue.
  • National Supervisory Authorities: Member states must designate or establish national supervisory authorities responsible for ensuring compliance with the Act.
  • European Artificial Intelligence Board: To facilitate consistent application of the AI Act, a European Artificial Intelligence Board (EAIB) will be established, composed of representatives from each member state and the European Commission.