Key point: The AI Act is the first legislation of its kind and is expected to have a significant impact on companies globally.
The European Parliament recently voted in favor of the Artificial Intelligence Act (“AI Act”) by an overwhelming majority. Once finalized, the AI Act will have a widespread impact on entities using artificial intelligence (“AI”) in their business operations. Similar to the European Union’s (“EU”) General Data Protection Regulation, the AI Act will apply extraterritorially to providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established in the EU or in a third country.
The AI Act is dense and expansive. For entities looking for an introduction to the topic, below we provide a brief overview of the current legislation, as well as what you can expect procedurally as the AI Act progresses toward final passage.
The AI Act takes a risk-based approach, classifying AI applications into three risk levels: “unacceptable risk,” “high risk,” and “low or minimal risk.” The AI Act leaves applications categorized as “low or minimal risk” largely unregulated.
Applications classified as creating unacceptable risk are considered to violate the fundamental rights of natural persons (as guaranteed to all EU citizens in the Charter of Fundamental Rights of the European Union) because of their potential to manipulate individuals through subliminal techniques or to exploit vulnerable groups. AI-based social scoring by public authorities is generally considered an unacceptable risk, as are real-time remote biometric identification systems used in public spaces for law enforcement purposes. Applications that pose unacceptable risks are prohibited in the European market.
Applications categorized as high risk are permitted subject to certain compliance requirements, discussed further below. High risk applications fall into two main categories: (1) systems intended for use as a safety component that are already subject to established third-party assessments (e.g., medical devices); and (2) other applications that may implicate fundamental rights but for which no separate third-party assessment process is currently established in law or industry. Applications currently categorized as high risk are listed in Annex III of the proposed legislation.
Requirements for high risk applications are set out in Chapter 2, Articles 8 through 15. The proposed text requires the following:
- Implementation and maintenance of a risk management system for high risk applications;
- Training, validation, and testing data sets that meet specified quality criteria;
- Up-to-date technical documentation that demonstrates compliance with the AI Act;
- Automatic logging of events while an application is operating;
- Transparency that allows users to interpret AI outputs and use an application appropriately;
- Human oversight; and
- “Appropriate level of accuracy, robustness, and cybersecurity.”
A summary of the obligations of a provider of high risk applications is provided here; they are articulated in more detail in Chapter 3, Articles 16 through 29:
- Ensure compliance with the AI Act;
- Maintain a quality management system;
- Provide technical documentation;
- Keep automatic logs;
- Ensure high risk applications undergo conformity assessments where applicable;
- Comply with registration obligations;
- Take corrective actions where necessary;
- Inform competent authorities of non-compliance and corrective actions taken, where applicable;
- Mark high risk applications with the required marking to indicate conformity;
- Demonstrate compliance upon request from competent authorities; and
- Appoint an authorized representative in the European Union.
There are further obligations if you are a product manufacturer, importer, or distributor of high risk AI applications. Several issues, including enforcement and regulatory authority under the AI Act, remain to be determined as part of the “trilogue” discussed below.
While the concept of “automated decision making” and the requirement to disclose the underlying reasoning behind it existed in the EU’s General Data Protection Regulation, there was previously very little specific guidance on how such information processing was viewed or regulated. The AI Act takes the regulation of similar processing activity to an entirely new level, with the potential for a new, complex regulatory process (including the potential for new regulators) that companies will need to navigate carefully.
With passage by the European Parliament on June 14, 2023, the AI Act has entered the “trilogue” phase of the EU’s legislative procedure. The trilogue involves informal negotiations among the European Commission, the European Parliament, and the Council of the European Union to reconcile differences between the versions of the AI Act passed by those bodies and agree on the final text of the legislation.
After the participating parties have finalized the legislative language, it is documented in the form of a provisional agreement and submitted to the European Parliament and the Council for separate, formal adoption. After formal adoption, the legislation is published with implementation dates and other key information. The timeline of the trilogue process can vary significantly depending on the complexity of the legislation.