The EU AI Act is here - What you need to know and what to do next
09 February 2024
The Ashurst Emerging Tech Series is a collection of briefings compiled by our UK and European Digital Economy Transactions team, designed to help businesses understand and prepare for the impacts of new and incoming digital services and emerging tech legislation.
In our fourth briefing, we consider the EU Artificial Intelligence Act (AI Act).
If you're in any way interested in AI, you're probably aware of the EU AI Act - the world's first comprehensive regulatory framework for AI.
The AI Act covers the lifecycle of "AI systems": from development to use and all the stages in between. It has the combined aims of ensuring that AI is human-centric (overseen by humans), safe and trustworthy, while also enabling the EU to be a world-class hub for AI innovation – which is no mean feat.
Following a long period of negotiation, the AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force, with direct effect in Member States, on 1 August 2024, subject to the transition periods for compliance we describe below. This followed political agreement by the EU institutions in December 2023, endorsement by Member States in February 2024 and a vote by the European Parliament on 13 March 2024.
To help businesses prepare, we've set out below a summary of the AI Act based on the final published text, together with our suggested steps for getting on the front foot with compliance.
If you think you will be subject to the AI Act, be proactive! Consider how the AI Act will apply to your business and start working towards compliance now.
Scope analysis: businesses should start mapping their AI systems and prepare a documented assessment of how the AI Act is likely to apply. This will provide early indications of whether each AI system used by the business falls within scope, the role(s) the business performs in relation to it, and the risk category into which it is likely to fall (see the sketch after these steps).
Impact / compliance assessment: once it is understood how the AI Act will apply, businesses should conduct a detailed assessment of the areas in which their current systems, processes and controls do not meet AI Act requirements, and plan how to remediate any gaps before the relevant compliance deadlines.
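To make the scope analysis concrete, the sketch below shows one way a business might record entries in an internal AI system inventory. It is a minimal illustration only: the field names, categories and example values are our own assumptions, not terminology or a format mandated by the AI Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskCategory(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "no or minimal risk"

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative only)."""
    name: str
    description: str
    roles: list[Role]               # roles the business plays for this system
    risk_category: RiskCategory     # provisional classification, to be verified
    compliance_gaps: list[str] = field(default_factory=list)

# Example: a provisional record produced during scope analysis.
chatbot = AISystemRecord(
    name="customer-support-chatbot",
    description="LLM-based chatbot answering customer queries",
    roles=[Role.DEPLOYER],
    risk_category=RiskCategory.LIMITED,  # transparency obligations likely apply
    compliance_gaps=["no user-facing AI disclosure yet"],
)
```

A documented inventory of this kind also gives the business a natural place to record the gap analysis produced in the impact / compliance assessment step.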
The key roles regulated by the AI Act are:
Provider: an entity which (1) develops, or commissions the development of, an AI system or GPAI model; (2) places that system on the market or puts it into service in the EU; (3) under its own name or trademark; (4) whether for payment or free of charge.

Deployer: an entity which uses an AI system in the course of a professional activity (i.e. personal, non-professional users are not deployers).

Importer: an entity which (1) is located or established in the EU; and (2) places on the EU market an AI system (3) that bears the name or trademark of an entity established outside the EU.

Distributor: an entity in the supply chain (other than a provider or importer) that makes an AI system available on the EU market.
Collectively, the above members of the "AI value chain" are referred to as "operators" in the AI Act.
Obligations under the AI Act will differ depending on the role of the operator, with most obligations placed on providers of AI systems.
Distributors, importers, deployers and any other third party may be considered a "provider" of a high-risk AI system for the purposes of the AI Act where they: (1) put their name or trademark on a high-risk AI system already placed on the market or put into service; (2) make a substantial modification to such a system; or (3) modify the intended purpose of an AI system in a way that makes it high-risk.
The AI Act therefore has the potential to affect many types of organisations. In particular, it may regulate any business that develops, deploys, imports or distributes AI systems in, or into, the EU.
Similar to the GDPR, the AI Act will have extra-territorial effect, meaning it will apply to: providers placing AI systems on the market or putting them into service in the EU, wherever those providers are established; deployers established or located in the EU; and providers and deployers established outside the EU where the output produced by the AI system is used in the EU.
The AI Act will not apply to AI systems used: exclusively for military, defence or national security purposes; solely for the purpose of scientific research and development; or by individuals in the course of a purely personal, non-professional activity.
The AI Act will also not apply to areas outside the scope of EU law (e.g. to AI systems that have no impact on, and are not used in, the EU).
The AI Act will take a risk-based approach, with different requirements applying to different risk classes, as summarised below:
No or minimal risk
Description: AI systems not caught by any of the risk categories below.
Key requirements: Not subject to the AI Act, although compliance with voluntary AI codes of conduct is encouraged.
Relevant AI systems: For example, AI systems used for spam filtering.

Limited risk
Description: AI systems deemed to pose a limited risk, but which must be operated transparently.
Key requirements: Transparency obligations vary depending on the AI system, but generally providers and deployers must ensure that people are made aware that the relevant interaction, content, decision or output is created by AI.
Relevant AI systems: Subject to defined limited exceptions, AI systems used or intended to be used to interact directly with individuals (such as chatbots) or to generate or manipulate synthetic audio, image, video or text content (such as "deepfakes"). Certain emotion recognition and biometric categorisation systems will be classed as high-risk and will be subject to the requirements for high-risk systems below as well as the transparency requirements.

High risk
Description: AI systems that are specifically listed as high-risk in the AI Act; broadly, these are systems that create a high risk to the health, safety or fundamental rights of people.
Key requirements: High-risk AI systems are subject to comprehensive compliance requirements covering seven main areas: risk management; data and data governance; technical documentation; record-keeping; transparency and provision of information to deployers; human oversight; and accuracy, robustness and cybersecurity. Certain operators of high-risk AI systems are subject to additional obligations, as set out in section 6 below.
Relevant AI systems: Subject to defined exceptions, and except where prohibited (see below), AI systems used or intended to be used in the areas listed in Annex III of the AI Act, including: biometrics; critical infrastructure; education and vocational training; employment and workers' management; access to essential private and public services; law enforcement; migration, asylum and border control; and the administration of justice and democratic processes.

Unacceptable risk (prohibited AI systems)
Description: AI systems that present a threat to people, as specifically defined in Article 5 of the AI Act.
Key requirements: All such AI systems are banned under the AI Act.
Relevant AI systems: Subject to limited, defined exceptions, AI systems that: deploy subliminal, manipulative or deceptive techniques; exploit the vulnerabilities of individuals; carry out social scoring; assess the risk of individuals committing criminal offences based solely on profiling; create or expand facial recognition databases through untargeted scraping of facial images; infer emotions in workplaces or educational institutions; categorise individuals on the basis of biometric data to infer sensitive characteristics; or carry out "real-time" remote biometric identification in publicly accessible spaces for law enforcement purposes.
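The classification above is, in effect, hierarchical: a system is assessed against the prohibited practices first, then against the high-risk lists, then against the transparency (limited-risk) cases, and otherwise falls into the minimal-risk category. A minimal sketch of that triage order is below; the function and flags are our own illustration, not a tool prescribed by the AI Act, and each flag stands in for a detailed legal assessment.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "no or minimal risk"

def classify(is_prohibited_practice: bool,
             is_listed_high_risk: bool,
             has_transparency_trigger: bool) -> RiskCategory:
    """Illustrative triage order only; each flag represents a legal
    assessment against Article 5, the high-risk lists, and the
    transparency provisions respectively."""
    if is_prohibited_practice:       # Article 5 practices are checked first
        return RiskCategory.PROHIBITED
    if is_listed_high_risk:          # then the high-risk lists
        return RiskCategory.HIGH
    if has_transparency_trigger:     # then limited-risk transparency cases
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL      # everything else

# Example: a chatbot that is neither prohibited nor listed as high-risk.
assert classify(False, False, True) is RiskCategory.LIMITED
```

Note that in practice the categories are not strictly exclusive: as flagged above, certain systems (for example, some emotion recognition systems) attract both the high-risk and the transparency requirements, which this simple sketch does not capture.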
As noted above, the AI Act places specific obligations on categories of "operators" in the AI value chain which depend on the operator's role, notably:
Providers must ensure high-risk AI systems comply with the requirements for high-risk AI systems set out above and must be able to demonstrate that compliance, including by providing information reasonably requested by a national competent authority. Additionally, providers must: implement a quality management system; maintain technical documentation and automatically generated logs; ensure the system undergoes the relevant conformity assessment procedure before being placed on the market or put into service; draw up an EU declaration of conformity and affix CE marking; register the system in the EU database for high-risk AI systems; and take corrective action, and inform authorities, where the system is not in conformity.
Providers based outside the EU must also appoint an authorised representative established in the EU.
Deployers (i.e. non-personal users) of high-risk AI systems must: use the system in accordance with its instructions for use; assign human oversight to people with appropriate competence, training and authority; ensure that input data under their control is relevant and sufficiently representative; monitor the operation of the system and inform the provider (and, where relevant, authorities) of risks and serious incidents; retain automatically generated logs under their control; and inform affected workers before using high-risk AI systems in the workplace.
In addition to the obligations on operators of high-risk AI systems noted above, deployers that are public bodies, and private deployers providing public services, are required in all cases to undertake fundamental rights impact assessments in respect of any high-risk AI systems (except where the system is intended to be used as a safety component in the management and operation of critical infrastructure).

Private deployers must also undertake fundamental rights impact assessments in relation to a narrow set of high-risk AI systems relating to credit scoring and life/health insurance pricing.
The AI Act will have dedicated rules for GPAI models (also known as "foundation models").
These are models which are trained on large amounts of data at scale, display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream AI systems or applications.
Although GPAI models are essential components of AI systems, they do not constitute AI systems in their own right.
The AI Act notes that GPAI models "may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy."
The following risk categories apply:
All GPAI models
Key requirements: General transparency requirements apply to all GPAI models, including a requirement for GPAI model providers to: draw up and maintain technical documentation for the model; make information and documentation available to downstream providers that integrate the model into their own AI systems; put in place a policy to comply with EU copyright law; and publish a sufficiently detailed summary of the content used to train the model.

GPAI models posing a "systemic risk"
Key requirements: GPAI models will be deemed to pose a "systemic risk" where they have "high-impact" capabilities whose effects can be "propagated at scale" across the AI value chain. Such models are further defined as potentially having a "significant impact on the EU internal market due to their reach or due to actual or reasonably foreseeable negative effects such GPAI model could have on public health, safety, public security, fundamental rights, or the society as a whole". High-impact capabilities are presumed where the cumulative amount of computation used to train the model exceeds 10^25 floating-point operations (FLOPs). GPAI models which pose a systemic risk must comply with the transparency requirements applicable to all GPAI models above and, in addition, their providers must: perform model evaluations, including adversarial testing, to identify and mitigate systemic risks; assess and mitigate possible systemic risks at EU level; track, document and report serious incidents and possible corrective measures to the AI Office and, where relevant, national authorities without undue delay; and ensure an adequate level of cybersecurity protection for the model and its infrastructure.
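For a rough sense of where the 10^25 FLOP presumption bites, the sketch below applies the commonly used "6 x parameters x training tokens" approximation for transformer training compute. Both that heuristic and the example figures are illustrative assumptions on our part, not part of the AI Act.

```python
# Rough training-compute check against the AI Act's 1e25 FLOP presumption.
# Uses the common heuristic: training FLOPs ~= 6 * parameters * training tokens.
# All figures below are illustrative assumptions, not real model data.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # the AI Act's presumption threshold

def approx_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = approx_training_flops(100e9, 10e12)  # = 6e24 FLOPs
print(f"{flops:.1e} FLOPs -> presumed systemic risk: "
      f"{flops > SYSTEMIC_RISK_FLOP_THRESHOLD}")
```

In this hypothetical, the model falls just below the threshold; a modestly larger training run would cross it, which illustrates how close current frontier-scale training is to the presumption.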
As a starting point, most provisions will apply after a two-year transition period, i.e. from 2 August 2026 onwards, except for: the prohibitions on unacceptable-risk AI systems and the AI literacy obligations, which apply from 2 February 2025; the rules on GPAI models and the governance and penalties provisions, which apply from 2 August 2025; and the obligations for certain high-risk AI systems embedded in regulated products, which apply from 2 August 2027.
During the transition period, we can expect the EU Commission to publish various guidelines and standards and to launch the AI Pact, which is a scheme under which businesses can voluntarily commit to comply with certain obligations of the AI Act before the legal deadlines.
In-scope businesses could be subject to significant fines under the AI Act. Similar to the GDPR, these will be capped at a percentage of global annual turnover in the previous financial year or a fixed amount (whichever is higher), as follows: up to EUR 35 million or 7% of turnover for breaches of the prohibited AI practices; up to EUR 15 million or 3% of turnover for non-compliance with most other obligations (including those relating to high-risk AI systems and GPAI models); and up to EUR 7.5 million or 1% of turnover for supplying incorrect, incomplete or misleading information to notified bodies or competent authorities.
The AI Act provides that penalties must be effective, proportionate and dissuasive. Accordingly, penalties will take into account the interests and economic viability of SMEs and start-ups.
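To illustrate the "whichever is higher" mechanic, here is a minimal sketch of the cap calculation using the tiers set out above. The function name and structure are our own; the actual fine in any given case would be set by the regulator within these caps.

```python
def max_fine_cap(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Upper bound of a fine tier: the higher of the fixed amount and the
    percentage of global annual turnover (illustrative sketch only)."""
    return max(fixed_eur, pct * turnover_eur)

# Example: a company with EUR 2bn global annual turnover breaching the
# prohibited-practices tier (up to EUR 35m or 7%, whichever is higher).
cap = max_fine_cap(turnover_eur=2e9, pct=0.07, fixed_eur=35e6)
print(f"Maximum fine cap: EUR {cap:,.0f}")  # EUR 140,000,000 (7% exceeds EUR 35m)
```

For smaller businesses the fixed amount dominates instead: at EUR 100 million turnover, 7% is EUR 7 million, so the EUR 35 million figure sets the cap.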
The AI Act seeks to reduce risks to the safety and fundamental rights of individuals; however, it is recognised that AI systems may still cause harm to end users.
As the AI Act does not contain provisions on liability for the purposes of damages claims and compensation, two new, complementary liability regimes have been proposed by the EU Commission: the EU AI Liability Directive and the revised EU Product Liability Directive.
This set of directives will seek to redress harm caused to individuals by AI and will be the focus of our next article in our Emerging Tech Series.
Authors: David Futter, Partner; Aimi Gold, Senior Associate; William Barrow, Senior Associate; Sian Deighan, Associate; and with assistance from Sophie Thompson, Solicitor Apprentice
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.