Legal development

The EU AI Act is here - What you need to know and what to do next


    The Ashurst Emerging Tech Series is a collection of briefings compiled by our UK and European Digital Economy Transactions team, designed to help businesses understand and prepare for the impacts of new and incoming digital services and emerging tech legislation.

    In our fourth briefing, we consider the EU Artificial Intelligence Act (AI Act).

    1. Introduction

    If you're in any way interested in AI, you're probably aware of the EU AI Act - the world's first comprehensive regulatory framework for AI.

    The AI Act covers the lifecycle of "AI systems": from development to use and all the stages in between. It has the combined aims of ensuring that AI is human-centric (overseen by humans), safe and trustworthy, while also enabling the EU to be a world-class hub for AI innovation – which is no mean feat.

    Following a long period of negotiation – including political agreement by the EU institutions in December 2023, a vote by the European Parliament on 13 March 2024 and final approval by Member States – the AI Act was published in the Official Journal of the European Union on 12 July 2024 and enters into force with direct effect in Member States from 1 August 2024, subject to the transition periods for compliance we describe below.

    To help businesses prepare, we've set out below a summary of the AI Act based on the final text, together with our suggested steps to get on the front foot of compliance.

    2. Key takeaways

     

    3. What you should do now

    If you think you will be subject to the AI Act, be proactive! Consider how the AI Act will apply to your business and start working towards compliance now.

    Scope analysis: businesses should start mapping their AI systems and prepare a documented assessment of how the AI Act is likely to apply (a minimal inventory sketch follows this list). This will provide early indications of whether an AI system used by a business:

    • is subject to the AI Act, considering its extra-territorial scope (see "Where" in section 4 below);
    • falls outside of the AI Act due to being low-risk (e.g. spam filtering systems) and therefore does not require any additional compliance steps (see "No or minimal risk" in section 5 below);
    • may require some additional transparency compliance steps to be taken due to the way in which it interacts with people (see "Limited risk" in section 5 below);
    • is likely to be considered high-risk, or perhaps prohibited, under the AI Act, and what additional compliance steps would be required (see "High risk" and "Unacceptable risk" in section 5 below); and
    • is subject to the GPAI rules, or whether the business incorporates any GPAI into its AI systems (and may therefore depend on GPAI developers' compliance with the AI Act's information sharing requirements, in order to fulfil its own obligations) (see section 8 below).
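
    To make the scope-mapping exercise concrete, below is a minimal sketch (in Python) of the kind of inventory record a business might keep for each AI system. All names and fields are our own illustration, not terminology prescribed by the AI Act.

        from dataclasses import dataclass, field

        @dataclass
        class AISystemRecord:
            """One row in an illustrative AI Act scope-mapping inventory."""
            name: str
            role: str                        # provider / deployer / importer / distributor
            supplied_in_eu: bool             # is the system supplied within the EU?
            output_used_in_eu: bool          # is the system's output used within the EU?
            incorporates_gpai: bool = False  # relies on a GPAI developer's disclosures?
            preliminary_tier: str = "unclassified"  # minimal / limited / high / prohibited
            notes: list[str] = field(default_factory=list)

        # Example entry for a hypothetical internal chatbot:
        record = AISystemRecord(
            name="internal support chatbot",
            role="deployer",
            supplied_in_eu=True,
            output_used_in_eu=True,
            preliminary_tier="limited",  # interacts directly with people
        )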

    Impact / compliance assessment: once it is understood how the AI Act will apply, businesses should conduct a detailed assessment of the areas in which their current systems, processes and controls do not meet AI Act requirements. This should include:

    • identifying the specific compliance gaps that will need remediation;
    • assessing the scale of compliance resource the business might need to commit, and the compliance processes, assessments and documentation that might need to be implemented;
    • identifying any strategic changes the business might need to make (e.g. aiming for AI Act compliance throughout the business or reducing availability of certain products in the EU); and
    • developing an AI governance framework appropriate to the business's AI systems and use cases.

    4. Scope

    What:

    • The AI Act will regulate "AI systems" and GPAI models.
    • The definition is intended to distinguish AI systems from simpler, traditional software systems:

      "AI system" is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

    Who:

    The key roles regulated by the AI Act are:

    • Provider – an entity which (1) develops, or instructs the development of, an AI system or GPAI model; (2) places that system on the market or puts it into service in the EU; (3) does so under its own name or trademark; and (4) does so whether for payment or free of charge.

    • Deployer – an entity which uses an AI system in the course of a professional activity (i.e. personal, non-professional users are not deployers).

    • Importer – an entity which (1) is located or established in the EU; and (2) places on the market in the EU an AI system that bears the name or trademark of an entity established outside the EU.

    • Distributor – an entity in the supply chain (other than a provider or importer) that makes an AI system available in the EU.

    Collectively, the above members of the "AI value chain" are referred to as "operators" in the AI Act.

    Obligations under the AI Act will differ depending on the role of the operator, with most obligations placed on providers of AI systems.

    Distributors, importers, deployers and any other third party may be considered a "provider" of a high-risk AI system for the purposes of the AI Act where they:

    • put their name or trade mark on such high-risk AI system;
    • make "substantial modifications" to the high-risk AI system; or
    • modify the intended purpose of an AI system (including GPAI).

    The AI Act therefore has the potential to affect many types of organisations. In particular, it may regulate any business:

    • whose employees use an AI system in performing their role;
    • which adapts or customises a third party AI system for its own use; or
    • which creates and implements its own AI system.

    Where:

    Similar to the GDPR, the AI Act will have extra-territorial effect, meaning it will apply to the following (illustrated in the sketch after this list):

    • businesses supplying AI systems within the EU (regardless of whether they are established in the EU or not); and
    • businesses located outside of the EU, if the output of their AI system is used within the EU.
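
    As a rough illustration of these two limbs, the following Python check (our own simplification, not a legal test) mirrors the bullets above.

        def ai_act_may_apply(supplies_in_eu: bool,
                             established_in_eu: bool,
                             output_used_in_eu: bool) -> bool:
            """Simplified territorial check mirroring the two limbs above."""
            # Limb 1: supplying AI systems within the EU, wherever established.
            if supplies_in_eu:
                return True
            # Limb 2: established outside the EU, but output is used in the EU.
            return (not established_in_eu) and output_used_in_eu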

    Exclusions:

    The AI Act will not apply to AI systems used:

    • for personal, non-professional activities;
    • exclusively for military, defence or national security purposes; or
    • for the sole purpose of scientific research and development.

    The AI Act will also not apply to areas outside of the scope of EU law (e.g. to AI systems that have no impact on EU citizens or use in the EU).

    5. Risk classifications for AI systems

    The AI Act will take a risk-based approach, with different requirements applying to different risk classes, as summarised below (and drawn together in a short classification sketch at the end of this section):

    No or minimal risk

    Description: AI systems not caught by any of the below risk categories.

    Key requirements: Not subject to the AI Act; however, compliance with voluntary AI codes of conduct is encouraged.

    Relevant AI systems: For example, AI systems used for spam filtering.

    Limited risk

    Description: AI systems deemed to pose a limited risk, but which must be operated transparently.

    Key requirements: Transparency obligations vary depending on the AI system, but generally providers / deployers must make it transparent that the relevant interaction, content, decision or other output is created by AI.

    Relevant AI systems: Subject to defined limited exceptions, AI systems used or intended to be used:

    • to directly interact with people (e.g. chatbots);
    • to generate synthetic audio, image, video, or text content (including GPAI);
    • for emotion recognition or biometric categorisation;
    • to generate or manipulate image, audio or video content as "deep fakes"; and
    • to generate or manipulate text which is published to inform the public on matters of public interest.

    Certain emotion recognition and remote biometric categorisation systems will be classed as high-risk and will be subject to the requirements for high-risk systems listed below as well as the transparency requirements.

    High risk

    Description: AI systems specifically listed as high-risk in the AI Act – broadly, systems that create a high risk to the health and safety or fundamental rights of people.

    Key requirements: High-risk AI systems are subject to comprehensive compliance requirements, covering seven main areas:

    1. risk management systems – a risk management system must be "established, implemented, documented and maintained" for high-risk AI systems. It must be developed iteratively, planned and run throughout the entire lifecycle of the system, and subject to systematic review and updates;

    2. data governance and management practices – training, validation and testing data sets for high-risk AI systems must be subject to appropriate data governance and management practices, with the AI Act setting out specific practices which must be considered, including data collection processes and appropriate measures to detect, prevent and mitigate biases;

    3. technical documentation – a requirement to draw up relevant technical documentation for a high-risk AI system before it is placed on the market or "put into service" (i.e. installed for use) and that should be kept up to date;

    4. record keeping / logging – high-risk AI systems must "technically allow for" automatic logs to be recorded over the lifetime of the system in order to ensure a level of traceability of the system's functioning appropriate to the intended purpose of the system;

    5. transparency / provision of information to deployers – high-risk AI systems must be designed and developed to ensure sufficient transparency in order to allow deployers to interpret the output from such systems and use it appropriately. Any instructions for use must be provided in a format which is appropriate to the AI system;

    6. human oversight – "natural persons" must oversee high-risk AI systems, with a view to preventing or minimising the risks to health, safety or fundamental rights that may arise from use of such systems; and

    7. accuracy, robustness and cybersecurity – high-risk AI systems are required to be designed and developed to achieve an appropriate level of accuracy, robustness and cyber security and to perform in accordance with these levels throughout their lifecycle.

    Certain operators of high-risk AI systems are subject to additional obligations, as set out in section 6 below. 

    Relevant AI systems: Subject to defined exceptions, and except where prohibited (see "Unacceptable risk" below), AI systems used or intended to be used:

    • as a safety component of a product, or as a product in their own right, regulated by certain EU legislation listed in the AI Act and (in each case) required to undergo a conformity assessment before being placed on the market in the EU;

    • for (i) remote biometric identification (other than where the sole purpose is ID verification), (ii) biometric categorisation according to sensitive or protected attributes or (iii) emotion recognition;

    • as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity;

    • in education and vocational training for admissions, evaluation of learning outcomes, assessing the appropriate level of education to be received by a student, and monitoring and detecting prohibited student behaviour in testing environments;

    • in employment, in respect of decisions on recruitment, the "work-related relationship", promotions, contract terminations, task allocation, and monitoring or evaluating performance and behaviour;

    • to make decisions which affect access to essential public and private services and benefits in defined areas, including credit scoring or risk assessment for insurance;

    • in certain law enforcement contexts including polygraphs, evidence assessment and re-offending risk assessment;

    • in migration, asylum and border control management including to assist with examination of visa / asylum applications; and

    • for the administration of justice and democratic processes, including assisting the judiciary with the interpretation and application of facts and the law, or systems intended to influence the outcome of an election or referendum.

    Unacceptable risk – prohibited AI systems

    Description: AI systems that present a threat to people, as specifically defined in Article 5 of the AI Act.

    Key requirements: All such AI systems are banned under the AI Act.

    Relevant AI systems: Subject to limited, defined exceptions, AI systems that:

    • deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques (in each case) which materially distort human behaviour to circumvent free will;

    • exploit vulnerable groups (e.g. children) in order to materially distort behaviour in a way that is reasonably likely to cause harm to a person;

    • categorise people based on their biometric data to deduce or infer race, political opinions, religious or philosophical beliefs or other specific defined matters, except in respect of the labelling or filtering of lawfully acquired biometric datasets, such as images, in law enforcement;

    • evaluate or classify people based on social behaviour or known, inferred or predicted characteristics, creating a "social score" which leads to specific defined detrimental / unfavourable treatment;

    • use "real-time" biometric identification in public for law enforcement purposes (subject to narrow exceptions, such as targeted searches for missing / abducted persons and preventing terrorist attacks);

    • assess or predict the risk of people carrying out crimes based on profiling or personality trait / characteristic assessments;

    • create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV; and

    • enable emotion recognition in the workplace or educational institutions, except for medical or safety reasons.
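
    To draw the four tiers together, the short Python sketch below maps the example use cases from this section to a risk tier. The naming is our own, and real classification depends on the detailed Article 5 and Annex criteria (and their exceptions), not on keyword lookup.

        from enum import Enum

        class RiskTier(Enum):
            PROHIBITED = "unacceptable risk – banned"
            HIGH = "high risk – full compliance requirements"
            LIMITED = "limited risk – transparency obligations"
            MINIMAL = "no or minimal risk – voluntary codes of conduct"

        # Illustrative examples drawn from the tables above.
        EXAMPLE_TIERS = {
            "spam filtering": RiskTier.MINIMAL,
            "customer-facing chatbot": RiskTier.LIMITED,
            "synthetic image generation": RiskTier.LIMITED,
            "recruitment decision support": RiskTier.HIGH,
            "credit scoring": RiskTier.HIGH,
            "social scoring": RiskTier.PROHIBITED,
            "untargeted facial image scraping": RiskTier.PROHIBITED,
        }

        def preliminary_tier(use_case: str) -> RiskTier:
            # Unmatched use cases should be escalated for legal review rather
            # than assumed minimal; the default here is for illustration only.
            return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)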

    6. Operator obligations for high-risk AI systems

    As noted above, the AI Act places specific obligations on categories of "operators" in the AI value chain which depend on the operator's role, notably:

    Providers must ensure high-risk AI systems comply with the requirements for high-risk AI systems set out above, and must be able to demonstrate that compliance – including by providing reasonably requested information – when asked to do so by a national competent regulator. Additionally, providers must:

    • undertake a conformity assessment, draw up a declaration of conformity and affix a CE marking (each as prescribed under the AI Act) for the AI system prior to rollout;
    • include their name, registered trade name or registered trade mark and address on the high-risk AI system itself or where this is not possible, on the packaging;
    • retain defined categories of information about the AI system to allow for regulatory review for a minimum of ten years from the AI system being installed or offered for sale in the EU;
    • retain any automatic logs within its control for at least six months (both retention periods are illustrated in the sketch after this list);
    • register in the EU AI database;
    • implement a quality management system - to ensure compliance with the AI Act;
    • "immediately take the necessary corrective action" to bring any non-conformance of the AI system within the AI Act and inform distributors, importers, deployers and any authorised representatives accordingly; and
    • ensure that the AI system complies with certain EU regulations related to disability accessibility.
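
    The two retention periods can be illustrated with a short Python sketch (our own simplification; the precise trigger dates are as defined in the AI Act).

        from datetime import date, timedelta

        DOC_RETENTION_YEARS = 10                 # ten years from rollout in the EU
        MIN_LOG_RETENTION = timedelta(days=183)  # "at least six months", approximated

        def documentation_retention_until(placed_on_market: date) -> date:
            # Simplified: ignores the 29 February edge case.
            return placed_on_market.replace(year=placed_on_market.year + DOC_RETENTION_YEARS)

        def log_retention_until(log_created: date) -> date:
            return log_created + MIN_LOG_RETENTION

        # E.g. a system placed on the market on 2 August 2026:
        print(documentation_retention_until(date(2026, 8, 2)))  # 2036-08-02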

    Providers based outside of the EU are required to appoint an authorised representative established in the EU.

    Deployers (i.e. non-personal users) of high-risk AI systems must:

    • take appropriate technical and organisational measures to ensure that the AI system is used in accordance with instructions for use;
    • to the extent the deployer exercises control over the high-risk AI system, ensure human oversight by competent, trained and supported personnel;
    • to the extent the deployer exercises control over the high-risk AI system, ensure input data is relevant and sufficiently representative in light of the intended purpose of the AI system;
    • comply with monitoring and record keeping obligations; and
    • advise any relevant employees / employee representatives that they will be subject to the AI system.

    7. Fundamental Rights Risk Assessments for high-risk AI systems

    In addition to the obligations on operators of high-risk AI systems noted above, public body operators and private operators providing public services are required in all cases to undertake fundamental rights risk assessments in respect of any high-risk AI systems (except where the system is intended to be used as a safety component in the management and operation of critical public infrastructure).

    Private body operators must also undertake fundamental rights risk assessments in relation to a narrow set of high-risk AI systems relating to credit scoring and life/health insurance pricing.

    8. General purpose AI models

    The AI Act will have dedicated rules for GPAI models (also known as "foundation models").

    These are models which are trained on a large amount of data at scale, display significant generality, are capable of performing a wide range of distinct tasks, and can be integrated into a variety of downstream AI systems.

    Although GPAI models are essential components of AI systems, they do not constitute AI systems in their own right.

    The AI Act notes that GPAI models "may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy."

    The following risk categories apply:

    Risk Category

    Key requirements

    All GPAI models

    General transparency requirements apply to all GPAI models, including requirements for GPAI model providers to (see the documentation sketch after this list):

    • technical information – draw up and maintain technical documentation, which can be provided to competent regulatory authorities on request (including information about known or estimated energy consumption of the model);

    • compliance information for AI system providers – make available to AI system providers who intend to make use of the GPAI model certain categories of technical documentation about the GPAI model (e.g. acceptable use policies) and any other information required to enable such providers "to have a good understanding of the capabilities and limitations" of the GPAI model and "to comply with their obligations" under the AI Act;

    • copyright protection - implement a policy to ensure EU copyright rules are respected by their model (particularly where copyright holders have set limitations on how their works can be used in AI model training); and

    • summary of training data - release a sufficiently detailed summary of the data used to train the GPAI model in a prescribed template form (to be released by the new AI Office).
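
    As a rough illustration, a GPAI provider's documentation pack might be modelled as below (Python; all field names are our own and only gesture at the content the AI Act prescribes).

        from dataclasses import dataclass

        @dataclass
        class GPAIModelDocumentation:
            model_name: str
            capabilities_and_limitations: str  # to give downstream providers "a good understanding"
            acceptable_use_policy: str
            known_or_estimated_energy_consumption: str
            copyright_policy: str              # how EU copyright reservations are respected
            training_data_summary: str         # per the AI Office's prescribed template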

    GPAI models posing a "systemic risk"

    GPAI models will be deemed to pose a "systemic" risk where they have "high-impact" capabilities that can be "propagated at scale" across the AI value chain. Such models are further defined as potentially having a "significant impact on the EU internal market due to their reach or due to actual or reasonably foreseeable negative effects such GPAI model could have on public health, safety, public security, fundamental rights, or the society as a whole."

    GPAI models which pose a systemic risk must comply with the transparency requirements applicable to all GPAI models above and, in addition, providers must:

    • EU Commission notification – notify the EU Commission without undue delay (and in any event within two weeks) of the fact that the GPAI model presents a systemic risk;

    • risk mitigation evaluation - perform model evaluation in order to mitigate systemic risks;

    • additional risk mitigation steps - otherwise assess and mitigate systemic risks at EU level;

    • serious incidents – keep track of, monitor and report serious incidents without undue delay to the new AI Office and, as appropriate, national competent regulators; and

    • cybersecurity - ensure cybersecurity protection for the GPAI model and the physical infrastructure of the model.

    9. Timeline

    As a starting point, most provisions will apply after a two-year transition period, i.e. from 2 August 2026 onwards, except for the following (captured in the sketch after this list):

    • AI systems posing an unacceptable risk, which will be prohibited from 2 February 2025; and
    • rules applying to GPAI / foundation models, governance and sanctions, which will apply from 2 August 2025.
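
    For reference, the phased application dates can be captured in a small Python sketch (dates as stated above; the structure is our own).

        from datetime import date

        APPLICATION_DATES = {
            "prohibitions on unacceptable-risk AI systems": date(2025, 2, 2),
            "GPAI / foundation model rules, governance and sanctions": date(2025, 8, 2),
            "most other provisions": date(2026, 8, 2),
        }

        def provisions_applying(on: date) -> list[str]:
            """Which phased rule sets already apply on a given date."""
            return [name for name, start in APPLICATION_DATES.items() if on >= start]

        print(provisions_applying(date(2025, 9, 1)))
        # ['prohibitions on unacceptable-risk AI systems',
        #  'GPAI / foundation model rules, governance and sanctions']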

    During the transition period, we can expect the EU Commission to publish various guidelines and standards and to launch the AI Pact, which is a scheme under which businesses can voluntarily commit to comply with certain obligations of the AI Act before the legal deadlines.

    10. Fines

    In-scope businesses could be subject to significant fines under the AI Act. Similar to the GDPR, these will be capped at a percentage of global annual turnover in the previous financial year or a fixed amount (whichever is higher), as follows (see the worked example after this list):

    • €35 million or 7% of global annual turnover for non-compliance with prohibited AI system rules;
    • €15 million or 3% of global annual turnover for violations of other obligations; and
    • €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information required to be provided under the AI Act.
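
    Because each cap is "whichever is higher", exposure scales with turnover. A worked example in Python (figures from the list above; the function is illustrative only):

        def fine_cap(fixed_eur: int, pct: float, global_turnover_eur: int) -> float:
            """The higher of the fixed amount and the percentage of global turnover."""
            return max(fixed_eur, pct * global_turnover_eur)

        # Prohibited-practice breach by a business with EUR 1bn global annual turnover:
        print(fine_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 – the 7% limb applies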

    The AI Act provides that penalties are to be effective, proportionate and dissuasive. Accordingly, penalties will take into account the interests and economic viability of SMEs and start-ups.

    11. The EU AI Liability Directive and Revised EU Product Liability Directive

    The AI Act seeks to reduce risks to safety and the fundamental rights of individuals; however, it is recognised that AI systems may still cause harm to end users.

    As the AI Act does not contain provisions on liability for the purposes of damages claims and compensation, two new, complementary liability regimes have been proposed by the EU Commission: the EU AI Liability Directive and the revised EU Product Liability Directive.

    This set of directives will seek to redress harm caused to individuals by AI and will be the focus of our next article in our Emerging Tech Series.

    Authors: David Futter, Partner; Aimi Gold, Senior Associate; William Barrow, Senior Associate; Sian Deighan, Associate; and with assistance from Sophie Thompson, Solicitor Apprentice

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.