Data Bytes 47: Your UK and European Data Privacy update for May 2024
06 June 2024
A warm welcome to our May edition of Data Bytes. With the headline news in the last month being the upcoming general election on 4 July and the accompanying news that the DPDI Bill has been dropped, the focus on data protection-related legislation has passed over to AI. Will the rumours circulating that a Labour government would introduce a similar piece of legislation to the EU AI Act come to fruition? See our byte-sized updates on this and more below.
Keep scrolling to our spotlight section, where we have worked with our colleagues in Ashurst's Financial Regulation and Competition teams to summarise key takeaways from the updates published by the ICO, FCA, Ofcom and the CMA (who form part of the Digital Regulatory Cooperation Forum (DRCF)), as well as the PRA, on their proposed strategies for the regulation of AI.
As a result of the upcoming snap general election on 4 July, the Data Protection and Digital Information Bill (DPDI Bill) and Private Member's Bill on AI did not make it to the 'wash up' period and were therefore both dropped. We wrote about the DPDI Bill and the Private Member's Bill on AI here and here.
The 'wash up' period comes after an election is called when the government must decide which bills it wants to prioritise passing before parliament is prorogued. We'll have to keep a watching brief on whether the next government will reintroduce the DPDI Bill and any AI legislation but there are rumours that if we have a new Labour government, an AI bill mirroring the EU AI Act will be announced in the King's Speech.
The ICO has released its fourth call for evidence as part of its consultation series regarding generative AI, this time focussing on individual rights in relation to the training and fine-tuning of generative AI.
The call for evidence (open until 10 June), directed primarily at GenAI developers, looks at how data subject rights would be respected with respect to third-party sources such as web-scraped datasets; whether input and output filters are a sufficient mechanism for giving effect to people's rights; and what mitigation measures should be in place for group data subject rights. The ICO appears to be requesting this information to understand the current industry approach in practice, so that it can ultimately measure and benchmark GenAI developers.
On 21 May, the ICO confirmed that its investigation into Snap Inc's 'My AI' GenAI chatbot had concluded. For details about the investigation and the preliminary enforcement notice issued in October, see here.
The ICO was satisfied that Snap had conducted a thorough and compliant assessment of the risks posed by 'My AI' and implemented appropriate mitigations. Whilst this is clearly a win for Snap and its 'My AI' GenAI chatbot, the message from Stephen Almond, ICO Executive Director of Regulatory Risk, should still be a warning bell for organisations developing or using generative AI to consider data protection from the outset, not least by: "rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market". The investigation's quick resolution without fines was notable, perhaps indicating a more flexible enforcement approach adopted by the ICO.
The House of Commons Science, Innovation and Technology Committee published its Governance of AI report on 28 May, updating the 12 challenges of AI governance from its interim report, which it suggests could be used to form or guide the UK's approach to AI regulation.
The Report identifies the biggest challenge as the 'Black Box Challenge': the workings of some AI models are complex and 'unexplainable', and the Report suggests that regulators should accept this and focus on interrogating and verifying outputs. Whilst this seems to conflict with the principle of 'appropriate transparency and explainability' in the AI White Paper 2023, it represents a more pragmatic approach which would be welcomed by GenAI developers.
Following the publication of the ICO's content moderation guidance in February (please see here for more details), the ICO held a webinar on its guidance which Ashurst attended. Content moderation is the assessment of whether user-generated content meets certain standards and moderation actions may include content removal, service bans, feature blocking and visibility reduction. The ICO noted:
On 1 May 2024 the ICO and Ofcom (the regulator for online safety in the UK under the Online Safety Act) published a joint statement setting out their vision for a "clear and coherent regulatory landscape for online services", their collaboration to "share intelligence and tackle non-compliance" and the importance of compliance with both online safety and data protection rules, despite a seeming tension between the two.
In particular, the statement outlines that they will focus on collaboration and maximising coherence, ensuring that policies issued by each regulator are consistent with the other's regulatory requirements and guidance, and that in the event of conflict between privacy and safety objectives, clarity will be given on how compliance can be achieved under both regimes.
The ICO has released an insights report on lessons learned from data breaches, aimed at DPOs and data protection teams, as well as information security teams and CISOs managing information security. The report draws on case studies from its regulatory activities, setting out what could have been done differently to mitigate the risk of an attack succeeding and the likely future developments of each category of attack.
The case studies provide illuminating insights into how certain types of attacks, namely phishing, brute force attacks, denial of service, errors and supply chain attacks, occur and the potential harm caused. By way of example, in the phishing case study, an employee simply opened an email marked 'urgent review', then downloaded and extracted the linked ZIP file, which led to the installation of malware and ultimately resulted in the threat actor encrypting personal data on four HR databases, affecting over 100,000 people and including special category data.
It is an interesting read that showcases common real-life security mistakes, provides insight into what the ICO expects of organisations, and should be used as a benchmark against your own organisation's information security policies and procedures.
On 21 May 2024 the European Council gave its final approval to the AI Act, a pioneering piece of legislation designed to establish a unified framework for the regulation of artificial intelligence within the European Union. It is the first legal framework of its kind globally, aiming to set an international benchmark for AI governance. The AI Act's primary goal is to encourage the safe and ethical development and adoption of AI technologies throughout the EU, and it seeks to protect the fundamental rights of EU citizens and society while also promoting investment and innovation in the field of AI.
To learn more about next steps for the AI Act, click here.
On 17 May 2024, the Council of Europe adopted an international treaty, the Framework Convention on Artificial Intelligence (FCAI), the first of its kind with binding effect at an international level. Under the FCAI the signatories (participating states) commit to adopting new measures, and applying existing ones, including legislation, to ensure that AI systems are used in the private and public sectors in a manner that respects human rights, the rule of law and democratic principles. The participating states undertake to address the entire lifecycle of AI systems, from design to decommissioning, and to manage the risks associated with their use while fostering responsible innovation. The FCAI will be opened for signature in Vilnius, Lithuania, on 5 September 2024, during a conference of the Ministers of Justice.
The FCAI is the result of two years of work by the Committee on Artificial Intelligence, which included member states of the Council of Europe, the European Union and 11 non-member states, as well as observers from the private sector, civil society and academia.

The FCAI emphasises a risk-based approach, requiring the participating states to adopt measures to identify, assess, prevent and mitigate AI-related risks. It calls for transparency, oversight and accountability, ensuring that AI systems do not discriminate, respect privacy and uphold equality. The FCAI also includes measures to protect democratic institutions and processes from being undermined by AI systems. While the FCAI does not sanction a participating state's failure to fulfil its obligations under the treaty, it establishes control mechanisms, such as periodic consultations and reporting obligations between the participating states.

The FCAI is the first instrument to apply the basic ideas of the EU AI Act (albeit in less depth) beyond the boundaries of the EU via an international treaty. EU Member States will already fulfil the majority of the FCAI requirements; signing the FCAI will therefore mainly entail organisational obligations, such as reporting and cooperation obligations.
The Report is the result of the work of the ChatGPT Taskforce, a group of European Data Protection Supervisory Authorities (SAs), coordinated by the European Data Protection Board, that is investigating the ChatGPT service provided by OpenAI. The Report outlines the background, the ongoing investigations and the preliminary views of the SAs on some of the data protection principles in the context of ChatGPT, in particular lawfulness, fairness, transparency, data accuracy and data subjects' rights. The Report includes a questionnaire that the SAs have sent to OpenAI to learn more about how ChatGPT processes personal data.
Find out more about key findings of the report here.
On 24 April 2024, the EDPB published its Rules of Procedure for the "Informal Panel of EU DPAs" (Panel), a voluntary dispute resolution procedure under the EU-US Data Privacy Framework (DPF) consisting of one lead EU data protection authority (DPA) and two co-reviewer DPAs. The Panel will deal with complaints from data subjects in relation to data transfers from the EU to the US. Equally, US data importers can initiate a procedure with the Panel if they seek clarification on a particular question regarding a data transfer from the EU to the US, or can subject themselves to the Panel procedure by making a corresponding declaration as part of their self-certification submission under the DPF.
The Panel will provide binding advice to the data importer within an indicative period of 60 days of receiving a complaint from a data subject or a request from a data importer, after giving both sides a reasonable period to comment and provide evidence. The Panel's advice is intended to bring the processing activities in line with the DPF principles and may include remedies to ensure that the effects of non-compliance are reversed or corrected by the data importer and, where appropriate, that the controller ceases to process the personal data of the complainant. Where the US company fails to comply, or fails to provide a satisfactory explanation for a delay, the Panel may refer the matter directly to the US authorities for enforcement action or to the US Department of Commerce (DoC) (which may remove organisations from the DPF).
The Rules of Procedure will make it easier for data subjects to pursue redress where they suspect unlawful processing of their personal data by a US-based data importer. US companies that are certified under the DPF are well advised to review the Rules of Procedure and, as appropriate, adjust their internal complaint-handling procedures.
On 6 May 2024, the German Data Protection Conference (DSK) published its "Guidance on data protection and artificial intelligence" ("Orientierungshilfe KI und Datenschutz") (DSK Guidance). It provides an overview of the various criteria that controllers need to consider from a data protection perspective when using AI applications, with a particular focus on the use of large language models. The DSK Guidance is intended to help controllers – and, by implication, developers and providers of AI systems – select, implement and use AI applications in a way that respects the rights and freedoms of data subjects and complies with the GDPR.
The DSK points out that it is not stipulating a conclusive set of requirements at this stage, and that it will revisit and enhance its guidance in light of future developments. For a more detailed analysis of DSK Guidance, please click here.
On 13 May 2024, the German legislature renamed the "Telecoms and Telemedia Data Protection Act" (the "TTDSG") as the "Telecommunications Digital Services Data Protection Act" ("TDDDG"). The main purpose of this rebadging exercise is to replace the term "telemedia services" with "digital services", in alignment with the EU Digital Services Act (DSA). Beyond that, the material provisions of the TDDDG remain unchanged, including the particular data protection rules for telecommunications services also contained in the TDDDG.
Following the EU Data Act's entry into force on 11 January 2024, the European Commission issued an overview of the Act, which, as a reminder, aims to create a fair and competitive market for data by setting out rules regarding transparency, data access and data-sharing rights.
The document issued by the European Commission is a practical guide which sets out the objectives for each chapter of the Act and how it works in practice.
Notably, the European Commission clarifies the definition of "data holder" and outlines that:
For further information on the Data Act, please see our previous article.
In a judgment published on 30 April 2024, the French Supreme Court (Court of Cassation) found the director of an investigation company guilty of the offence of collecting personal data 'by unfair means' and imposed a fine of €20,000, along with a one-year suspended prison sentence. The director was accused of having provided investigation services on behalf of a company, carrying out surveys on the company's employees, recruitment candidates, customers and service providers, consisting of research into personal data such as criminal records, bank and telephone details, vehicles, property, status as tenant or owner, marital status, health and foreign travel.
The Court emphasised that the accessibility of the data on the internet did not mitigate the offence, as collecting it for profiling purposes without the individuals' consent is prohibited. Indeed, the Court ruled that the data had been used for purposes unrelated to those for which it was put online, and had been collected without the knowledge of the persons concerned, who were thus deprived of their right to object under the French Data Protection Act.
In mid-May, the dental clinic DENTALCUADROS BCN, S.L.P. was fined €20,000 by the Spanish DPA for its inadequate management of a security breach following a ransomware attack. The attack, which encrypted sensitive patient health data and demanded a ransom, revealed the lack of appropriate security measures and the delayed notification of the incident, thus breaching Articles 32 and 33 of the GDPR. The clinic reported the breach almost a month after it was detected, significantly exceeding the 72-hour notification timeframe.
In addition to the delayed notification, the clinic chose to inform patients verbally when they visited the clinic, disregarding the recommendation of its DPO to notify via email or postal mail. The clinic justified its decision by claiming that verbal communication was sufficient, but the DPA highlighted the inadequacy of this measure. The DPA's investigation also revealed that the last external backup had been made 37 days before the attack, which, along with the lack of appropriate security measures, contributed to the sanction. Despite initially not explaining the reasons for the delayed notification, the clinic eventually claimed ignorance of the need to inform the DPA. The clinic assumed liability and benefited from two 20% reductions for early payment and acceptance of liability.
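For teams operationalising breach response, the arithmetic of Article 33 GDPR is simple but unforgiving: notification to the supervisory authority is due within 72 hours of becoming aware of the breach. The sketch below illustrates the deadline check the clinic failed; the dates are hypothetical and are not taken from the DPA's decision.

```python
# A minimal sketch of the Article 33 GDPR deadline check.
# The dates below are hypothetical and for illustration only.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)

detected = datetime(2024, 1, 5, 9, 0)   # hypothetical: breach detected
notified = datetime(2024, 2, 2, 9, 0)   # hypothetical: DPA notified ~4 weeks later

deadline = detected + NOTIFICATION_WINDOW
if notified > deadline:
    overdue = notified - deadline
    print(f"Notification late by {overdue.days} days ({overdue})")
else:
    print("Notification made within the 72-hour window")
```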
In mid-May, the Spanish DPA fined the concert production company MOURO PRODUCCIONES, S.R.L. €20,000 for requiring, as a necessary condition for minors under 16 to attend its events, a copy of the ID card of the accompanying parent or legal guardian. The DPA initiated a sanctioning procedure following a complaint filed by a citizen and found that, by collecting a photocopy of the client's ID card, MOURO PRODUCCIONES had breached the principle of 'data minimisation', and that the document governing access for minors under 16 contained outdated legislative references.
In view of the above, the DPA found the production company to be non-compliant with data minimisation and transparency requirements and imposed a fine of €20,000; it also required the company to update its policy on granting access to minors under 16 years old and to remove the statement from the access documents that they would be invalid if the photocopy of the parent's or guardian's ID was not provided. The company paid a reduced fine, benefiting from two 20% reductions for early payment and acceptance of liability.
On 23 May 2024, the EDPB issued an Opinion on the use of facial recognition to streamline airport passenger flow. The Opinion considered that the use of facial recognition for biometric authentication at airports, for the purpose of streamlining passenger flow (security, baggage, etc.), could be compatible with the principle of integrity and confidentiality and Articles 25 and 32 GDPR. However, it noted that the enrolled biometric template would need to be stored either locally on the individual's own device or centrally within each airport, in encrypted form, with the key held only by the data subject.
Conversely, the processing will not be compatible with Articles 25 (data protection by design and by default), 5(1)(f) (integrity and confidentiality) and 32 (security of processing) GDPR if: (i) the enrolled biometric templates are not encrypted with a key kept in the data subject's possession; or (ii) the templates are stored in the cloud, even if the decryption key is held by the data subject.
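To make the EDPB's storage condition concrete, here is a minimal sketch of the pattern it describes: the operator retains only an encrypted biometric template, while the decryption key stays with the passenger (for example, on their own device). The function names are hypothetical, and a real deployment would use hardware-backed key storage and a vetted biometric SDK; this simply illustrates that the operator holds ciphertext only.

```python
# Minimal sketch (hypothetical names): the airport keeps only ciphertext;
# the passenger keeps the decryption key, e.g. on their own device.
from cryptography.fernet import Fernet  # pip install cryptography

def enrol(template: bytes) -> tuple[bytes, bytes]:
    """Encrypt a biometric template at enrolment.

    Returns (ciphertext, key): the ciphertext is stored by the airport,
    the key is handed to the data subject and never retained.
    """
    key = Fernet.generate_key()
    return Fernet(key).encrypt(template), key

def authenticate(stored_ciphertext: bytes, passenger_key: bytes) -> bytes:
    """Decrypt the template only when the passenger presents their key."""
    return Fernet(passenger_key).decrypt(stored_ciphertext)

if __name__ == "__main__":
    template = b"placeholder-biometric-template"  # stand-in for real data
    ciphertext, key = enrol(template)
    # The operator persists only `ciphertext`; `key` stays with the passenger.
    assert authenticate(ciphertext, key) == template
```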
In February 2024, the UK Government published its initial guidance for regulators on implementing the five cross-sectoral AI principles (AI Principles) identified in the UK AI White Paper. Ministers then wrote to regulators identified as having oversight of AI, requesting an update by 30 April 2024 on the steps they are taking to develop their strategic approaches to AI.
We have worked with our colleagues in Ashurst's Financial Regulation and Competition teams to summarise below the key takeaways from the updates published by the ICO, FCA, Ofcom and the CMA (who form part of the Digital Regulatory Cooperation Forum (DRCF)), as well as the PRA.
It is clear that, in general, these regulators consider their existing frameworks and expectations to align with the AI Principles; consequently, their responses refer to "fine-tuning" regulatory approaches as opposed to material uplift or legal change. We also note and discuss below that the ICO appears to be leading the charge on the oversight of AI, or at least positioning itself for such a spearheading role.
The recent announcement of a general election on 4 July 2024 means there is potential for a new government to take a different approach to AI regulation. However, any legislative change would likely take several months to develop and involve a period of consultation with industry. As a result, the messages from the regulatory updates discussed below continue to remain relevant for the immediate term.
While the UK AI White Paper did not establish an independent UK AI regulator, the ICO appears to have positioned itself in this role. The ICO's Strategic Approach on Regulating AI (ICO Response) flags that many of the risks in AI development and deployment are driven by how data, and specifically personal data, is used. Propelled by the explosion of generative AI activity and the accessibility of such AI tools, the use of personal data in AI systems has become particularly prevalent across industries, whether for AI training or in prompts. Many AI risks naturally fall within the ICO's remit, and UK data protection law is aligned with how AI risk is dealt with in the AI White Paper, given its grounding in risk-based principles.
The ICO Response emphasises that the first four AI Principles are in fact simply data protection principles. The fifth AI Principle, Contestability and Redress, is reflected in the set of information rights that data subjects can exercise (the FCA and PRA also refer to Article 22 UK GDPR rights concerning automated decision-making as an existing framework for this AI Principle).
| AI Principles | Data Protection Law Principles |
| --- | --- |
| 1. Safety, Security, Robustness | Security (integrity and confidentiality) |
| 2. Appropriate transparency and explainability | Transparency |
| 3. Fairness | Fairness and lawfulness |
| 4. Accountability and governance | Accountability |
| 5. Contestability and Redress | Reflected to an extent in: data subject rights, including Article 22 UK GDPR rights concerning automated decision-making |
AI is a distinct and key priority for the ICO for 2024–2025, as opposed to being embedded within another focus area, which is the approach a number of regulators take. The ICO's other priorities, its Children's code strategy and online tracking, also map directly to identified AI-related risks. The ICO Response outlines that the ICO has already focused on higher-risk applications of AI by providing clarity on its expectations on topics such as biometric recognition, as well as releasing general guidance to help organisations apply data protection law to AI, including its AI and Data Protection guide and its advice to developers and users of generative AI.
Uniquely, the ICO has already taken enforcement action on the use of AI, including against Clearview AI, Inc., a facial recognition database, in relation to data deletion, and most recently Snap Inc. (see above), where the ICO flagged that generative AI remains a key regulatory priority.
Regulatory cooperation is a core theme in the UK AI White Paper and is clearly reflected across the various regulatory responses. As noted above, the ICO, CMA, FCA and Ofcom are part of the DRCF, which aims to deliver a coherent approach to online regulation and launched a pilot AI and Digital Hub in April 2024 to help organisations obtain cross-cutting regulatory advice. The ICO Response emphasises the importance of active regulatory collaboration, with the ICO looking to host joint workshops to explore how the AI Principles interact across the four regulators, as well as working directly with regulators through bilateral partnerships.
We have outlined key takeaways from the responses by the other DRCF members below, two of which reference such bilateral partnerships with the ICO:
CMA
Much of the CMA's work on AI to date has focused on foundation models and concerns that a small number of incumbent technology firms with existing market power could "profoundly shape" the development of AI-related markets. This work will continue in 2024, including with a planned publication of a joint statement with the ICO. Under the Digital Markets, Competition and Consumers Act, which recently received Royal Assent, the CMA will have the power to designate digital firms found to have strategic market status (SMS). As the CMA Strategic AI Update notes, AI and its deployment by firms will be relevant to the CMA’s selection of SMS candidates, particularly where AI is deployed in connection with other, more established activities. We will be publishing a fuller briefing on the CMA's approach to AI.
Nigel Parr, Partner (Competition)
OFCOM
In its Strategic Approach to AI, Ofcom notes that one of its underpinning legislative frameworks, the Online Safety Act 2023, is an example of principles similar to the AI Principles being actively considered by Parliament during the development of the law.
Ofcom has recently released a joint statement with the ICO noting "proactive tech and relevant AI tools" as a key collaborative theme between the two regulators. The statement clarifies that the regulators will identify "companies of mutual interest" which will trigger increased information sharing. Despite this streamlined approach, organisations will likely remain concerned about overlapping remits leading to multiple enforcement actions on the same issues.
Rhiannon Webster, Partner (Data Protection)
FCA/PRA
The responses from the FCA and PRA are clear in their positioning: existing regulations provide a robust framework for AI's deployment in financial services; they are technology-neutral and seek to regulate the same risks with the same regulations.
Bradley Rice, Partner (Financial Regulation, and Co-Head of FinTech Legal Labs)
The use of AI by financial services firms is certainly not new and already has wide application, particularly with machine learning AI tools (e.g. market surveillance, market abuse detection and fraud prevention). The FCA and PRA note in their responses that they are actively continuing to further their understanding of AI deployment, including the latest generative AI technology, and may issue guidance or use other policy tools to clarify their regulatory expectations.
Regardless of any changes that may result from a change of government at the election, each of the updates summarised above demonstrates that UK regulators will continue to develop and clarify their approaches and expectations on the regulation of AI over the next year. Organisations should expect the ICO to continue to play a central role in this regard and should watch out in particular for the outputs from the ICO's consultation on generative AI and planned updates to its existing AI and Data Protection guide.
Authors: Rhiannon Webster, Partner; Nicolas Quoy, Partner; Alexander Duisberg, Partner; Andreas Mauroschat, Partner; Cristina Grande, Counsel; Shehana Cameron-Perera, Senior Associate; Tom Brookes, Senior Associate; Antoine Boullet, Senior Associate; Lisa Kopp, Associate; David Plischka, Associate; Carmen Gordillo, Associate; Julia Bell, Associate; Nilesh Ray, Junior Associate; Hana Byrne, Junior Associate; Saba Nasrolahi, Trainee Solicitor; Muriel McCracken, Trainee Solicitor; Melvin Chung, Trainee Solicitor
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.