Data Bytes 46: Your UK and European Data Privacy update for April 2024
15 May 2024
Welcome to our April edition of Data Bytes, where the Ashurst UK and European Data Privacy and Cyber Security Team summarise the key privacy law and policy developments of the previous month.
It's very neat that the month of April has an A and an I in the right order, as April was a month of AI updates aplenty. Not only did all relevant UK regulators publish their strategic approaches to the regulation of AI (to meet the government-imposed deadline of 30 April), but it was also reported that the government could be rethinking its position on whether to produce an overarching piece of legislation on AI.
Our European updates include a summary of the EDPB's key opinion on "consent or pay" models, in which it opines that "in most cases it will not be possible for large online platforms to comply with the requirements for valid consent if they confront users only with a binary choice between consenting to processing of personal data for behavioural advertising and paying a fee".
Finally, keep scrolling for our "Spotlight section", where Matt Worsfold and Gohto Saikawa from our Risk Advisory Data Governance practice provide their insights on managing retention risk within an organisation. I think we all acknowledge that data retention remediation can fall towards the bottom of a data protection practitioner's to-do list when faced with many competing priorities. Matt and Gohto share their tips for a successful data retention remediation programme.
April was a packed month for AI regulation and policy updates.
On 15 April 2024, it was reported in the Financial Times that the UK Government may be considering a U-turn on its decision not to introduce specific AI legislation. Despite previous concerns about strict legislation stifling innovation and industry growth, this move would represent a significant shift in strategy, in contrast to the pro-innovation, principles-based approach to regulating AI previously advocated by the UK Government. Until now, the UK strategy has been to delegate responsibility to existing regulators, who were asked to submit papers by the end of April outlining how they intend to regulate AI in their fields. All such papers have now been published and are available at Regulators' strategic approaches to AI.
The ICO was one of those regulators. Unsurprisingly, and in keeping with its current focus areas for enforcement activity, the ICO will be prioritising AI risks relating to children and vulnerable individuals, as well as the use of AI in biometric technology.
April also saw the ICO launch its third call for evidence as part of its emerging thinking on generative AI development and use. This latest call relates to how the accuracy principle applies to the outputs of generative AI models and the impact of accurate training data on outputs.
ICO guidance has previously stated that the accuracy principle does not mean that the outputs of generative AI models need to be 100% statistically accurate. However, the more a generative AI model is used to make decisions about people, or is relied on by its users as a source of information (rather than inspiration), the more important accuracy becomes as a principle when designing and testing the model. The ICO's overall expectation is that developers should have a good understanding of how accurate the training data needs to be if they are using it to develop generative AI models. The call for evidence had a very short window for response and closed on 10 May 2024.
Finally, on 22 April the Digital Regulation Cooperation Forum (DRCF) announced the launch of the AI and Digital Hub, a new informal advice service to support innovators with complex regulatory questions that cross more than one DRCF regulator's remit. Its announcement states that the free service will make it easier to get support from two or more of the regulators at once, via the DRCF website, rather than having to approach each regulator separately. The DRCF is made up of the Information Commissioner's Office, Ofcom, the Competition and Markets Authority and the Financial Conduct Authority. There are four eligibility criteria for the service: your product or service must be (i) innovative, (ii) largely digital or focused on AI, and (iii) of benefit to consumers, businesses and/or the UK economy; and (iv) your query must fall within the scope of at least two of the four DRCF members' regulatory remits (CMA, Ofcom, ICO, FCA).
All organisations developing, distributing or deploying AI solutions would be well advised to review the strategy papers of their respective regulators to understand their focus areas, and to consider using the DRCF advice service where appropriate.
On 3 April 2024 the ICO released its Children's Code 2024-2025 strategy, aimed at protecting children using online services including social media and video sharing platforms. Since the introduction of its Age Appropriate Design Code in 2021, the ICO has been working with online services including websites, apps and games to provide better privacy protections for children to ensure their personal information is used appropriately within the digital world.
The new Children’s code strategy sets out the priority areas that the ICO says social media and video-sharing platforms need to improve on in the coming year, as well as how the ICO will enforce the law and facilitate conformity with the code.
Companies whose digital services are likely to be accessed by children (not just those specifically directed at children) should take note of the priority areas, which include default privacy and geolocation settings, profiling children for targeted advertising, and the use of children's information in recommender systems.
The ICO has yet again been mainly unsuccessful in its long-running saga against Experian concerning Experian's direct marketing business and the ICO's view that Experian's processing is not transparent. Following a regulatory investigation, the ICO issued an enforcement notice in October 2020 against Experian, a credit reference agency, due to its concerns that the nature and extent of Experian's processing of personal data for its offline marketing services violated the transparency requirements of the GDPR. On 20 February 2023, the First-tier Tribunal (FTT) allowed in large part an appeal from Experian against this enforcement notice and considerably modified its terms. Notably, the FTT's decision confirmed that companies can weigh in their favour the public interest benefits to consumers of their data processing when assessing whether they can lawfully process data based on their legitimate interests, and that providing the privacy information prescribed by Article 14 of the GDPR indirectly, through third parties, can be sufficient to meet the GDPR's transparency requirements. The Information Commissioner appealed the FTT's decision to the Upper Tribunal.
On 22 April 2024, the Upper Tribunal dismissed the ICO's appeal, ruling that, whilst the FTT's decision was neither well structured nor particularly well reasoned in places, it did not contain errors of law.
This decision may encourage a greater number of challenges to enforcement notices issued by the Information Commissioner in future and will be welcomed by businesses seeking to rely on legitimate interests as a lawful basis for their direct marketing data processing.
The Upper Tribunal's full judgment and a summary of its decision can be found at The Information Commissioner v Experian Limited – Courts and Tribunals Judiciary.
On 11 April 2024, the Court of Justice of the European Union ("CJEU") issued a preliminary ruling regarding the influence of human error on liability and the assessment of damages ("GP v juris GmbH" C-741/21). The court ruled that a data controller cannot exonerate itself from liability simply by referring to the misconduct of a person acting under its authority; to escape liability, the controller must prove that it is in no way responsible for the breach, including that the breach did not result from organisational deficiencies.
In the case, the claimant received, on three occasions, marketing letters/e-mails from Juris (a provider of legal research services) despite having previously revoked, in writing, all his consents to receive information from Juris by email or by telephone. He also objected to any processing of his data, except for the purposes of sending newsletters which he wished to continue to receive. Juris argued that one of its employees had not complied with the instructions given regarding the consent management system. The court held that a data controller cannot be exempt from liability "on the sole ground" that the damage was caused by the wrongful conduct of a person acting under its authority. "It is for that controller to ensure that his or her instructions are correctly applied by his or her employees. Accordingly, the controller cannot avoid liability […] simply by relying on negligence or failure on the part of a person acting under his or her authority."
A data controller must ensure that its employees are adequately trained and instructed (Art. 24, 32 GDPR). Additionally, the CJEU specified that if the data controller has infringed the GDPR multiple times, this should not increase the compensation awarded. In contrast to fines, (non-material) damages aim to compensate and not to punish the controller (Art. 82, 83 GDPR). Accordingly, only the damage actually suffered by the data subject should be relevant in determining the compensation, and the guidelines for calculating fines for supervisory authorities should not apply.
On 17 April 2024, the EDPB published its opinion concerning the circumstances under which so-called "consent or pay" models can be implemented by providers of large online platforms when they want to process data for behavioural advertising. It concludes that "in most cases it will not be possible for large online platforms to comply with the requirements for valid consent if they confront users only with a binary choice between consenting to processing of personal data for behavioural advertising and paying a fee".
For these purposes, "large online platforms" include "very large online platforms" as defined under the Digital Services Act, "gatekeepers" as defined under the Digital Markets Act, and other platforms assessed against criteria such as whether they attract a large number of data subjects, the company's position in the market, and whether the processing is "large scale" (by reference to the number of data subjects concerned, the volume of data and the geographical extent of the processing activity). "Consent or pay" models give the user at least two options for gaining access to a platform: the data subject can either consent to the processing of their personal data, or pay a fee and gain access to the service without their personal data being processed. The EDPB stated that in most cases it will not be possible for platform providers to comply with the requirements for valid consent if they confront users with that binary choice. Controllers need to evaluate whether there is an imbalance of power between them and the data subject, and must ensure that a refusal of consent is not linked to adverse consequences.
In the event of clear imbalances, controllers can only use consent in "exceptional circumstances" and where they can prove that there are no "adverse consequences at all" for the data subject if they do not consent. Providers of large online platforms should consider offering an alternative that does not entail a payment, e.g. a version of the service with a different form of advertising involving the processing of less (or no) personal data, such as contextual or general advertising, or advertising based on topics the data subject has selected from a list of topics of interest.
In Germany, we report on a case in which the Hamburg regional court disagreed with the Conference of Independent Data Protection Supervisory Authorities of the Federation and the Länder ("DSK") and ruled that imposing mandatory account registration on users (as opposed to offering a guest checkout option) can be lawful for certain products and services. For further information click here. We also report that the German Federal Government has appointed Prof. Louisa Specht-Riemenschneider, one of the leading scholars on the law of data and digitalisation (currently at the University of Bonn), as the upcoming Federal Commissioner for Data Protection and Freedom of Information. For further information click here.
Turning to France, we report on the CNIL's recommendations on the development of Artificial Intelligence Systems, summarise the CNIL's 2023 annual report, and report on the CNIL's enforcement action against Hubside.store for unlawful marketing practices, for which it received a fine equating to approximately 2% of its turnover.
Finally, in Spain there has been no let-up in the AEPD's enforcement activities and we report on two monetary fines imposed by the AEPD: the first imposed on a company for oversharing personal data in breach of the data minimisation principle, and the second a €2,000,000 sanction imposed on Caixabank for consulting the General Treasury of the Social Security about an individual's personal data without consent.
Increasingly sophisticated cyber-attacks and heightened regulatory and societal expectations surrounding data privacy and protection have put data retention practices in the spotlight. Individuals and regulators are posing incisive questions about why organisations persist in retaining data for long periods of time, often beyond what can be justified by applicable law or regulatory guidance. These questions can present legal risk, particularly in events such as data security breaches or data subject access requests. Where data loss occurs in a cyber incident, as the sensitivity and age of the data are laid bare, there is potentially a greater likelihood under UK law that an organisation may need to (i) notify affected parties where data is lost or stolen and (ii) provide compensation for resulting losses to individuals or companies.
By proactively implementing measures to handle, store, and dispose of data responsibly, organisations can mitigate potential reputational risks and compliance issues associated with holding onto data for too long without any legal justification.
The first challenge organisations must face in an increasingly data-driven world is understanding what a justifiable retention period looks like for their different data assets. This requires consideration of the legal obligations applicable to the relevant data types, along with regulatory guidance, industry best practice and any public disclosures or notices to data subjects. To make the task all the more challenging, many businesses are grappling with how data retention requirements should be defined where data spans multiple jurisdictions; how to achieve the right level of granularity in data definitions in order to apply retention periods; and how to be sure they have comprehensively considered everything in the data estate, including shadow IT and long-forgotten back-ups and archives.
Once retention periods have been determined, the next challenge lies in establishing robust governance tailored to continually manage the risks associated with data retention. Effective governance boils down to collaboration across multiple stakeholder groups. The privacy team must collaborate closely with data and technology specialists to implement the retention guidelines they've established. A prime example of this collaborative effort is the creation of a functional retention schedule, as sketched below. Here, the data team typically provides a catalogue of data assets and their respective classifications, while the privacy team ensures compliance by setting appropriate retention periods for each asset. In addition, the technology team plays a pivotal role in configuring data retention and deletion mechanisms within key systems. Successful collaboration empowers these parties to address increasingly complex data retention challenges, one often seen in practice being the application of retention rules to unstructured data.
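To make the idea of a functional retention schedule more concrete, here is a minimal sketch of what the three teams' inputs could look like once combined into a machine-readable form. It is purely illustrative: the asset names, classifications, retention periods and legal bases below are all hypothetical assumptions, not guidance on what any particular organisation should retain.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical functional retention schedule: the data team supplies the
# asset catalogue and classifications, the privacy team sets the retention
# periods and their justification, and the technology team wires the result
# into automated deletion jobs.

@dataclass
class RetentionRule:
    classification: str   # e.g. "customer" or "employee" (from the data team)
    retention_days: int   # period agreed by the privacy team
    legal_basis: str      # obligation or guidance justifying the period

# Illustrative entries only; real periods depend on applicable law,
# regulatory guidance and any notices given to data subjects.
RETENTION_SCHEDULE = {
    "crm_contact_records":   RetentionRule("customer", 6 * 365, "limitation period (assumed)"),
    "payroll_records":       RetentionRule("employee", 7 * 365, "tax record-keeping (assumed)"),
    "marketing_preferences": RetentionRule("customer", 2 * 365, "consent refresh cycle (assumed)"),
}

def is_due_for_deletion(asset: str, created: date, today: Optional[date] = None) -> bool:
    """Return True if a record of this asset type has exceeded its retention period."""
    today = today or date.today()
    rule = RETENTION_SCHEDULE[asset]
    return today > created + timedelta(days=rule.retention_days)

# Example: a CRM contact record created in early 2017 has exceeded the
# assumed six-year period and would be flagged to the deletion mechanism.
print(is_due_for_deletion("crm_contact_records", date(2017, 1, 1)))  # True
```

Even a simple structure like this gives the technology team something unambiguous to configure deletion mechanisms against, and gives the privacy team a single place to review and update periods as law or guidance changes. In practice, a schedule would be far more granular and jurisdiction-aware, particularly once unstructured data is brought into scope.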
For further information or to discuss how we can help with your data retention projects, please do get in touch with Matthew Worsfold.
Authors: Rhiannon Webster, Partner; Nicolas Quoy, Partner; Alexander Duisberg, Partner; Andreas Mauroschat, Partner; Matt Worsfold, Partner – Risk Advisory; Cristina Grande, Counsel; Shehana Cameron-Perera, Senior Associate; Tom Brookes, Senior Associate; Antoine Boullet, Senior Associate; Lisa Kopp, Associate; David Plischka, Associate; Carmen Gordillo, Associate; Chelsea Kwakye, Junior Associate; Saif Khan, Junior Associate; Nilesh Ray, Junior Associate; Hannah Byrne, Junior Associate; Gohto Saikawa, Executive – Risk Advisory; Muriel McCracken, Trainee Solicitor; Melvin Chung, Trainee Solicitor; Jessica Nelson, Trainee Solicitor
The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
Readers should take legal advice before applying it to specific issues or transactions.