Alert - Data Protection and Cybersecurity

The recent EDPB opinion on the use of the “Pay or Consent” system

When browsing the websites of some newspapers or other online platforms, users are increasingly faced with a choice: subscribe to the service or consent to the use of their data to receive customized advertising content.

The “Pay or Consent” system is a widespread practice but not necessarily lawful. This issue was addressed in the EDPB’s recent opinion (Opinion 08/2024 of 17 April 2024), which was requested by the national Data Protection Authorities of the Netherlands, Norway and Germany, and which also takes into account the ruling of the Court of Justice of the European Union in Case C-252/21, involving Meta.

One of the primary legal concerns regarding the “Pay or Consent” system is undoubtedly the validity of consent, which, under the GDPR, must be, inter alia, freely given. According to the EDPB, a user’s freedom of choice also depends on the options available to him/her, and it is hard to imagine that platforms can obtain freely given consent if the choice is limited to paying a subscription or “paying” with one’s data.

The EDPB argues that data controllers should not merely offer a single paid alternative alongside a service that involves processing data for behavioural advertising purposes. Instead, they should consider providing data subjects with a third “equivalent alternative” that does not entail paying a fee. This would enable users to make a genuine choice: if users can access the service free of charge and without having to consent to behavioural advertising, it can be inferred that those who do consent to profiling do so freely and knowingly, rather than opting for the only (seemingly) free alternative.

Another critical factor affecting the validity of consent is the power imbalance between a provider holding a significant market position (e.g. a newspaper) and the data subject (the website user). This imbalance is heightened where the service can be regarded as “essential” for users. In essence, under “Pay or Consent” systems, users who neither wish to pay nor to consent to having their data processed for behavioural advertising may be forced to forgo essential services, such as staying informed.

While the freedom of the consent given is undoubtedly the most controversial requirement in this context, all other conditions for consent to be lawful must also be met: consent must be free, specific, informed and unambiguous, and obtained in a manner that is clear and comprehensible to the data subject. Furthermore, data controllers must of course adhere to the other principles of the GDPR when processing personal data.

For more information:

EDPB, ‘Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms’, 17 April 2024.

AI Act: new rules on Artificial Intelligence in Europe

On 21 May 2024, the final text of the AI Act was approved.

It is the world’s first comprehensive regulation on artificial intelligence. Its goal is to promote the development and adoption of safe and reliable AI systems within the European single market, ensuring they operate in accordance with the fundamental rights and freedoms of citizens, including the right to the protection of personal data.

The AI Act aims to stimulate new investments and foster innovation in artificial intelligence, a sector now deemed crucial not only in the European market but also globally, recognized as a strategically important element across various domains and for many stakeholders.

First of all, the AI Act classifies AI systems according to the risk they pose to the rights and freedoms of individuals. AI systems deemed to present a “limited risk” are subject only to light transparency obligations. By contrast, “high-risk” AI systems must comply with a number of specific requirements and more stringent transparency obligations, except for AI systems authorised by law and used to detect, prevent, investigate or prosecute criminal offences.

In addition, the AI Act introduces a number of prohibitions regarding certain practices considered to have an element of risk, including:

  • adoption of techniques that manipulate individual cognition and behaviour;
  • untargeted collection (scraping) of biometric data from public spaces, the Internet or video surveillance systems;
  • use of emotion recognition systems in workplace or educational settings;
  • implementation of social scoring systems;
  • biometric processing to infer data belonging to special categories;
  • use of predictive policing systems targeting specific individuals.

The AI Act also establishes new rules for so-called foundation models, i.e. AI models (such as the one underlying ChatGPT) that can be used for a variety of activities and purposes, such as generating videos, texts and images, conversing in natural language, performing calculations and more.

In connection with such systems, the AI Act requires impact assessments on fundamental rights to be carried out for high-risk systems, including systems used in the banking and insurance sectors. Other obligations imposed on the providers of these systems include conducting tests to identify and mitigate systemic risks and taking measures to ensure an adequate level of security of the hardware and software infrastructure.

Another aspect involves implementing measures to foster the development and adoption of safe AI. In this respect, the competent authorities of each Member State are required to establish AI regulatory sandboxes, i.e. controlled environments that foster innovation and facilitate the development, training, testing and validation of AI systems.

In this context, it is crucial that AI systems undergo testing before being placed on the market. Such tests may also be conducted under real-world conditions, on the basis of existing data provided by subjects who have consented to their data being processed for AI regulatory testing purposes.

In any case, there is a strong emphasis on safeguarding the right to personal data protection, since the collection of data by AI systems poses a high risk to data subjects’ rights unless protective measures are adopted and the risk of unlawful processing is mitigated.

The AI Act therefore establishes specific rules to protect individuals whose data are processed, especially with regard to special categories of data (as in the case of “high-risk” AI systems that utilize datasets to train, test and validate AI-based learning models).

Providers of such systems are allowed to process data belonging to special categories, provided that: (a) the use of such data is limited and appropriate security measures are taken; (b) appropriate measures are taken to protect the data and implement adequate safeguards; (c) the data are not disclosed or communicated to third parties; and (d) the data are deleted once the purpose of the processing has been achieved.

Finally, the AI Act provides for sanctions for those who breach its provisions. Depending on the nature of the breach, the competent authority can impose a penalty up to a fixed maximum amount (ranging from EUR 7.5 million to EUR 35 million) or, if the offender is a company, up to a percentage of the total annual worldwide turnover for the preceding year, whichever is greater. SMEs and start-ups, however, benefit from reduced penalties.
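To make the “whichever is greater” mechanism concrete, the following minimal Python sketch computes the applicable ceiling. The 7% rate paired with the EUR 35 million cap reflects the AI Act’s top tier for prohibited practices; the exact pairing of rates and caps varies by breach type, so the figures are illustrative assumptions, not legal advice.

    def max_penalty(fixed_cap_eur: float, turnover_rate: float,
                    annual_worldwide_turnover_eur: float) -> float:
        """Return the applicable ceiling: the higher of the fixed cap
        and the turnover-based amount."""
        turnover_based = turnover_rate * annual_worldwide_turnover_eur
        return max(fixed_cap_eur, turnover_based)

    # Assumed top tier: EUR 35 million cap or 7% of worldwide annual turnover.
    # For a company with EUR 2 billion turnover, the turnover-based amount wins:
    print(max_penalty(35_000_000, 0.07, 2_000_000_000))  # 140000000.0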

For more information: Artificial Intelligence Act

The Privacy Authority has sanctioned a municipality for unlawful video surveillance and violations of employee privacy

The Data Protection Authority has sanctioned a municipality for improperly processing personal data through video surveillance, following an employee’s complaint regarding a camera placed near the time clock, the only tool used to record employees’ working hours. The municipality had used the images to address alleged breaches of the employee’s official duties, such as failure to adhere to her working hours, and, when questioned by the Privacy Authority, cited security concerns to justify the camera’s presence. However, the Privacy Authority determined that the municipality had failed to comply with the procedures governing remote monitoring of employees and had used the footage for disciplinary purposes. Moreover, the municipality had not provided adequate information to workers and visitors regarding the processing of personal data carried out via the camera.

Although the fine imposed on the municipality was modest, the incident underscores the critical nature of this issue, which requires careful handling by companies.

For more details: provision dated 11 April 2024 (10013356)

Telemarketing: the Privacy Authority fined two energy operators EUR 100,000 each

The Personal Data Protection Authority has imposed fines of EUR 100,000 each on two energy operators for the unlawful processing of personal data. The sanctions were prompted by two complaints and 56 reports from users who had received unsolicited phone calls and experienced unauthorized activations of energy contracts. Investigations revealed that the calls, made without the consent of the individuals involved, primarily targeted users listed in the Public Opposition Register (Registro Pubblico delle Opposizioni). The call centres, after acquiring users’ contact details from third-party companies, agents or intermediaries, illegally contacted those users, many of whom subsequently signed supply contracts. The Authority also ordered the call centres to implement appropriate technical, organisational and monitoring measures to ensure compliance with privacy laws when processing the personal data of the persons involved.

For more details: provision of 11 April 2024 (10008076)

Video surveillance with facial recognition in Rome: the Privacy Authority initiates an investigation

The Personal Data Protection Authority has launched an inquiry into a project involving video surveillance with facial recognition at metro stations in Rome. According to press reports, in preparation for the Jubilee, the Administration of “Roma Capitale” intends to deploy cameras equipped with facial recognition technology to detect “disruptive actions” in the metro by individuals who have committed “non-compliant acts” in the past. The Authority has therefore requested the Administration of Roma Capitale to provide a technical description of the facial recognition technology, its purpose, the legal basis for processing biometric data, and a copy of the data protection impact assessment. The Administration was granted 15 days to respond. The Authority also recalled that a moratorium on the use of video surveillance systems with facial recognition technology in public places, or in places accessible to the public, by public authorities or private entities is in effect until 2025. Only judicial authorities, in the course of their judicial duties, and public authorities engaged in the prevention and suppression of crime may carry out such processing, subject to the Authority’s approval.

For further information: release dated 9 May 2024

Amendments to the Privacy Code: streamlined rules for medical, biomedical and epidemiological research

A recent amendment to the Italian Privacy Code, enacted through the conversion into law of Decree-Law no. 19 of 2 March 2024, brings significant changes to scientific research in the medical, biomedical and epidemiological fields. In particular, under the revised Article 110 of the Privacy Code, when obtaining the prior consent of the data subject is not possible, patients’ personal data may be processed for medical, biomedical and epidemiological research purposes provided that an ethics committee has issued a favourable opinion and the safeguards laid down by the Privacy Authority are observed. The requirement of prior authorisation from the Privacy Authority has therefore been replaced by compliance with the safeguards set out by the Privacy Authority in the ethical rules on the processing of data for research purposes.

Through these ethical rules, which will undergo public consultation and will be formulated with the involvement of the scientific community, the Personal Data Protection Authority will therefore establish general measures applicable to a plurality of projects.

This reform aims to strike a balance between the need to protect personal data and fostering scientific research, which is particularly crucial in public health.

For more details: provision dated 9 May 2024 (10016146)

EDPB: Annual Report published

The annual report of the European Data Protection Board (“EDPB”) provides an overview of the board’s activities throughout the year (recommendations and best practice reports issued by the board, binding decisions, practical application of guidelines, etc.).

In 2023, the EDPB adopted two binding decisions, one of an urgent nature, and two new sets of guidelines. In addition, the EDPB issued 37 opinions pursuant to Article 64 of the GDPR, most of them focusing on binding corporate rules and the accreditation requirements for certification bodies. Finally, the EDPB collaborated with the European Data Protection Supervisor (EDPS) to release two legislative opinions.

With specific reference to binding decisions, notable mentions include:

  • Binding Decision 1/2023, by which the EDPB resolved a dispute concerning data transfers by Meta Platforms Ireland Limited.
  • Binding Decision 2/2023, by which the EDPB resolved a dispute concerning the processing of data of users aged between 13 and 17 years by TikTok Technology Limited, highlighting issues with registration and video posting pop-ups not offering objective and neutral options to the user. 

In addition, the EDPB published the following guidelines in 2023:

  • Guidelines 03/2022, adopted in their final version on 14 February 2023, on deceptive design patterns in social media platform interfaces, which aim to provide recommendations and practical guidance to social media providers on how to recognise and avoid such patterns.
  • Guidelines 05/2022 on the use of Facial Recognition Technology (FRT) in law enforcement, which provide relevant information for European and national legislators, as well as law enforcement authorities, on implementing and using FRT systems.

For further information: EDPB Annual Report

EDPB’s opinion on the use of facial recognition technologies by airport operators

In late May, the EDPB issued an opinion (Opinion No. 11/2024) on the use of facial recognition technologies to streamline passenger flow and the storage of biometric data by airport operators.

The opinion, in particular, examines the compatibility of such practices with:

  • the principle of storage limitation (Article 5(1)(e) GDPR),
  • the principle of integrity and confidentiality (Article 5(1)(f) GDPR),
  • data protection by design and by default (Article 25 GDPR),
  • security of processing (Article 32 GDPR).

As a preliminary remark, the EDPB highlights that there is no uniform EU legal obligation for airport operators and airlines to verify the name on a passenger’s boarding pass against their identity document. However, any such obligation may be governed by national law. Therefore, in the absence of national requirements for identity verification of passengers, biometric data cannot be used for recognition purposes, as this would entail excessive processing of personal data.

That said, the EDPB evaluated the compliance of the biometric data processing of passengers in four distinct scenarios.

  1. Data stored exclusively on passengers’ personal devices

In this scenario, biometric data reside solely on passengers’ personal devices, under their exclusive control, and are used for authentication at various airport checkpoints. This approach can align with GDPR requirements, provided that adequate security measures are implemented and that no alternative, less intrusive solutions are available; a schematic sketch of such a device-held design follows.
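Purely by way of illustration, here is a minimal Python sketch of one possible device-held design, in which the checkpoint receives only a signed match assertion and never the biometric template itself. All names are hypothetical, the matching step is a placeholder rather than a real biometric algorithm, and the sketch assumes the passenger’s public key was shared with the airport at enrollment; it is not the architecture prescribed by the opinion.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class PassengerDevice:
        def __init__(self, enrolled_template: bytes):
            self._template = enrolled_template        # never leaves the device
            self._key = Ed25519PrivateKey.generate()  # device-held signing key
            self.public_key = self._key.public_key()  # shared at enrollment

        def verify(self, live_capture: bytes, challenge: bytes) -> bytes:
            # Placeholder match: real systems compare feature vectors against
            # a similarity threshold, not raw bytes for equality.
            if live_capture != self._template:
                raise PermissionError("biometric mismatch")
            # Only a signed assertion over the checkpoint's challenge leaves.
            return self._key.sign(b"match:" + challenge)

    # Checkpoint side: verify the assertion without ever seeing biometric data.
    device = PassengerDevice(enrolled_template=b"demo-template")
    challenge = os.urandom(16)
    assertion = device.verify(b"demo-template", challenge)
    device.public_key.verify(assertion, b"match:" + challenge)  # raises if forged
    print("checkpoint: match confirmed; no biometric data received")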

  2. Centralized data storage at the airport with passengers holding access keys

In the second scenario, biometric data are stored centrally at the airport in encrypted form, with the decryption key held exclusively by passengers. The EDPB acknowledges that centralised storage poses risks, but considers that these can be mitigated with appropriate security measures, allowing GDPR compliance, provided that the storage period is justified and limited to the minimum necessary; a minimal sketch of this encrypt-and-hold-the-key pattern follows.
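As a purely illustrative sketch of that pattern, assuming symmetric encryption with Fernet from the Python cryptography package (the opinion does not prescribe any particular scheme), the airport database holds only ciphertext while the key lives solely on the passenger’s device:

    from cryptography.fernet import Fernet

    # Enrollment (on the passenger's device): encrypt the template, keep the key.
    passenger_key = Fernet.generate_key()       # never transmitted to the airport
    template = b"biometric-template-bytes"      # placeholder payload
    ciphertext = Fernet(passenger_key).encrypt(template)

    # The airport stores only ciphertext, unreadable without the passenger's key.
    airport_db = {"passenger-123": ciphertext}

    # At the checkpoint: the passenger presents the key, decryption happens
    # transiently, and the plaintext is discarded right after matching.
    recovered = Fernet(passenger_key).decrypt(airport_db["passenger-123"])
    assert recovered == template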

  3. Centralized data storage controlled by airport operators

Another scenario examined by the EDPB involves the central storage of biometric data under the control of airport operators, enabling passenger identification for a maximum period of 48 hours. According to the EDPB, this approach is incompatible with the GDPR, as centralization poses high risks to passengers’ fundamental rights in the event of a data breach.

  4. Cloud-based data storage controlled by airlines or their service providers

Finally, the EDPB evaluates a scenario where biometric data is stored in the cloud under the control of an airline or its service provider, facilitating passenger identification. This scenario carries substantial risks as data may be accessible to multiple entities, including non-EEA providers. The EDPB concludes that this scenario is incompatible with the GDPR due to the high risk of a data breach and the lack of control by passengers over their own data.

In all instances, only biometric data of passengers who actively register and provide their consent should be processed.

In conclusion, the EDPB determined that the only approaches compatible with the above-mentioned GDPR principles are those in which biometric data are stored exclusively by passengers (scenario 1) or in a centralized database with the decryption keys solely in the passengers’ possession (scenario 2), provided they are implemented with a set of recommended minimum safeguards. These approaches adequately mitigate the intrusiveness of the processing while ensuring maximum control for data subjects over their personal data.

Conversely, the EDPB deems scenarios 3 and 4 excessively intrusive and lacking in proportionality relative to the expected benefits. Solutions based on centralized storage at the airport or in the cloud, without passengers holding the decryption keys, cannot therefore be considered compliant with the above-mentioned principles.

For more information:

EDPB, ‘Opinion 11/2024 on the use of facial recognition to streamline airport passengers’ flow (compatibility with Articles 5(1)(e) and (f), 25 and 32 GDPR)’, 23 May 2024.

Keep in touch!

Sign up for our newsletters!

Stay up-to-date on domestic and international legislative and tax news, as well as all the Firm’s events and initiatives.
