
Innovation Law Insights

30 May 2024
Artificial Intelligence

EDPB Publishes ChatGPT Taskforce Report revealing major challenges for GenAI privacy compliance

The European Data Protection Board has released its ChatGPT Taskforce Report. It highlights significant privacy issues that might have an impact on any developer or deployer of GenAI solutions. Here's what you need to know.

Web scraping and data processing

The report examines the reliance on legitimate interest as the legal basis for collecting and processing personal data to train ChatGPT, and then sets out the limits that would be acceptable according to the EDPB.

According to the EDPB, legitimate interest can in theory be the legal basis, but safeguards must be put in place to mitigate undue impact on data subjects and potentially tip the balancing test in favour of the data controller, such as:

  • technical measures to filter data collection
  • exclusion of certain data categories and sources (eg public social media profiles)
  • deletion or anonymization of personal data before training

Critical issues – transparency obligations

When notifying data subjects isn’t feasible (like with scraping), controllers must make information publicly available to protect data subjects' rights.

According to the EDPB, the current status is that:

  • ChatGPT’s data collection methods are not publicly transparent.
  • Data subjects can’t easily exercise their rights (eg right to be forgotten).
  • The system still outputs personal information, indicating that data isn’t fully anonymized.
  • OpenAI is making deals with platforms like Reddit, which contain personal data. This suggests – according to the EDPB – an ongoing reliance on personal data without sufficient safeguards.

In my view, the EDPB has to acknowledge the potential of GenAI for our society and find a manageable solution that balances compliance with the proper exploitation of the technology. This would be a major shift for privacy authorities, which rarely take a business-oriented approach. But with the approval of the AI Act, the EU has taken a clear stance in favour of the proper use of AI, and authorities have to work with GenAI developers to find feasible solutions.

The current approach might not be in the general interest, and open discussions with AI providers might help to find a solution that appropriately balances the interests of all the parties involved.

Author: Giulio Coraggio

 

Law and AI: First international treaty adopted

On 17 May, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first-ever international treaty on AI. The aim is to address the risks that the use of such systems may pose to human rights, democracy and, in general, the rule of law.

The Convention helps to establish a common, harmonised framework for the approach that international institutions (primarily European) intend to take to map the risks generated by AI systems. The text is fully in line with that of the European Regulation on AI (the AI Act). The Convention is also intended to encourage countries outside Europe to adopt similar measures, promoting greater international coherence in AI governance.

The primary objective, enshrined in Article 1, is to "ensure that activities throughout the entire life cycle of AI systems are fully consistent with human rights, democracy, and the rule of law." To this end, the treaty outlines rules applicable to all stages of the AI system lifecycle: design, development, use, and deactivation, adopting an innovative approach focused on risk assessment and management.

The core of the regulatory framework is Article 16, which obliges the parties to adopt graduated measures to "identify, assess, prevent, and mitigate the risks posed by AI systems, considering the actual and potential impacts." The extent of these measures should naturally be commensurate with the severity and likelihood of adverse consequences.

Additionally, the Convention establishes the principle of accountability, requiring parties to "account for the negative impacts on human rights, democracy, and the rule of law." Article 14 then provides for procedural safeguards and accessible remedies for victims of violations related to the use of AI.

Moreover, the Convention sets stringent requirements regarding transparency (Art. 8), non-discrimination and gender equality (Art. 10), and privacy and data protection (Art. 11). There's also a specific focus on the risks to democracy, with an obligation for the parties to adopt rules to prevent the misuse of AI to undermine democratic institutions and processes, such as the separation of powers, judicial independence, and access to justice (Art. 5).

A significant aspect of the Convention is the inclusion of representatives from civil society, industry, and academia in the negotiation process. This inclusive approach ensures that the concerns and perspectives of various stakeholders are considered, enhancing the legitimacy and effectiveness of the adopted norms.

The Council of Europe has chosen a flexible and "futureproof" approach. Faced with the rapid changes and evolution of AI systems, the Council deemed it appropriate to avoid adopting an overly rigid instrument, mitigating the risk of regulatory obsolescence in the face of technological evolution (consider OpenAI's significant recent updates to ChatGPT with the release of the new GPT-4o model).

As explained in the explanatory report, the negotiators deliberately used broad formulations such as "activities in the lifecycle of AI systems" to capture every relevant phase, current and future. This "technologically neutral" approach is intended to remain valid despite the rapid pace of innovation.

Naturally, as highlighted by the Council itself, the promotion of "digital literacy" and digital skills among the population is an essential aspect, without which the Convention risks remaining ineffective.

In any case, the Council's intention is to lay the groundwork for the ethical and responsible development of AI, balancing technological innovation with full protection of fundamental rights and freedoms. A step of significant magnitude that must now be implemented in national legal systems.

The Framework Convention will be officially opened for signature on 5 September in Vilnius. This event will mark an important step forward in international cooperation to responsibly address the challenges and seize the opportunities presented by AI, in a manner respectful of fundamental rights.

Author: Edoardo Bardelli

 

Data Protection and Cybersecurity

Italian Data Protection Authority identifies safeguards for conducting retrospective clinical studies

On 9 May 2024, the Italian Data Protection Authority (IDPA) adopted a resolution outlining the safeguards to be observed in conducting retrospective clinical studies when obtaining patient consent is not possible.

With the same resolution, the IDPA promoted the adoption of new Deontological Rules on data processing for statistical and scientific research purposes. The IDPA invited those eligible to endorse the Deontological Rules, and anyone with a "qualified interest," to notify the IDPA within 60 days of the resolution's publication in the Official Journal.

The resolution was deemed necessary following the recent amendment to Article 110 of the Italian Privacy Code (you can find more details here), which removed the requirement to obtain the IDPA’s authorisation for conducting retrospective studies “when, due to particular reasons, informing the data subjects is impossible or involves a disproportionate effort, or risks making the research objectives impossible or seriously compromised.” In the new version of Article 110, the obligation for prior consultation has been replaced with a provision for the IDPA to identify the safeguards to be observed pursuant to Article 106, paragraph 2, letter d) of the Italian Privacy Code.

The IDPA has clarified what constitutes the "ethical or organizational reasons" that allow the processing of health data of deceased or uncontactable individuals for scientific research purposes. Data controllers must "carefully justify and document" these reasons in the research project.

The IDPA also specified that data controllers must conduct and publish a Data Protection Impact Assessment (DPIA), "informing the IDPA." This requirement is the most significant change and raises some concerns.

Although a DPIA was necessary even before the amendment to Article 110 of the Privacy Code, its publication was not required. The decision to communicate the study documentation to the IDPA seems inconsistent with the accountability principle, given the removal of the requirement for prior consultation with the IDPA. It’s also unclear whether this communication should pertain to the intention to start the study, including all related documentation, or only the DPIA.

As a final remark, the DPIA might contain confidential information about the organisation of data controllers and the security measures implemented. We believe this information can be redacted in the published version of the DPIA (though not in the one shared with the IDPA), provided the document contains a comprehensive description of all necessary elements as per Article 35 of the GDPR.

We hope the upcoming Deontological Rules will also clarify the uncertainties highlighted above.

Author: Cristina Criscuoli

 

Video surveillance and attendance tracking: Italian Data Protection Authority fines a municipality

The Italian Data Protection Authority recently reiterated the importance of complying with regulations regarding video surveillance in workplaces, imposing a fine on a municipality for the unlawful processing of personal data.

The incident

The case came to light following a report by an employee of the municipality of Madignano (CR). The employee complained about the installation of a camera in the atrium of the municipal building. The camera had been placed near the employee attendance tracking devices. The municipal administration had used the footage recorded by the camera to charge the employee with several breaches of duty, including failure to adhere to working hours. In response to a request for clarification from the Italian Data Protection Authority, the municipality justified the installation of the camera on security grounds, citing several incidents of assaults on an alderman and a social worker.

The authority's decision

During the investigation, the Italian Data Protection Authority found that the municipality had not complied with the safeguarding procedures required by sector regulations for remote monitoring. Specifically, the installation of the camera occurred without an agreement with the trade unions, violating Article 4 of the Workers' Statute. The video surveillance footage was also used to take disciplinary action against the employee.

Consequently, the Authority fined the municipal administration of Madignano and also ordered it to provide all concerned parties (both employees and visitors to the municipal office) with adequate information about the personal data processed using the camera in question. The municipality hadn’t made any information available, also violating Article 13 of the GDPR.

Considerations

This episode underscores the need for public administrations and companies to ensure responsible use of video surveillance technologies, in compliance with the principles of legality, transparency, and data minimization. While video surveillance in workplaces can be a useful tool for ensuring security, it must be used in accordance with current regulations to avoid fines and protect workers' rights. It’s essential to comply with the Workers' Statute and provide clear and complete information on the processing of personal data, ensuring that employees' privacy rights are always protected.

Author: Matteo Antonelli

 

Intellectual Property

Fashion and parody – is it really a trendy combo?

Is it lawful to commercialize clothing items that represent a parody of well-known fashion trademarks, or does it amount to a trademark infringement?

Parody has always been a controversial topic in fashion, but the question has become even more relevant in the last few years with the great number of new brands that have built their success on the parody of well-known fashion trademarks. While some brands have been embracing this new ironic trend, others have taken it more seriously and brought cases to court.

The topic has recently been in the spotlight due to a decision by a French court in a case brought by a famous fashion house against a US toy manufacturer. The fashion house accused the toy manufacturer of exploiting its word mark and multicoloured monogram motif to sell a toy that resembled the brand's bags. The court refused to exclude the defendant's infringement liability on the basis of parody, pointing out that the challenged marks were clearly used for commercial purposes to "facilitate sales of the challenged product" for its own benefit. Ruling in favour of the plaintiff, the French judges stated that "the parodistic intent invoked by the defendant is an implicit acknowledgement of the parasitism allegation made against it, regardless of whether the purpose of these acts is mocking, polemical or merely humorous, as is alleged in the present case.” The court added that by using the marks on the toys the company had "placed itself in the wake of the plaintiff in order to take unfair advantage of the reputation of its marks, even if in the form of humour or mockery."

In Italy, the issue has been addressed in a decision of the Italian Supreme Court that overturned the consolidated case law on the relationship between parody and trademarks (Cass. Pen., Sez. II, n. 35166/2019).

At the EU level, Article 9 of the Trademark Regulation 2017/1001 states that "the proprietor of an EU trademark is entitled to prevent all third parties not having his consent from using (…) any sign" where there exists a likelihood of confusion on the part of the public "in relation to goods or services which are identical with or similar to the goods for which the EU trade mark is registered." The same provision can be found at Article 20 of the Italian Intellectual Property Code. Importantly, Recital 27 of the Trademark Directive 2015/2436 affirms that "use of a trade mark by third parties for the purpose of artistic expression should be considered as being fair as long as it is at the same time in accordance with honest practices in industrial and commercial matters."

The case brought before the Italian Supreme Court refers to the alleged infringement of a series of fashion trademarks by an Italian clothing brand that used parodistic versions of those trademarks on its t-shirts. The court held that trademark infringement occurs when the alleged fake product is likely to be confused with the original products and to mislead consumers with respect to their origin.

In the case at issue, the court found that the items presented evident elements of novelty and could therefore be deemed a reinterpretation rather than an imitation of the original brands. It recognised that the purpose of such reinterpretation was artistic and descriptive rather than imitative, excluding any risk of confusion.

The same approach was followed in a previous decision, where the Supreme Court focused on the constitutional provisions behind the right of parody, considered as a form of artistic expression. It stressed that these artistic interpretations are protected by the Italian Constitution under Article 21, in relation to freedom of expression, and under Article 33, in relation to artistic freedom (Cass. Pen., Sez. II, n. 9347/2018).

These two decisions overturned the precedent case-law, which was more reluctant to exclude trademark infringement in case of parodistic versions of well-known brands.

For example, in two cases concerning the ironic use of certain iconic fashion trademarks of Chanel and Louis Vuitton on t-shirts produced and marketed by a third company, the Court of Milan held that the use of a third party's trademark would be justified only where it is implemented as an artistic work, and that the parody defence would not apply where a well-known trademark is used on someone else's goods as a decorative element. Further, the Court of Milan stated that since the products were created and sold by fashion brands, the purpose behind the ironic creations was mainly commercial and only indirectly artistic.

Therefore, the products involved were found to cause a risk of confusion and association among consumers between the "inspired" and the original products. In addition, given the reputation of the imitated brands, the court held that such unauthorized use of the Louis Vuitton and Chanel trademarks also amounted to an undue advantage for the infringer and to the dilution of the well-known brands (see Trib. Milano, R.G. 53747/2012 and Trib. Milano, R.G. 59550/2012).

Overall, the line between parody or artistic creations inspired by existing trademarks and actual imitation is a fine one, and Italian courts have generally sided with the fashion brands. The most recent decisions of the Supreme Court focused only on the risk of confusion and seemed to ignore the possible dilution of the reputation of the imitated trademarks, which is precisely why they are chosen as the subject of parody. We'll see whether the Supreme Court's recent decisions have set a new trend treating parody as an exception to trademark infringement. What is more fun: parody or fashion?

Author: Valentina Mazza

 

Technology Media and Telecommunication

Final Adoption of the Gigabit Infrastructure Act

On 29 April, the Council of the European Union adopted Regulation (EU) 2024/1309, ie the regulation on Gigabit infrastructure (Gigabit Infrastructure Act or Regulation). The European Commission presented it in February 2023 as part of the connectivity package, a set of measures aimed at promoting infrastructure that can ensure connectivity with speeds of 1 Gigabit per second for all citizens and businesses in the EU.

The Regulation was published on 8 May 2024 in the Official Journal of the European Union.

The Gigabit Infrastructure Act mainly includes measures aimed at reducing the costs of deployment of electronic communications networks using Gigabit technology (ie connectivity that ensures a speed of 1 Gigabit per second) and aims to incentivize the provision of very high-capacity networks, promoting the shared use of existing physical infrastructure and fostering the more efficient deployment of new physical infrastructure.

The Regulation on Gigabit infrastructure aims to address a series of obstacles that currently slow down the roll-out of very high-speed networks.

The Regulation provides that network operators and public sector bodies owning or controlling physical infrastructure (including in-building physical infrastructure) have to meet, upon written request of an operator, all reasonable requests for access to that physical infrastructure under fair and reasonable terms and conditions, including price, with a view to the installation of elements of very high-capacity networks or associated facilities. Requests must also be met on non-discriminatory terms.

The Regulation also provides a series of conditions under which network operators and public sector bodies can refuse access to specific physical infrastructure.

The Gigabit Infrastructure Act also includes measures to simplify the procedures for granting the permits and rights of way required for the installation of networks and physical infrastructure.

Additionally, the “single information point” tool has been established to facilitate access to information, procedures, and services necessary for access, installation, and maintenance of electronic communications infrastructure. The functions of the single information point will be performed by one or more competent bodies appointed by the Member States at national level. The single information point simplifies interactions between operators and the competent authorities, enhancing their coordination.

As stated in the Regulation, a digital infrastructure based on very high-capacity networks constitutes the pillar for “almost all sectors of a modern and innovative economy,” promoting “innovative services, more efficient business operations and smart, sustainable, digital societies, while contributing to achieving the Union climate targets” and it is “of strategic importance to social and territorial cohesion and overall for the Union’s competitiveness, resilience, digital sovereignty and digital leadership.”

The Regulation will apply from 12 November 2025.

Authors: Massimo D’Andrea, Matilde Losa


Innovation Law Insights is compiled by the professionals at the law firm DLA Piper under the coordination of Arianna Angilletta, Matteo Antonelli, Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D'Angeli, Chiara D'Onofrio, Federico Maria Di Vizio, Enila Elezi, Alessandra Faranda, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Miriam Romeo, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D'Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, or Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about "Transfer," the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
