
New year’s fatigue? Or possibly AI fatigue? But the new year has only just begun! It does seem like the topic of AI and copyright was everywhere in the copyright world last year. While some digital topics have been known to cause a great commotion in copyright circles only to later sink practically without a trace, unless I am mistaken, the issue of the copyright implications of AI is different.

One AI topic which has so far been examined in depth under EU copyright law in only a few instances is copyright infringement by generative AI and the associated liability. Two aspects need to be looked at separately: when does AI output constitute an infringement, and who is liable for copyright-infringing AI output?

 

(1) When does AI output constitute an infringement?

In my view, the existing rules should apply in answering this question. AI output can be deemed a rights-infringing reproduction if it is identical to the original work or if the original can be recognised in it. The CJEU based its ruling on this issue in “Pelham” on the aspect of recognisability when it came to the related right of the record producer under Article 2 of the InfoSoc Directive (2001/29) (C-476/17 – Pelham). The same should apply to the author’s right of reproduction (see, e.g., German Federal Court of Justice (BGH) GRUR 2022, 899 – Porsche 911, which references the Pelham case law of the CJEU, C-476/17), even though a final decision from the CJEU on this point, in a case referred by a Swedish court (C-580/23 – Mio i.a.), is still pending.

That said, there may be a limitation on what constitutes a copyright infringement under the current rules even where the AI output is identical to the original or the original is at least recognisable in it. This would be the case where the generative AI has not been trained using the original, a situation which, for works by a human, would be described as “independent (double) creation”. Should AI systems be able to benefit from the defence of accidental independent creation? This question needs an answer. If the defence of independent creation is allowed, the burden of proving that the original work was not used to train the generative AI could, given the circumstantial evidence to the contrary, lie with the party invoking that defence. In Germany, for example, this would be in line with the rules on the burden of proof in cases of independent creation: the burden of proving that the younger work was created independently of the older work lies in principle with the author of the younger work. By way of exception, this rule does not apply if the older work has very little originality and the younger work shows substantial differences from it (see Axel Nordemann in Fromm/Nordemann, Commentary on the German Copyright Act, 12th edition, Section 24 paras 64-65, with examples from German case law).

 

(2) Who is liable for copyright-infringing AI output?

The question as to the liability of the user of the AI output is relatively easy to answer where the user makes use of the AI output in a manner which has copyright relevance. Here, the general rules apply. Anyone reproducing AI output (Article 2 InfoSoc Directive), distributing it (Article 4 InfoSoc Directive) or communicating it to the public (Article 3 InfoSoc Directive) is liable in accordance with the existing rules.

This brings the case law of the CJEU on the concept of communication into play. Under that case law, even those who only indirectly cause a communication can be deemed to have performed an act of communication. The requirements are (1) an indispensable role in the act of communication and (2) the “deliberate nature of the intervention”. Although “deliberate” may sound like it means “intentional”, the latter requirement can be satisfied by a merely negligent violation of certain duties of care (C-682/18 and C-683/18 – YouTube and Cyando). This concept has now also been adopted in the national legal systems of the Member States, for example by the German BGH (Federal Court of Justice) (see our previous post here).

Another question is who is actually liable for the AI output itself. Is the AI operator liable? There is currently no specific operator liability at EU level for copyright infringements in the area of generative AI. However, AI operators could be held liable under the general rules, albeit normally only for unauthorised reproduction in the form of the AI output (Article 2 of the InfoSoc Directive). In the case of software and hardware providers who have no way of influencing users, the CJEU has decided that the aforementioned principles do not apply (C-426/21 – Ocilion). The German BGH has likewise repeatedly emphasised that software providers cannot be liable as perpetrators, because the software user is generally the perpetrator with control over the infringement (I ZR 32/19 – Internet-Radiorekorder).

However, there are a number of aspects which would point to not simply applying the case law on the use of software directly to copyright-infringing AI output. Rather, a differentiated approach seems more advisable. Providing an AI system involves more than just providing software that allows users to create reproductions at their own discretion. The AI system can significantly determine the content of the output. One idea would therefore be to attribute the reproduction according to who determines the focus of the content.

  • If the AI is merely a technical tool of the user and the focus of the determination lies with the AI user (e.g. through its prompts), only the AI user can be considered as a perpetrator.
  • However, the situation should be different if the focus of determining the content lies with the generative AI. In that case, the reproduction and the liability as perpetrator could be attributed to the AI operator. For example, this would be the case if the AI user has only given very minor specifications in their prompts.

The liability of the AI operator according to these principles should not be excluded by the fact that the generative AI produces the rights-infringing AI output in an automated process. For other automatically generated content – for example, editorial result lists with thumbnails in search engines – the system operator may nevertheless be held liable.

Even where the generative AI creates the rights-infringing output outside the operator’s control over the infringement, one should not rule out all liability on the part of the AI operator. After all, the operator remains an indirect cause of the infringement. One must therefore consider whether the above-mentioned CJEU liability model from YouTube and Cyando can also be applied here.

If this model is to be applied to the liability of AI operators, the CJEU liability model from YouTube and Cyando would need to be extended to cover infringements of the right of reproduction under Article 2 of the InfoSoc Directive. Until now, the CJEU has applied it only to the right of communication to the public under Article 3 of the InfoSoc Directive. There are many arguments in favour of such an extension: even with the fully harmonised right of reproduction under Article 2 of the InfoSoc Directive, the question of who is doing the reproducing should not be left to the EU Member States. In this respect, the same applies as for the fully harmonised right of communication to the public under Article 3 of the InfoSoc Directive.

When applying the liability model, it seems appropriate to attribute to generative AI an indispensable role in the infringement of the right of reproduction. Generative AI is even more closely involved in the infringement than video platforms, which the CJEU confirmed as playing an indispensable role in YouTube and Cyando. The duties of care of AI operators in the course of trade, which determine the deliberate nature of their actions, must of course be proportionate. While the mere fact of automation and autonomisation cannot eliminate liability in all cases, it can have a mitigating effect when it comes to defining duties of care, particularly in the case of desirable business models. One should consider whether the three duties of care developed by the CJEU for video platforms (para. 102 – YouTube and Cyando) can be applied in an adapted form to operators of generative AI systems.

 

Conclusion

We copyright lawyers should not shy away from examining and investigating AI topics. The question of liability for rights-infringing output of generative AI, for example, offers plenty of meat for discussion. A happy and successful new year to all!

This is an adapted version of a German language editorial by the author for the German IP journal Gewerblicher Rechtsschutz und Urheberrecht (GRUR), Volume 1/2024. The author would like to thank Adam Ailsby, Belfast, (www.ailsby.com) who authored most of the English translation.

