In May, the European Copyright Society (ECS) held its annual summit in Brussels, under the title “EU copyright, quo vadis? From the EU copyright package to the challenges of Artificial Intelligence.” The summit covered many of the hot topics on today’s copyright agenda, including the proposed directive on Copyright in the Digital Single Market. This post, however, focuses on the afternoon session, dedicated to the challenges posed to copyright law by artificial intelligence (AI), especially in the EU.

The discussion was divided into two panels. The first panel debated the impact of AI on copyright issues, focusing on possible regimes and criteria for protection. The second panel included presentations on other issues, such as moral rights, digital rights management (DRM), and private international law.

The initial panel was chaired by Professor Marie-Christine Janssens, who set the scene for the discussion. She recalled Ray Kurzweil’s definition of AI as “the science of making computers do things that require intelligence when done by humans.” Such things could include, for example, creating copyright-protected works. Examples of non-human creations abound, such as Google DeepMind’s piano-playing AI or the Next Rembrandt project. The question that arises is: where is the author’s “own intellectual creation” in works produced by computers or robots? A 2017 European Parliament Resolution calls for the elaboration of criteria in this respect. But what is the best regime for protection? Should we recognize a non-human copyright? Or perhaps a new neighboring right for producers? Should we follow the UK approach and think about protection for AI assisted works, or instead consider protection for AI generated works?

These questions were taken up by Professors Tatiana Synodinou and Reto Hilty, who focused their presentations on the topic of criteria for protection.

Synodinou’s starting point was the definition of autonomy as applied to AI: “autonomous agents are able to generate new ideas and to produce new forms of expression through the use of software which mimics the configuration of human neural networks.” Under international copyright law (Art. 2(6) of the Berne Convention), “protection shall operate for the benefit of the author.” In other words, there is a general principle that the author must be a natural person, despite some deviations from this principle, exemplified by the protection of software, databases, and films. But is originality a suitable criterion for the protection of AI generated works under copyright law? More precisely, what are the machine’s “free and creative choices” that make its expressed output original? The answer is not simple: the AI itself cannot be considered to make such inherently human choices, and the link between it and the human programmer is not strong enough to conclude that the latter determines the final expression of the work. If authorship does not provide adequate criteria for protection, could a sui generis regime do so? Drawing parallels with the database producer’s right, Synodinou argued that the most suitable rationale for the protection of AI generated works would be investment protection.

For his part, Hilty noted two possible justifications for recognizing legal protection and granting exclusive rights in AI generated works. The first is “personality-related”. Under this rationale, however, a human creator can be found only in the software that constitutes the initial input to the creation of an AI, not in any of the machine-generated outputs, which would not qualify as “works” under copyright law. The second justification is economic and, as mentioned by Synodinou, centers on the protection of investment and the need to avoid a “market failure” in the absence of legal exclusivity. Whether such a market failure exists, though, should be assessed by economists. For Hilty, criteria for protection in the field of AI should focus on the inputs required to create and develop AI systems. Such systems rely on machine learning, which in turn involves acts of reproduction of copyright-protected works. Thus, similarly to text and data mining, if the law aims to promote the development of AI, it should enable the use of copyright-protected works for that purpose.

The topic of copyright ownership in AI was addressed by Professor Ole-Andreas Rognstad. At first glance, ownership in copyright is a relatively straightforward conceptual issue, provided there is a “causal link between the copyrightable ‘input’ and the result”. However, that link is difficult or impossible to discern for output generated by more developed AI systems. Under current EU law, this would prevent copyright protection of the output, as the “free and creative choices” behind it are not a causal result of a human action but rather attributable to the AI system. Furthermore, as the AI system is not a legal entity, it cannot claim ownership. The result would therefore be a “no ownership scenario” for AI generated outputs.

Rognstad then discussed possible alternatives to this scenario. The first is to allocate ownership to the AI system. However, there appears to be no solid justification for doing so, either from the perspective of incentive theory or from that of recognizing legal personality for AI systems. The second is to treat AI generated outputs as “works made for hire”, as recognized e.g. in US law (Sec. 101 of the Copyright Act), and to create a legal fiction that the AI system is “employed”. Still, this approach does not fit neatly into the EU legal system, under which it might make more sense to recognize “sui generis” solutions. These could center, for example, on the allocation of rights to the (i) producer, (ii) owner, or (iii) user of AI systems. In the end, however, it is challenging to justify copyright protection for AI generated outputs or the need to define novel ownership rules. In fact, Rognstad wondered whether AI “creations” should not be deemed part of the public domain, with the legal regime allowing interested parties to invoke (national) rules external to copyright, such as unfair competition law.

Professor Lionel Bently discussed the usefulness of the UK’s provisions on computer-generated works as a model to protect AI creations. Under s. 178 of the Copyright, Designs and Patents Act 1988 (CDPA), a work is “computer-generated” if it is generated by a computer in circumstances such that there is no human author of the work. Under this regime, ownership of the work belongs to the person who undertook the arrangements necessary for its creation, the term of protection is limited to 50 years, and no moral rights are recognized.

Bently was of the view that the UK regime is not a useful model for the protection of AI generated works. The main reasons for this are: its incompatibility with the EU copyright acquis (although protection through related rights appears possible); its failure to address the issue of originality; its failure to produce sufficient legal certainty (pointing here to the interesting research of Andres Guadamuz and Ana Ramalho); and, as argued by Professor Jane Ginsburg, the fact that it is unnecessary, if for no other reason than that it is not required by international law. For Bently, it is important not to miss the big picture and to consider what the recognition of AI works would mean for copyright. In particular, would AI change the market for copyright content through the low-cost mass production of works that look and function like authorial works? That is the question we need to think about, not the question of “who the author is.”

The topic of AI and moral rights was presented by Professor Valérie-Laure Bénabou. Moral rights are not harmonized in EU law, and there seems to be no generally accepted conception of which rights should be given to whom. Moral rights can function both as impediments to and as enablers of AI. As impediments, they could limit the creation of outputs by AI, notably as regards the processing or display of embedded works, acts which may call into question, for example, the rights of integrity or attribution. As enablers, she discussed the interesting possibility of creating a “sort of” moral right for AI, including the legitimacy, meaning, and enforceability of such a right. If recognized, this would imply a fundamental change in the nature of moral rights.

The second panel of the afternoon discussed other issues related to AI generated works, namely DRM and private international law. The presentation on AI and DRM was delivered by Professor Raquel Xalabarder. She started by pointing out that the European Commission’s 2018 Communication on “Artificial Intelligence for Europe” makes no reference to copyright. She then noted that AI projects involve at least three components (inputs, processors, and outputs), each of which may implicate copyright in works, related subject matter, software, and databases. Where that is the case, we must also consider the applicable rules on exceptions and limitations, as well as on DRM circumvention.

Examining this landscape, Xalabarder argued that there is a need for licensing of software and datasets for AI. Both companies and governments are fostering the development of AI projects through open source models, such as the EUPL for software (an open license with a copyleft provision), Creative Commons SA-NC-ND for works and datasets, and CC0 for Public Sector Information. Whereas the last license is truly flexible and interoperable, the first two are not, and they impose downstream obstacles: to the reuse of AI results (which may qualify as derivative works); to interoperability (e.g. as a result of EULAs); and in the form of unequal opportunities for market agents (as those with greater economic power can afford to license inputs). Still, while licensing is important, it is not sufficient to “place the power of AI at the service of human progress.” To do so, Xalabarder argued, the copyright system should recognize strong, and possibly remunerated, exceptions or limitations, which cannot be overridden by contract or DRM, so as to enable specific machine-reading uses of works that do not have a negative impact on the building of a strong and competitive European AI sector.

Lastly, Professor Marco Ricolfi discussed the private international law perspective. In particular, he looked at which courts would deal with AI copyright matters, how to select the applicable law, and what the forum would be. Following the approach of Professor Graeme Dinwoodie and others, Ricolfi proposed that these issues be examined not through the traditional theories of conflict of laws (e.g. in the EU, “rule-bound territoriality principles”), but rather from the perspective of “processes which lead to global law-making in a digital era”. In this respect, he identified three basic legal tools: consensus or international conventions; regulatory competition; and coercion. In his view, since copyright and patent laws are highly harmonized internationally, our focus should not be on whether the conflict of laws rules are optimal, but rather on whether and how, from a political viewpoint, they could be adopted and enforced.

Concluding remarks

In closing, AI certainly poses challenges for copyright. However, the magnitude of these challenges is uncertain. Since we have not reached the moment of singularity, we must assess the legal implications of AI systems that function, rather than think. From the perspective of copyright law, this means recasting a recurrent question whenever new technologies affect the use and exploitation of works: “are new formulations of [criteria for protection and] rights required, or do old formulations still hold good, necessitating only a flexible interpretation to apply to those changed conditions?”[1] This ECS summit provided some much-needed debate on the topic but, as with all things academic, more research is needed.

——————————————

[1] Sam Ricketson, The Berne Convention for the Protection of Literary and Artistic Works: 1886–1986 (London: Centre for Commercial Law Studies, Queen Mary College; Kluwer, 1987), pp. 436–437 (referring to new modes of communication).

