Recently, the German photographer Boris Eldagsen won a category at the prestigious Sony World Photography Awards. After the winner was announced, Eldagsen disclosed that the image he had submitted to the competition was generated by an AI system, and he refused to accept the award. This has provoked a public discussion on whether AI should be attributed or otherwise disclosed when a work is generated by, or with the help of, AI.
From a copyright law perspective, there are a few interesting issues to consider. Under most copyright laws, including the Berne Convention, authors have a right to be attributed as authors of their works. The attribution right is, however, available to human authors only, not to AI. While courts have yet to provide a definitive answer on whether AI-generated works can be protected under copyright, it seems quite clear that an AI system cannot be the author of such works, as it lacks legal personality. Thus, AI systems cannot hold the rights that authors normally have, such as the right of attribution. This means there is no obligation, at least under copyright law, to attribute AI as an author or co-author of an AI-generated work.
The next question is whether it is lawful for humans to attribute themselves as authors of a work generated by AI. The answer is less clear, because it is often uncertain whether the human qualifies as the author at all. This will depend on whether the human contribution to the work is significant enough, or amounts to ‘independent intellectual effort’, a standard for originality (and authorship) used in many jurisdictions. If a human contributed only one brief prompt to a generative AI tool (e.g. ‘make a drawing of a horse with a hat’), that effort is unlikely to meet the originality standard, and the person is therefore unlikely to qualify as an author. Some people may nevertheless wish to attribute themselves as authors because, for instance, human authorship is a precondition for copyright protection and they want the work to be protected for commercialisation purposes. Others might believe that the work will sell better if a human author is listed, or may simply want others to think that they personally created the work, without significant involvement of technology.
In such situations, however, attributing a person as the author of the work might be incorrect and might infringe author-attribution rules. In some jurisdictions, such as Australia, copyright law contains a special rule prohibiting false attribution of authorship. One of the problems in enforcing this rule is that, again, only ‘authors’ have such a right: only the rightful creator and author of the work can claim misattribution. In the case of AI-generated content, an AI system would not have standing to claim misattribution, and it is questionable whether any other person would have standing to sue in such a case.
Apart from copyright law, unfair competition or consumer protection laws might be important in preventing misattribution of AI-generated works. For example, in Australia, consumer protection law prohibits ‘misleading or deceptive conduct’ (section 18 of the Australian Consumer Law). Arguably, claiming human authorship of a work that was fully or largely generated by AI might be misleading or deceptive. This may be increasingly relevant to consumers for whom it matters that a work was created by a human. However, it is unclear whether these provisions could be enforced in this context. For instance, section 18 applies only to conduct ‘in trade or commerce’, a requirement that will not be met when AI-generated works are created and disseminated in non-commercial settings.
As a final note, numerous ethical AI guidelines require AI to be transparent. For instance, the EU’s Trustworthy AI Guidelines require that “Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.” Australia’s AI Ethics Principles require that “There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them”. However, apart from being unenforceable, these guidelines do not make clear how far the transparency requirement extends or how it should be implemented in relation to AI-generated art. Should consumers be informed that they are engaging with an AI-generated work? If so, should this apply to all works generated with the help of AI, or only to some? When works are fully or largely generated by AI (i.e. when human contribution is minimal), it might be reasonable to require that the use of AI be disclosed to the public. If, however, the AI contribution was less significant – e.g. AI was used only in music mastering or the production process – disclosure is perhaps not necessary. Artists might also use numerous AI applications and platforms when generating their art, which makes transparency around AI use even more complicated.
Overall, while copyright laws do not currently require AI to be disclosed as an author or co-author of works generated with the help of AI, a broader public discussion is required to reach a consensus on whether such attribution – or transparency around AI – is desirable, and in which situations it should be made mandatory.