
My university, like so many others, is offering prompt engineering lessons to both students and faculty. From what I can see, the same is true at high schools around the world. Cool professors have already modified their exams to ask students to write a prompt that could be submitted to a Large Language Model (LLM). Professors who may try to be cool but aren’t (like yours truly) have given their students an answer prepared by ChatGPT and asked them to spot hallucinations, correct mistakes, and generally comment on the quality of the machine’s output.

“Prompt,” of course, is one of those funny words with many meanings. A prompt is an instruction given to an LLM, but to prompt can also mean to encourage, and prompt can describe someone who is quick or on time (“be there at 4 o’clock prompt”). With that in mind, humans are promptly becoming prompt engineers. Perhaps that is the future of many cognitive jobs. But then surely there will be an AI to help us engineer prompts.

As teachers, we must encourage the use of LLMs, lest we be considered so 2020, or worse. LLMs are everywhere. They are being sold to lawyers, the film and music industry, and news outlets. They will be able to perform a vast range of cognitive jobs, or at least significant parts of them. I discussed the risks to human progress of letting machines create cultural and journalistic content in a 2021 post on this blog. As I wrote then:

Literature in all forms, fine arts and music are among the most important vehicles to both mirror and propagate those changes throughout society. If those cultural vehicles are made of art, books and lyrics created by AI machines, then those machines will control at least a part of cultural, societal and political change. Think of it as self-driving culture, and it will be a U-turn as far as human evolution is concerned.

In that post, I also predicted that many media companies would try to reduce and possibly eliminate human authors because machines are “not owed royalties.”

Events have developed faster than I could have imagined. There is now a (serious) debate among experts about the singularity–not as sci-fi this time–and about how we will know that a machine has become self-aware. Books and articles suggesting we give robots “rights” abound. Unfortunately, this debate often obscures both the need to regulate AI using existing tools and the need to develop regulatory solutions that target what AI actually does now, not (just) its possible evolution towards causing our extinction. Machines care about their code, not human laws. That is true now, but if and when a machine ever achieves Artificial General Intelligence (AGI), it likely won’t give a tinker’s damn about courts, injunctions or whatever some legislator dictates. So perhaps we should refocus.

Let’s take a simple problem: copyright. As we eagerly transform ourselves into prompt engineers, several copyright questions come into sharper focus. First, machines and humans are different. Human authors need time to create. They need time to hone their abilities and develop their craft. Machines are, well, prompt compared to humans–at least once they’ve copied all existing content (for example, the copying of tens of thousands of full-length books without permission or payment).

The question of copyright in the prompts themselves inevitably surfaces. After all, engineers like to protect their outputs. Can a prompt be protected as a work of authorship? It should be, if (a) it is created by one or more humans; (b) it is not de minimis; and (c) it embodies creative choices. The hard question is whether those creative choices–the originality, if any, of the prompt–are “transferable” into the product or output of the AI machine. Owning a prompt could then mean owning all outputs generated by the machine (which can generate dozens and dozens of outputs based on the same prompt, in various “genres,” “styles,” etc.). This gets dangerously close to owning the underlying idea, and thus runs against a fundamental principle of international copyright law: the idea/expression dichotomy.

One should also consider excluding the (potentially numerous) functional elements of the prompt from the scope of protection. In this context, one might draw on case law on the protection of software, which, like prompts, contains instructions designed to make a machine perform a task but still has protectable aspects.

The relevant case law on originality “transfers” is scant. Possibly noteworthy is a 1995 ruling by the Chancery Division in England recognizing that the author of house designs who had indicated “precisely which features were to be incorporated in each house design” and “marked all the modifications he wished to incorporate in the final drawings” had imprinted his originality on the final drawings even though he had not produced those drawings himself. We can therefore at least imagine, in theory, a scenario in which the prompt’s originality (subject to the three conditions above) would be sufficiently reflected in the AI machine’s product. This should, however, remain a priori a rather exceptional case, one in which the prompt is very detailed and the machine is essentially left to execute the instructions it contains.

A different situation, but one with some analytical similarities, arises when authors, particularly in the visual arts (Jeff Koons springs to mind), give hired “craftsmen” very precise instructions. Those craftsmen are typically not considered coauthors, although this may reflect a certain understanding in that industry. Otherwise, if the originality of the instructions is not sufficiently reflected in the machine’s product, there is no protected work in the output. That should be the default position, as I see it at least.

