The new Directive on Copyright in the Digital Single Market (“DSM Directive”) was a controversial piece of legislation. Notably, its article 17 has raised many concerns about its impact on fundamental rights, particularly freedom of expression. In contrast to the mostly declarative or procedural guarantees included in the directive, I argue that an effective protection of freedom of speech and of exceptions and limitations (E&L) to copyright requires a more ambitious approach, inspired by the “privacy by design” paradigm, which I call “Free Speech by Design”. This implies explicitly discussing issues which the European lawmaker carefully avoided, such as the concrete integration of the fundamental rights framework into the algorithmic filtering systems set up by online platforms. This “Free Speech by Design” model could be followed either by platform providers or (preferably) by national legislators in their implementation of the directive, in the spirit of the recently released discussion draft for the German implementation of article 17.

To recall, article 17 provides that “online content-sharing service providers” (OCSSPs) making available content uploaded by their users perform an act of communication to the public. They can only avoid liability for unauthorized uploads by complying with a “best efforts” obligation to obtain an authorization and to take preventive measures to “ensure the unavailability” of content for which they received the “relevant and necessary information”, as well as with a notice-and-stay-down obligation (art. 17(4)). While the drafters of article 17 took care to remove any mention of “effective technologies” such as content-recognition algorithms, art. 17(4) indubitably creates an indirect obligation of algorithmic filtering for OCSSPs, as the massive amount of content uploaded to these platforms every day makes such duties excessively costly to carry out through human review.

After many concerns were raised by academics, NGOs and the UN Special Rapporteur on freedom of expression, article 17 was only adopted by the European lawmaker after its final drafting had gradually evolved to include several formal guarantees of users’ rights and freedom of expression. These guarantees include, for example, the obligation that art. 17(4) “shall not result in the prevention of the availability of works” which are non-infringing, such as those “covered by an [E&L]” (17(7), para 1), the requirement that Member States “ensure that users in each Member State are able to rely on any of the following existing [E&Ls] when uploading [content on OCSSPs]”, such as the exceptions for quotation and for parody (17(7), para 2), and even the very strong requirement that “This Directive shall in no way affect legitimate uses, such as uses under [E&Ls] provided for in Union Law” (17(9) para 3). Article 17 also provides for procedural safeguards, such as the availability of a “complaint and redress mechanism” for users, or the right for users to access a court to assert an exception or limitation (17(9), para 2).

However, due to the power imbalances between affected parties, the fear of litigation and the lack of incentives for OCSSPs, these formal guarantees and procedural safeguards will likely not suffice to provide an effective protection for E&Ls and for users’ right to freedom of expression, as fifteen years of legal literature on the InfoSoc directive, the e-commerce directive and the US DMCA notice-and-take-down regime (summarised here) attest. Moreover, the very general and vague principles contained in these legal guarantees and safeguards imply an obligation for Member States not merely to copy such provisions verbatim, but to concretely implement these principles in their legislation by striking a fair balance between competing fundamental rights. This is where Free Speech by Design enters the picture.

The starting point is similar to the “Privacy by Design” paradigm: given the increasing risks that technology poses to fundamental rights, it is essential to take fundamental rights and values into account early on in the design of technological systems (Schaar 2010). However, when public authorities attempt to regulate through technology, they often rely on opaque, unaccountable, one-sided and overbroad technological fixes (see Mulligan & Bamberger 2018), which obscures policy choices and sacrifices a number of fundamental values. The complex mechanism by which article 17 of the DSM directive implicitly delegates to OCSSPs the duty to implement algorithmic copyright enforcement systems is clearly an instance of such a misguided and imbalanced attempt at regulation through technology.

Therefore, I argue that national implementations of the DSM directive should aim at protecting freedom of speech (as well as fundamental rights more generally) by default in the design of the technological systems adopted in application of article 17. In my article “Free Speech by Design”, I propose an approach (inspired by Cavoukian 2009 and Elkin-Koren 2017) guided by the following basic principles: 1) preventing interferences rather than providing remedies, 2) embedding free speech into the design of algorithmic systems, 3) balancing legitimate interests, and 4) ensuring the visibility and explainability of speech-affecting technologies.
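To make the second principle more concrete, the following is a minimal, purely illustrative sketch of what a free-speech default embedded in an upload filter could look like: a match is only blocked automatically when it is manifestly infringing, while anything that plausibly falls under an E&L (a short quotation, added commentary or other transformative signals) stays online and is routed to human review. The signals, names and thresholds are hypothetical assumptions of mine; they describe neither the text of the directive nor any existing platform’s system.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Decision(Enum):
    KEEP_ONLINE = "keep online"                        # default: no interference with speech
    KEEP_ONLINE_PENDING_REVIEW = "keep online, flag for human review"
    BLOCK = "block (manifestly infringing)"


@dataclass
class Match:
    """Hypothetical output of a content-recognition tool."""
    matched_fraction: float            # share of the upload matching a reference work
    transformative_signals: bool       # e.g. added commentary, remixing, parody cues
    valid_rightsholder_request: bool   # blocking request with "relevant and necessary information"


def filter_decision(match: Optional[Match]) -> Decision:
    """Free-speech-by-design default: interference is the exception, not the rule."""
    if match is None or not match.valid_rightsholder_request:
        return Decision.KEEP_ONLINE
    # Plausible quotation or parody: keep the upload available and let a human decide,
    # so that uses likely covered by an E&L are not blocked automatically.
    if match.matched_fraction < 0.5 or match.transformative_signals:
        return Decision.KEEP_ONLINE_PENDING_REVIEW
    return Decision.BLOCK
```

The point is not the particular rules or numbers, but the design choice they embody: in cases of doubt, the default outcome is availability rather than blocking.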

The need for such an approach is reinforced by the trend towards minimalist implementations of the DSM directive, with some Member States either copying the directive verbatim, without attempting to concretely implement the users’ rights guarantees, or omitting those guarantees altogether on the ground that they are redundant with the applicable law.

This shirking by Member States of their role in implementing the directive obviously undermines the relevance of European directives as a legislative tool, and legal certainty in general. But more fundamentally, it also amounts to the opaque, unaccountable form of algorithmic regulation in which the delicate balancing of the fundamental rights and interests at stake in copyright cases is left entirely to private actors (and in some cases, ultimately to the CJEU).

Therefore, the German lawmaker should be commended for attempting to concretely implement guarantees and safeguards of users’ rights and freedom of expression, in a discussion draft analysed on this blog by Julia Reda. While the proposal can still be improved, it represents a worthy attempt at preventing interferences with free speech before they occur, notably by specifying conditions under which the blocking or removal of content is prohibited, by setting quantitative thresholds below which uses should not be subject to preventive measures (illustrated in the sketch below), and by allowing service providers to police abusive claims. This is clearly in line with the sort of public, accountable framework for algorithmic regulation supported by the Free Speech by Design approach. Let us hope that other Member States follow Germany’s lead in taking users’ rights seriously, by going beyond purely declarative or procedural guarantees and instead delving into the concrete balancing of the algorithmic filtering systems that article 17 indirectly mandates.
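By way of illustration only, such a quantitative rule could be expressed as a pre-check that exempts small-scale, non-commercial uses from automated blocking altogether, leaving rightsholders an ex-post complaint handled by a human. The thresholds and field names below are hypothetical placeholders of my own and are not taken from the German draft.

```python
from dataclasses import dataclass

# Hypothetical thresholds for "minor uses" -- placeholders, not the draft's actual figures.
MAX_AUDIO_VIDEO_SECONDS = 20
MAX_TEXT_CHARACTERS = 1000
MAX_IMAGE_BYTES = 250_000


@dataclass
class UploadExcerpt:
    """Hypothetical description of the portion of a protected work used in an upload."""
    media_type: str               # "audio", "video", "text" or "image"
    duration_seconds: float = 0.0
    text_characters: int = 0
    image_bytes: int = 0
    commercial: bool = False


def exempt_from_preventive_blocking(excerpt: UploadExcerpt) -> bool:
    """Small, non-commercial uses below the thresholds are never blocked automatically."""
    if excerpt.commercial:
        return False
    if excerpt.media_type in ("audio", "video"):
        return excerpt.duration_seconds <= MAX_AUDIO_VIDEO_SECONDS
    if excerpt.media_type == "text":
        return excerpt.text_characters <= MAX_TEXT_CHARACTERS
    if excerpt.media_type == "image":
        return excerpt.image_bytes <= MAX_IMAGE_BYTES
    return False
```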

For more details, see the full article here.

