COMMUNIA and Gesellschaft für Freiheitsrechte co-hosted the Filtered Futures conference on 19 September 2022 to discuss fundamental rights constraints of upload filters after the CJEU ruling on Article 17 of the Directive on Copyright in the Digital Single Market (CDSMD). This blog post is based on the author’s contribution to the conference’s third session “Beyond the Judgement: The Future of Freedom of Expression.” It is published under a Creative Commons Attribution 4.0 International licence (CC BY 4.0).
On 26 April 2022, the Court of Justice of the European Union (CJEU) delivered its decision in Case C-401/19 Republic of Poland v European Parliament and Council of the European Union, concerning Article 17 of the Copyright in the Digital Single Market Directive (the CDSM). The Polish challenge centred on the argument that Article 17(4)(b) and (c) require ex ante preventative monitoring of all user uploads by content sharing service providers and are therefore incompatible with Article 11 of the EU Charter of Fundamental Rights. On its face, however, the CDSM is silent on the need for ex ante preventative monitoring. Rather, Article 17(4)(b) and (c) require that such providers employ their ‘best efforts’, in accordance with ‘high industry standards of professional diligence’, to ensure the unavailability of protected works. Prior to the Polish challenge, the conclusion reached by commentators on Article 17 – in particular the academic community – was that these requirements pointed in one direction only: the adoption of ex ante filtering technology. Naturally, this conclusion did not manifest in a vacuum. It is well known that Article 17 was underpinned by European policymakers’ concern to remedy the so-called ‘value gap’, and by a desire that content sharing service providers follow the likes of YouTube in deploying filtering solutions such as its Content ID system. Indeed, the final non-prescriptive wording of Article 17 reflects a compromise over the concern that expressly mandating filtering technology was a step too far in burdening content sharing service providers and their users alike.
The Polish Challenge: Throwing Down the Gauntlet to the Court of Justice
Against the backdrop of this implicit European policy embrace of algorithmic enforcement of copyright, the Polish challenge can be viewed as a throwing down of the gauntlet before the CJEU: make this implicit policy express, or annul Article 17 in part or in full. The choice facing the Court was either to fill the policy gap behind the compromise wording of Article 17 and justify upload filters under a fundamental rights framework, or to identify alternative preventative measures that would satisfy the ‘best efforts’ criterion of Article 17(4)(b) and (c). In this respect, the challenge rested on the widely held assumption that upload filters would be incompatible, in particular, with the norms of freedom of expression – not least because current filtering technology is incapable of sufficiently distinguishing between legal and illegal uses of content relative to the exceptions and limitations under the copyright acquis. In light of this, the Polish challenge asserted that Article 17 lacked sufficient safeguards to protect internet users’ right to freedom of expression. While recognising that Article 17 does mandate the use of upload filters, the CJEU nevertheless rejected the Polish challenge, confirming the compatibility of Article 17 with Article 11 of the EU Charter of Fundamental Rights and laying considerable emphasis on the safeguards that exist within Article 17.
These safeguards are intended to operate both ex ante and ex post. Under Article 17(7), the cooperation between content sharing providers and right holders that precedes the deployment of a filtering solution – that is, the provision of information from the latter to the former – shall not result in the blocking of legal content. Article 17(4)(b) directs that right holders provide ‘relevant and necessary information’, which according to the Court protects freedom of expression, since content sharing providers will not, in the absence of such information, make content unavailable. Moreover, the Court reiterated that a system that cannot sufficiently distinguish between lawful and unlawful content – in particular, content whose illegality would require a value judgement – would not be compatible with Article 11 of the Charter. In short, the Court narrowed the scope of Article 17 to manifestly infringing content only. Article 17(9) establishes the first ex post safeguard: in the event that content is ‘erroneously or unjustifiably’ blocked, content sharing providers must put in place effective and expeditious complaint and redress mechanisms. Moreover, the Member States are to ensure access to out-of-court redress mechanisms and recourse to the courts. The Court was therefore satisfied that Article 17 did – contrary to the Polish claim – contain sufficient safeguards to ensure compatibility with Article 11 of the Charter. (Many) questions remain, however – not least the question as to the ‘essence’ of freedom of expression in this context.
The Fact of Safeguards Tells Us Nothing About the ‘Essence’ of Freedom of Expression
Adopting a classic proportionality formula, the Court’s position is supported by its jurisprudence and by Article 52(1) of the Charter. Article 52(1) allows fundamental rights to be limited where justified and subject to minimum safeguards, while Promusicae makes clear that internet users’ fundamental rights can be limited under a balancing exercise to support copyright enforcement. In turn, the Court had little difficulty in establishing that the ‘essence’ of freedom of expression in the context of Article 17 was respected, in particular by the minimum safeguards under Article 17(7)-(9). In doing so, it drew on rulings of the European Court of Human Rights holding that prior restraints on freedom of expression are permissible, subject to a particularly tight legal framework. However, the fact of safeguards tells us nothing about their substance, particularly their actual or likely efficacy in practice in protecting internet users’ right to freedom of expression. Following the ruling, it now falls to the Member States to answer the question of how these safeguards are to work so that internet users’ fundamental rights are respected. Therein lies the problem. In framing the essence of the right as unaffected by prior restraints and insulated by minimum safeguards, the Court fails to illuminate the degree to which such restraints may impact the right, or indeed the degree to which the minimum safeguards will need to be calibrated to protect freedom of expression.
The objection, of course, is that this should not be the function of the Court but rather of the Member States. Indeed, the same argument could be made concerning the Court’s filling of the policy gap by reading upload filters into Article 17 in the first place. In short, we are where we are because of the Court’s ruling – and we now face a dilemma. Article 17 envisages that legitimate content should not be blocked, and the Court reminds us that a system that cannot sufficiently distinguish legal from illegal content will not comply with Article 11 of the Charter. Yet the ex post safeguards within the provision presuppose this precise scenario – that legitimate content will sometimes be blocked – giving rise to questions that put into sharp focus the lack of guidance from the Court as to the substantive essence of freedom of expression in this context. Even with the Court attempting to limit the overall scope of such filtering, mistakes will happen. What then of redress?
What if Time is of the ‘Essence’ of Freedom of Expression under the CDSM?
Let us assume the content was legal but erroneously blocked – how long must the block persist before it becomes disproportionate, in a scenario where apparent safeguards mean it should not have occurred at all? Does it matter? After all, a safeguard is not a guarantee per se, and if it were treated as one – implying blocks could never occur – imbalance would surely follow. But this is precisely the point: if, under classic proportionality balancing, a block can occur, the redress mechanism must still be effective and, as Article 17(9) provides, operate without undue delay and be subject to human review. The irony is that the inability of human review to act without undue delay under Article 14 of the eCommerce Directive is one of the main arguments in favour of upload filters. What, then, is a proportionate delay where the most appropriate redress is removal of the block? The answer may well prove elusive, the task being to apply objective standards to expressions that are by their nature inherently subjective and relative to the context in which they are made.
Indeed, coming within an exception or limitation only tells us the category of the expression – it says nothing as to the value of that expression to the user relative to a myriad of factors, from the imprint of the user’s personality to the expression’s temporal resonance with society and culture at the point of upload. When it comes to freedom of expression in this context, time – and not the existence of safeguards – may well be the starting point for discerning the ‘essence’ of the right. Indeed, as we move towards ever more filtered futures, we need a surer understanding of the ‘essence’ of freedom of expression in this context, beyond the mere existence of safeguards. For the implementation of Article 17 in light of the Court’s ruling, ex ante mechanisms that allow prior user input to challenge any potential block appear a necessary minimum mitigation measure to protect internet user rights. But even this only delays the inevitable question where a block is imposed on what turns out to be legal content: to what degree is time of the essence?
If the normative position following the Court’s ruling is that Article 17 is to be implemented, the assumption is that some delay is inevitable in resolving an end user’s complaint. If we accept that the essence of freedom of expression is contingent on the existence of safeguards that include redress, then this is unproblematic. However, in balancing copyright and freedom of expression in this context, the most proportionate (and appropriate) redress is the expeditious removal of the block, relative to the factors described above. If an objective standard remains elusive precisely because of those factors, can we find balance? Classic fundamental rights balancing would still suggest an answer in the affirmative; after all, an end user can still avail themselves of out-of-court settlement procedures and the courts. However, if it is accepted that in this context the ‘essence’ of the right comes down to individualistic, nuanced and relative factors, such avenues appear wholly inadequate where time is of the essence. Following the Court’s ruling, the question of time may be derided as an abstract consideration that sits in contrast to the normative position of implementation. On the contrary, it is precisely because of the Court’s ruling that difficult questions remain surrounding the balance between freedom of expression and copyright in this context. In protecting end users, a useful starting point would be to identify what the real ‘essence’ of freedom of expression is in this context. If, as the foregoing suggests, time is of the essence, this invites the question: is balance in practice, on these terms, possible under Article 17?