As 2020 unfolds, the European Commission’s stakeholder dialogue pursuant to Article 17 of the Directive on Copyright in the Digital Single Market (CDSM directive) enters its third (and likely final) phase. After four meetings that focussed on gathering “an overview of the current market situation as regards licensing practices, tools used for online content management […] and related issues and concerns”, the next two (or more) meetings will finally deal with issues raised by the provisions in Article 17 of the CDSM directive. According to the Commission’s discussion paper for the meetings of 16 January and 10 February 2020, the objective of the third phase “is to gather evidence, views and suggestions that the services of the Commission can take into account in preparing the guidance pursuant to Article 17(10)”.
In other words, after four meetings that have set the scene, the stakeholder dialogue will now address some of the thorny issues raised by Article 17. These include key concepts such as the best efforts obligations to obtain authorisation and to prevent the availability of content (Article 17(4)), as well as the safeguards for legitimate uses of content (Article 17(7)) and the complaint and redress mechanisms available to users (Article 17(9)). In preparation for these forthcoming discussions, it is worth recapitulating what we have learned since the stakeholder dialogue kicked off in October of last year.
Three takeaways from the stakeholder dialogue so far
After more than 25 hours of discussion (recordings of the four meetings can be found here: 1, 2, 3 and 4), there are three main insights that will likely have a substantial impact on the overall outcome of the stakeholder dialogue. These are the different motivations of different types of rightholders; the technical limitations of Automated Content Recognition (ACR) technologies; and the general lack of transparency with regard to current rights management practices. The first two of these are discussed in this post and the third will be covered in part 2, which will be published shortly.
Rightholders are divided by business model
On the rightholder side, the stakeholder dialogue is dominated by the music and audiovisual (AV) industries who, by and large, represent two completely different approaches to making their content available. While there are internal differences when it comes to the details of how they operate, rightholders from the music industry generally aim to license their works to as many users and intermediaries as possible. As a result, the various music industry stakeholders have been tireless in making it clear that, from their perspective, Article 17 is about licensing and that discussions about filtering and removal are a distraction.
On the other hand, rightholders from the AV industry have, by and large, made it clear that they are not interested in broad licensing of their content to platforms. Their business models are built on selectively licensing different distribution channels, and they see the general availability of their works on UGC platforms as a threat to their commercial interests. As a result, the various AV industry stakeholders expect to rely heavily on the obligation on platforms to make best efforts to ensure the unavailability of works. In other words, for the AV industry Article 17 is very much about filtering/blocking content.
Other rightholders present at the dialogue mostly align with one of these two positions. Rightholders from the photo and visual arts sectors are making the case that platforms will need to start licensing their repertoires (unfortunately for them both YouTube and Facebook continue to give them the cold shoulder), while literary publishers have sided with the AV industry in pointing out that broad availability of their works on UGC platforms runs counter to their commercial interests.
This makes it clear that, once put into practice, Article 17 will be about both licensing and automated filtering/blocking of content. In this context it is interesting to see that the music industry (which has been the driving force behind Article 13/17) gets to play the good cop (“it’s all only about licensing”) while the AV industry, which (at times reluctantly) supported the music sector in its efforts to get Article 13 adopted, will now be stuck with the bad cop role trying to push through automated filtering solutions despite all their shortcomings (see below).
One of the main challenges of the next meetings will be to build a common understanding of Article 17 that takes these very different perspectives into account. It is clear that Article 17 cannot be a vehicle to force specific business models on specific sectors. As such, it must remain possible for rightholders who wish to do so to keep content off the platforms, but, in line with the user rights safeguards established in Articles 17(7) and 17(9) of the CDSM directive, this must not affect legitimate uses of these works, for example when they are used under exceptions and limitations to copyright.
Given the scale of user uploads to UGC platforms, it is clear that ensuring the unavailability of content will require automated content recognition tools. But, given the shortcomings of such tools, it is equally clear that their use must be subject to strong user rights safeguards that will likely not meet the expectations of AV rightholders.
Automated Content Recognition technology is context blind
During the third and fourth meetings of the stakeholder dialogue there were six presentations from companies that either have in-house content recognition technologies (YouTube and Facebook) or that offer such technologies to platforms (Audible Magic, PEX, Videntifier and Smart Protection). All of these companies extolled the virtues of their content matching algorithms, claiming negligible numbers of false positives (incorrectly identified pieces of content) and boasting about their abilities to identify content even when it has been modified to avoid detection.
The matching capacities of the different systems are impressive and it is likely that this is also the case for the multitude of other products available in the market (music industry representatives made the claim that there are currently 42 different solutions available in Europe).
While matching audio and video content to reference files provided by rightholders is essentially a solved problem, this does not mean that automated content recognition (ACR) systems are capable of determining the lawfulness of a specific use of content.
Prompted by questions from representatives of users’ rights organisations, all six providers of ACR systems made it clear that their systems do not look at the context in which a use takes place and, as such, cannot make determinations of whether or not a use falls within the scope of an exception or limitation. This inherent limitation of filtering technology is succinctly captured in statements made by Facebook and Audible Magic at the fourth meeting of the stakeholder dialogue:
“Our matching system is not able to take context into account; it is just seeking to identify whether or not two pieces of content match to one another.” (Facebook, 16-12-2019)
“Copyright exceptions require a high degree of intellectual judgement and an understanding and appreciation of context. We do not represent that any technology can solve this problem in an automated fashion. Ultimately these types of determinations must be handled by human judgement.” (Audible Magic, 16-12-2019).
The technology providers participating in the stakeholder dialogue also made it clear that this situation is unlikely to change any time soon. This limitation of ACR technology will likely have a substantial impact on the discussions in the next phase as it means that, while ACR technology plays an important role in the monetisation of content available on platforms and is essential for revenue accounting, it is generally unsuited for fully automated filtering or blocking. Without the ability to assess the context in which a use takes place, current ACR technology cannot ensure that content used under exceptions or limitations remains available as required by Article 17(7) of the CDSM directive.
This also means that, in its current state, ACR technology meets the requirements of the music industry use case (licensing and revenue accounting), while it falls short of the requirements of the AV industry use case (blocking). This tension also needs to be addressed in the upcoming meetings.
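To make the context-blindness point concrete, here is a minimal, purely illustrative sketch of fingerprint matching in Python. It is not based on any vendor's actual system; the names, data structures and the 0.8 threshold are all hypothetical. The point of the sketch is that the matcher only measures how much an upload overlaps with a rightholder-supplied reference fingerprint; nothing in its inputs describes how the matched material is being used.

```python
from dataclasses import dataclass


@dataclass
class ReferenceWork:
    work_id: str
    fingerprint: set[int]  # hypothetical perceptual hashes supplied by the rightholder


def match_score(upload_fp: set[int], reference: ReferenceWork) -> float:
    """Share of the reference fingerprint that also appears in the upload."""
    if not reference.fingerprint:
        return 0.0
    return len(upload_fp & reference.fingerprint) / len(reference.fingerprint)


def acr_decision(upload_fp: set[int], catalogue: list[ReferenceWork],
                 threshold: float = 0.8) -> str | None:
    """Return the ID of the best-matching work above the threshold, or None.

    Note what is absent: no input captures the context of the use (parody,
    quotation, criticism, the length of the excerpt relative to the whole
    upload). The function can only say "these two signals look alike".
    """
    best = max(catalogue, key=lambda ref: match_score(upload_fp, ref), default=None)
    if best is not None and match_score(upload_fp, best) >= threshold:
        return best.work_id
    return None
```

A match result from a function like this can support licensing and revenue accounting (whose work was used, and how much of it), but a fully automated blocking decision built directly on top of it cannot, by itself, honour exceptions and limitations, which is precisely the gap between the music industry and AV industry use cases described above.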
The third takeaway from the dialogue so far – lack of transparency on all sides – will be discussed in Part 2 of this post, together with a look ahead to the next phase of the dialogue.
_____________________________
In SABAM v Netlog, the CJEU already held that filters cannot detect parodies.
Why do the supporters of upload filters ignore that?
@Henry: So Sabam vs Netlog was in 2012. I could imagine that some of the proponents of upload filters would have hoped that technology had advanced more than it apparently has. Another reason is probably that upload filters are already widely used today (in completely unregulated private rights management systems like YouTube’s Content ID) where their use is not really challenged. This is actually a point where the directive improves the current situation, as such systems will now need to comply with the user rights safeguards established by the directive.
As you correctly point out, all platforms completely ignore still photography, although it is a major source of their traffic. Unlike video or audio, there is no existing registry to serve as a reference, so one would need to be created. But who would build it, and how would it be structured to avoid creating a commercial monster? Otherwise, the issue is left in the hands of the unregulated collective rights management agencies, which are notoriously inefficient.
@Paul: thanks for the comment. Two quick points: I don’t think that it is fair to say that photography is a major source of traffic for *all* platforms. For some it certainly is (Instagram, Pinterest) but for others (TikTok or YouTube) not so much, and then there are a lot of platforms in-between where it is part of the offering (Facebook, Twitter, etc.). But your point is valid: it will be a challenge to efficiently license photography. I would see a (limited) role for CMOs, but they only represent a very small part of photographers (note that they are not unregulated, the EU has an entire directive that regulates CMOs).