Thoughts on the DMCA Reform Draft Proposal

The DMCA is one of the most significant laws on the Internet, as it is the de facto standard process governing the removal of content that allegedly infringes copyright. That might sound mind-numbingly boring, but it’s a topic which has increasingly come into the cultural spotlight, as automated takedown mechanisms have impacted folks on Twitch, YouTube, etc. – for a whole variety of arguably spurious reasons.

It’s no secret that the DMCA has significant problems (it’s a topic I’ve written about at length) – and there has been an ongoing review of the statute to try to bring it up to date. Earlier this year we saw the US Copyright Office publish its recommendations on the future of the DMCA, and just this last week, a draft proposal for change was put forward for comment by Senator Tillis. The full thing is pretty long and complicated, especially if you aren’t familiar with the statute, but the accompanying summary doesn’t really give a full picture of the changes.

I’ve had a look through the proposals (specifically in relation to the notice and takedown process), and noted some specific areas of interest below. Note that this is nowhere near exhaustive, and is based on first impressions. Caveat emptor.

  1. s.512(b) – Qualifications added to the notice requirements. Here we see a bunch of different language added to the section detailing the requirements of a notice. This in and of itself is not a bad thing, but the changes here make the statute much more difficult to both interpret and apply. The law is already vague and unclear in a number of areas, and this makes that worse. See s.512(b)(1)(C)(i)-(ii) specifically.
  2. s.512(a)(2)(C)(ii) – Notice and Stay Down. This section introduces a requirement that material which is the subject of a DMCA takedown ‘stays down’ when a ‘complete or near complete copy’ is identified. In essence, this means that platforms will have to implement some kind of filtering technology to ensure that content is not re-uploaded. This comes despite warnings from the USCO and others that this approach (following the European Copyright Directive) would be problematic, to say the least. It also isn’t clear at all whether this would apply retrospectively to content which has already been uploaded, or what a ‘near complete copy’ would entail. Again, this opens up issues of interpretation around the threshold for removal, and platforms would inevitably need to err on the side of caution to avoid liability – meaning far more content would be taken down than users would expect (a rough sketch of the kind of matching this implies follows this list). It also doesn’t address the question of fair use in any way. In other words, not all unauthorised uses of copyrighted material constitute infringement – and even where they appear to, there can be a fair use defence.
  3. s.512(b)(1)(E) – Good Faith Belief now subjective. The requirement for the copyright holder to make a statement that they have a good faith belief that the material is not authorised for use […] has been updated to include a ‘subjective’ qualifier. This will make it much harder for claims to be brought against those who submit bogus takedown notices on the basis of their good faith statement, and it directly undermines the hard-fought concession in the Dancing Baby case (Lenz v. Universal).
  4. s.512(b)(5) – Anonymous Notices. This section allows complainants to have their personal information redacted from notices, based on as-yet non-existent guidance from the Register of Copyrights. On the face of it this seems sensible. However, the DMCA already allows various ways for complainants to remain largely anonymous, or to have their details protected – something which is not afforded to users when submitting counter notices. Complainants can simply provide an e-mail address as the minimum contact information required, or submit through a third party agent. There is no similar provision or update for counter notices. This is something we have already seen exploited by abusive complainants to obtain information on those who are critical of them.
  5. Counter Notification Challenges – Again on counter notices, this section essentially gives the complainant the final right of reply within the statutory process, before resorting to legal action. In other words, if a counter notice is submitted, complainants would be able to challenge it within the statute, rather than having to show evidence that they have pursued the matter in court (as they do under the current provisions). This adds another step to the ‘complicated game of tennis’ that is the back-and-forth of the notice and takedown system, and one which benefits complainants massively. The burden of proof is essentially reversed: users who have the right to use material would be forced to take legal action to show that it was wrongfully removed, rather than rights holders taking action against the alleged infringement.
  6. s.512(f)(2) – List of Abusive Complainants. This is the one positive in the list of changes. Essentially, it updates the penalties section of the DMCA so that those who consistently send invalid notices can be placed on a list allowing service providers to disregard their notices for a set period of time. However, there are no real details about what the threshold for abuse would be, or what the appeal process (if any) would be for someone included on this list. Without these details, one suspects that the threshold would be set so high, and be subject to so much legal challenge, that it would in effect be worthless.
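
As promised above, a rough sketch of what stay-down matching implies. The draft never defines how ‘complete or near complete’ would be measured, so this is purely hypothetical (none of it comes from the proposal itself): one plausible mechanism is perceptual fingerprinting, where uploads are hashed, compared bit-by-bit against a database of previously removed works, and blocked above some similarity threshold. The names, fingerprints, and numbers below are invented for illustration – the point is that the undefined threshold is the entire policy.

```python
# Hypothetical stay-down filter: perceptual fingerprints compared by
# Hamming distance. Nothing here is specified by the draft proposal.

def hamming_distance(a: int, b: int, bits: int = 64) -> int:
    """Count the bits that differ between two 64-bit fingerprints."""
    return bin((a ^ b) & ((1 << bits) - 1)).count("1")

def similarity(a: int, b: int, bits: int = 64) -> float:
    """Fraction of matching bits: 1.0 means an exact copy."""
    return 1.0 - hamming_distance(a, b, bits) / bits

def blocked_on_upload(upload_fp: int, takedown_db: list[int], threshold: float) -> bool:
    """Block the upload if it matches anything previously taken down,
    per whatever threshold the platform chooses."""
    return any(similarity(upload_fp, fp) >= threshold for fp in takedown_db)

# The threshold is the policy question. A platform erring on the side of
# caution to avoid liability picks a lower bar, so loosely similar content
# (remixes, excerpts, commentary) gets blocked too.
db = [0xDEADBEEFCAFEF00D]            # fingerprint of a removed work
remix = 0xDEADBEEFCAFEF00D ^ 0xFF    # similar work: 8 of 64 bits differ
print(blocked_on_upload(remix, db, threshold=0.99))  # False - strict reading
print(blocked_on_upload(remix, db, threshold=0.85))  # True - cautious platform
```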

General Thoughts

This draft proposal is disappointing (at least with regards to the notice and takedown provisions), as it seems to ignore many of the key issues that have been consistently raised about the DMCA. Rather than correcting the imbalances that exist, the proposed changes further strengthen and entrench the position of rights holders, as well as the statute’s utility as a powerful unilateral censorship tool.

The provisions relating to counter notification are particularly troubling, as the data collected over the DMCA’s 20-odd-year lifespan shows that the number of counter notices actually filed is minuscule. There are already so many barriers and disincentives for people to challenge takedown notices (on valid grounds) that adding more hurdles seems completely at odds with all of the established literature on the topic.

Despite its many flaws and criticisms, the DMCA has at least become a system which provides consistent results. These proposals take some of the worst parts of the statute and combine them with the very worst parts of the European Copyright Directive, giving far greater takedown powers to rights holders with seemingly no consideration of users, or of the cultural importance of online expression.

This is just a draft proposal, and it is open for stakeholder comments. If we are going to avoid a disaster similar to the approach taken in Europe, major changes need to be made.

Book Recommendation – “The Twenty-Six Words That Created the Internet”

My latest book recommendation comes from law professor Jeff Kosseff, who examines one of the laws most crucial to the development of the Internet: s.230 of the Communications Decency Act. For those not familiar with the CDA, it is a piece of American legislation that has essentially enabled businesses such as Twitter and YouTube to build platforms on user generated content, without themselves becoming liable for everything that those users may say or do.

Understanding the CDA is increasingly important – not just for lawyers or academics focussed on intermediary liability, but for anybody with an interest in the future of the Internet. This book provides a comprehensive explanation of the law’s history and original aims, as well as its development through case law. Whilst it isn’t necessarily an ‘easy’ read given the subject matter, Kosseff’s narrative style keeps it engaging throughout, never letting things run dry or drift into the theoretically abstract.

‘The Twenty-Six Words That Created the Internet’ was published in April 2019. Given the impact of the CDA, it is almost hard to believe that such a complete study hasn’t come along before now. Either way, if you want to learn (a lot) about one of the most important laws underpinning the Internet as we know it, read this.

Disclaimer: I am not being paid to review or recommend this book, but if you click on the Amazon links above and buy a copy, Jeff Bezos might send me a few pennies to say thanks. 

Book Recommendation – “Speech Police: The Global Struggle to Govern the Internet”

‘Speech Police: The Global Struggle to Govern the Internet’ is the latest publication from UN Special Rapporteur on Freedom of Expression, David Kaye. Following on from his 2018 report on content regulation, this book looks at the question of who decides what kind of speech is acceptable online, and the potential implications of the increasing expectations placed on platforms to regulate certain kinds of content.

Kaye’s narrative style is both thoughtful and engaging, covering difficult concepts in a clear and concise fashion while also exploring aspects of the debate that are often overlooked. Coupled with a relatively low page count, this means that Speech Police is not only a valuable read for those already familiar with the questions around content moderation and freedom of expression, but also extremely accessible for those new to the topic. As a result, this book is a must-read for anybody currently studying or working in tech policy, or anyone simply concerned about the future of the Internet.

You can get a copy of Speech Police from Amazon here.

Disclaimer: I am not being paid to review or recommend this book, but if you click on the Amazon link above and buy a copy, Jeff Bezos might send me a few pennies to say thanks. 

UN Special Rapporteur’s Report on Content Regulation (2018)

With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed timely to highlight one of its recently published Special Rapporteur reports, which looks at the state of online ‘content regulation’, and the impact on freedom of expression.

[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.

The report itself is one of the better publications to come from an official body, and it talks about a lot of important issues that others tend to ignore (wilfully or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:

Counter Speech

One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between ‘doing nothing’ and the total removal of anything questionable (with all the resultant implications for freedom of expression), the concept of ‘counter speech’ is often raised as a solution. In principle, the idea is that rather than silencing disagreeable expression, people should seek to directly counter the ideas. This avoids the problem of subjective censorship, protecting free speech, and also ‘shines light into the dark’, rather than driving people underground where there is little or no critical dissent.

As well intentioned as this approach may be, it is now unfortunately being misconstrued as an obligation for platforms to take on, rather than for interested individuals or groups. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programs, or counter narratives. As the report wisely notes:

While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.

Despite the fact that there is little evidence that such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Conceivably, pro-choice campaign websites could be overlaid with arguments from conservative religious groups; McDonalds.com with a link to the Vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even for the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue out there at the moment, and I’m especially pleased to see it highlighted by the UNHRC.

Trusted Flaggers

Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.

Lack of definition of terms

You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news‘, etc. are, at best, doomed to failure and, at worst, a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but it is one made more visible by the globalised, cross-jurisdictional nature of the Internet.

The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.

This is pretty self-explanatory, but it is something often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.

Vague rules

Following on from the previous section, platforms (perhaps partly as a direct result of contemporary political rhetoric) adopt vague descriptors of the kinds of content and/or behaviour that are unacceptable, in order to cover a wide variety of circumstances.

Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.

Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al. are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content reduce the likelihood of this significantly. They also provide little to no stability or protection for those who hold views outside of generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.

‘Real name’ policies

For those not familiar, this refers to the requirement from certain platforms that you must use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or anonymity. Officially, the rationale is that if someone is required to use their ‘real’ name, they are less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (read: aggressive) marketing to a platform’s user base. Either way, the report notes:

The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.

Within traditional digital rights circles (if such a thing exists), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification system before people can interact online. This is clearly a terrible idea, and may seem utterly laughable, but when you consider that exactly this will become law for adult websites in the UK later this year, it seems it might not be completely out of the realms of possibility after all. We need to better educate ourselves and others on the issues before the drips become a wave.

Automated decision making

Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.

Artificial intelligence and ‘machine learning’ are increasingly seen as some kind of silver bullet for the problem of moderating content at scale, despite the many and varied issues with the technology. Bots do not understand context, or the legal concept of ‘fair use’; they frequently misidentify content; and they are generally ineffective. Yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than just trying to placate lawmakers, intermediaries need to recognise the problems with such an approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.
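
To make the context problem concrete, here is a deliberately crude sketch (all names are hypothetical; this is not any real platform’s system). A matcher that only checks whether fingerprinted segments appear in an upload treats a brief quoted clip inside an hour of original criticism exactly the same as a wholesale re-upload, because the surrounding context is simply invisible to it.

```python
# Toy matcher: flags an upload if any fingerprinted segment appears in it.
# It has no notion of WHY a segment is present, so fair use is invisible.

REFERENCE = {"film_seg_01", "film_seg_02", "film_seg_03"}  # rightsholder fingerprints

def flag_upload(segments: list[str]) -> bool:
    """Return True if any known fingerprint appears in the upload."""
    return any(seg in REFERENCE for seg in segments)

pirated_copy = ["film_seg_01", "film_seg_02", "film_seg_03"]
fair_use_review = ["original_commentary"] * 300 + ["film_seg_02"]  # brief quote

print(flag_upload(pirated_copy))     # True - the intended catch
print(flag_upload(fair_use_review))  # True - identical verdict, no fair use analysis
```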

Finally…

Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.

This is a pretty strong statement to make, and it describes an approach that strongly resonates with me – in principle, at least. In practice, however, companies are obliged to follow the laws of the jurisdictions in which they are based (and sometimes beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the Universal Declaration of Human Rights.

Facebook and Free Speech: Reinforcing the Echo Chamber

In this Motherboard article, published yesterday, Vice highlighted some of the internal changes to Facebook’s policy on acceptable speech following the events in Charlottesville last year.

[Image via Motherboard. Included under the fair use doctrine.]

Specifically, it was noted that Facebook distinguishes between statements supporting a white nationalist ideology and those supporting white supremacy, with the latter in particular considered to be associated with racism – something prohibited on the platform. In response, there have been arguments that this distinction is meaningless, and that Facebook is effectively allowing Nazis to operate on its network as a result.

Facebook infamously ‘curates’ what its users see through the use of algorithms, and it has faced ongoing criticism that ‘echo chambers’ are created as a direct result. This became particularly apparent in light of both Donald Trump’s presidential election victory and the outcome of the EU membership referendum in the UK. On a personal note, it was something that first became obvious to me after the Scottish independence referendum in 2014.

With this in mind, the question becomes what people actually want or expect Facebook to be. On one hand, the possibility of anybody sharing far right or extremist ideologies is seen as abhorrent and unacceptable, but on the other, the cultivation of echo chambers that distort political and social reality is decried as irresponsible.

Unfortunately, you can’t break through an online bubble by only allowing that which you find inoffensive to be shared.

The obvious response here is that there is a difference between healthy debate and sharing views which are hateful. However, this is something of a liberal utopian ideal which doesn’t actually play out in practice. Argument is messy. Debate isn’t always healthy. People don’t always play fairly. All of this is self-evident and will remain true whenever those with opposing positions come into conflict. Arguably, those beliefs that are considered most heinous are precisely those which need to be heard, challenged, and resisted, and in the same vein, the areas online which foster these biases without question need to be opened up to opposition.

If all we want is Facebook to be a safe space to share pictures of our dogs and holiday photos, then that is one thing. However, that is never going to be the reality, irrespective of what some may claim. Whenever people have space to express themselves, they will share their views on how the world should be. If we want to avoid all of the problems that doing so within the so-called echo chambers brings, then we need to stop reinforcing them by banning the very opposing views that would break them apart in the first place.