Freedom of Speech and the DMCA: Abuse of the Notification and Takedown Process

Last month, my first academic journal article was published by the leading international publication on IP law: the European Intellectual Property Review from Thomson Reuters.

From the abstract:

The Digital Millennium Copyright Act’s “notice and takedown” process is increasingly referred to as a model solution for content removal mechanisms worldwide. While it has emerged as a process capable of producing relatively consistent results, it also has significant problems—and is left open to different kinds of abuse. It is important to recognise these issues in order to ensure that they are not repeated in future legislation.

To that end, this article examines the DMCA with reference to its historical context, and the general issues surrounding the enforcement of copyright infringement claims. It then goes on to discuss the notice and takedown process in detail—along with its advantages, disadvantages, criticisms and praise. Specific examples of the kinds of abuse reported by online service providers are outlined, along with explanations of the statutory construction that allows these situations to continue. To finish, the viability of potential alternatives and proposed changes are discussed.

The article itself is available on Westlaw, citation: E.I.P.R. 2019, 41(2) at 70. However, you can also get a copy of the PDF below.

Freedom of Speech and the DMCA: Abuse of the Notification and Takedown Process (PDF)

This material was first published by Thomson Reuters, trading as Sweet & Maxwell, 5 Canada Square, Canary Wharf, London, E14 5AQ, in European Intellectual Property Review as ‘Freedom of speech and the DMCA: abuse of the notification and takedown process’, E.I.P.R. 2019, 41(2) at 70, and is reproduced by agreement with the publishers. This download is provided free for non-commercial use only. Further reproduction or distribution is prohibited.

UN Special Rapporteur’s Report on Content Regulation (2018)

With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed pertinent to highlight one of the recently published reports from its Special Rapporteur, which looks at the state of online ‘content regulation’ and its impact on freedom of expression.

[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.

The report itself is one of the better publications from an official entity, and talks about a lot of important issues that other bodies tend to ignore (willingly or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:

Counter Speech

One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between ‘doing nothing’ and the total removal of anything questionable (with all of the resultant implications for freedom of expression), the concept of ‘counter speech’ is often brought up as a solution. In principle, the idea is that instead of silencing disagreeable expression, people should seek to counter the ideas directly. This avoids the problem of subjective censorship, protecting free speech, and also ‘shines light into the dark’, rather than driving people underground where there is little or no critical dissent.

As well intentioned as this approach may be, it is now unfortunately being misconstrued as an obligation for platforms to take on, rather than one for interested individuals or groups. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programmes or counter-narratives. As the report wisely notes:

While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.

Despite the fact that there is little evidence such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Conceivably, pro-choice campaign websites could be overlaid with arguments from conservative religious groups; McDonalds.com with a link to a vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even with the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue out there at the moment, and I’m especially pleased to see it highlighted by the UNHRC.

Trusted Flaggers

Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.

Lack of definition of terms

You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news’, etc. are, at best, doomed to failure and, at worst, a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but it is one made more visible by the globalised, cross-jurisdictional nature of the Internet.

The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.

This is pretty self-explanatory, but it is something often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner much support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.

Vague rules

Following on from the previous section, platforms (perhaps partly as a direct result of contemporary political rhetoric) adopt vague descriptions of the kinds of content and/or behaviour which are unacceptable, in order to cover a variety of circumstances.

Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.

Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al. are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content reduce the likelihood of this significantly. They also provide little to no stability or protection for those who hold views outside of generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.

‘Real name’ policies

For those not familiar, this refers to the requirement by certain platforms that you use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or anonymity. Officially, the reasoning is that if someone is required to use their ‘real’ name, they are less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (aggressive) marketing to a platform’s user base. Either way, the report notes:

The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.

Within traditional digital rights circles (if there is such a thing), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification system before people can interact online. This is clearly a terrible idea, and may seem utterly laughable, but when you consider that this is exactly what will become law for adult websites in the UK later this year, it seems like it might not be completely outside the realms of possibility after all. We need to better educate ourselves and others on the issues before the trickle becomes a wave.

Automated decision making

Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.

Artificial intelligence and ‘machine learning’ are increasingly seen as some kind of silver bullet for the problems of moderating content at scale, despite the many and varied issues with the technology. Bots do not understand context or the legal concept of ‘fair use’; they frequently misidentify content; and they are generally ineffective. Yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than just trying to placate lawmakers, intermediaries need to recognise the problems with such an approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.
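To make the context problem concrete, here is a minimal, purely illustrative sketch of the kind of matching an upload filter performs. The chunk-hashing scheme, the names, and the in-memory ‘database’ are my own assumptions rather than a description of any real system (deployed tools are far more sophisticated, and generally use perceptual fingerprinting rather than exact hashes), but the underlying limitation is the same: the upload is compared against reference material, and nothing in that comparison can see whether a match is a wholesale copy or a short excerpt quoted for criticism or review.

```python
import hashlib

def fingerprints(data: bytes, chunk_size: int = 4096) -> set:
    """Hash fixed-size chunks of a file to build a crude content fingerprint."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

# Hypothetical reference database built from a rightsholder's catalogue.
reference_work = b"a stand-in for a full copyrighted recording " * 10_000
reference_db = fingerprints(reference_work)

def is_flagged(upload: bytes, threshold: int = 1) -> bool:
    """Flag any upload that shares at least `threshold` chunks with a reference work."""
    return len(fingerprints(upload) & reference_db) >= threshold

# A short excerpt used in a review shares chunks with the full work, so it is
# flagged exactly as an unauthorised full copy would be. The matching step has
# no way to weigh the surrounding commentary, context, or fair use/fair dealing.
excerpt = reference_work[:8192]
print(is_flagged(reference_work))  # True
print(is_flagged(excerpt))         # True
```

The decision is made on similarity to reference material, not on the meaning or purpose of the use – which is precisely why ‘just filter it at upload’ is not the neutral technical fix it is often presented as.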

Finally…

Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.

This is a pretty strong statement to make, and demonstrates an approach that resonates strongly with me. In principle, at least. In practice, however, companies are bound by the legal obligations of the jurisdictions in which they are based (and sometimes even beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the Universal Declaration of Human Rights.

Facebook and Free Speech: Reinforcing the Echo Chamber

Yesterday, in this Motherboard article, Vice highlighted some of the internal changes made to Facebook’s policy on acceptable speech after the events in Charlottesville last year.

[Image: Facebook free speech policy document. Via Motherboard; included under the fair use doctrine.]

Specifically, it was noted that Facebook distinguishes between statements supporting a white nationalist ideology and those supporting white supremacy, with the latter in particular considered to be associated with racism – something prohibited on the platform. In response, there have been arguments that this distinction is meaningless, and that Facebook is effectively allowing Nazis to operate on its network as a result.

Facebook infamously ‘curates’ what its users see through the use of algorithms, and it has faced ongoing criticism that ‘echo chambers’ are created as a direct result. This was particularly true in light of both Donald Trump’s presidential election victory and the outcome of the EU membership referendum in the UK. On a personal note, it was something that first became obvious to me after the Scottish independence referendum in 2014.

With this in mind, the question becomes what people actually want or expect Facebook to be. On one hand, the possibility of anybody sharing far right or extremist ideologies is seen as abhorrent and unacceptable, but on the other, the cultivation of echo chambers that distort political and social reality is decried as irresponsible.

Unfortunately, you can’t break through an online bubble by only allowing that which you find inoffensive to be shared.

The obvious response here is that there is a difference between healthy debate and sharing views which are hateful. However, this is something of a liberal utopian ideal which doesn’t actually play out in practice. Argument is messy. Debate isn’t always healthy. People don’t always play fairly. All of this is self-evident and will remain true whenever those with opposing positions come into conflict. Arguably, those beliefs that are considered most heinous are precisely those which need to be heard, challenged, and resisted, and in the same vein, the areas online which foster these biases without question need to be opened up to opposition.

If all we want is for Facebook to be a safe space to share pictures of our dogs and holiday photos, then that is one thing. However, that is never going to be the reality, irrespective of what some may claim. Whenever people have space to express themselves, they will share their views on how the world should be. If we want to avoid all of the problems that doing so within the so-called echo chambers brings, then we need to stop reinforcing those chambers by banning the very opposing views that would break them apart in the first place.

Shopify, Breitbart, and Freedom of Speech.

Tonight I came across an article on TechCrunch responding to an open letter from Tobias Lütke, CEO of e-commerce platform Shopify, in which he defends the company’s decision to continue hosting Breitbart’s online shop. Breitbart is the infamous far-right publication with which Steve Bannon was heavily involved.

After sustained criticism, Lütke explains in the post, entitled ‘In Support of Free Speech’, that, based upon the belief that ‘commerce is a powerful, underestimated form of expression’, it would be wrong to effectively censor merchants by shutting down their shops as a result of differing political views.

Reporting on the letter, TechCrunch shared their post to Facebook with the text: ‘Shopify’s CEO thinks his platform has a responsibility to continue hosting Breitbart’s store – here’s why he’s wrong.’


I was curious to see the arguments that would be proffered as to why the decision was wrong, but was ultimately left wanting. Here are the reasons given, as far as I could make out:

  1. Lütke is grossly overestimating the role of a private e-commerce platform in providing and protecting freedom of expression.
  2. Shopify cannot ‘censor’ anybody, as they are not an emanation of the State.
  3. Justifying the continued hosting of merchants who have extreme views for freedom of speech reasons is wrong, as freedom of speech does not apply to private organisations.
  4. As a private company, Shopify are not legally required to provide a platform to anybody.
  5. Shopify’s Terms of Service allow them to terminate the account of any user at any time.

In response, here’s why TechCrunch are wrong:

None of the reasons given actually explain why Shopify shouldn’t continue to host Breitbart.

Read over them again, then check out the full article here. Despite heavily criticising Shopify, and stating that Lütke is ‘wrong’, TechCrunch don’t engage at all with the heart of the issue. No, Shopify are not legally required to host the Breitbart shop, and yes, their Terms of Service are quite obviously worded in such a way as to give them that discretion in the event of any legal challenge, but that’s hardly a surprise.

Here’s the big question that went unanswered: why should Shopify not host Breitbart? Lütke hits the nail on the head with the following challenge, which the TechCrunch article completely fails to even acknowledge:

When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code? Where would it begin and end? Who gets to decide what can be sold and what can’t?

Rather than attempt to address this fundamental issue, TechCrunch essentially just argue that Shopify should kick Breitbart off of their platform because, er, well, legally there’s nothing to stop them. A pretty poor argument at best.

Protecting freedom of speech isn’t just down to the State.

Firstly, I’m not sure where the idea comes from that censorship is something only the State can give effect to. To censor means to forbid or to ban something; to suppress speech. The source of the suppression has nothing to do with it.


Secondly, there is a lot of confusion surrounding freedom of speech and its relation to the State, even from those who purport to understand the dynamic. To clear some things up, the following are true:

  • Freedom of speech law (generally) only protects citizens from the acts of State actors.
  • Private online service providers (generally) have no obligation to protect the freedom of speech rights of their users, or to give them a platform for expression.

However, to assert that a platform cannot justify its actions on freedom of speech grounds, or cannot willingly strive to uphold those principles, simply because of the above, is a non sequitur. Additionally, just because you can’t threaten legal action against Facebook on a freedom of speech argument if they take down your status update, that doesn’t mean it is wrong to argue that Facebook should be doing more to consider and protect those values.

Just as we would not expect a hotel owner to be able to refuse to allow a same-sex couple to share a bed, or a pub to knock back someone based purely on the colour of their skin, it is nonsense to pretend that we have no expectation that private organisations will abide by certain shared societal values.

Without touching on the claims around the importance of e-commerce as a vehicle for expression, in a world where we are increasingly reliant on private entities to provide our virtual equivalents of the town square, and where we expect certain values to be upheld, platforms such as Shopify arguably have an increasing moral obligation to protect (as far as is possible) the principles that are the cornerstone of our democracies.


Censoring ‘Fake News’ is the real threat to our online freedom

As the results of the US presidential election began to sink in, the finger of blame swung around to focus on ‘fake news’ websites, which publish factually incorrect articles with snappy headlines that are ripe for social media dissemination.

[Image: a ‘fake’ headline. Via the Independent.]

Ironically, the age of propaganda was previously thought to have died out with the proliferation of easy access to the Internet, with people able to cross-reference and fact-check claims from their bedrooms, rather than relying on a single domestic point of information. Instead, it appears we are seeing the opposite: people congregating around a single funnel of sources (Facebook), which filters to the top the most widely shared (read: most attention-grabbing) articles.

Almost immediately, the socially liberal-leaning technology giants Google and Facebook announced that they would be taking steps to prevent these websites from making use of their services. This has sparked a raft of discussion about the ‘responsibility’ of other online platforms to take steps to prevent the spread of these so-called ‘fake news’ sites on their networks.

Here, probably for the first time I can remember, I find myself in agreement with what Zuckerberg has (reportedly) said in response:

The suggestion that online platforms should unilaterally act to restrict ‘fake news’ websites is one of the biggest threats to free speech to face the Internet.

Those are my words, not his – just to be clear. Click through to see what he actually said (well, as long as the source can be trusted).

It is unclear exactly what ‘fake news’ is supposed to be. Some sites ‘outing’ publishers that engage in this sort of activity have included The Onion in their lists, which in and of itself demonstrates the problem of singling out websites that publish ‘fake’ news.

  • Where is the line drawn between ‘fake news’ and satire?
  • At what point do factually incorrect articles become ‘fake news’?
  • At what point do ‘trade puffs’ and campaign claims become ‘fake news’ rather than just passionate advocacy?
  • If the defining factor is intent, rather than content, who makes that determination, and based on what set of values?

It is not the job of online platforms to make determinations on the truth of the articles that their users share, or of the content that they themselves publish. There is no moral obligation or imperative on them to editorialise and ensure that only particular messages reach their networks. In fact, it is arguably the complete opposite: they have an ethical obligation to ensure that they do not interfere in the free speech of users, or in the free dissemination of ideas and information, irrespective of their own views on the ‘truth’ or otherwise of them.

Misinformation is a real issue, and the culture of lazy reliance facilitated by networks such as Facebook and Google, where any article with a catchy headline is taken at face value, is a huge problem. But the answer is not for these networks to take matters into their own hands and decide which set of truths is acceptable for us to see, and which is not.

We have reached a position where half of our societies are voting one way, whilst the other half can’t believe that anybody would ever make such a decision, precisely because we have retreated into our own echo chambers – in the physical world as well as the virtual. The solution to the political struggles we on the left face is not to further restrict the range of speech that is open to us in our shared online spaces, or to expect service providers to step up and act as overarching publishers; it is to get out there and effectively challenge those ideas with people that we would normally avoid engaging with. Curtailing the free speech of others through the arbitrary definition of ‘fake news’ is not only not the answer, but a terrifying prospect for the very freedoms that we are arguing to protect.

The real challenge to free speech isn’t fake news; it’s the suggestion that we should ban it.

Disclaimer: It should go without saying that these are my views, and not necessarily those of WordPress.com, or anybody else.

Twitter and S.112 of the Equality Act 2010

Yesterday it was reported in The Drum that, after receiving a number of threats over Twitter, including threats of rape, the subject of these messages – Caroline Criado-Perez – has been approached by a lawyer with respect to a possible civil action against the service. Under Section 112 of the Equality Act 2010, no person must ‘knowingly help’ another to do anything which contravenes the conditions laid out in the Act. Without commenting directly on the facts or merits of this case, if Twitter were to be held liable for the actions of its users in such a manner, the ramifications would extend far beyond the issues at hand.

There are few who would argue that the social network knowingly and willingly designed its systems specifically to allow people to abuse others – it wouldn’t make good business sense, for a start. Users don’t tend to stick around on services where they can’t filter out those that they don’t want to interact with, which is exactly why there is a ‘block’ function in place. Whether it is practical to be able to block thousands of different sources quickly and effectively is quite a different question, and one which is not new to the web – as any webmaster worth their salt will know. Block an offending IP address, and just as quickly another one will pop up: the ol’ virtual whack-a-mole.
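As a purely illustrative aside (the names and the in-memory blocklist below are my own assumptions, not a description of how Twitter or any other service actually works), the whack-a-mole problem is easy to see in a few lines: an address-based block only ever catches addresses it has already seen, so the same person reappears from a new one moments later.

```python
# Hypothetical sketch of address-based blocking and why it becomes whack-a-mole.
blocked_ips = set()

def is_allowed(ip: str) -> bool:
    """Allow any address that has not already been explicitly blocked."""
    return ip not in blocked_ips

blocked_ips.add("203.0.113.7")         # block the address behind the last wave of abuse

print(is_allowed("203.0.113.7"))       # False: the known address is stopped
print(is_allowed("203.0.113.42"))      # True: the same person, back on a fresh address
```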

Twitter may have ‘knowingly’ created a system where people are free to disseminate information en masse, quickly, and with whatever content they desire, but to hold them to account for ‘knowingly helping’ people to breach equality legislation seems farcical (not to mention outwith its intended purpose). If providers are to be held responsible to such an extent for what is posted by their users, then we may as well proceed to shut down all similar platforms, as any that allow people a degree of freedom of expression will always be misused. To apply the law in this way would have a chilling effect not just on the development of the web, but on free speech itself.