UN Special Rapporteur’s Report on Content Regulation (2018)

With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed timely to highlight one of the Council’s recently published Special Rapporteur reports, which examines the state of online ‘content regulation’ and its impact on freedom of expression.

[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.

The report itself is one of the better publications from an official entity, and talks about a lot of important issues that other bodies tend to ignore (willingly or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:

Counter Speech

One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between doing nothing and the total removal of anything questionable (with all of the resultant implications for freedom of expression), the concept of ‘counter speech’ is often put forward as a solution. In principle, the idea is that rather than silencing disagreeable expression, people should seek to counter the ideas directly. This avoids the problem of subjective censorship, protects free speech, and ‘shines light into the dark’ rather than driving people underground, where there is little or no critical dissent.

As well intentioned as this approach may be, it is unfortunately now being misconstrued as an obligation for platforms to take on, rather than something for interested individuals or groups. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programs or counter-narratives. As the report wisely notes:

While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.

Despite the fact that there is little evidence that such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Conceivably, pro-choice campaign websites could be overlaid with arguments from conservative religious groups, or McDonalds.com with a link to a vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even with the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue at the moment, and I’m especially pleased to see it highlighted by the UNHRC.

Trusted Flaggers

Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.

Lack of definition of terms

You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news’, etc. are, at best, doomed to failure and, at worst, a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but it is one made more visible by the globalised, cross-jurisdictional nature of the Internet.

The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.

This is pretty self-explanatory, but it is something which is often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner much support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.

Vague rules

Following on from the previous section, platforms (perhaps partly as a direct result of contemporary political rhetoric) adopt vague descriptions of the kinds of content and behaviour they deem unacceptable, in order to cover a wide variety of circumstances.

Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.

Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al. are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content reduce the likelihood of this significantly. They also provide little to no stability or protection for those who hold views outside of generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.

‘Real name’ policies

For those not familiar, this refers to the requirement by certain platforms that you use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or anonymity. The official rationale is that someone required to use their ‘real’ name is less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (and aggressive) marketing to a platform’s user base. Either way, the report notes:

The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.

Within traditional digital rights circles (if there is such a thing), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification before people can interact online. This is clearly a terrible idea, and may seem utterly laughable, but when you consider that this is exactly what will become law for adult websites in the UK later this year, it seems like it might not be completely out of the realms of possibility after all. We need to better educate ourselves and others on the issues before the trickle becomes a flood.

Automated decision making

Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.

Artificial intelligence and ‘machine learning’ are increasingly seen as some kind of silver bullet for the problem of moderating content at scale, despite the many and varied issues with the technology. Bots do not understand context or the legal concept of ‘fair use’; they frequently misidentify content; and they are generally ineffective. Yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than simply trying to placate lawmakers, intermediaries need to recognise the problems with such an approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.
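To make the context problem concrete, here is a deliberately simplified and entirely hypothetical sketch of keyword-based flagging (the phrase and example are invented, and this is not how any real moderation or Content ID system works). A filter with no notion of context treats a documentary quoting extremist material exactly the same as the material itself:

```python
# A deliberately naive, hypothetical upload filter -- purely illustrative, and
# not a description of any real platform's moderation system.
BLOCKED_PHRASES = {"join the caliphate"}

def should_block(upload_text: str) -> bool:
    """Flag an upload if it contains any blocked phrase, with no regard for
    context, criticism, reporting, parody, or fair use / fair dealing."""
    text = upload_text.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# A documentary quoting a recruiter is blocked just like the propaganda itself.
report = 'The film quotes a recruiter urging viewers to "join the caliphate".'
print(should_block(report))  # True -- the filter cannot tell reporting from promotion
```

Real systems are of course far more sophisticated than this, but the underlying difficulty – distinguishing promotion from reporting, criticism, or quotation – remains the same.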

Finally…

Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.

This is a pretty strong statement to make, and it reflects an approach that resonates strongly with me. In principle, at least. In practice, however, companies are obliged to follow the legal obligations of the jurisdictions in which they are based (and sometimes even beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the Universal Declaration of Human Rights.

Issues with Article 17 (‘Right to be Forgotten’) of the GDPR

With the GDPR’s deadline now almost upon us, one of the most talked about provisions has been the ‘Right to Erasure’ contained within Article 17.

Significantly expanding the ‘Right to be Forgotten’ doctrine established in the Google Spain case, Article 17 allows data subjects (i.e. you and me) to submit erasure requests to any organisation that collects and controls information on them.

There are a number of grounds under which people may seek to have data deleted, which cover a broad variety of circumstances. These include situations where the data is no longer necessary for the reasons it was collected; where it was unlawfully processed; where the subject withdraws their consent; as well as some others. The right is not unlimited, with exceptions where the collection and processing of the data is necessary in the exercise of the right to freedom of expression; where there is a specific legal obligation to retain the information; for reasons of public interest; etc.

Issues with Article 17

Despite some initial reservations, the GDPR (and Article 17 in particular) has generally been lauded as a victory for European citizens, who will gain far more control over the information companies hold on them than they have ever previously had. This is especially true given its arguably extra-territorial applicability, under which any organisation that handles European data will be expected to comply.

However, there are a few specific issues arising from the construction of Article 17 that bear further scrutiny. Rather than analyse the philosophical criticisms of the Right to Erasure, below I briefly look at some of the practical considerations that data controllers will need to take into account when they receive a Request for Erasure:

  1. Verification.
  2. Abuse, and a lack of formal requirements for removal requests.
  3. Article 85: Freedom of expression.

Verification of the Data Subject

Before giving effect to an Article 17 request, the controller must use all ‘reasonable measures’ to verify the identity of the requesting party. It is perhaps obvious that an organisation should not delete somebody’s account or other data without first checking that the person making the request is authorised to do so. However, this leaves open a number of questions about what such verification will look like. In other words, what steps will be considered ‘reasonable’ under the terms of the law? Will courts begin to see arguments over online platforms’ account recovery procedures, framed as a denial of access to the fundamental right to privacy via the GDPR? What identifiers will a data subject be able, or expected, to provide in order to discover their associated data? While it might be easy to request information relating to your e-mail address, what about other identifiers such as IP addresses or names? These are questions that do not have clear answers, and they will inevitably lead to an uneven application of the law depending on the situation.
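As a purely illustrative sketch of the problem (everything here is my own invention, not anything the GDPR prescribes), consider how differently ‘reasonable measures’ might look depending on which identifier the data subject supplies:

```python
# Hypothetical sketch of erasure-request identity verification. The GDPR
# specifies no such procedure; every name and step here is invented.
from dataclasses import dataclass

@dataclass
class ErasureRequest:
    identifier_type: str    # e.g. "email", "ip_address", "name"
    identifier_value: str

def verification_steps(request: ErasureRequest) -> list:
    """Return the checks one *might* consider 'reasonable' for each identifier.
    The point is that the burden varies wildly and the law gives no answer."""
    if request.identifier_type == "email":
        return ["send a confirmation link to the address itself"]
    if request.identifier_type == "ip_address":
        return ["ask for proof the requester controlled the IP at the relevant time",
                "cross-check access logs (which are themselves personal data)"]
    if request.identifier_type == "name":
        return ["request photo ID?",
                "disambiguate from other users who share the same name"]
    return ["no obvious verification route -- escalate to a human / the DPO"]

print(verification_steps(ErasureRequest("ip_address", "203.0.113.7")))
```

Verifying an e-mail address is trivial; verifying that someone ‘is’ a particular IP address or name is anything but, and the law gives no indication of where ‘reasonable’ ends.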

Abuse, and a Lack of Formal Procedural Requirements for Erasure Requests

It should be self-evident at this stage that any statutory removal mechanism will be open to abuse by parties determined to have content removed from the Internet, and in that regard Article 17 is no different. However, there is a common misconception that the Right to Erasure gives people the right to stop any mention of them online – especially speech that is critical of them, or that they disagree with. This is not the case, and Article 17 is not crafted as a dispute resolution mechanism for defamation claims (that would be the E-Commerce Directive). These facts don’t stop people from citing the GDPR incorrectly, though, and content removal demands can quickly become difficult to deal with as a result.

The problem is compounded by the fact that there are no formal procedural requirements for an Article 17 request to be valid, unlike the notice and takedown procedure of the DMCA, or even the ECD. Requests do not have to mention the GDPR, or even the Right to Erasure specifically, and, perhaps even more surprisingly, they don’t have to be made in writing – verbal requests are acceptable.

While the reason for the lack of specific notice requirements is clearly to give the maximum amount of protection to data subjects (the lack of a writing requirement was apparently intended to allow people to easily ask call centres to remove their data over the phone), it seems to ignore the problems that accompany such an approach. The general public’s lack of clarity around what exactly the Right to Erasure includes, along with the lack of procedural checks and balances, means that it will be increasingly difficult for organisations to identify and give effect to legitimate notices. This is especially true for online platforms that already receive a high number of reports. While many of these are nonsense or spam, they will require far greater scrutiny to ensure that they aren’t actually badly worded Article 17 requests that might lead to liability.

If we look at the statistics on other notice and takedown processes such as the DMCA’s (the WordPress.com transparency report, for example), we can see that the levels of incomplete or abusive notices received are high. The implementation of even basic formal requirements would provide some minimum level of quality control over the requests, and would allow organisations to efficiently categorise and give effect to legitimate Article 17 requests, rather than facing the prospect of having to consider every report received through the lens of the GDPR.
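By way of illustration only – the GDPR imposes no such checklist, and the fields below are my own guess at what ‘basic formal requirements’ might look like – even a minimal intake form would let an organisation separate plausible Article 17 requests from the general noise:

```python
# Hypothetical intake triage for removal requests -- a sketch of what minimal
# formal requirements *could* look like; nothing here is mandated by the GDPR.
REQUIRED_FIELDS = ("requester_identity", "data_identifier", "ground_for_erasure")

# Roughly modelled on the grounds listed in Article 17(1); the labels are invented.
VALID_GROUNDS = {
    "no_longer_necessary", "consent_withdrawn", "objection",
    "unlawfully_processed", "legal_obligation_to_erase", "collected_from_a_child",
}

def triage(report: dict) -> str:
    """Classify an incoming report so that plausible erasure requests get a
    priority review, rather than every complaint being read as a GDPR notice."""
    missing = [field for field in REQUIRED_FIELDS if not report.get(field)]
    if missing:
        return "incomplete: ask the requester for " + ", ".join(missing)
    if report["ground_for_erasure"] not in VALID_GROUNDS:
        return "unclear ground: route to human review"
    return "plausible Article 17 request: verify identity, then check exemptions"

print(triage({"requester_identity": "jane@example.com",
              "data_identifier": "account registered to jane@example.com",
              "ground_for_erasure": "consent_withdrawn"}))
```

Nothing so rigid is required by the Regulation, of course – which is precisely the problem for anyone trying to process these requests at scale.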

Article 85: Freedom of expression

As mentioned earlier, a controller is not obliged to remove data where its continued retention is ‘necessary for reasons of freedom of expression and information’. The obvious question then becomes how this exception should be interpreted, and we find some guidance in Article 85 of the GDPR. Unfortunately, however, it doesn’t say all that much:

‘Member States shall by law reconcile the right to the protection of personal data pursuant to this Regulation with the right to freedom of expression and information, including processing for journalistic purposes and the purposes of academic, artistic or literary expression.’

This appears to leave the task of determining how the balance will be struck to individual Member States. Whilst this isn’t unusual in European legislation, it means that the standard will vary depending on where the organisation is based and/or where the data subject resides. At the time of writing, it isn’t clear how different Member States will address this reconciliation. Despite freedom of expression’s status as a fundamental right in European law, the GDPR affords it scant consideration, and thus weak protection, preferring to defer to national law – which simply isn’t good enough. Far stronger statements and guarantees should have been provided.

Over Compliance

Unfortunately, the amount of extra work required to analyse and deal with these requests as a result of the law’s construction – along with the high financial penalties detailed in Article 83 – means that many organisations are likely to simply remove data, even where there is no lawful basis for the request or any requirement for them to do so.

We may fairly confidently speculate that many data controllers will respond by taking a conservative approach to the GDPR’s requirements, and will thus be less likely to push back on potentially dubious requests. Insistent complainants may find that they are able to have speech silenced without any legitimate legal basis, simply out of fear or misunderstanding on the part of third-party organisations.

With a well publicised and generally misunderstood right to removal, lack of procedural requirements, and a reliance on intermediaries to protect our rights to freedom of expression, we may find ourselves with more control over our own data, but with far less control over how we impart and receive information online.

Header image by ‘portal gda’ on Flickr. Used under a CC BY-NC-SA 2.0 licence.

Nazi Pugs Fuck Off

One of the latest cases to spark intense debate around freedom of expression happens to fall in my own back yard. The facts of the ‘nazi pug’ case concerned one Mark Meechan, aka ‘Count Dankula’, who filmed himself training his girlfriend’s dog to react to various phrases such as ‘gas the Jews’, and then posted it on YouTube. In his own words:

“My girlfriend is always ranting and raving about how cute and adorable her wee dog is, and so I thought I would turn him into the least cute thing that I could think of, which is a Nazi”

Meechan was subsequently charged and convicted in a Scottish Sheriff Court under s.127 of the Communications Act 2003, which makes it an offence to (publicly) communicate a ‘message or other matter that is grossly offensive or of an indecent, obscene or menacing character’.

Count Dankula

Offensive speech should not be a criminal offence

The accused argued that the video was intended as a joke to noise up his girlfriend, as evidenced by the disclaimer at the outset. This position was rejected by the court, which stated that humour was ‘no magic wand’ to escape prosecution, and that any determination of context was for the court to decide.

In passing sentence, the Sheriff brought up the fact that the accused’s girlfriend didn’t even subscribe to his YouTube channel, and claimed that as a result the notion that the escapade was intended as a private joke didn’t hold any water. This is important not only because it demonstrates a deep cultural ignorance of how people communicate in an age dominated by online platforms, but also for what may well be a more interesting point: the actions could only be classed as an offence under the Communications Act by dint of the fact that the video was posted on a ‘public communications network’. In other words, if the same ‘joke’ had been performed at a house party, down the pub, or even on stage in front of hundreds of people, it could not have brought about the same kind of prosecution.

This brings about two questions:

  1. Should there be any distinction between posting a video online (or via telephone), and making statements in person? If so, why?
  2. Should anybody ever face jail time for making ‘offensive’ statements?

These are questions that can only realistically be addressed properly by Parliament – not the Sheriff Court – though one would have hoped that the court would have taken a more liberal approach to statutory interpretation, or that the Procurator Fiscal would have had the foresight not to pursue a conviction.

A bad sense of humour should not be enough to open someone up to criminal prosecution. Further, even if the video was in fact an expression of genuinely held beliefs (which was not at issue in this case), it still should not warrant the possibility of jail time – especially not when the distinction turns on the fact that the statements were made on a ‘public communications network’ rather than in person. Remember, this was not a question of ‘incitement’, but simply of offence.

Nazis are not your friends

It appears that in many ways the court was bound by the statutory terms, and that the 2003 law itself is inadequate, to say the least. However, there is another element to this tale that is worth discussing: individuals such as the former leader of the so-called English Defence League have come out to associate themselves with the issue, and not enough has been done to reject those attempts.

The support of the far right is not particularly surprising, as they increasingly claim the mantle of free expression to justify their odious positions. It is also understandable that, when faced with what you perceive as an unwarranted criminal prosecution, you would welcome any support you can get, or that the media would try to draw connections where there are none. However, the enemy of my enemy is not necessarily my friend. If arseholes such as Tommy Robinson, whose views you claim to be diametrically opposed to, try to co-opt your situation for their own political ends, you have a duty to clearly, loudly, and publicly tell them to fuck off. When the far right started to infiltrate punk culture on the premise of certain shared values, the Dead Kennedys responded in no uncertain terms.

I don’t and won’t claim to know the politics of the accused in this case, but the situation should be a warning for all of us who consider ourselves to sit on the liberal end of the spectrum: be wary of those who seek to use a shared belief in freedom of expression as a Trojan horse. Yes, fight for the right of those you disagree with to speak, but don’t let the crows trick their way into your nest as a result.

Meechan has indicated plans to appeal the conviction in order to make a point about freedom of speech, although it is unclear at this point on what grounds he will do so. Either way, whilst this is something I would support prima facie, it is becoming increasingly tough to do so with the knowledge that each development gives people such as the EDL a platform without any real challenge.


For a more in-depth analysis of the law involved in this case, have a look at this post from thebarristerblogger.com.

P.S. I don’t blame the pug.

Shopify, Breitbart, and Freedom of Speech.

Tonight I came across an article on TechCrunch responding to an open letter from Tobias Lütke, CEO of e-commerce platform Shopify, in which he defends the company’s decision to continue hosting Breitbart’s online shop – Breitbart being the infamous far-right publication with which Steve Bannon was heavily involved.

After sustained criticism, Lütke explains in the post, entitled ‘In Support of Free Speech’, that based on a belief that ‘commerce is a powerful, underestimated form of expression’, it would be wrong to effectively censor merchants by shutting down their shops as the result of differing political views.

Reporting on the letter, TechCrunch shared their post to Facebook with the text: ‘Shopify’s CEO thinks his platform has a responsibility to continue hosting Breitbart’s store – here’s why he’s wrong.’


I was curious to see the arguments that would be proffered as to why the decision was wrong, but was ultimately left wanting. Here are the reasons given, as far as I could make out:

  1. Lütke is grossly overestimating the role of a private e-commerce platform in providing and protecting freedom of expression.
  2. Shopify cannot ‘censor’ anybody, as they are not an emanation of the State.
  3. Justifying the continued hosting of merchants who have extreme views for freedom of speech reasons is wrong, as freedom of speech does not apply to private organisations.
  4. As a private company, Shopify are not legally required to provide a platform to anybody.
  5. Shopify’s Terms of Service allow them to terminate the account of any user at any time.

In response, here’s why TechCrunch are wrong:

None of the reasons given actually explain why Shopify shouldn’t continue to host Breitbart.

Read over them again, then check out the full article here. Despite heavily criticising Shopify and stating that Lütke is ‘wrong’, TechCrunch don’t engage at all with the heart of the issue. No, Shopify are not legally required to host the Breitbart shop, and yes, their Terms of Service are quite obviously worded in such a way as to give them that discretion in the event of any legal challenge – but that’s hardly a surprise.

Here’s the big question that went unanswered: why should Shopify not host Breitbart? Lütke hits the nail on the head with the following challenge, which the TechCrunch article completely fails to even acknowledge:

When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code? Where would it begin and end? Who gets to decide what can be sold and what can’t?

Rather than attempt to address this fundamental issue, TechCrunch essentially just argue that Shopify should kick Breitbart off of their platform because, er, well, legally there’s nothing to stop them. A pretty poor argument at best.

Protecting freedom of speech isn’t just down to the State.

Firstly, I’m not sure where the idea comes from that censorship is something only the State can give effect to. To censor means to forbid or ban something; to suppress speech. The source of the suppression doesn’t change that.


Secondly, there is a lot of confusion surrounding freedom of speech and the relation to the State, even from those who purport to understand the dynamic. To clear some things up, the following are true:

  • Freedom of speech law (generally) only protects citizens from the acts of State actors.
  • Private online service providers (generally) have no obligation to protect the freedom of speech rights of their users, or to give them a platform for expression.

However, to conclude from the above that a platform cannot justify its actions on freedom of speech grounds, or cannot willingly strive to uphold those principles, is a non sequitur. Just because you can’t threaten legal action against Facebook on a freedom of speech argument if they take down your status update, that doesn’t mean it is wrong to argue that Facebook should be doing more to consider and protect those values.

Just as we would not expect a hotel owner to be able to refuse to allow a same-sex couple to share a bed, or a pub to knock someone back based purely on the colour of their skin, it is nonsense to pretend that we have no expectation that private organisations will abide by certain shared societal values.

Without touching on the claims about the importance of e-commerce as a vehicle for expression, in a world where we are increasingly reliant on private entities to provide our virtual equivalents of the town square, and where we expect certain values to be upheld, platforms such as Shopify arguably have a growing moral obligation to protect (as far as is possible) the principles that are the cornerstone of our democracies.