UN Special Rapporteur’s Report on Content Regulation (2018)

With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed timely to highlight one of the Council’s recently published Special Rapporteur reports, in which the Rapporteur examines the state of online ‘content regulation’ and its impact on freedom of expression.

[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.

The report itself is one of the better publications from an official body, and addresses a number of important issues that others tend to ignore (wilfully or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:

Counter Speech

One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between ‘doing nothing’ and the total removal of anything questionable (with all of the resultant implications for freedom of expression), the concept of ‘counter speech’ is often put forward as a solution. The idea, in principle, is that rather than silencing disagreeable expression, people should seek to counter the ideas directly. This avoids the problem of subjective censorship, protecting free speech, and ‘shines light into the dark’ rather than driving people underground, where there is little or no critical dissent.

As well intentioned as this approach may be, it is unfortunately now being misconstrued as an obligation on the platforms themselves, rather than something for interested individuals or groups to undertake. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programmes or counter-narratives. As the report wisely notes:

While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.

Despite the fact that there is little evidence that such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Conceivably, pro-choice campaign websites could be overlaid with arguments from conservative religious groups, or McDonalds.com with a link to the Vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even with the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue at the moment, and I’m especially pleased to see it highlighted by the UNHRC.

Trusted Flaggers

Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.

Lack of definition of terms

You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news’, etc. are, at best, doomed to failure and, at worst, a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but one which is made more visible by the globalised, cross-jurisdictional nature of the Internet.

The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.

This is pretty self-explanatory, but it is something which is often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner much support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.

Vague rules

Following on from the previous section: platforms (perhaps partly as a direct result of contemporary political rhetoric) adopt vague descriptions of the kinds of content and/or behaviour which are unacceptable, in order to cover a wide variety of circumstances.

Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.

Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al. are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content reduce the likelihood of this significantly. They also provide little to no stability or protection for those who hold views outside of generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.

‘Real name’ policies

For those not familiar, this refers to the requirement by certain platforms that you use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or remaining anonymous. Officially, the reasoning is that someone required to use their ‘real’ name is less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (and aggressive) marketing to a platform’s user base. Either way, the report notes:

The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.

Within traditional digital rights circles (if such a thing exists), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification system before people can interact online. This is clearly a terrible idea, and may seem utterly laughable – but when you consider that this is exactly what will become law for adult websites in the UK later this year, it seems it might not be completely out of the realms of possibility after all. We need to better educate ourselves and others on these issues before the drips become a wave.

Automated decision making

Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.

Artificial intelligence and ‘machine learning’ are increasingly seen as a silver bullet for moderating content at scale, despite the many and varied problems with the technology. Bots do not understand context or the legal concept of ‘fair use’, frequently misidentify content, and are generally ineffective, yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than simply trying to placate lawmakers, intermediaries need to recognise the problems with this approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.
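To make that limitation concrete, here is a deliberately naive, purely illustrative sketch (in Python, with invented data – this is not how any particular platform’s filter is built) of a fingerprint-matching upload filter. Even the far more sophisticated perceptual matchers used in practice share the same blind spot: a match only tells you that uploaded content resembles something in a database, not whether the use is quotation, parody, criticism or otherwise legitimate.

```python
import hashlib

# Hypothetical database of fingerprints for "known infringing" content.
# The entries and data below are invented purely for illustration.
BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"full copy of a copyrighted film").hexdigest(),
}


def fingerprint(data: bytes) -> str:
    """Reduce an upload to a fixed fingerprint (here, just a SHA-256 hash).

    Real matching systems use perceptual audio/video fingerprints rather
    than exact hashes, but the structural point is the same.
    """
    return hashlib.sha256(data).hexdigest()


def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches a known entry.

    Note what is missing: any notion of quotation, parody, criticism,
    news reporting or other 'fair use' context. The decision rests
    entirely on similarity to a database entry at the moment of upload.
    """
    return fingerprint(upload) in BLOCKED_FINGERPRINTS


if __name__ == "__main__":
    # An exact re-upload matches and is blocked; a perceptual matcher would
    # treat an excerpt quoted inside a review in exactly the same way,
    # because the filter has no way to ask *why* the material is being used.
    reupload = b"full copy of a copyrighted film"
    print("blocked" if should_block(reupload) else "allowed")  # -> blocked
```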

Finally…

Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.

This is a pretty strong statement to make, and one that strongly resonates with me – in principle, at least. In practice, however, companies are obliged to comply with the laws of the jurisdictions in which they are based (and sometimes beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the Universal Declaration of Human Rights.