Freedom of Speech and the DMCA: Abuse of the Notification and Takedown Process

Last month, my first academic journal article was published by the leading international publication on IP law: the European Intellectual Property Review from Thomson Reuters.

From the abstract:

The Digital Millennium Copyright Act’s “notice and takedown” process is increasingly referred to as a model solution for content removal mechanisms worldwide. While it has emerged as a process capable of producing relatively consistent results, it also has significant problems—and is left open to different kinds of abuse. It is important to recognise these issues in order to ensure that they are not repeated in future legislation.

To that end, this article examines the DMCA with reference to its historical context, and the general issues surrounding the enforcement of copyright infringement claims. It then goes on to discuss the notice and takedown process in detail—along with its advantages, disadvantages, criticisms and praise. Specific examples of the kinds of abuse reported by online service providers are outlined, along with explanations of the statutory construction that allows these situations to continue. To finish, the viability of potential alternatives and proposed changes are discussed.

The article itself is available on Westlaw, citation: E.I.P.R. 2019, 41(2) at 70. However, you can also get a copy of the PDF below.

Freedom of Speech and the DMCA: Abuse of the Notification and Takedown Process (PDF)

This material was first published by Thomson Reuters, trading as Sweet & Maxwell, 5 Canada Square, Canary Wharf, London, E14 5AQ, in European Intellectual Property Review as ‘Freedom of speech and the DMCA: abuse of the notification and takedown process’.
E.I.P.R. 2019, 41(2) at 70 and is reproduced by agreement with the publishers. This download is provided free for non-commercial use only. Further reproduction or distribution is prohibited.

UN Special Rapporteur’s Report on Content Regulation (2018)

With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed pertinent to highlight one of the Council’s recently published Special Rapporteur reports, which looked at the state of online ‘content regulation’ and its impact on freedom of expression.

[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.

The report itself is one of the better publications from an official entity, and talks about a lot of important issues that other bodies tend to ignore (willingly or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:

Counter Speech

One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between ‘doing nothing’ and the total removal of anything questionable (with all of the resultant implications for freedom of expression), the concept of ‘counter speech’ is often brought up as a solution. In principle, the idea is that instead of silencing disagreeable expression, people should seek to counter the ideas directly. This avoids the problem of subjective censorship, protecting free speech, and also ‘shines light into the dark’, rather than driving people underground where there is little or no critical dissent.

As well intentioned as this approach may be, it is now unfortunately being misconstrued as an obligation for platforms to take on, rather than one for interested individuals or groups. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programmes or counter-narratives. As the report wisely notes:

While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.

Despite the fact that there is little evidence that such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Conceivably, pro-choice campaign websites could be overlaid with arguments from conservative religious groups, or McDonalds.com with a link to a vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even with the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue at the moment, and I’m especially pleased to see it highlighted by the UNHRC.

Trusted Flaggers

Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.

Lack of definition of terms

You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news’, etc. are, at best, doomed to failure and, at worst, a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but it is one made more visible by the globalised, cross-jurisdictional nature of the Internet.

The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.

This is pretty self-explanatory, but it is often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner much support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.

Vague rules

Following on from the previous section, platforms (perhaps partly as a direct result of contemporary political rhetoric) adopt vague descriptions of the kinds of content and/or behaviour that are unacceptable, in order to cover a variety of circumstances.

Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.

Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content reduce the likelihood of this significantly. They also provide little to no stability or protection for those who hold views outside of generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.

‘Real name’ policies

For those not familiar, this refers to the requirement by certain platforms that you use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or anonymity. Officially, the rationale is that if someone is required to use their ‘real’ name, they are less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (and aggressive) marketing to a platform’s user base. Either way, the report notes:

The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.

Within traditional digital rights circles (if there is such a thing), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification system before people can interact online. This is clearly a terrible idea, and may seem utterly laughable, but when you consider that this is exactly what will be law for adult websites in the UK later this year, it seems like it might not be completely out of the realms of possibility after all. We need to better educate ourselves and others on the issues before the drips become a wave.

Automated decision making

Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.

Artificial intelligence and ‘machine learning’ are increasingly seen as some kind of silver bullet for the issues of moderating content at scale, despite the many and varied problems with the technology. Bots do not understand context or the legal concept of ‘fair use’; they frequently misidentify content; and they are generally ineffective. Yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than just trying to placate lawmakers, intermediaries need to recognise the problems with such an approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.

Finally…

Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content standards accordingly.

This is a pretty strong statement to make, and it demonstrates an approach that resonates strongly with me – in principle, at least. In practice, however, companies are obliged to follow the legal obligations of the jurisdictions in which they are based (and sometimes even beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the UN Declaration of Human Rights.

Issues with Article 17 (‘Right to be Forgotten’) of the GDPR

With the GDPR’s deadline now almost upon us, one of the most talked about provisions has been the ‘Right to Erasure’ contained within Article 17.

Significantly expanding the ‘Right to be Forgotten’ doctrine established in the Google Spain case, Article 17 allows data subjects (i.e. you and me) to submit takedown requests to any organisation that collects and controls information on them.

There are a number of grounds under which people may seek to have data deleted, which cover a broad variety of circumstances. These include situations where the data is no longer necessary for the reasons it was collected; where it was unlawfully processed; where the subject withdraws their consent; as well as some others. The right is not unlimited, with exceptions where the collection and processing of the data is necessary in the exercise of the right to freedom of expression; where there is a specific legal obligation to retain the information; for reasons of public interest; etc.

Issues with Article 17

Despite some initial reservations, the GDPR (and Article 17 in particular) has generally been lauded as a victory for European citizens, who will gain far more control than ever before over what information companies hold on them. This is especially true given its arguably extra-territorial applicability, whereby any organisation that handles European data will be expected to comply.

However, a few specific issues arising from the construction of Article 17 bear further scrutiny. Rather than analyse the philosophical criticisms of the Right to Erasure, below I briefly look at some of the practical considerations that data controllers will need to address when they receive a Request for Erasure:

  1. Verification.
  2. Abuse, and a lack of formal requirements for removal requests.
  3. Article 85: Freedom of expression.

Verification of the Data Subject

Before giving effect to an Article 17 request, the controller must use all ‘reasonable measures’ to verify the identity of the requesting party. It is perhaps obvious that an organisation should not delete the accounts or other data of somebody without first checking that the person making the request is authorised to do so. However, this leaves open a number of questions about what this kind of verification will look like. In other words, what steps will be considered ‘reasonable’ under the terms of the law? Will courts begin to see arguments over online platforms’ account recovery procedures, framed as a denial of access to the fundamental right of privacy via the GDPR? What identifiers will a data subject be able (or expected) to provide in order to discover their associated data? While it might be easy to request information relating to your e-mail address, what about other identifiers such as IP addresses or names? These questions do not have clear answers, and will inevitably lead to an uneven application of the law, dependent on the situation.

Abuse, and a Lack of Formal Procedural Requirements for Erasure Requests

It should be self-evident at this stage that any statutory removal mechanisms will be open to abuse by parties determined to have content removed from the Internet, and in that regard, Article 17 is no different. However, there is a common misconception that the Right to Erasure gives people the right to stop any mention of them online – especially speech that is critical of them, or that they disagree with. This is not the case, and Article 17 is not crafted as a dispute resolution mechanism for defamation claims (that would be the E-Commerce Directive). These facts don’t stop people from citing the GDPR incorrectly though, and it can quickly become difficult to deal with content removal demands as a result.

The problem is compounded by the fact that there are no formal procedural requirements for an Article 17 request to be valid, unlike the notice and takedown procedure of the DMCA, or even the ECD. Requests do not have to mention the GDPR, or even Right to be Erasure specifically, and perhaps even more surprisingly, the requests don’t have to be made in writing, as verbal expressions are acceptable.

While the reason for the lack of specific notice requirements is clearly to give the maximum amount of protection to data subjects (the absence of a writing requirement was apparently intended to allow people to easily ask for the removal of their data from call centres over the phone), this seems to ignore the accompanying problems with such an approach. The general public’s lack of clarity around what exactly the Right to Erasure includes, along with the lack of procedural checks and balances, means that it will be increasingly difficult for organisations to identify and give effect to legitimate notices. This is especially true for online platforms that already receive a high number of reports. While many of these are often nonsense or spam, they will require far greater scrutiny to ensure that they aren’t actually badly worded Article 17 requests that might lead to liability.

If we look at the statistics on other notice and takedown processes such as the DMCA’s (the WordPress.com transparency report, for example), we can see that the levels of incomplete or abusive notices received are high. The implementation of even basic formal requirements would provide some minimum level of quality control over the requests, and would allow organisations to efficiently identify, categorise and give effect to legitimate Article 17 requests, rather than facing the prospect of having to consider every report received through the lens of the GDPR.
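To illustrate the point, here is a hypothetical sketch of the kind of ‘basic formal requirements’ check an organisation could run on incoming reports. To be clear, the GDPR imposes no such fields; the field names below are illustrative assumptions, not drawn from the law.

```python
# Hypothetical triage of incoming removal reports. The field names are
# invented for illustration; the GDPR itself sets no formal requirements.

REQUIRED_FIELDS = {
    "requester_name",      # who is asking
    "contact_email",       # how to respond and verify identity
    "data_identifier",     # which data the request concerns
    "ground_for_erasure",  # which Article 17(1) ground is claimed
}

def triage_erasure_request(request):
    """Return (is_complete, missing_fields) for an incoming report.

    Complete requests can be routed straight to an erasure workflow;
    incomplete ones can be bounced back for more information, instead
    of every report having to be scrutinised as a possible Article 17
    notice.
    """
    missing = REQUIRED_FIELDS - request.keys()
    return (not missing, missing)
```

Under this (hypothetical) scheme, a report containing only a name would be returned as incomplete along with the missing fields, rather than consuming a human reviewer’s time.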

Article 85: Freedom of expression

As mentioned earlier, a controller is not obliged to remove data where its continued retention is ‘necessary for reasons of freedom of expression and information’. The obvious question then becomes how this should be interpreted, and we find some guidance in Article 85 of the GDPR. Unfortunately, however, it doesn’t say all that much:

‘Member States shall by law reconcile the right to the protection of personal data pursuant to this Regulation with the right to freedom of expression and information, including processing for journalistic purposes and the purposes of academic, artistic or literary expression.’

This appears to leave the task of striking that balance to individual Member States. Whilst this isn’t unusual in European legislation, it means that the standard will vary depending on where the organisation is based and/or where the data subject resides. At the time of writing, it isn’t clear how different Member States will address this reconciliation. Despite freedom of expression’s status as a fundamental right in European law, the GDPR affords it scant consideration – and thus weak protection – preferring to defer to national law. That simply isn’t good enough; far stronger statements and guarantees should have been provided.

Over Compliance

Unfortunately, the amount of extra work required to analyse and deal with these requests as a result of the law’s construction – along with the high financial penalties detailed in Article 83 – means that many organisations will likely resort to simply removing data, even where there is no lawful basis for the request, or requirement for them to do so.

We may fairly confidently speculate that the response from many data controllers will be to take a conservative approach to the GDPR’s requirements, and thus be less likely to push back on any potentially dubious requests as a result. Insistent complainants may find that they are able to have speech silenced without any legitimate legal basis simply out of fear or misunderstanding on the part of third party organisations.

With a well publicised and generally misunderstood right to removal, lack of procedural requirements, and a reliance on intermediaries to protect our rights to freedom of expression, we may find ourselves with more control over our own data, but with far less control over how we impart and receive information online.

Header image by ‘portal gda’ on Flickr. Used under CC BY-NC-SA 2.0 licence.

Nazi Pugs Fuck Off

One of the latest cases to spark intense debate around freedom of expression happens to fall in my own back yard. The facts of the ‘nazi pug’ case concerned one Mark Meechan, aka ‘Count Dankula’, who filmed himself training his girlfriend’s dog to react to various phrases such as ‘gas the Jews’, and then posted it on YouTube. In his own words:

“My girlfriend is always ranting and raving about how cute and adorable her wee dog is, and so I thought I would turn him into the least cute thing that I could think of, which is a Nazi”

Meechan was subsequently charged and convicted in a Scottish Sheriff Court under s.127 of the Communications Act 2003, which makes it an offence to (publicly) communicate a ‘message or other matter that is grossly offensive or of an indecent, obscene or menacing character’.


Offensive speech should not be a criminal offence

The accused argued that the video was intended as a joke to noise up his girlfriend, as evidenced by the disclaimer at the outset. This position was rejected by the court, which stated that humour was ‘no magic wand’ to escape prosecution, and that any determination of context was for it to decide.

In passing sentence, the Sheriff brought up the fact that the accused’s girlfriend didn’t even subscribe to his YouTube channel, and claimed that as a result the notion that the escapade was intended as a private joke didn’t hold any water. This is important because it demonstrates a deep cultural ignorance of how people communicate in an age dominated by online platforms, but also for what may well be a more interesting point: the actions could only be classed as an offence under the Communications Act by dint of the fact that the video was posted on a ‘public communications network’. In other words, if the same ‘joke’ had been performed at a house party, down the pub, or even on stage in front of hundreds of people, it could not have brought about the same kind of prosecution.

This raises two questions:

  1. Should there be any distinction between posting a video online (or via telephone), and making statements in person? If so, why?
  2. Should anybody ever face jail time for making ‘offensive’ statements?

These are questions that can only realistically be addressed by Parliament – not the Sheriff Court – though one would have hoped that the court would have taken a more liberal approach to statutory interpretation, or that the Procurator Fiscal would have had the foresight not to pursue a conviction.

A bad sense of humour should not be enough to ground a criminal prosecution. Further, even if the video was in fact an expression of genuine conviction (which has not been at issue in this case), it still should not warrant the possibility of jail time – especially not when the distinction turns on the fact that the statements were made on a ‘public communications network’ rather than in person. Remember, this was not a question of ‘incitement’, but simply of offence.

Nazis are not your friends

It appears that in many ways the court was bound by the statutory terms, and that the 2003 Act itself is inadequate, to say the least. However, there is another element to this tale that is worth discussing: individuals such as the former leader of the so-called English Defence League have come out to associate themselves with the issue, and not enough has been done to reject those attempts.

The support of the far right is not particularly surprising, as they are increasingly taking up the banner of free expression to justify their odious positions. It is also understandable that, when faced with what you perceive as an unwarranted criminal prosecution, you would welcome any support you can get, or that the media would try to draw connections where there are none. However, the enemy of my enemy is not necessarily my friend. If arseholes such as Tommy Robinson, whose views you claim to be diametrically opposed to, try to co-opt your situation for their own political ends, you have a duty to clearly, loudly, and publicly tell them to fuck off. When the far right started to infiltrate punk culture on the premise of certain shared values, the Dead Kennedys responded in no uncertain terms.

I don’t and won’t claim to know the politics of the accused in this case, but the situation should be a warning for all who consider ourselves to sit on the liberal end of the spectrum: Be wary of those who seek to use a shared belief in freedom of expression as a trojan horse. Yes, fight for the right of those you disagree with to speak, but don’t let the crows trick their way into your nest as a result.

Meechan has indicated plans to appeal the conviction in order to make a point about freedom of speech, although it is unclear at this point on what grounds he will do so. Either way, whilst this is something I would support prima facie, it is becoming increasingly tough to do so in the knowledge that each development gives people such as the EDL a platform without any real challenge.


For a more in depth analysis of the law involved in this case, have a look at this post from thebarristerblogger.com.

P.S. I don’t blame the pug.

Shopify, Breitbart, and Freedom of Speech.

Tonight I came across an article on TechCrunch responding to an open letter from Tobias Lütke, CEO of e-commerce platform Shopify, in which he defends the company’s decision to continue hosting Breitbart’s online shop – Breitbart being the infamous far-right publication with which Steve Bannon was heavily involved.

After sustained criticism, Lütke explains in the post entitled ‘In Support of Free Speech’ that based upon a belief that ‘commerce is a powerful, underestimated form of expression’, it would be wrong to effectively censor merchants by shutting down their shops as the result of differing political views.

Reporting on the letter, TechCrunch shared their post to Facebook with the text: ‘Shopify’s CEO thinks his platform has a responsibility to continue hosting Breitbart’s store – here’s why he’s wrong.’


I was curious to see the arguments that would be proffered as to why the decision was wrong, but was ultimately left wanting. Here are the reasons given, as far as I could make out:

  1. Lütke is grossly overestimating the role of a private e-commerce platform in providing and protecting freedom of expression.
  2. Shopify cannot ‘censor’ anybody, as they are not an emanation of the State.
  3. Justifying the continued hosting of merchants who have extreme views for freedom of speech reasons is wrong, as freedom of speech does not apply to private organisations.
  4. As a private company, Shopify are not legally required to provide a platform to anybody.
  5. Shopify’s Terms of Service allow them to terminate the account of any user at any time.

In response, here’s why TechCrunch are wrong:

None of the reasons given actually explain why Shopify shouldn’t continue to host Breitbart.

Read over them again, then check out the full article here. Despite heavily criticising Shopify and stating that Lütke is ‘wrong’, TechCrunch don’t engage at all with the heart of the issue. No, Shopify are not legally required to host the Breitbart shop, and yes, their Terms of Service are quite obviously worded in such a way as to give them that discretion in the event of any legal challenge – but that’s hardly a surprise.

Here’s the big question that went unanswered: why should Shopify not host Breitbart? Lütke hits the nail on the head with the following challenge, which the TechCrunch article completely fails even to acknowledge:

When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code? Where would it begin and end? Who gets to decide what can be sold and what can’t?

Rather than attempt to address this fundamental issue, TechCrunch essentially just argue that Shopify should kick Breitbart off of their platform because, er, well, legally there’s nothing to stop them. A pretty poor argument at best.

Protecting freedom of speech isn’t just down to the State.

Firstly, I’m not sure where the idea comes from that censorship is something only the State can give effect to. To censor means to forbid or ban something; to suppress speech. The source of the suppression doesn’t have anything to do with it.


Secondly, there is a lot of confusion surrounding freedom of speech and the relation to the State, even from those who purport to understand the dynamic. To clear some things up, the following are true:

  • Freedom of speech law (generally) only protects citizens from the acts of State actors.
  • Private online service providers (generally) have no obligation to protect the freedom of speech rights of their users, or to give them a platform for expression.

However, to assert on the basis of the above that a platform cannot justify its actions on freedom of speech grounds, or willingly strive to uphold those principles, is a non sequitur. Additionally, just because you can’t threaten legal action against Facebook on a freedom of speech argument if they take down your status update, that doesn’t mean it is wrong to argue that Facebook should be doing more to consider and protect those values.

Just as we would not expect a hotel owner to be able to refuse a same-sex couple a shared bed, or a pub to knock back someone based purely on the colour of their skin, it is nonsense to pretend that we have no expectations of private organisations to abide by certain shared societal values.

Without touching on the claims around the importance of e-commerce as a vehicle for expression, in a world where we are increasingly reliant on private entities to provide our virtual town squares, and where we expect certain values to be upheld, platforms such as Shopify arguably have a growing moral obligation to protect (as far as is possible) the principles that are the cornerstone of our democracies.


Yes, Protest Does Matter.

In the past week, we have seen peaceful protests around the world, in response to the actions taken by Donald Trump, as he has assumed the American Presidency.

Despite not having attended any of the demonstrations myself, I’ve been troubled by the fervent reaction against those who have done so, and the poor arguments that have been made against speaking out. So, without passing comment on the content of any of Trump’s policies or actions, I’ve decided to address the common criticisms publicly:

1. Protesting doesn’t make any difference.

I almost can’t believe that this statement is still being uttered in 2017, after all that has been written, and after we have seen – and to this day still celebrate – the outcomes of peaceful protest in the past.

The ultimate goal of protest is obviously to bring about change, but few who take part in any single act of resistance are naive enough to believe that that one particular event will have devastating political ramifications on its own. Movements are built over time, and succeed by steadily building pressure on those in power.

In this particular situation, there is a real chance that sustained protest can have an impact on the policies of the Trump administration. The Republican party is not full of evil people; many viscerally disagree with his approach to numerous issues, but at present feel unable to speak up. If all these people hear is silent indifference to what is going on, they are far less likely to have the courage to take the first steps in opposition themselves.

For many, even if there is absolutely zero chance of political change, demonstrations are still immensely important. First and foremost, they are about standing up and publicly stating that you refuse to quietly accept actions that you fundamentally disagree with, and may otherwise be powerless to stop. It’s about demonstrating to other people who are facing the brunt of the effects that they are not alone. That’s why they are called ‘demonstrations’.

I won’t draw comparisons between Trump and Hitler at this point, but I do find it rather curious how one of the biggest questions people have when looking back at history is how the German population could possibly have let fascism take hold, seemingly without much protest. I wonder how many people were dismissing those who spoke up, with the same argument: ‘Protesting won’t make a difference’.

2. It’s a foreign country. It doesn’t have any impact on you or people you know. Focus on your own issues.

There are a few constituent parts to this. Firstly, this kind of statement is often made in a blanket fashion, completely ignoring the personal relationships that the person on the receiving end may have. Where their wife may come from; where their friends may live; where the company they work for is based, for example.

Secondly, even if a person has zero personal ties to the US, the idea that we should close our eyes and ears to what happens outside our own country simply doesn’t follow. In fact, it’s the worst kind of nationalism. Taken to its logical conclusion, no Scottish person should ever speak about the evils of apartheid – because it was a South African issue. Neither should the UK have got involved in the Second World War. There are innumerable examples of why this doesn’t hold water.

There is a valid criticism to be made of people who only care and speak up about what they see on the news in a foreign country, whilst acting completely indifferent to what is happening in their own back garden. However, that sort of criticism can only be made with in-depth knowledge of a person and their motives, and is certainly not something that should be applied with a broad brush to people whose background you know nothing about. Just because somebody is concerned about the actions of Trump doesn’t mean that they aren’t equally passionate about the right-wing agenda of the UK Government, or that they don’t volunteer at a local foodbank every night.

All of this aside, the reality is that what happens in America does impact what happens in the UK. The policies and rhetoric of the most powerful man on Earth, who leads the biggest military superpower in modern history, who happens to be our supposedly closest ally, definitely has repercussions around the globe. To pretend otherwise is simply foolish.

To bring it home, so to speak: the word ‘solidarity’ comes with a lot of baggage, but it is exactly what protest is often about: making a statement about what kind of society you want and believe in, even in spite of everything that may be happening elsewhere. It’s about saying: ‘The most powerful nation on the planet may be targeting refugees, but we won’t accept those same actions here.’ If all the protests in Glasgow yesterday achieved was to make a single refugee feel more welcome and secure in their adopted city, then they were already a success.

3. The American people chose to vote for Trump. Get over it.

This is one of the most ridiculous assertions of the lot. The idea that once a political party or candidate wins an election they become infallible, and should be immune from any sort of criticism, is ludicrous. At best it is complete hypocrisy on the part of those uttering this nonsense, and at worst an extremely dangerous perspective – one that results in human rights abuses in countries like Turkey and Russia.

4. Protesters are just idiots who are virtue signalling whilst contributing exactly zero to the cause they’re apparently so passionate about.

This is pretty much a word-for-word comment from someone who didn’t approve of the demonstrations held in Glasgow yesterday, but the language is similar to that of many others.

Here’s how ‘virtue signalling’ is defined:

virtue signalling (US virtue signaling)

noun [mass noun]

the action or practice of publicly expressing opinions or sentiments intended to demonstrate one’s good character or the moral correctness of one’s position on a particular issue: it’s noticeable how often virtue signalling consists of saying you hate things | standing on the sidelines saying how awful the situation is does nothing except massage your ego by virtue signalling.

On its own, the phrase is seemingly innocuous, but more and more frequently it is being used to dismiss people who are taking a position that others disagree with, without those others having to intellectually engage with that position. It’s become one of those lazy phrases, like ‘fake news’, that I can’t stand, as it doesn’t actually mean anything in practice.

Given that the phrase is based on intent, the only way ‘virtue signalling’ could accurately be ascribed to those who chose to demonstrate against Trump or his actions, would be if the person using it knew those intentions. In other words, they would need to know the specific motivating factors involved… something that is clearly impossible when applied to a group.

It’s probably worth being crystal clear on this: disagreeing with your position doesn’t mean that somebody is ‘virtue signalling’. It means they disagree with your position. Challenge them on their arguments, not with some spurious empty phrase that only serves to shut down discussions that you can’t handle.

Trump image by Gage Skidmore – used under CC-BY-SA 2.0 license

Censoring ‘Fake News’ is the real threat to our online freedom

As the results of the US Presidential election began to sink in, the finger of blame swung around to focus on ‘fake news’ websites that publish factually incorrect articles with snappy headlines ripe for social media dissemination.

A ‘fake’ headline. Via the Independent.

Ironically, the age of propaganda was previously thought to have died out with the proliferation of easy access to the Internet, with people able to cross-reference and fact-check claims from their bedrooms, rather than relying on a single domestic point of information. Instead, it appears we are seeing the opposite: people congregating around a single funnel of sources (Facebook), which filters to the top the most widely shared (read: most attention-grabbing) articles.

Almost immediately, the socially liberal-leaning technology giants Google and Facebook announced that they would be taking steps to prevent such websites from making use of their services. This has sparked a ream of discussion about the ‘responsibility’ of other online platforms to take steps to prevent the spread of these so-called ‘fake news’ sites on their networks.

Here, probably for the first time I can remember, I find myself in agreement with what Zuckerberg has (reportedly) said in response:

The suggestion that online platforms should unilaterally act to restrict ‘fake news’ websites is one of the biggest threats to free speech to face the Internet.

Those are my words, not his – just to be clear. Click through to see what he actually said (well, as long as the source can be trusted).

It is unclear exactly what ‘fake news’ is supposed to be. Some sites ‘outing’ publishers that engage in this sort of activity have included The Onion in their lists, which in and of itself demonstrates the problem of singling out websites that publish ‘fake’ news.

  • Where is the line drawn between ‘fake news’ and satire?
  • At what point do factually incorrect articles become ‘fake news’?
  • At what point do ‘trade puffs’ and campaign claims become ‘fake news’ rather than just passionate advocacy?
  • If the defining factor is intent, rather than content, who makes that determination, and based on what set of values?

It is not the job of online platforms to make determinations on the truth of the articles that their users either share, or the content that they themselves publish. There is no moral obligation or imperative on them to editorialise and ensure that only particular messages reach their networks. In fact, it is arguably the complete opposite: they have an ethical obligation to ensure that they do not interfere in the free speech of users, and free dissemination of ideas and information; irrespective of their own views on the ‘truth’ or otherwise of them.

The real challenge to free speech isn’t fake news; it’s the suggestion that we should ban it.

Misinformation is a real issue, and the culture of lazy reliance facilitated by networks such as Facebook and Google – where any article with a catchy headline is taken at face value – is a huge problem. But the answer is not for these networks to take things into their own hands and decide which set of truths is acceptable for us to see, and which is not.

We have reached a position where half of our societies vote one way, whilst the other half can’t believe that anybody would ever make such a decision, precisely because we have retreated into our own echo chambers – in the physical world as well as the virtual. The solution to the political struggles we on the left face is not to further restrict the gamut of speech that is open to us in our shared online spaces, or to expect service providers to step up and act as over-arching publishers; it is to get out there and effectively challenge those ideas with people we would normally avoid engaging with. Curtailing the free speech of others through an arbitrary definition of ‘fake news’ is not only not the answer; it is a terrifying prospect for the very freedoms we are arguing to protect.

Disclaimer: It should go without saying that these are my views, and not necessarily those of WordPress.com, or anybody else.

Why I’ve Switched to WordPress.com

The eagle-eyed amongst you may have noticed that not only have I switched the blog’s theme in the past few days, but I’ve also shifted the hosting completely, from a self-hosted WordPress.org instance to one on the servers of WordPress.com. (Confused? This article will explain the difference.)

For years I’ve run my sites on WordPress software that I’ve configured myself, rather than on WordPress.com, for the following reasons:

  • Hacker Mentality – Not wanting to let go of complete control of my site, and the ability to do with it what I please (like hosting weird web apps and playing about with plugins)
  • Cost – I was always under the impression it would be relatively expensive to keep all of my stuff on WordPress.com’s servers, as generous pals have hosted my sites previously
  • Transition Pain – Moving from an already established and customised site to a different platform seemed like a faff, with inevitable SEO problems/broken links
  • Features and Customisation – Not believing that I’d be able to get my blog to look/feel the way I wanted it to within the WordPress.com boundaries, and that I would miss features (like permalink restructuring)

The more I thought about it, the more I realised that I didn’t actually need to run a self-hosted site for http://iamsteve.in. The design of the site was pretty straightforward, there were no really complicated customisations involved, and the cost of shifting to WordPress.com wasn’t as high as I had expected – certainly not for a site that isn’t hosting large numbers of images.

In fact, the benefits of being hosted on WordPress.com seemed more and more appealing:

  • A dedicated and passionate support team on hand to help out with any issues (working alongside them, this was an even bigger boon for me personally)
  • A streamlined interface that I use every day (for both work and pleasure)
  • No more logging in to separate admin panels all the time
  • A site that is integrated into the highly active WordPress.com community – and so more engagement with other users on the posts
  • No more worrying about rogue plugins crashing, or needing to re-configure things after an update breaks something
  • The ability to absorb huge spikes in traffic, as I’m hosted on WordPress.com’s massive network

and one of the most important things of all:

  • The knowledge that my host won’t be intimidated by any legal pressures that come from any of the critical posts I write. (See here for more)

I’m incredibly proud to be part of a team that fights back against those who attempt to censor bits of the Internet that they don’t like on a daily basis, and it makes sense to bring my own writing into that fold. I know I have good people on my side should anything hairy come up.

Really, the only thing I was left swithering over was the pain of moving across. I thought I would give it a bash and, two hours later, the entire site was completely migrated (multiple domain names and all). The difficulties I expected didn’t crop up at all, and all of my custom permalinks are smartly resolved by the WordPress software to their new locations – something I am in equal parts disbelief and awe at.

I’m pleased. Not a bad experiment after all.

Twitter and S.112 of the Equality Act 2010

Yesterday it was reported in the Drum that, after receiving a number of threats over Twitter – including threats of rape – the subject of these messages, Caroline Criado-Perez, has been approached by a lawyer with respect to a possible civil law action against the service. Under Section 112 of the Equality Act 2010, a person must not ‘knowingly help’ another to do anything which contravenes the provisions of the Act. Without commenting directly on the facts or merits of this case, if Twitter were to be held liable for the actions of its users in such a manner, the ramifications would extend far beyond the issues at hand.

Few would argue that the social network knowingly and willingly designed its systems specifically to allow people to abuse others – it wouldn’t make good business sense, for a start. Users don’t tend to stick around on services where they can’t filter out those they don’t want to interact with, which is exactly why there is a ‘block’ function in place. Whether it is practical to block thousands of different sources quickly and effectively is quite a different question, and one which is not new to the web – as any webmaster worth their salt will know. Block an offending IP address, and just as quickly another one will pop up: the ol’ virtual whack-a-mole.
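For readers who haven’t run a website themselves, the whack-a-mole problem above can be sketched in a few lines of code. This is a toy illustration only – nothing like Twitter’s actual systems, and all the names and addresses are invented – but it shows why blocking by source address is always one step behind:

```python
# Toy sketch of address-based blocking (hypothetical, illustrative only).
blocked = set()

def handle_message(ip, message):
    """Reject messages from blocked addresses; deliver everything else."""
    if ip in blocked:
        return "rejected"
    return f"delivered: {message}"

# The webmaster blocks one abusive address...
blocked.add("203.0.113.7")
print(handle_message("203.0.113.7", "abuse"))   # prints "rejected"

# ...and the same abuser simply reappears from a new address,
# sailing straight past the blocklist.
print(handle_message("203.0.113.8", "abuse"))   # prints "delivered: abuse"
```

The blocklist only ever describes where abuse came from last time, not who is sending it – which is the crux of the practicality question raised above.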

Twitter may have ‘knowingly’ created a system where people are free to disseminate information en masse, quickly, and with whatever content they desire, but to hold the company to account for ‘knowingly helping’ people to breach equality legislation seems farcical (not to mention outwith the provision’s intended purpose). If providers are to be held responsible for what their users post to such an extent, then we may as well shut down all similar platforms, since any that allow people a degree of freedom of expression will inevitably be misused. To apply the law in this way would have a chilling effect not just on the development of the web, but on free speech itself.