Nazi Pugs Fuck Off

One of the latest cases to spark intense debate around freedom of expression happens to fall in my own back yard. The facts of the ‘nazi pug’ case concerned one Mark Meechan, aka ‘Count Dankula’, who filmed himself training his girlfriend’s dog to react to various phrases such as ‘gas the Jews’, and then posted it on YouTube. In his own words:

“My girlfriend is always ranting and raving about how cute and adorable her wee dog is, and so I thought I would turn him into the least cute thing that I could think of, which is a Nazi”

Meechan was subsequently charged and convicted in a Scottish Sheriff Court under s.127 of the Communications Act 2003, which makes it an offence to (publicly) communicate a ‘message or other matter that is grossly offensive or of an indecent, obscene or menacing character’.

Count Dankula

Offensive speech should not be a criminal offence

The accused argued that the video was intended as a joke to noise up his girlfriend, as evidenced by the disclaimer at the outset. This position was rejected by the court, who stated that humour was ‘no magic wand’ to escape prosecution, and that any determination of context was for them to decide.

In passing sentence, the Sheriff brought up the fact that the accused’s girlfriend didn’t even subscribe to his YouTube channel, claiming that as a result the notion that the escapade was intended as a private joke didn’t hold any water. This is important not only because it demonstrates a deep cultural ignorance of how people communicate in an age dominated by online platforms, but also because it raises what may well be a more interesting point: the actions could only be classed as an offence under the Communications Act by dint of the fact that the video was posted on a ‘public communications network’. In other words, if the same ‘joke’ had been demonstrated at a house party, down the pub, or even on stage in front of hundreds of people, it could not have brought about the same kind of prosecution.

This brings about two questions:

  1. Should there be any distinction between posting a video online (or via telephone), and making statements in person? If so, why?
  2. Should anybody ever face jail time for making ‘offensive’ statements?

These are questions that can only realistically be addressed properly by Parliament – not the Sheriff Court, though one would have hoped that the court would have taken a more liberal approach to statutory interpretation, or that the Procurator Fiscal would have had the foresight not to pursue a conviction.

A bad sense of humour should not be enough to constitute a criminal offence. Further, even if the video was in fact an expression of a genuine conviction (which has not been at issue in this case), it still should not warrant the possibility of jail time – especially not when the distinction rests on the fact that the statements were made on a ‘public communications network’ rather than in person. Remember, this was not a question of ‘incitement’, but simply offence.

Nazis are not your friends

It appears that in many ways the court were bound by the statutory terms, and that the 2003 law itself is inadequate, to say the least. However, there is another element to this tale that is worth discussing: namely, that individuals such as the former leader of the so-called English Defence League have come out to associate themselves with the issue, and that not enough has been done to reject those attempts.

The support of the far right is not particularly surprising, as they are increasingly taking up the banner of free expression to justify their odious positions. It is also understandable that, when faced with what you perceive as an unwarranted criminal prosecution, you would welcome any support you can get, or that the media would try to draw connections where there are none. However, the enemy of my enemy is not necessarily my friend. If arseholes such as Tommy Robinson, whose views you claim to be diametrically opposed to, try to co-opt your situation for their own political ends, you have a duty to clearly, loudly, and publicly tell them to fuck off. When the far right started to infiltrate punk culture based on the premise of certain shared values, the Dead Kennedys responded in no uncertain terms.

I don’t and won’t claim to know the politics of the accused in this case, but the situation should be a warning for all of us who consider ourselves to sit on the liberal end of the spectrum: be wary of those who seek to use a shared belief in freedom of expression as a trojan horse. Yes, fight for the right of those you disagree with to speak, but don’t let the crows trick their way into your nest as a result.

Meechan has indicated plans to appeal the conviction in order to make a point about freedom of speech, although it is unclear at this point on what grounds he will do so. Either way, whilst this is something I would support prima facie, it is becoming increasingly tough to do so with the knowledge that each development gives people such as the EDL a platform without any real challenge.


For a more in depth analysis of the law involved in this case, have a look at this post from thebarristerblogger.com.

P.S. I don’t blame the pug.

Shopify, Breitbart, and Freedom of Speech.

Tonight I came across an article on TechCrunch responding to an open letter from Tobias Lütke, CEO of e-commerce platform Shopify, in which he defends the company’s decision to continue hosting Breitbart’s online shop – Breitbart being the infamous far-right publication with which Steve Bannon was heavily involved.

After sustained criticism, Lütke explains in the post entitled ‘In Support of Free Speech’ that based upon a belief that ‘commerce is a powerful, underestimated form of expression’, it would be wrong to effectively censor merchants by shutting down their shops as the result of differing political views.

Reporting on the letter, TechCrunch shared their post to Facebook with the text: ‘Shopify’s CEO thinks his platform has a responsibility to continue hosting Breitbart’s store – here’s why he’s wrong.’

TechCrunch’s Facebook post

I was curious to see the arguments that would be proffered as to why the decision was wrong, but was ultimately left wanting. Here are the reasons given, as far as I could make out:

  1. Lütke is grossly overestimating the role of a private e-commerce platform in providing and protecting freedom of expression.
  2. Shopify cannot ‘censor’ anybody, as they are not an emanation of the State.
  3. Justifying the continued hosting of merchants who have extreme views for freedom of speech reasons is wrong, as freedom of speech does not apply to private organisations.
  4. As a private company, Shopify are not legally required to provide a platform to anybody.
  5. Shopify’s Terms of Service allow them to terminate the account of any user at any time.

In response, here’s why TechCrunch are wrong:

None of the reasons given actually explain why Shopify shouldn’t continue to host Breitbart.

Read over them again, then check out the full article here. Despite heavily criticising Shopify, and stating that Lütke is ‘wrong’, TechCrunch don’t engage at all with the heart of the issue. No, Shopify are not legally required to host the Breitbart shop, and yes, their Terms of Service are quite obviously worded in such a way as to give them that discretion in the event of any legal challenge, but that’s hardly a surprise.

Here’s the big question that went unanswered: why should Shopify not host Breitbart? Lütke hits the nail on the head with the following challenge, which the TechCrunch article completely fails to even acknowledge:

When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code? Where would it begin and end? Who gets to decide what can be sold and what can’t?

Rather than attempt to address this fundamental issue, TechCrunch essentially just argue that Shopify should kick Breitbart off of their platform because, er, well, legally there’s nothing to stop them. A pretty poor argument at best.

Protecting freedom of speech isn’t just down to the State.

Firstly, I’m not sure where this idea comes from that censorship is something only the State can give effect to. To censor means to forbid or to ban something; to suppress speech. The source doesn’t have anything to do with it.


Secondly, there is a lot of confusion surrounding freedom of speech and the relation to the State, even from those who purport to understand the dynamic. To clear some things up, the following are true:

  • Freedom of speech law (generally) only protects citizens from the acts of State actors.
  • Private online service providers (generally) have no obligation to protect the freedom of speech rights of their users, or to give them a platform for expression.

However, to assert on the basis of the above that a platform cannot justify its actions on freedom of speech grounds, or willingly strive to uphold those principles, is a non sequitur. Additionally, just because you can’t threaten legal action against Facebook on a freedom of speech argument if they take down your status update, that doesn’t mean it is wrong to argue that Facebook should do more to consider and protect those values.

Just as we would not expect a hotel owner to be able to refuse to allow a same sex couple to share a bed, or a pub to knock back someone based purely on the colour of their skin, it is nonsense to pretend that we have no expectations of private organisations to abide by certain shared societal values.

Without touching on the claims around the importance of e-commerce as a vehicle for expression, it seems that in a world where we are increasingly reliant on private entities to provide our virtual town square equivalents, and where we expect certain values to be upheld, platforms such as Shopify arguably have an increasing moral obligation to protect (as far as is possible) the principles that are the cornerstone of our democracies.

 

 

Trump, Prostitutes, and 4chan. Still want to ban sites that publish fake news?

Today the big story on the web is that a story about Russia blackmailing Donald Trump, leaked by a ‘British intelligence officer’, published by BuzzFeed, and then dutifully re-posted by other major established media outlets, was allegedly made up by posters on 4chan.

Whilst the articles state that the claims are ‘unverified’ and ‘contain errors’, it appears that there has been very little in the way of fact checking or corroboration of sources going on. Indeed, publishing allegations without due diligence is exactly the operational basis of other sites that don’t fall under the banner of ‘credible’ media. The fact is that the outcome in either case is the same: either willingly or blindly (through a desire to publish content first to drive advertising revenue), these sites are spreading misinformation. Looking at the Mirror’s coverage, one would be forgiven for thinking that the info was at least partially credible:

The Mirror’s coverage of the story

It’s all too easy to scoff at the Mirror, or BuzzFeed. Nobody takes them seriously after all; everybody knows that! That clearly isn’t actually the case, and it demonstrates the problem with the reactionary drive towards ‘banning’ or filtering sites that publish fake news from online platforms.

Of course, these claims to have made up the story could very well be made up themselves… but that doesn’t invalidate the criticism. If anything, it highlights the issue with asking or expecting third parties such as online service providers to filter out untrue content.

To echo the questions I raised in my previous post on this topic: Exactly what constitutes fake news, where do we draw the line, at what point do ‘credible’ news sources lose that credibility, and who makes those determinations? Should BuzzFeed articles be removed from Facebook? What about The Mirror? What about CNN? Maybe only articles claiming to have made up fake news should be treated as fake news. Where does it stop?

For an interesting read on this that was shared by my colleague Davide recently, check out this page:

https://www.theguardian.com/commentisfree/2017/jan/08/blaming-fake-news-not-the-answer-democracy-crisis

It only gets worse when charges of fake news come from the media, which, due to the dismal economics of digital publishing, regularly run dubious “news” of their own. Take the Washington Post, that rare paper that claims to be profitable these days. What it has gained in profitability, it seems to have lost in credibility.

Edit: I published this earlier today before Trump’s press conference, and felt compelled to update it as a result of what he said. Responding to questions from the media, he apparently decided to pick up the ‘fake news’ mantle:

When Jim Acosta, Senior White House Correspondent for CNN, attempted to ask Trump a question, the President-elect refused to answer. “Not you. Your organization is terrible,” Trump said. “I’m not going to give you a question, you are fake news.”
So now Trump has appropriated the term ‘fake news’ to thwart off any criticism without response. That’s what happens when you set up an empty vessel as something that is inherently wrong with no real definition. This should have been easy to avoid. – (source)

This is precisely why setting up a straw man term such as ‘fake news’ is so dangerous: an empty vessel that is inherently bad without any clear definition leaves the power in the hands of those who want to wield it for their own ends. If we want to try and combat ‘fake news’, we first need to understand what it is we are fighting against. Otherwise, the question becomes: is it our version of fake news that is bad, or Donald Trump’s?

Real Punishments Needed for DMCA Takedown Abuse

Note: The opinions expressed within are mine, and mine alone – not necessarily endorsed by Automattic or WordPress.com.

Last week Automattic released an update to our transparency report, detailing the number of takedown and information requests that were received between January and the end of June this year – as well as the number that had been acted upon, or rejected. There has been some good coverage of what’s included in the report by TorrentFreak, ARSTechnica, and TechDirt.

One area that’s particularly interesting is that relating to the DMCA notification and takedown process, regarding instances of alleged copyright infringement. The full figures are available on the page itself, but here are the highlights:

WordPress.com Transparency Report

If you’re like me, it can be difficult to pull out something meaningful from a table of figures, at least at first glance. The important thing to note here is that 43% of the total notices received were rejected – either for being incomplete, or abusive. This figure rises to 67% if you remove sites that were ultimately suspended for a terms of service violation from the ‘Percentage of notices where some or all content was removed’ column.
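To make the relationship between those two figures concrete, here is a minimal sketch of the arithmetic using hypothetical numbers (the real figures are in the table on the report page itself); the key move is excluding removals that happened because of a terms of service suspension, rather than because the DMCA notice itself was valid:

```python
# Hypothetical figures for illustration only - see the transparency
# report itself for the real numbers.
total_notices = 100            # all DMCA notices received
rejected = 43                  # rejected as incomplete or abusive
tos_suspension_removals = 36   # content removed due to a ToS suspension,
                               # not because the DMCA notice was valid

# Headline rejection rate across every notice received
overall_rate = rejected / total_notices * 100
print(f"{overall_rate:.0f}% rejected overall")  # 43%

# Rejection rate once ToS-suspension removals are excluded from
# the pool of notices that genuinely led to a DMCA removal
adjusted_rate = rejected / (total_notices - tos_suspension_removals) * 100
print(f"{adjusted_rate:.0f}% rejected excluding ToS suspensions")  # ~67%
```

With these made-up inputs the two rates come out at 43% and roughly 67%, mirroring the shape of the figures quoted above.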

Incomplete notices can be anything from the complainant not including a signature; failing to specify the content that they are claiming copyright over; or not including the required statements ‘under penalty of perjury’. Abusive notices include those that target material which is not copyrightable (such as trademarks or allegedly defamatory content); where the complainant misrepresents their copyright; or attempts to prevent fair use of the material – protected by US copyright law.

Many complainants simply want to get content removed from the web, irrespective of which route they have to take to get it. As a result, a variety of different tactics are deployed, particularly when a third party agent is engaged to carry out the task. For example, the wording of takedown demands may be fudged in order to give them the appearance of a valid DMCA takedown notification, whilst failing to substantively fulfil the statutory requirements. In other cases, claims regarding alleged copyright infringement are mingled together with threats concerning trademark infringement or defamation – obfuscating the invalidity of the DMCA takedown itself in the process. Web Sheriff in particular have been known to adopt this practice, with ‘kitchen sink’ takedown demands listing what seems like every law passed in the last 20 years in case one of them might apply in any given scenario. The Pirate Bay have infamously mocked Web Sheriff in the past for some of their tactics:

Pirate Bay Web Sheriff Mockery

It can be a difficult process to manually review and untangle exactly what a complaint relates to, and whether or not it is a valid DMCA takedown. Clarification e-mails often go ignored, something that is particularly true in cases where the notifications are being generated by bots. Replying to point out that a notification is incomplete, or that the material is actually hosted elsewhere, is in many cases met with nothing but silence – and a duplicate takedown demand the following day.

Whilst the DMCA’s safe harbor provisions are designed to provide protection for third party intermediaries as well as the rights of copyright holders, the phenomenon of automated takedown demands has resulted in a massively lopsided burden on those service providers who take their responsibilities seriously, and do not just acquiesce to every single takedown notification automatically.

Complainants are able to submit grossly inaccurate DMCA takedowns on a massive scale, routinely through the use of automated systems that indiscriminately target particular keywords across the web – all without any real fear of legal consequence. The sheer volume of notices generated means that the vast majority of service providers simply remove content immediately and automatically, without scrutinising them for their formal completeness or legal validity. The few that do choose to go through them manually in order to protect their users (like WordPress.com) end up facing a huge burden.

Without stronger statutory consequences for those who abuse the DMCA’s notification and takedown system, the battle for freedom of expression online will be increasingly difficult. The majority of service providers will inevitably default to censorship in the first instance, as the number of notifications (and therefore the resources required to push back effectively) increases.

Blow Struck by WordPress.com Against Fraudulent DMCAs

Abuse of the American online copyright takedown system (the DMCA) is rife. People frequently submit fraudulent notifications to online service providers in order to censor views that they disagree with, curbing legitimate freedom of expression. Examples include those trying to stifle negative reviews about their businesses or products, preventing political satire, and even inappropriately targeting the nominative use of a trademark.

All too often, OSPs simply shrug their shoulders when confronted with these scenarios, and process the notices anyway in order to avoid losing their safe harbor protections. Even when alerted to what’s going on in specific circumstances, many choose a policy of non-intervention, rather than to defend their users.

The result of one of two cases filed by Automattic in response to fraudulent takedown notifications concerning material posted on WordPress.com was released a few days ago (Westlaw citation: 2014 WL 7894441). The judgement concerned a notice sent by a group called ‘Straight Pride UK’, who objected to the publication of an e-mail interview conducted by the journalist Oliver Hotham. Under §512(f) of the DMCA, Automattic were awarded a total of just over $25,000 in damages – $960 of which was for Hotham’s time.

The outcome was a ‘default judgement’, as the defendants (unsurprisingly) didn’t turn up to the hearing, despite being served properly through the standard international processes. It’s unlikely that either Automattic or Hotham will ever see any of the money, so it is largely a symbolic victory. However, it should not be dismissed too quickly, as the case highlights a number of important issues:

  • The DMCA is frequently abused, with few consequences for those who misrepresent their copyrights
  • Taking action against this abuse is expensive, and happens extremely infrequently
  • Enforcing damages against those from outside the US is difficult, and so there is a hole in the remedies available where those who abuse the system fall into this category
  • Even where organisations or individuals are resident in the US, major online service providers do nothing about the fraudulent notices they receive that could be actionable
  • In order for damages to be awarded, material must be removed as the result of a misrepresentation. There are no consequences for fraudulent notifications that are caught by diligent service providers first – at their own risk

The DMCA is a blunt tool that has an incredible power to silence dissenting voices without recourse. The only way in which this is going to change is if service providers begin to stand up against the abuses, using the considerable resources at their disposal – both to further the conversations in this area, and also to take legal action where possible.

Transparency: I am a Community Guardian for WordPress.com.

 

Hyperlinks, Copyright Infringement, and the DMCA

Hyperlinks are a fundamental part of the core fabric of the web. As the basic tool used to connect pieces of information together, it’s difficult to imagine how the Internet could function without them. 

Despite this critical role, hyperlinks have attracted the attention of those seeking to prevent particular kinds of information from being shared. One of the prime examples relates to copyright, and efforts to disrupt the dissemination of materials without authorisation. As part of this, the delivery mechanism of the hyperlink – as well as the infringing act itself – has come under fire.

Hyperlinks and Case Law

There is little case law available that directly addresses the question of whether a hyperlink can constitute copyright infringement. As a result, discussions concerning potential liability often draw upon analogies taken from older cases – sometimes with judgements over a century old – in order to apply established legal principles to the uncertainties thrown up by technological evolution.

In the case of Hird v. Wood from 1894, the defendant was seated near a sign on which defamatory messages were displayed. Despite not having created the sign, he was held to have incurred liability simply through the act of drawing attention to it. In the later case of Byrne v. Dean, Hird was referenced, the judge noting the following:

If defamatory matter is left on the wall of [a] premises by a person who has the power to remove the defamatory matter from the wall he can be said to have published the defamatory matter to the persons who read it.

Applied to our topic, it would appear that this principle would impose an obligation on the operators of websites (as well as their hosts) to remove hyperlinks that led to illegal material.

The Supreme Court of Canada in Crookes v. Newton did not completely accept the above analogy. Instead, they found the argument put forward by the respondent to be more persuasive: a comparison between hyperlinks and footnotes. In other words, both are ‘content neutral’, communicating the existence of something, but not necessarily commenting on the content. This view is one also supported by Tim Berners-Lee, credited with the creation of the World Wide Web:

The intention […] was that normal links should simply be references, with no implied meaning.

However, the court also recognised that the Internet is ‘a potentially powerful vehicle’ for defamation, and that the context itself was important in establishing possible liability. In the words of the court:

Individuals may attract liability for hyperlinking if the manner in which they have referred to content conveys defamatory meaning.

Analogy is one instrument that can be used when considering the relationship between the use of hyperlinks and the law; powerful, albeit imprecise. The issues involved in real life situations are more complex than can be addressed by analogy alone, and courts have often taken differing approaches in making their determinations.

One of the first cases to directly challenge the legality of the use of hyperlinks, and their potential to constitute an infringement of copyright concerned their use on the website of a Scottish newspaper named the ‘Shetland News’.

The links in question were published on the Shetland News website. They took the form of headlines copied from the site of a rival paper: the ‘Shetland Times’. By visiting the Shetland News website and clicking on these links, visitors were taken to the corresponding articles on the Shetland Times website. Confused yet? The way things were set up meant it was possible to avoid the Shetland Times’ homepage completely, and therefore to miss out on its advertising. As a result, the Shetland News website was receiving ad-based income for providing direct links to articles that they themselves had not authored.

Understandably, the Shetland Times weren’t too pleased about this, and succeeded in having the use of the links halted through the use of an interim interdict. The reasoning for the decision was that the links came in the form of headlines that had been copied verbatim from the other site, and so there was potential copyright infringement. Disappointingly from an academic point of view, the case was settled out of court, with no final judgement made on the actual liability arising from the use of the links.

One of the first significant cases in the US regarding the status of hyperlinks was Kelly v. Arriba Soft Corp. This concerned the display of thumbnail images from a professional photographer’s website in search results. The images were available both as resized thumbnails and as full sized previews, akin to the functionality provided by Google Images. Arriba was sued for copyright infringement, and the appeal judgement from the Ninth Circuit found that the thumbnails were protected under the doctrine of fair use. However, liability was incurred for displaying the images in a new window – a practice known as ‘in-line linking’. After an amicus brief filed by the EFF, the judgement was revised: the concerns about in-line linking were removed, with the fair use affirmation standing.

In the case of Intellectual Reserve, Inc. v. Utah Lighthouse Ministry, Inc., the court found that hyperlinks pointing towards illegally distributed material could, in and of themselves, be considered contributory copyright infringement. In this particular situation, the facts were complicated because the owner of the website in question had originally stored copies of the protected content on their own servers, before replacing them with hyperlinks to copies stored elsewhere. It was the context of these actions that was important – echoing the ratio decidendi in Hird.

One of the more commonly cited cases in this area is the infamous Grokster case, in which the Supreme Court introduced a new potential for liability: that of inducement. It was held that where technology is created for the intended or actual purpose of encouraging its users to breach copyright, then the creators themselves could be held liable for contributory copyright infringement. Ultimately, Grokster was shut down. Despite the concern by service providers over the precedent of this case, the facts were very particular to this situation, with the platform actively fostering the ‘blatant and overwhelming’ infringing activity of their users. It is extremely unlikely that the same definition would be applied to the majority of contemporary online intermediaries.

Hyperlinks, the DMCA, and Contributory Infringement

Online service providers often receive DMCA takedown notifications that target hyperlinks leading to allegedly infringing material, rather than material that resides on their servers. They are faced with an interesting quandary as a result: whether or not to remove the link.

On the one hand, hyperlinks generally do not constitute copyright infringement. However, it is the context that is the determining factor when considering potential for liability. A link created by a user to illegal material may well be infringing, but where does this leave the service provider?

Service providers are afforded safe harbor immunity from the infringing actions of their users, provided they ‘remove or disable access to’ material upon receipt of a valid DMCA takedown notification. However, it is unclear how this would apply in the case of hyperlinks. In our example, the infringing material itself is located on servers outwith their control, but there is still potentially infringing activity taking place on their platform. Would a host be liable for a failure to remove a hyperlink to material, where that hyperlink was found to be an infringing act, based on the context?

We can gain some insight into how future courts may decide in this scenario by considering the ‘server test’ discussed in the Perfect 10 cases. Here, the facts concerned the display in Google’s Image search of websites that were infringing upon the copyright interests of the plaintiff. It was found that Google was not liable for direct infringement, on the basis that the material at issue did not reside on their servers and was served up from another host through the use of framing, or ‘in-line linking’. With regards to contributory infringement specifically, the court held that there was no liability, as the infringing activity would still exist irrespective of whether or not Google Images existed. In other words, they were not found to be encouraging the copyright infringement.

In Flava Works, Inc. v. Gunter, the defendants were the operators of a ‘social bookmarking’ service called myVidster that allowed users to share videos from different locations around the web, which were then embedded on their platform, served up from the original locations. They were sued for contributory copyright infringement, based on the actions of users who were sharing clips that had been uploaded without authorisation.

The service provider had already received a number of takedown notifications regarding the material, and it was argued that they had not taken enough action as required to qualify for safe harbor protections. However, that is not the be all and end all. In the words of the court: ‘a non-infringer doesn’t need a safe harbor.’ The parties who uploaded the videos in the first place were the ones whose activity was infringing, and the question was whether myVidster had encouraged their infringing activity to an extent that constituted contributory infringement. The court did not find this to be the case, holding that myVidster was neither a direct nor a contributory infringer. In other words, they were too far removed from the infringing activity.

My View

Irrespective of the potential for individuals to incur liability based on the context of hyperlinks which they create, the application of the DMCA should not extend to their removal.

Part of the criticism of the DMCA is that there is a substantial burden placed on copyright holders to track down and report instances of infringement across the web. Much like the mythological Hydra, as one falls, more spring up to take its place. Slaying the beast requires attacking the root of the problem; treating hyperlinks as valid subjects of takedown notifications is to misunderstand the task, and only serves to create extra conceptual heads to pursue.

Rather than target hyperlinks, the focus of enforcement efforts should be on the actual source of the infringing material: the host. Take out the location pointed to by hyperlinks, and they are instantly rendered obsolete. This is not only a far more effective approach to tackling infringing activity, but one that also avoids creating extra and unnecessary work. Financially, it means less is paid to third-party agents such as DMCA.com, whose revenue is based on successful takedown notifications.

The DMCA is already a blunt and powerful tool. Abuse of the system is rife, and it is often deployed to censor legitimate expression rather than to curb copyright infringement. To extend its remit to include the removal of hyperlinks is a dangerous step that fundamentally alters our relationship with a core structural element of the web, and risks a (further) chilling effect on freedom of speech.

Despite recent case law seeming to support this principle, the judgements have been heavily dependent on the circumstances involved, and there is as yet no definitive authority. As a result, it falls to online service providers to shape the approach to the issue by rejecting DMCA takedown notifications that concern hyperlinks. Policy decisions such as this create the normative frameworks that can help ensure, or hinder, a free and open web, and it is critical that tech companies lead the charge rather than taking a minimum-risk stance.

Sorry Ms. Jackson.

At WordPress.com, part of my job is to push back against those who seek to abuse the law to censor blogs that they don’t like or agree with. Complainants commonly make claims using the processes of trademark and copyright law to intimidate sites into doing what they want, even where they have no valid right to do so. Sadly, many people don’t have the knowledge or resources to fight against this sort of thing, and will remove content after being on the receiving end of such demands.

Recently we (the Terms of Service team) received correspondence from someone claiming to act on behalf of Janet Jackson. They had submitted a trademark complaint about the use of the phrase ‘Janet Jackson’ on a particular website. Effectively, they wanted the article removed for an alleged violation of their trademark, despite the fact that the page only mentioned Janet once. This is clearly not what trademark law is meant to be used for – something any law student would be able to tell you – and so it was plainly just a cynical ploy to have the content removed.

We refused to take action against the site, and notified the owner about what was going on. She has posted up a great response over on her blog. Here are some excerpts:

Allow me to convey my gratitude as both a fan and a corresponding legal target. I recently received the most flattering letter from your IP lawyers in which they allege that I committed a federal crime of TM infringement by mentioning your name in a blog post. That they would devote time and energy to catching my blog in their social media dragnet and do me the honor of writing a cease and desist letter is thrilling.

You see, humble as it may be, I take my writing very seriously (as, apparently do your lawyers). I have a Ph.D. in English, teach college writing and literature courses in Boston, and am working on my first novel manuscript. For anyone to allege lightly and insubstantially that my writing infringes on any kind of TM, IP, Copyright, is personally insulting and slanderous. WordPress’s lawyers have proven their worth by establishing promptly that your lawyer’s charges against me are entirely unfounded. I will not burden this article any further regarding the ins and outs of IP law and this case. WordPress understands the importance of protecting independent writers and free speech from corporate legal bullies.

Click through to read the full thing.

The ‘Right to be Forgotten’ is not a Bad Thing

There has been much said in the past week about the ‘right to be forgotten’ principle being developed in European Law, after the decision by the European Court of Justice in the case of Google Spain v AEPD and Mario Costeja González.


Why this decision isn’t a good thing

The decision of the ECJ has been subject to swathes of criticism for a variety of reasons. However, one of the biggest issues to raise its head is the ideological discussion of Data Protection v. Freedom of Expression.

Originally, data protection was intended to help protect individuals from organisations collecting and storing information on them erroneously. In general, data protection is a good thing. In fact, it’s bloody awesome. It means that when any company or other body collects ‘personal data’ on you and records it in a filing system, you have the right not only to see it, but also to have inaccurate data corrected, and to prevent it being processed for marketing purposes.* Sounds good, right? Oh, and this also applies to organised filing systems that are stored on paper, not just electronically.

In reaching its decision in the Google Spain case, the ECJ applied the established approach to data protection, whilst at the same time injecting the relatively new principle of the ‘right to be forgotten’. The problem is that the circumstances are fundamentally different from those the protections were designed to address.

In the Google Spain case, the information was held to have been legally published on the site of the newspaper in question, and so was not required to be removed. However, because Google collected, stored, and processed the links to the information, it was considered a ‘data controller’ under the data protection definitions, and so obliged to consider, and give effect to, the removal request.

This is DUMB.

This is not the same as a situation where an organisation keeps detailed personal records on an individual (their medical details, telephone number, or address, for example) that would not otherwise necessarily be found elsewhere. Here, the information is already in the public domain, published lawfully. The fact that Google collects the locations of this data, stores them, and then offers up the hyperlinks in search results should not bring it within the ambit of the Directive in its current form. I won’t even begin to think too much about the baffling way in which this seems to fly in the face of the general approach to hyperlinking laid out in the Svensson case earlier this year.

In any event, these removals apply only within the EU – not to Google sites (or those of any other ‘search engine operator’) outside it. Clearly the ECJ must not have heard of a proxy before. At root, this is bad law because, in the context of a global Internet, it is meaningless.

Why the right to be forgotten isn’t a bad thing

When the right to be forgotten was first being discussed, it was in relation to something far more sensible – something which had very little to do with freedom of expression at all. It was to do with the right of users to have online service providers remove the personal information held on them when they chose to delete their account. Ever tried to delete your Facebook account completely? It’s not exactly a walk in the park. It wasn’t about trying to hide past transgressions that have already received media attention, and it wasn’t about curtailing the basic architecture of the web – it was about being able to tell Zuckerberg that when you want to leave, they should honour that.

The problem with the ECJ’s decision is the way in which it has applied the principles of data protection, rather than data protection itself. Whether or not the Court wilfully misunderstood those principles in order to crowbar the right to be forgotten into this judgement is one thing; either way, that doesn’t mean the entire principle should be dismissed.

Sadly, a lot of the commentary has focussed on the specific facts of this case and applied them broadly, in support of a wider theoretical gap between the supposed American principle of freedom of expression and the European emphasis on privacy. Whilst that is a whole separate discussion, I do not believe this should be reduced to some sort of absolute transatlantic ideological difference. Instead, it should be seen for what it is: a bad application of principles that are fundamentally designed to protect individuals.

The right to be forgotten is valuable, but it should never have come close to impinging on the freedom to ‘receive and impart information’ on that which is already lawfully published.

* This interpretation is based on the UK Data Protection Act 1998, which gave effect to Directive 95/46/EC – the EU Data Protection Directive.

More reading:

You can read the full text of the original application, the opinion, and the judgement of the ECJ over on Curia.

The relevant (English) press release from the ECJ on the Google decision is here.

Here is a helpful description of how Google’s new form dealing with right to be forgotten requests will operate.

Stanford Law Review article on the Right to be Forgotten here.

Article on the decision and censorship from Index here.

‘What you need to know about the ‘Right to be Forgotten’ – here.