Issues with Article 17 (‘Right to be Forgotten’) of the GDPR

With the GDPR’s deadline now almost upon us, one of the most talked about provisions has been the ‘Right to Erasure’ contained within Article 17.

Significantly expanding the ‘Right to be Forgotten’ doctrine established in the Google Spain case, Article 17 allows data subjects (i.e. you and me) to submit takedown requests to any organisation that collects and controls information on them.

There are a number of grounds under which people may seek to have data deleted, which cover a broad variety of circumstances. These include situations where the data is no longer necessary for the reasons it was collected; where it was unlawfully processed; where the subject withdraws their consent; as well as some others. The right is not unlimited, with exceptions where the collection and processing of the data is necessary in the exercise of the right to freedom of expression; where there is a specific legal obligation to retain the information; for reasons of public interest; etc.

Issues with Article 17

Despite some initial reservations, the GDPR (and Article 17 in particular) has generally been lauded as a victory for European citizens, who will gain far more control over the information companies hold on them than they have ever previously had. This is especially true given the Regulation’s arguably extra-territorial applicability: any organisation that handles the data of European residents will be expected to comply.

However, there are a few specific issues arising from the construction of Article 17 that bear further scrutiny. Rather than analyse the philosophical criticisms of the Right to Erasure, below I briefly look at some of the practical considerations that data controllers will need to address when they receive such a Request for Erasure:

  1. Verification.
  2. Abuse, and a lack of formal requirements for removal requests.
  3. Article 85: Freedom of expression.

Verification of the Data Subject

Before giving effect to an Article 17 request, the controller must use all ‘reasonable measures’ to verify the identity of the requesting party. It is perhaps obvious that an organisation should not delete the accounts or other data of somebody without first checking that the person making the request is authorised to do so. However, this leaves open a number of questions about what this kind of verification will look like. In other words, what steps will be considered ‘reasonable’ under the terms of the law? Will courts begin to see arguments over online platforms’ account recovery procedures, framed as a denial of access to the fundamental right to privacy under the GDPR? And what identifiers will a data subject be able (or expected) to provide in order to discover their associated data? While it might be easy to request information linked to your e-mail address, what about other identifiers such as IP addresses or names? These are questions without clear answers, and they will inevitably lead to an uneven application of the law, dependent on the situation.
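To make the verification problem more concrete, here is a minimal, purely hypothetical sketch of a controller trying to locate a data subject’s records from whatever identifier the requester supplies. Everything in it – the record structure, the field names, the lookup function – is an assumption for illustration only, not a description of any real system:

```python
# Hypothetical illustration only: how a controller might look up stored data
# from whichever identifier an erasure request happens to supply.
records = [
    {"email": "alice@example.com", "name": "Alice Smith", "ip": "203.0.113.7"},
    {"email": "a.smith@example.org", "name": "Alice Smith", "ip": "203.0.113.7"},
]

def find_matches(identifier_type: str, value: str) -> list:
    """Return every stored record matching the supplied identifier."""
    return [r for r in records if r.get(identifier_type) == value]

# An e-mail address will usually map cleanly to a single account...
print(find_matches("email", "alice@example.com"))  # one record

# ...but a name, or a shared/dynamic IP address, may match several records
# (or several people), leaving the controller to decide what verification
# counts as 'reasonable' before it erases anything.
print(find_matches("name", "Alice Smith"))  # two records
print(find_matches("ip", "203.0.113.7"))    # two records
```

Even in this toy example, the ‘easy’ identifier only works cleanly because the requester can demonstrate control of the e-mail address; for the others, the controller is left to guess what evidence would be enough.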

Abuse, and a Lack of Formal Procedural Requirements for Erasure Requests

It should be self-evident by now that any statutory removal mechanism will be open to abuse by parties determined to have content removed from the Internet, and in that regard, Article 17 is no different. However, there is a common misconception that the Right to Erasure gives people the right to stop any mention of them online – especially speech that is critical of them, or that they disagree with. This is not the case, and Article 17 is not crafted as a dispute resolution mechanism for defamation claims (that would be the E-Commerce Directive). That doesn’t stop people from citing the GDPR incorrectly, though, and content removal demands can quickly become difficult to deal with as a result.

The problem is compounded by the fact that there are no formal procedural requirements for an Article 17 request to be valid, unlike the notice and takedown procedure of the DMCA, or even the ECD. Requests do not have to mention the GDPR, or even the Right to Erasure specifically, and perhaps even more surprisingly, they don’t have to be made in writing – verbal requests are acceptable.

While the lack of specific notice requirements is clearly intended to give data subjects the maximum amount of protection (the absence of a writing requirement was apparently designed to let people ask for the removal of their data over the phone, from call centres, for example), it seems to ignore the problems that accompany such an approach. The general public’s lack of clarity around what exactly the Right to Erasure includes, combined with the lack of procedural checks and balances, means that it will be increasingly difficult for organisations to identify and give effect to legitimate notices. This is especially true for online platforms that already receive a high number of reports. While many of these are nonsense or spam, they will require far greater scrutiny to ensure that they aren’t actually badly worded Article 17 requests that might lead to liability.

If we look at the statistics on other notice and takedown processes such as the DMCA (the WordPress.com transparency report, for example), we can see that the levels of incomplete or abusive notices received are high. The implementation of even basic formal requirements would provide some minimum level of quality control, and allow organisations to efficiently categorise and give effect to legitimate Article 17 requests, rather than facing the prospect of having to consider every report they receive through the lens of the GDPR.
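By way of illustration, here is a minimal sketch of what even very basic formal requirements might look like in practice. The required fields and recognised grounds are my own assumptions for the sake of the example – Article 17 itself imposes no such form:

```python
# Hypothetical sketch: triaging incoming removal demands against a few assumed
# minimum formal requirements (not anything the GDPR actually mandates).
REQUIRED_FIELDS = ("requester_identity", "data_concerned", "ground_relied_on")

RECOGNISED_GROUNDS = {
    "no_longer_necessary",
    "consent_withdrawn",
    "unlawfully_processed",
}

def triage(request: dict) -> str:
    """Roughly categorise an incoming removal demand."""
    missing = [field for field in REQUIRED_FIELDS if not request.get(field)]
    if missing:
        return "incomplete: missing " + ", ".join(missing)
    if request["ground_relied_on"] not in RECOGNISED_GROUNDS:
        return "manual review: ground not recognised"
    return "queue for identity verification and assessment"

print(triage({"requester_identity": "jane@example.com"}))
print(triage({"requester_identity": "jane@example.com",
              "data_concerned": "account profile and comments",
              "ground_relied_on": "consent_withdrawn"}))
```

Nothing about this is sophisticated, but even a structure this crude would let a platform separate obviously incomplete demands from those that genuinely require legal assessment.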

Article 85: Freedom of expression

As mentioned earlier, a controller is not obliged to remove data where its continued retention is ‘necessary for reasons of freedom of expression and information’. The obvious question then becomes how this exception should be interpreted, and we find some guidance in Article 85 of the GDPR. Unfortunately, however, it doesn’t say all that much:

‘Member States shall by law reconcile the right to the protection of personal data pursuant to this Regulation with the right to freedom of expression and information, including processing for journalistic purposes and the purposes of academic, artistic or literary expression.’

This appears to leave the task of striking the balance to individual Member States. Whilst this isn’t unusual in European legislation, it means that the standard will vary depending on where the organisation is based and/or where the data subject resides. At the time of writing, it isn’t clear how different Member States will address this reconciliation. Despite freedom of expression’s status as a fundamental right in European law, it is afforded scant consideration, and thus weak protection, under the GDPR, which prefers to defer to national law. That simply isn’t good enough; far stronger statements and guarantees should have been provided.

Over-Compliance

Unfortunately, the amount of extra work required to analyse and deal with these requests as a result of the law’s construction – along with the high financial penalties detailed in Article 83 – means that many organisations will likely simply resort to removing data, even where there is no lawful basis for the request, or requirement for them to do so.

We may fairly confidently speculate that many data controllers will respond by taking a conservative approach to the GDPR’s requirements, and so be less likely to push back on potentially dubious requests. Insistent complainants may find that they are able to have speech silenced without any legitimate legal basis, simply out of fear or misunderstanding on the part of third-party organisations.

With a well-publicised and generally misunderstood right to removal, a lack of procedural requirements, and a reliance on intermediaries to protect our rights to freedom of expression, we may find ourselves with more control over our own data, but with far less control over how we impart and receive information online.

Header image by ‘portal gda’ on Flickr. Used under a CC BY-NC-SA 2.0 license.

Don’t Blame Twitter for the Failings of the Law

Recently, it was reported that the daughter of Robin Williams had left Twitter after receiving graphic tweets depicting his suicide. This has led to pressure being placed on the platform to take stronger action against those engaging in abusive behaviour:

 ‘They have a moral responsibility to protect their users, but they simply don’t.’

– Austin Awareness charity campaigner Kevin Healey. (#)

In response, Twitter has issued a statement declaring that they will be reconsidering their present policies.

‘We will not tolerate abuse of this nature on Twitter.’

– Del Harvey, Twitter Vice President of Trust and Safety. (#)

The issue of abusive users on social channels is nothing new, providing a constant source of easy headlines. Part of the reason for this is a quirk of circumstance. ‘Ordinary’ Twitter users who receive abusive mentions (public messages directed to them) are able to block the users in question, so that they no longer see any further messages in their notifications (#). Whilst there is clearly no way to prevent the initial messages, this is a quick and simple way to stem any future abuse from that user account.

The real problem comes with users who are in the public spotlight. Just as these people will be the recipients of many messages from fans and well-wishers, they will also inevitably receive abusive communications in higher numbers. At this stage, the blocking mechanism becomes ineffective due to the volume involved, and given the already elevated profile of those targeted, there is more of a story to be told. It becomes something of a self-fulfilling prophecy. It’s important to note that this is a very particular problem, and one that the average Twitter user will not encounter. That, of course, doesn’t mean it should simply be ignored.

Abusive messages sent to those in the public eye are far from a new phenomenon, pre-dating the existence of the Internet. Bags of letters from fans sent through the post would also be accompanied by hate mail and death threats.

In the UK, Section 1 of the Malicious Communications Act 1988 makes it an offence to send a ‘letter, electronic communication or article of any description’ containing a ‘message which is indecent or grossly offensive’, or ‘a threat’, to another person. (#) This covers not only abusive postal communications, but also those sent over Twitter. There are similar protections enacted in jurisdictions worldwide.

Given that this is the case, why do we place a greater burden of expectation on online service providers than we do on those who enforce the law?

The Royal Mail does not have the same technical abilities available to it as entities such as Twitter do, and so it manages to avoid coming under fire for acting as the carrier of abusive messages. However, the idea that responsibilities of the State should be shifted to private entities in this manner is troubling.

People will always find methods of communication through which to send abusive messages; the Internet simply makes it easy to do so in a quick and highly visible way. Given that these actions are illegal, they are something that should be pursued by the arms of the law that are meant to protect its citizens. The responsibility of protecting society, mediating between individuals, or making determinations of fact should not be left to any private party – be that Facebook, Twitter, the Royal Mail, or your own ISP.

Of course, online platforms often do make determinations about the kind of community they wish to foster. Content that is completely legal to host (such as porn) is prohibited on many services. How these policies are created and shaped is undoubtedly something that users should speak up about, and challenge where they disagree. It is completely right that Twitter users should express their discontent if the community they are a part of is becoming something undesirable. However, if the issue here is really about the volume of abuse that individuals send, and the resulting inability of the law to deal with it effectively, then let’s be honest about that. Ultimately, this is a societal and legal problem, not the responsibility of the Internet.


Automattic/WordPress.com fight back against Censorship

Automattic – the company behind WordPress.com – has taken a decisive step in the fight against bogus DMCA claims.

Under the Digital Millennium Copyright Act, people can submit a takedown notice to web service providers where their copyrighted material is being used without permission. This is the legislative attempt to protect hosts like Google, WordPress, Tumblr, etc. from being held responsible for the content that their users post – provided that they swiftly restrict access to the allegedly infringing material.

However, whilst this system is designed to strike a balance between protection and enforcement, the reality is that it is frequently abused by those who wish to silence critics, or to censor views with which they disagree. The Church of Scientology, for example, infamously issued thousands of DMCA takedown notices to stop the spread of anti-Scientology views on YouTube. The tactic is highly effective, as the content is almost always restricted (at its peak moment of attention), and the process to challenge the notices (a ‘counter notice’) isn’t something that creators are, or arguably should be, familiar with. In effect, it becomes a virtual game of ping-pong, with the burden of proof shifting to the ‘author’ of the content, who must demonstrate that they actually have the rights to publish. Sites themselves can take action, but with the sheer volume of notices that they receive, it is often impractical, and rarely a route that businesses want to go down.

I’m both pleased and proud to see that WordPress are fighting back against two such bogus DMCA claims, as announced in this latest blog post, where you can find all the details of the two cases in question.

For the full text of the original post from Oliver Hotham – one of those who fell victim to a misrepresentative DMCA notice – continue reading below, where it is republished with permission.


Do we need a ‘Cyber Fire Department’?

Yesterday I attended the ScotSoft 2013 technology forum hosted by ScotlandIS in the Sheraton ‘Grand Hotel and Spa’ through in Edinburgh. The event – followed afterwards by an awards dinner (which I did not attend!) – had a number of speakers that covered issues across the software business lifecycle, from acquiring initial financial backing to long-term development plans.

The keynote was on the future of the Internet, and came from none other than Google’s ‘Chief Internet Evangelist’, Vint Cerf. It only took a few minutes to realise why he rightfully deserves what is probably the coolest job title that any self-respecting geek could ever have. Whilst the rest of the day had been very much focussed on those involved in the business side of the tech industry, Vint spoke with a natural and pervasive authority on everything from the implementation of IPv6 (‘Go ask your ISPs what their roll-out plan is’) to the distributed and often chaotic nature of Internet Governance. It should perhaps have been obvious that this would be the case from one of the ‘founding fathers’ of the Internet, but it is a rare thing indeed to find someone who is not only so formidably technically able, but who also has the charm and charisma to communicate that passion and ability to others so effectively. In many ways, it calls into question the existence of the much-fabled, so-called ‘digital native’, and whether such a thing can or should be defined by reference to any particular generation.

Vint covered many topics in the short time he was allocated – from the crude beginnings of ARPANET all the way through to using TCP/IP in space – but there was one fleeting reflection in particular that really captured my imagination: the idea of a ‘Cyber Fire Department’. This wasn’t something he had much time to expand upon, but he explained it by giving the example of somebody trying to single-handedly stop their house from burning down with a bucket of water; eventually, they would need other people to assist, with bigger hoses and more water than they could supply on their own. With people increasingly concerned about the issue of safety online, the notion of a service that responded to people experiencing overwhelming technological difficulties was something that he suggested ‘we should be thinking about’.

It’s this idea I’d like to think about.

Why on earth would we need or want such a thing?

At first, it might seem a ludicrous proposition, especially to those who still instinctively perceive the Internet as some sort of glorified playground where teenagers frivolously socialise. To many, the web simply isn’t serious business, despite all of the evidence to the contrary. The truth is, it may well be easier to simply be dismissive than to face the difficult challenges that will inevitably need to be tackled as the Internet permeates further into our everyday lives.

We now have a globally interconnected network which has transformed the way we communicate, and which has become incorporated into the very foundation of our economies. This is not a phenomenon that is going to be reversed; if anything, it is set to increase rapidly as mobile devices proliferate and more and more objects gain the ability to share information on the net (the latest hot phrase being the ‘Internet of Things’).

Just as fire spreads quickly between adjoining buildings due to carelessness or a lack of education, the same is true of the Internet: weaknesses in one system can have a devastating knock-on effect on others that are connected either directly or indirectly. In order to ensure the integrity of such an important asset, contemplating the proposition of an emergency cyber response brigade seems eminently sensible.

What would a ‘Cyber Fire Department’ look like? What would it involve?

Let us assume that such a service would be run separately from region to region, rather than as some centralised, global endeavour. Quite apart from the fact that a centralised body would fly in the face of the distributed nature of the web, I’m sure that all of us can imagine the bureaucratic nightmare that such an international entity would inevitably find itself embroiled in (ICANN, anyone?).

The gut reaction to the suggestion of such a service may be to query the merit of a 999/911-type response to issues that do not fundamentally involve crimes against the person, but this model doesn’t necessarily have to be the one that is adopted. If brought into existence, such a service would not be required to have the same status as the major emergency services, nor indeed would it have to be publicly funded. One need only look at the myriad of examples already out there, such as the Royal National Lifeboat Institution (RNLI), to see how such a service can be both publicly available and independent.

…but would the market swallow this? There are already commercial offerings from the likes of the ‘Geek Squad’ marketed as emergency technical support. It seems unlikely that a non-profit organisation would emerge with substantial enough backing to effectively take on the private actors, which would seem to make some sort of central Government involvement inevitable.

Perhaps a bigger hurdle to overcome would not be the financial element of the funding, but the ideological implications of its origin. Advocates of the ‘open web’ are already pushing back against creeping state involvement in the regulation of the Internet, and the introduction of such a significant step could easily be seen as too much interference in a sphere that by its very nature transcends the boundaries of nation states.

How far do we take this?

If we accept the premise that the Internet is a precious enough asset that we should adopt some sort of cyber fire department, then there are other interesting questions that arise as a consequence. Off the top of my head, some of these might include:

  • Ageing computer systems and equipment pose some of the most significant security risks. Should we implement an MOT-style check to ensure that the equipment people are using is of an adequate standard to help ensure safety online?
  • Do we grant the cyber fire department statutory powers to ensure that ‘cyber safety’ regulations are enforced, much as their equivalents in the actual fire service have?
  • Viruses are often spread by those who are unfamiliar with how to navigate safely online. Does this mean that we should implement a driving licence-style test before people are granted access to the Internet?

Some of this sounds preposterous, and would (rightly) be considered a massive encroachment into online freedom, but it wasn’t so long ago that the idea of state-wide Internet filters blocking access to content including message boards seemed completely out of the question too.

Thinking about it

The question of whether we should adopt an emergency cyber response service in the style of a cyber fire brigade may seem a long way off from any serious implementation, and it probably is. However, the discussion does spark off a whole slew of related considerations that we should be taking seriously. As the UK Government comes under criticism for its ‘digital by default’ strategy failing to take into account those without either the access or the training to get online, the issue of digital engagement and education seems to go hand-in-hand with many of the concerns around online safety.

Whatever the outcome, we are at a point of transition, and the policy issues that are involved are as fascinating as they are complex. Like Vint said yesterday, it’s something we should be thinking about.