Issues with Article 17 (‘Right to be Forgotten’) of the GDPR

With the GDPR’s deadline now almost upon us, one of the most talked about provisions has been the ‘Right to Erasure’ contained within Article 17.

Significantly expanding the ‘Right to be Forgotten’ doctrine established in the Google Spain case, Article 17 allows data subjects (i.e. you and me) to submit takedown requests to any organisation that collects and controls information on them.

There are a number of grounds under which people may seek to have data deleted, which cover a broad variety of circumstances. These include situations where the data is no longer necessary for the reasons it was collected; where it was unlawfully processed; where the subject withdraws their consent; as well as some others. The right is not unlimited, with exceptions where the collection and processing of the data is necessary in the exercise of the right to freedom of expression; where there is a specific legal obligation to retain the information; for reasons of public interest; etc.

Issues with Article 17

Despite some initial reservations, the GDPR (and Article 17 in particular) has generally been lauded as a victory for European citizens, who will gain far more control over what information companies hold on them than they have ever had before. This is especially true given its arguably extra-territorial applicability: any organisation that handles European data will be expected to comply.

However, there are a few specific issues arising from the construction of Article 17 that bear further scrutiny. Rather than analyse the philosophical criticisms of the Right to Erasure, below I briefly look at some of the practical issues data controllers will need to address when they receive such a Request for Erasure:

  1. Verification.
  2. Abuse, and a lack of formal requirements for removal requests.
  3. Article 85: Freedom of expression.

Verification of the Data Subject

Before giving effect to an Article 17 request, the controller must use all ‘reasonable measures’ to verify the identity of the requesting party. It is perhaps obvious that an organisation should not delete somebody’s account or other data without first checking that the person making the request is authorised to do so. However, this leaves open a number of questions about what this kind of verification will look like. In other words, what steps will be considered ‘reasonable’ under the terms of the law? Will courts begin to see arguments over online platforms’ account-recovery procedures, on the basis that a denial of access amounts to a denial of the fundamental right to privacy under the GDPR? And what identifiers will a data subject be able (or expected) to provide in order to discover their associated data? While it might be easy to request information relating to your e-mail address, what about other identifiers such as IP addresses, or names? These questions do not have clear answers, and will inevitably lead to an uneven application of the law, dependent on the situation.
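To make the identifier problem concrete, here is a purely illustrative sketch (the record fields, sample data, and function are my own invention, not anything prescribed by the GDPR) of why some identifiers anchor a request far better than others:

```python
# Hypothetical sketch: discovering a data subject's records by identifier.
# The schema and sample data below are invented purely for illustration.
records = [
    {"id": 1, "email": "alice@example.com", "ip": "203.0.113.7", "name": "A. Smith"},
    {"id": 2, "email": "bob@example.com", "ip": "203.0.113.7", "name": "B. Jones"},
    {"id": 3, "email": "alice@example.com", "ip": "198.51.100.2", "name": "A. Smith"},
]

def find_records(identifier_type, value):
    """Return every record matching the given identifier."""
    return [r for r in records if r.get(identifier_type) == value]

# An e-mail address usually maps cleanly to one person's records...
by_email = find_records("email", "alice@example.com")

# ...but an IP address may be shared (NAT, households, offices), so the
# matches span different people and cannot safely anchor an erasure
# request on their own.
by_ip = find_records("ip", "203.0.113.7")
```

In this toy dataset the e-mail lookup returns only one person’s records, while the IP lookup returns records belonging to two different people – exactly the ambiguity a controller would have to resolve before deleting anything.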

Abuse, and a Lack of Formal Procedural Requirements for Erasure Requests

It should be self-evident at this stage that any statutory removal mechanisms will be open to abuse by parties determined to have content removed from the Internet, and in that regard, Article 17 is no different. However, there is a common misconception that the Right to Erasure gives people the right to stop any mention of them online – especially speech that is critical of them, or that they disagree with. This is not the case, and Article 17 is not crafted as a dispute resolution mechanism for defamation claims (that would be the E-Commerce Directive). These facts don’t stop people from citing the GDPR incorrectly though, and it can quickly become difficult to deal with content removal demands as a result.

The problem is compounded by the fact that there are no formal procedural requirements for an Article 17 request to be valid, unlike the notice and takedown procedure of the DMCA, or even the ECD. Requests do not have to mention the GDPR, or even the Right to Erasure specifically, and perhaps even more surprisingly, they don’t have to be made in writing – verbal requests are acceptable.

While the reason for the lack of specific notice requirements is clearly to give the maximum amount of protection to data subjects (the lack of a writing requirement was apparently intended to allow people to easily ask call centres to remove their data over the phone), it seems to ignore the problems that accompany such an approach. The general public’s lack of clarity around what exactly the Right to Erasure includes, combined with the lack of procedural checks and balances, means that it will be increasingly difficult for organisations to identify and give effect to legitimate notices. This is especially true for online platforms that already receive a high number of reports. While many of these are nonsense or spam, they will require far greater scrutiny in order to ensure that they aren’t actually badly worded Article 17 requests that might lead to liability.

If we look at the statistics on other notice and takedown processes such as the DMCA’s (the WordPress.com transparency report, for example), we can see that the levels of incomplete or abusive notices received are high. The implementation of even basic formal requirements would provide some minimum level of quality control over the requests, and allow organisations to efficiently categorise and give effect to legitimate Article 17 requests, rather than having to consider every report received through the lens of the GDPR.
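The sort of basic quality control I have in mind could be sketched as follows. To be clear, this is my own hypothetical – the field names and grounds are invented (the grounds loosely paraphrase Article 17(1)), and nothing like this is mandated by the Regulation:

```python
# Hypothetical sketch of minimal formal requirements for an erasure request.
# Field names and triage categories are invented for illustration only.
REQUIRED_FIELDS = {"requester_name", "contact_address", "data_identifier", "ground"}

VALID_GROUNDS = {  # loosely paraphrasing some Article 17(1) grounds
    "no_longer_necessary",
    "consent_withdrawn",
    "unlawfully_processed",
}

def triage(request: dict) -> str:
    """Classify an incoming report so genuine erasure requests can be queued."""
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        return "incomplete"  # ask the requester for the missing details
    if request["ground"] not in VALID_GROUNDS:
        return "not_an_article_17_request"  # e.g. a defamation complaint
    return "queue_for_verification"  # proceed to identity verification
```

Even a structure this crude would let a platform separate spam and mislabelled defamation complaints from notices that actually engage Article 17, before any identity verification effort is spent.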

Article 85: Freedom of expression

As mentioned earlier, a controller is not obliged to remove data where its continued retention is ‘necessary for reasons of freedom of expression and information’. The obvious question then becomes on what grounds this should be interpreted, and we find some guidance in Article 85 of the GDPR. Unfortunately, however, it doesn’t say all that much:

‘Member States shall by law reconcile the right to the protection of personal data pursuant to this Regulation with the right to freedom of expression and information, including processing for journalistic purposes and the purposes of academic, artistic or literary expression.’

This appears to leave the task of striking the balance to individual Member States. Whilst this isn’t unusual in European legislation, it means that the standard will vary depending on where the organisation is based and/or where the data subject resides. At the time of writing, it isn’t clear how different Member States will address this reconciliation. Despite freedom of expression’s status as a fundamental right in European law, the GDPR affords it scant consideration – and thus weak protection – preferring to defer to national law. That simply isn’t good enough; far stronger statements and guarantees should have been provided.

Over Compliance

Unfortunately, the amount of extra work required to analyse and deal with these requests as a result of the law’s construction – along with the high financial penalties detailed in Article 83 – means that many organisations will likely simply resort to removing data, even where there is no lawful basis for the request, or requirement for them to do so.

We may fairly confidently speculate that the response from many data controllers will be to take a conservative approach to the GDPR’s requirements, and thus be less likely to push back on any potentially dubious requests as a result. Insistent complainants may find that they are able to have speech silenced without any legitimate legal basis simply out of fear or misunderstanding on the part of third party organisations.

With a well publicised and generally misunderstood right to removal, lack of procedural requirements, and a reliance on intermediaries to protect our rights to freedom of expression, we may find ourselves with more control over our own data, but with far less control over how we impart and receive information online.

Header image by ‘portal gda’ on Flickr. Used under a CC BY-NC-SA 2.0 license.


Edinburgh Airport Doesn’t Get Privacy

In Edinburgh Airport. Went to use the WiFi. They apparently don’t understand how checkboxes are meant to work.

[Screenshot: Edinburgh Airport WiFi registration form]

“I would like to” should never be a compulsory field.

Just as well I registered with a spambox address eh?

Get tae.

Private Internet Access VPN on the Draft Surveillance Bill

With the upcoming draconian Draft Investigatory Powers Bill in the UK, which has been described as ‘worse than scary’ by the UN’s Privacy Chief, I’ve again resorted to sending all of my web traffic over a VPN.

The VPN I use is Private Internet Access (PIA), chosen for their stance on user privacy. I was curious to see what they had to say about the new Bill, so dropped them an e-mail. I got two separate responses, one from their tech support, and one from their legal team. They’re worth reproducing publicly.

Here they are. First off, the more general position from their tech team:

Hello,

Thank you for contacting us. It is our current interpretation that the EU Data Retention Directive 2006/24/EG is not applicable to private VPN services such as ours and instead applies to larger public communications networks. The law requires that telephone and internet providers temporarily store data about a user such as assigned Internet Protocol (“IP”) addresses, timestamps, and more to assist law enforcement and investigations. Private VPN providers do not fall within the purview of the European definition of a public communications network, so it is our position that the EU Data Retention Directive 2006/24/EG does not apply to our business organization.

PIA absolutely does not keep any logs, of any kind, period. While this does make things harder in some cases, specifically dealing with outbound mail, advanced techniques to handle abuse issues, and things of that nature, this provides a high level of security and privacy to all of our users. Logs are never written to the hard-drives of any of our machines and are specifically written to the null device, which simply acts as if the data never existed.

The Mandatory Data Retention logs in the EU and many areas applies to Telecommunications and Internet Service Providers as they are a “Public Communications Network”. This is not applicable to our VPN service as we are a private network.

Due to this, we’re unable to provide information on our customers usage of our service under any circumstance, including subpoenas and court orders, which are extremely closely reviewed before we make any response by our experienced legal team.

PrivateInternetAccess.com is a business that strives to protect privacy and the privacy rights of our clients. Although we will comply with all valid subpoena requests, our legal team scrutinizes each and every legal request that we receive for compliance with both the “spirit” and letter of the law. For invalid or overly broad subpoenas, we will often question or attempt to narrow the scope of any subject matter sought.

Moreover, when it is possible and a valid option we will provide the user an opportunity to object to any requested disclosures. We cannot provide information that we do not have. PrivateInternetAccess.com will not participate with any request that is unconstitutional.

https://www.privateinternetaccess.com/pages/privacy-policy/

and secondly, the more direct answer from the legal team:

Thanks for the email. We are aware of this proposed law pending in the UK. First, the law has to actually go into effect first before we will consider making any changes. We are paying close attention to this proposed law and we will make any adjustments as necessary to maintain the privacy of our users. Second, PIA will not maintain logs because we do not believe that we will be classified as an ISP under the new law. The log keeping requirements are specific to ISPs and we do not fall under that definition. We hope that helps answer your questions.

Help fight back against the Bill with the Open Rights Group:

https://www.openrightsgroup.org/blog/2015/investigatory-powers-bill-published-and-now-the-fight-is-on

Yes, I do use ad-blockers, and No, I don’t feel bad about it

Ad-blockers are small, self-explanatory bits of software that have been around for ages – preventing countless numbers of adverts from being displayed on the websites of those who make use of them every day.

In the past few weeks, a debate has been ignited over this practice, with the wildly successful ‘Peace’ app being pulled from download by its creator just days after its release – supposedly having undergone a change of heart.

Advertisers and publishers are understandably unhappy at the number of people who choose to block their adverts, even going as far as to call the act itself ‘immoral’ – equating the consumption of content for free with theft.

I was challenged by a colleague in a discussion about the issue when I said that I had been using ad-blocking software for years. It was something I’d never really stopped to consider in any sort of depth, and once I’d typed up my response I was encouraged to post it up here.

Before we go on, I should say that this isn’t really about the legitimacy or otherwise of ads themselves, but the use of ad-blockers specifically. You’ll probably note that there are ads on this site, for example. As far as I’m concerned, ads have their place, and you can completely consistently choose to monetise content with them whilst also simultaneously respecting the decision of others to block them. With that disclaimer out of the way, here we go.

Why I use ad-blockers

  1. Adverts are intrusive – Online adverts dilute the experience of the website you are trying to visit, and often interfere with being able to view the content itself. When I want to read an article, I don’t want a giant flashing banner to distract me from what I’m doing – not to mention provide a massive headache.
  2. A dark history – Is it any wonder that people can’t stand adverts, and seek to block them where possible, when we’ve been subjected to pop-ups, pop-unders, scrolling flash adverts, and sneaky malware for the past decade plus? Adverts had their chance, and they screwed it up. The day that browsers implemented popup blocking was a wonderful day. Blocking ads completely is just the next natural step.
  3. Blocking online behavioural tracking – This is related to the above, but in a different way. Not only have ads interfered with the operation of our devices, but now we find out that they have been tracking our moves across the web, building up profiles that they can then sell on to third parties. Uhm, nope.

Why I don’t feel bad about it

  1. Ethics – Without going into some elongated discussion about moral relativism, the suggestion that somehow blocking ads is ‘unethical’ or ‘immoral’ is one that I find massively distasteful, and frankly ridiculous. It seems to me that if anybody is going to throw the first stone in an ethics discussion, then the advertising industry should remember the glass mansion that they’ve built for themselves.
  2. Information should be free – I am aware of the many and varied caveats, exceptions, and qualifications to this, but in principle I subscribe to the ideology that information and knowledge should be free.
  3. I’m not going to buy your stuff, however ‘relevant’ it is – One argument is that ‘if only ads were relevant, then this wouldn’t be an issue!’. To me, that misses the point. The issue isn’t about how relevant or otherwise the ads are; it’s about the fact that the ads exist in the first place. In order to actually get really ‘good’ ads (if there is such a thing) that people will click on, it requires a massive amount of profiling.
  4. I didn’t agree to pay for your content – I reject the idea that by simply visiting a website to read content that has been made publicly available, that somehow I have agreed to finance its operation. Just because advertisers and publishers have chosen to hang their existence on one specific kind of economic model, does not mean that I am obliged – either legally or morally – to support it.
  5. Public space – Fundamentally, I resent the increasing ingression into public, communal spaces by capitalist entities. On the web, at least I can control my exposure to the constant barrage of advertisements, and limit their effects. I will choose whether or not to block unsolicited adverts that are transmitted to my device, and I think that is my right.
  6. I will choose who and how I support financially – In years gone by, before publications moved online, people would refuse to support certain ones (such as the Daily Mail) by simply not purchasing their paper. Now, it can be almost impossible to tell the source of a link without clicking on it first. URL un-shortening services exist, but they are cumbersome and impractical. One of the big reasons I use ad-blockers is because I refuse to inadvertently finance publications with reprehensible editorial positions.

Obiter

The relationship we have with information, and with the media/publishers, has been completely transformed. It’s something I have seen first hand, with good friends losing their jobs as photographers due to the democratisation of that industry. It’s something I don’t actually have an issue with. Content doesn’t stop getting created just because the professionals of olden days are no longer earning what they previously did – we’ve seen that in the music industry. It just means that the kind of content, the source, and people’s ability to rely on it as a full time occupation changes. Ideologically this is something that I’m comfortable with.

To finish, here’s the question that sparked all of this thought-process off, and my tl;dr response:

Do you feel like you’re supporting the publishers whose content you’re consuming?

No, but I reject the premise that there’s any sort of obligation or moral requirement to. In fact, I purposely choose not to support many publishers. If I want to support them financially, then I’ll do so in other ways.

The Scottish Government’s Plans for a National Identity Database

Over the past couple of weeks, it has come to light that the Scottish Government are holding a public consultation on changes to the National Health Service Central Register (Scotland) Regulations 2006. 

The NHSCR is essentially a database that holds records on every single person in Scotland who was either born – or registered with a GP – in the country. This is tied to a unique number called the UCRN. Since the bulk of us need to see the doctor now and then – and don’t have private healthcare – that means pretty much all of us are on there. The changes would allow the register to collect some additional information (in the form of postcodes), and then share that data with other public sector organisations.

The proposed aims of these changes are as follows:

i. Improve the quality of the data held within the NHSCR

ii. Assist the tracing of certain persons, for example, children who are missing within the education system and foreign individuals who received NHS treatment in Scotland and left the country with outstanding bills

iii. Enable the approach to secure and easy access to online services (myaccount) to extend beyond services of Scottish local authorities and health boards to a wider range of public services

iv. Enable the identification of Scottish tax payers to ensure the accurate allocation of tax receipts to Scotland associated with the Scottish Rate of Income Tax.

So hold on, how on earth will changes made to a register held by the NHS help trace missing people, or to sort income tax? I’m glad you asked!

Data Sharing

Despite being buried away in a seemingly minor consultation in an innocuous piece of legislation, the proposals are actually pretty significant. In essence, they are seeking to use NHS records as a central location for a whole manner of other organisations to track details about people resident in Scotland.

On the face of it, the sheer dishonesty involved in appropriating a database which has been collected through public trust for other purposes is dismaying enough. However, there are some legitimate aims in there. After all, who could argue with attempting to trace missing children more efficiently? Given the sensitive nature of the information involved, I’m sure that we can expect that the other organisations which would gain access to view and share these types of personal details would be small, and tightly controlled. Right?

Wrong.

In the proposed new schedule, there are 98 different organisations listed who would get access to a core set of records. Amongst them are:

  • The Scottish Ministers
  • The Scottish Parliament
  • Revenue Scotland

Well, okay… not great, but hard to really justify spitting the dummy out over.

But wait, there’s more:

  • The Foods Standard Agency in Scotland
  • The Drinking Water Quality Regulator in Scotland
  • The Queen’s Printer for Scotland

Err… what?

That’s not all though!

  • Glasgow Prestwick Airport
  • Cairngorms National Park Authority
  • Scottish Canals

and… possibly the best one of them all:

  • Quality Meat Scotland

Yep, that’s right. Quality Meat Scotland.

Don’t believe me? See the full list for yourself.

Now, correct me if I’m wrong, but I see absolutely no reason for these people to have access to my private information:

[Photo: Scotsheep 2012, Kings Arms]

I’m sure they’re wonderful human beings that do a great job, but when I go to see the doctor about a private matter, I don’t expect that information to then be available to anybody else, especially not a seemingly arbitrary selection of other public organisations.

Here’s some other possible data exchanges that I find curious:

  • The Forestry Commission sharing information on people with the National Library of Scotland (to find out which books are pulped most, perhaps?)
  • SQA (the exams people) sharing information on people with The Crofters Commission (finding under-qualified Crofters?)
  • Scottish Canals sharing information on people with The Board of Trustees of the Royal Botanic Gardens, Edinburgh (?!?!)

There are other, more serious potential implications though:

  • The address information of vulnerable people being discovered, or exposed to disgruntled or abusive ex-partners
  • Details of people’s personal medical records (including mental health issues such as depression) being laid bare for others to access – with the potential for discrimination on that basis markedly high

These possibilities are purely hypothetical at this point, and would arguably be outside the scope of the proposals in their current form. However, they illustrate the risks presented by linking up disparate data-sets in this manner. Once the UCRN is deployed across the public sector, there is little to prevent the above examples from being enabled. The consultation does not address these risks, and gives no impression of any detailed consideration of either the privacy implications or the general public interest of this move.

One would expect there should be detailed regulations in place to control the sort of information transfer being described, yet the consultation remains remarkably quiet on the matter, stating only the following:

In each of the proposed amendments outlined above the minimum amount of data would be shared for the specific purposes outlined. The organisation will provide information on the individual they wish to identify and will receive equivalent information from the NHSCR and the principal reference number which is the UCRN. Where an organisation wishes to take advantage of this legislation it will also require to have in place data sharing agreements to ensure that appropriate processes are put in place and followed and that the data is used for the specific purpose identified.

That’s all very well and good, but there is a worryingly scant supply of detail on the framework that would ensure these protections are afforded, or on what these ‘appropriate processes’ might be to prevent extra data being shared between organisations without justification. There is also nothing to stop this limited and disparate set of aims (tracing missing children, establishing a more efficient online user account system for public services, and ensuring Scottish people pay income tax) from expanding in the future to share much more data.

This is a far bigger issue than it is being presented as.

Here is a summary of the issues:

  • The proposed changes would create a single national identity database in Scotland
  • There have been no adequate considerations of the privacy or data implications outlined in the consultation
  • There is no way to guarantee that the scope of the data to be shared would not increase in future, once the mechanism is established
  • The changes would undermine the public’s trust in the NHS, by using it as a vehicle to deliver these proposals

The consultation is woefully inadequate for the significance of these proposals, and its questions are framed as if their premise is already universally accepted as a good thing. Almost laughably, instead of leaving space for any potential concerns, the consultation asks for suggestions of other organisations the data should be shared with. That’s in addition to Prestwick Airport and Quality Meat Scotland, for the record.

The Scottish Government should halt the proposals, and instead move to recognise these changes for what they are: a significant development in our relationship with public sector organisations, requiring a full debate in Parliament, with the chance for both MSPs and the public to scrutinise them.

Read more from the Open Rights Group on this here.

Details on the Consultation itself are here. If you’re looking to respond, make sure to get yours in quick, as the closing date is the 25th of February.

Facebook’s Real Name Policy is Back

Facebook have pushed ahead with the enforcement of their ‘real name’ policy, which requires users to use their real, or ‘authentic’ name.

This comes after a previous attempt stalled, following an uproar from the community which forced Facebook to give a rare apology.

Here’s the gist of the requirements:

[Screenshot: Facebook’s ‘real name’ requirements]

Source: https://www.facebook.com/help/112146705538576

Sounds fair enough on the surface of it, and gives enough room for interpretation to allow aliases or nicknames – precisely what appeased the criticisms from last time.

[Screenshot: Facebook’s additional name requirements]

Again, these additional requirements don’t seem too restrictive. If anything, they seem fairly flexible, whilst retaining some sort of continuity. However, the practical implementation has been completely different.

Today, there have been reports that users have been locked out of their accounts after Facebook deemed their names not ‘authentic’ enough. This included a determination that the name ‘Daz’ (a common offshoot of Darren) was not acceptable, and that ‘Nikki’ should be changed to ‘Nicola’ – despite the insistence that shortened nicknames (like ‘Bob’ in the case of Robert) are fine.

Now comes the kicker. In order to get back into your account, you either need to provide a ‘real’ name, or some sort of ‘acceptable identification’ to prove that you are known by the name or alias you had beforehand.

Let’s take a look at what the acceptable forms of identification are, according to Facebook:

[Screenshot: Facebook’s list of acceptable forms of identification]

Uhm, sorry… what? Despite their warning that you should be sure to blank out any other personal information, there is no reason in hell that anybody should ever be giving copies of the above documents to Facebook. The idea that this would ever be requested is completely ridiculous. If Facebook demanded I send a copy of my passport – redacted or otherwise – then they would be politely told where to shove it.

But hey! Should you not wish to share such an important piece of sensitive ID with a social network based in a different country, you have another option. You can provide two bits of ID from the following list:

[Screenshot: Facebook’s list of alternative forms of ID]

This just becomes more ludicrous. Here’s why:

  • There is no way for Facebook to verify any of the above properly.
  • All of this ‘evidence’ can easily be doctored by any muppet.
  • Even if you are known by a certain name in your everyday life, you won’t have that alias on official documents that require your legal name. In which case, how on earth are you meant to prove the existence of a nickname?
  • WTF is a ‘permit’ anyway?

There are plenty of reasons why people would legitimately want to avoid using their full, legal name online (those in teaching, or the health service, or…), not least those who have already lost the ability to remain hidden in searches thanks to previous changes, and for whom the process of using a nickname or alias instead verges on the impossible. But there’s something far more fundamental here: it’s absolutely fuck all to do with Facebook what name you choose to go by. Making determinations about what is and isn’t ‘authentic’ is evidence of an organisation that has no concern for its users beyond its own commercial interests.

We need to find a better way to communicate than by using this lot.

Wave Goodbye to Anonymity on Facebook

Just after I posted a week or so ago reflecting on my return to a more open Facebook existence, they’ve gone and announced that they are doing away with one of the privacy features that was so important to staying anonymous on the site.

In a blog post published yesterday, Facebook quietly announced that they were getting rid of the ability to hide your name from searches – claiming it was irrelevant, since people could still click on your name in comments you make on other people’s timelines.

In the words of Michael Richter – Chief Privacy Officer:

people told us that they found it confusing when they tried looking for someone who they knew personally and couldn’t find them in search results

Well, too bad for them. If I choose to be hidden from search results, then it’s my conscious choice. Facebook shouldn’t be second guessing the reasons for users to wish to remain harder to find on the site.

Apparently a ‘tiny’ proportion of Facebook’s billion plus users were making use of the privacy setting, which eh… still equates to a huge number of people. It shouldn’t really be any surprise, given that the settings were so unintuitive to find and use. Make something hard to configure, and you have a perfect excuse to remove it later when the adoption rate is relatively low.

This move essentially means that there is no way to keep out of the spotlight on Facebook any longer – confirming my belief that Facebook is designed to incrementally pull you further into the network, even when you purposefully want to remain on the outskirts. Even if you restrict all of your posts to a limited number of people, you will still have to contend with the fact that people can find you in searches, and explain why you have decided not to ‘confirm’ their friendship.

The only way to get around this will be to use a fake name and e-mail address, but that is forbidden by the site’s policies, and could see you booted for good.

Maybe that wouldn’t be such a bad thing.