Don’t Blame Twitter for the Failings of the Law

Recently, it was reported that Robin Williams’ daughter left Twitter after receiving graphic tweets depicting his suicide. The incident has put pressure on the platform to take stronger action against those engaging in abusive behaviour:

 ‘They have a moral responsibility to protect their users, but they simply don’t.’

– Autism awareness charity campaigner Kevin Healey. (#)

In response, Twitter has issued a statement declaring that it will be reconsidering its present policies.

‘We will not tolerate abuse of this nature on Twitter.’

– Del Harvey, Twitter Vice President of Trust and Safety. (#)

The issue of abusive users on social channels is nothing new, providing a constant source of easy headlines. Part of the reason for this is a quirk of circumstance. ‘Ordinary’ Twitter users who receive abusive mentions (public messages directed at them) are able to block the users in question, so that no further messages from them appear in their notifications (#). Whilst there is clearly no way to prevent the initial messages, this is a quick and simple way to stem any future abuse from that account.

The real problem comes with users who are in the public spotlight. Just as these people will be the recipients of many messages from fans and well-wishers, they will also inevitably receive abusive communications in far higher numbers. At that volume, the blocking mechanism becomes ineffective. Given the already elevated profile of these people, there is more of a story to be told, and it becomes something of a self-fulfilling prophecy. It’s important to note that this is a very particular problem, and one that the average Twitter user will not encounter. That, of course, doesn’t mean it should simply be ignored.

Abusive messages sent to those in the public eye are far from a new phenomenon, pre-dating the existence of the Internet. Bags of fan letters sent through the post have long been accompanied by hate mail and death threats.

In the UK, Section 1 of the Malicious Communications Act 1988 makes it an offence to send a ‘letter, electronic communication or article of any description’ containing a ‘message which is indecent or grossly offensive’, or ‘a threat’, to another person. (#) This covers not only abusive postal communications, but also those sent over Twitter. There are similar protections enacted in jurisdictions worldwide.

Given that this is the case, why do we place a greater burden of expectation on online service providers than we do on those who enforce the law?

The Royal Mail does not have the same technical abilities available to it as entities such as Twitter do, and therefore it manages to avoid coming under fire for acting as the carrier of abusive messages. However, the idea that the responsibilities of the State should be shifted to private entities in this manner is troubling.

People will always use different methods of communication to send abusive messages. The Internet simply makes it easy to do so in a quick and highly visible way. Given that these actions are illegal, they should be pursued by the arms of the law that are meant to protect citizens. The responsibility for protecting society, mediating between individuals, or making determinations of fact should not be left to any private party – be that Facebook, Twitter, the Royal Mail, or your own ISP.

Of course, online platforms often do make determinations about the kind of community they wish to foster. Content that is completely legal to host (such as porn) is prohibited on many services. The question of how these policies are created and shaped is undoubtedly one that users should speak up about, and challenge where they disagree. It is completely right that Twitter users should express their discontent if the community of which they are a part is becoming something undesirable. However, if the issue here is really the volume of abuse that individuals are communicating, and the resulting inability of the law to deal with it effectively, then let’s be honest about that. Ultimately, this is a societal and legal problem, not the responsibility of the Internet.

 


Automattic/WordPress.com fight back against Censorship

Automattic – the company behind WordPress.com – has taken a decisive step in the fight against bogus DMCA claims.

Under the Digital Millennium Copyright Act, people can submit a takedown notice to web service providers where their intellectual property is being used without permission. This is the legislative attempt to protect hosts like Google, WordPress, Tumblr, and others from being held responsible for the content that their users post – provided that they swiftly restrict access to it.

However, whilst this system is designed to strike a balance between protection and enforcement, in reality it is frequently abused by those who wish to silence critics, or to censor views with which they disagree. The Church of Scientology infamously issued thousands of DMCA takedown notices to stop the spread of anti-Scientology views on YouTube, for example. This tactic is highly effective, as the content is almost always restricted (at its peak moment of attention), and the process for challenging the notices (a ‘counter notice’) isn’t something that creators are, or arguably should be, familiar with. In effect, it becomes a virtual game of ping-pong, with the burden of proof shifting to the ‘author’ of the content to prove that they actually have the rights to publish. Sites themselves can take action, but with the sheer volume of notices they receive, doing so is often impractical, and rarely a route that businesses want to go down.

I’m both pleased and proud to see that WordPress are fighting back against two such bogus DMCA claims, as announced in this latest blog post, where you can find all the details of the two cases in question.

For the full text of the original post from Oliver Hotham – one of those who fell victim to these misrepresentative DMCA claims – continue reading below, where it is republished with permission.


Do we need a ‘Cyber Fire Department’?

Yesterday I attended the ScotSoft 2013 technology forum hosted by ScotlandIS in the Sheraton ‘Grand Hotel and Spa’ through in Edinburgh. The event – followed afterwards by an awards dinner (which I did not attend!) – had a number of speakers that covered issues across the software business lifecycle, from acquiring initial financial backing to long-term development plans.

The keynote was on the future of the Internet, and came from none other than Google’s ‘Chief Internet Evangelist’, Vint Cerf. It only took a few minutes to realise why he rightfully deserves what is probably the coolest job title that any self-respecting geek could ever have. Whilst the rest of the day had been very much focussed on those involved in the business side of the tech industry, Vint spoke with a natural and pervasive authority on everything from the implementation of IPv6 (‘Go ask your ISPs what their roll-out plan is’), to the distributed and often chaotic nature of Internet Governance. It should perhaps have been obvious that this would be the case from one of the ‘founding fathers’ of the Internet, but it is a rare thing indeed to find someone who is not only so formidably technically able, but who also has the charm and charisma to communicate that passion and ability to others so effectively. In many ways, it brings into question the existence of the much fabled, so-called ‘digital native’, and whether or not such a thing can or should be defined by reference to any particular generation.

Vint covered many topics in the short time he was allocated – from the crude beginnings of ARPANET, all the way through to using TCP/IP in space – but there was one fleeting reflection in particular that really captured my imagination: the idea of a ‘Cyber Fire Department’. He didn’t have much time to expand upon this, but he explained it with the example of somebody trying single-handedly to stop their house from burning down with a bucket of water; eventually, they would need other people to assist, with bigger hoses and more water than they could supply on their own. With people increasingly concerned about the issue of safety online, the notion of a service that responded to people experiencing overwhelming technological difficulties was something that he suggested ‘we should be thinking about’.

It’s this idea I’d like to think about.

Why on earth would we need or want such a thing?

At first, it might seem a ludicrous proposition, especially to those who still instinctively perceive the Internet as some sort of glorified playground for teenagers to frivolously socialise. To many, the web simply isn’t serious business, despite all of the evidence to the contrary. Truth is, it may well be easier to simply be dismissive rather than to face the difficult challenges that will inevitably need to be tackled as the result of the increasing permeation of the Internet into our everyday lives.

We now have a globally interconnected network which has transformed the way we communicate, and become incorporated into the very foundation of our economies. This is not a phenomenon that is going to be reversed, and if anything, is set to increase rapidly as mobile devices proliferate, and more and more objects get the ability to share information on the net (the latest hot phrase being the ‘Internet of things’).

Just as fire spreads quickly between adjoining buildings through carelessness or lack of education, the same is true of the Internet: weaknesses in one system can have a devastating knock-on effect on others connected either directly or indirectly. In order to ensure the integrity of such an important asset, contemplating the proposition of an emergency cyber response brigade seems eminently sensible.

What would a ‘Cyber Fire Department’ look like? What would it involve?

Let us assume that such a service would be run separately from region to region, rather than as some centralised, global endeavour. Aside from flying in the face of the distributed nature of the web in principle, I’m sure we can all imagine the bureaucratic nightmare that such an international entity would inevitably find itself embroiled in (ICANN, anyone?).

The gut reaction to the suggestion of such a service may be to query the merit of a 999/911-style response to issues that do not fundamentally involve crimes against the person, but this model doesn’t necessarily have to be the one adopted. If brought into existence, the service would not be required to have the same status as the major emergency services, nor indeed have to be publicly funded. One need only look at the myriad examples already out there, such as the Royal National Lifeboat Institution (RNLI), to see how such a service can be both publicly available and independent.

…but would the market swallow this? There are already commercial offerings, from the likes of the ‘Geek Squad’, marketed as emergency technical support. It seems unlikely that any philanthropic non-profit would have substantial enough backing to take on the private actors effectively, which would seem to point to the inevitability of some sort of central Government involvement.

Perhaps a bigger hurdle would be not the funding itself, but the ideological implications of its origin. Already, creeping state involvement in the regulation of the Internet is being pushed back against by advocates of the ‘open web’, and such a significant step could easily be seen as too much interference in a sphere that by its very nature transcends the boundaries of nation states.

How far do we take this?

If we accept the premise that the Internet is a precious enough asset to warrant some sort of cyber fire department, then other interesting questions arise as a consequence. Off the top of my head, some of these might include:

  • Ageing computer systems and equipment pose some of the most significant security risks. Should we implement an MOT style check to ensure that the equipment people are using is of an adequate standard to help ensure safety online?
  • Do we grant the cyber fire department statutory powers to ensure that ‘cyber safety’ regulations are enforced, much as their equivalents in the actual fire service have?
  • Viruses are often spread by those who are unfamiliar with how to navigate safely online. Does this mean we should implement a driving-licence-style test before people are granted access to the Internet?

Some of this sounds preposterous, and would (rightly) be considered a massive encroachment into online freedom, but it wasn’t so long ago that the idea of state-wide Internet filters blocking access to content including message boards seemed completely out of the question too.

Thinking about it

The question of whether we should adopt an emergency cyber response service in the style of a cyber fire brigade may seem a long way off from any serious implementation, and it probably is. However, the discussion sparks off a whole slew of related considerations that we should be taking seriously. As the UK Government comes under criticism for a ‘digital by default’ strategy that fails to take into account those without either the access or the training to get online, the issue of digital engagement and education seems to go hand-in-hand with many of the concerns around online safety.

Whatever the outcome, we are at a point of transition, and the policy issues that are involved are as fascinating as they are complex. Like Vint said yesterday, it’s something we should be thinking about.