The latest book I have to recommend comes from law professor Jeff Kosseff, who examines one of the laws most crucial to the development of the Internet: s.230 of the Communications Decency Act. For those not familiar with the CDA, it is a piece of American legislation that has essentially enabled businesses such as Twitter and YouTube to build platforms on user-generated content, without themselves becoming liable for everything that those users may say or do.
Understanding the CDA is increasingly important – not just for lawyers or academics focussed on intermediary liability, but for anybody with an interest in the future of the Internet. This book provides a comprehensive account of the law’s history and original aims, as well as its development through case law. Whilst it isn’t necessarily an ‘easy’ read given the subject matter, Kosseff’s narrative style keeps it engaging throughout, never letting things run dry or drift into theoretical abstraction.
‘The Twenty-Six Words That Created the Internet’ was published in April of 2019. Given the impact of the CDA, it is almost hard to believe that such a complete study hasn’t come around before now. Either way, if you want to learn (a lot) about one of the most important laws underpinning the Internet as we know it, read this.
In this Motherboard article, Vice yesterday highlighted some of the internal changes to Facebook’s policy on acceptable speech made after the events in Charlottesville last year.
Specifically, it was noted that Facebook distinguishes between statements supporting a white nationalist ideology and those supporting white supremacy, with the latter in particular considered to be associated with racism – something prohibited on the platform. In response, some have argued that this distinction is meaningless, and that Facebook is effectively allowing Nazis to operate on its network as a result.
Facebook infamously ‘curates’ what its users see through the use of algorithms, and it has faced ongoing criticism that ‘echo chambers’ are created as a direct result. This criticism grew particularly loud in light of both Donald Trump’s Presidential election victory and the outcome of the EU membership referendum in the UK. On a personal note, it was something that first became obvious to me after the Scottish independence referendum in 2014.
With this in mind, the question becomes what people actually want or expect Facebook to be. On one hand, the possibility of anybody sharing far right or extremist ideologies is seen as abhorrent and unacceptable, but on the other, the cultivation of echo chambers that distort political and social reality is decried as irresponsible.
Unfortunately, you can’t break through an online bubble by only allowing that which you find inoffensive to be shared.
The obvious response here is that there is a difference between healthy debate and the sharing of hateful views. However, this is something of a liberal utopian ideal which doesn’t actually play out in practice. Argument is messy. Debate isn’t always healthy. People don’t always play fairly. All of this is self-evident and will remain true whenever those with opposing positions come into conflict. Arguably, the beliefs considered most heinous are precisely those which need to be heard, challenged, and resisted; in the same vein, the areas online which foster these biases unquestioned need to be opened up to opposition.
If all we want is for Facebook to be a safe space to share pictures of our dogs and holiday photos, then that is one thing. However, that is never going to be the reality, irrespective of what some may claim. Wherever people have space to express themselves, they will share their views on how the world should be. If we want to avoid the problems that doing so within so-called echo chambers brings, then we need to stop reinforcing those chambers by banning the very opposing views that would break them apart in the first place.