Last month, my first academic journal article was published by the leading international publication on IP law: the European Intellectual Property Review from Thomson Reuters.
From the abstract:
The Digital Millennium Copyright Act’s “notice and takedown” process is increasingly referred to as a model solution for content removal mechanisms worldwide. While it has emerged as a process capable of producing relatively consistent results, it also has significant problems—and is left open to different kinds of abuse. It is important to recognise these issues in order to ensure that they are not repeated in future legislation.
To that end, this article examines the DMCA with reference to its historical context, and the general issues surrounding the enforcement of copyright infringement claims. It then goes on to discuss the notice and takedown process in detail—along with its advantages, disadvantages, criticisms and praise. Specific examples of the kinds of abuse reported by online service providers are outlined, along with explanations of the statutory construction that allows these situations to continue. To finish, the viability of potential alternatives and proposed changes are discussed.
The article itself is available on WestLaw, citation: E.I.P.R. 2019, 41(2) at 70. However, you can also get a copy of the PDF below.
This material was first published by Thomson Reuters, trading as Sweet & Maxwell, 5 Canada Square, Canary Wharf, London, E14 5AQ, in European Intellectual Property Review as ‘Freedom of speech and the DMCA: abuse of the notification and takedown process’, E.I.P.R. 2019, 41(2) at 70, and is reproduced by agreement with the publishers. This download is provided free for non-commercial use only. Further reproduction or distribution is prohibited.
There has been a recent trend of seemingly well-intentioned musicians taking to Twitter to engage with critics of the seriously flawed Copyright Directive, and in particular Article 13. Whatever the content of their arguments, the exchange almost inevitably boils down to some kind of accusation that whoever disagrees with them is ‘just an academic’, a ‘big tech apologist’, or someone that doesn’t understand or appreciate what it’s like to be an independent musician.
I’ve been on the receiving end of these kinds of claims, to the point that engaging any further became fruitless. Simply by dint of my position as a legal academic/employee of a tech company, the claim is that I must have an inherent bias that clouds my ability to critically analyse how copyright law will impact artists, because I am not a musician.
The thing is, I am a musician, and have been for almost 20 years. I sing and play guitar in a grunge band called Closet Organ, who successfully crowdfunded our last album, which included a vinyl LP release. I make chip-music and have played live as unexpected bowtie in places as far-flung as London and Osaka. There are also innumerable other projects, including the ‘bizarre and disturbing’ electronica of cup fungus, the scuzzy pop of Hog Wild, and the chilled out samplewave of ease and desist. I’ve personally put on a pile of gigs, been on tour as a music photographer more than a few times, was Review Editor of a fairly significant indie zine, and currently run my own underground tape label Cow Tongue Taco Records. I loved and played music long before I ever took a law class, or was employed by… well, anybody.
Safe to say, I have some investment in independent music.
Why ContentID doesn’t work for independent artists
For those not familiar with Article 13 of the proposed EU Copyright Directive, the long and short is that it will effectively require service providers such as Facebook to implement content filtering systems to detect and remove/prevent the upload of material that belongs to another party. YouTube already has a similar system in place – by far the largest and most complicated of its kind in the world – but the Directive would massively extend its reach.
There are numerous and detailed criticisms of Article 13, but all of them seem to fall on deaf ears as they come from the perceived position of a ‘corporate shill’, so here I want to briefly outline just one major issue that independent artists experience with the current ContentID system – and why any kind of expansion will inevitably be damaging rather than of benefit.
If an independent artist wants to get their music out there into the world, to the most popular music sharing sites, they need to use some kind of recognised distributor – as direct submissions are either impossible, or extremely restricted. A pile of these have sprung up, including Amuse, RouteNote, DistroKid, etc. Some charge a subscription fee per year, some take a cut of any revenue generated, and some of them don’t even have a website – operating just from an app. The concept is simple: You send your music to them, and they distribute it digitally to the various partners. One of these partners is YouTube.
What isn’t made clear by these distribution networks is that by submitting your music to YouTube, you essentially give the distributor a licence to enforce your copyright on the platform using the ContentID system. This automatically detects any music uploaded along with a YouTube video (including short clips), and flags it up as unauthorised. To many this might sound great. Stop people stealing your stuff!
The problem of course is that there is very often no way to denote authorised uses or channels with these common distribution services. Let’s consider the following two scenarios:
Scenario A: a young singer songwriter starts to build up a decent following online, by sharing clips on SoundCloud and YouTube. With the money they’ve made from the ads on their DIY videos, they put together a full-length album and use one of the most popular distribution services to make it available on Spotify, Apple Music, Amazon, YouTube etc. As they get more and more well known, they dig deep and fund a really flashy music video to promote the album. After teasing it on Facebook and Twitter, they upload it only to find that it has immediately been flagged for a copyright violation – on behalf of the distributor. The video won’t necessarily come down, but it does mean that they won’t be able to monetise it – and will lose out on the ad revenue they were expecting to recoup the cost of the production. Panicked, they dispute the claim using YouTube’s resolution procedure, but there’s no indication of how long that might take, and it has thrown off all of the promotions they were planning. There’s no explanation of this anywhere in the distributor’s app that they used, and they can’t get a hold of anybody who understands the issue and has access to release the video for commercial use.
Scenario B: An artist (A) is asked by a fellow musician (B) if they would be interested in a collaboration. The process is simple: B will supply A with some vocal samples that A can then chop up and use however they wish. A gladly accepts, and comes up with a whole electronic composition that brings the vocals to life. B loves the track, and asks if they can use it on their upcoming DIY release. A agrees. B’s friend runs a small label who agrees to put out the album, and they use a distribution service which sends the album to all the major partners automatically – including YouTube’s ContentID system. A few years later, A is producing short video blogs and decides to use one of their old tracks as background music. It gets flagged up as a copyright violation automatically, which A disputes – but the appeal is rejected by the distributor, who has no knowledge of how the track came about in the first place.
Both of these scenarios are common, and a version of B actually happened to me personally. There are plenty of other similar situations, which are easily discoverable with a bit of Googling.
There are a few takeaways here:
Independent musicians are at the mercy of a system which locks them out from negotiating their own contracts without major label backing, and they therefore have to rely on gatekeepers which provide an inadequate level of information and control over their own music.
Artists who are starting out lack the information required in order to make informed decisions about their interaction with such services, and can inadvertently give away their ability to exploit their creations commercially due to how the systems are constructed.
The ContentID approach to copyright enforcement gives huge clout to the first entity to register a piece of work within their system – which is rarely going to be the artist themselves.
This model has no room for the ad-hoc, informal, and varying ways in which independent musicians create and share their works online.
The current ContentID system works on a first-come, first-served basis. It puts huge power in the hands of intermediary distribution services which do not provide a service that can ever give artists the amount of control over their licences they would require to fully exploit their creations. The nature of the beast means that informal collaborations between like-minded folks can unexpectedly tie up their creative expression years down the road. Article 13 will only expand these systems, which will inevitably be less sophisticated on other platforms than ContentID – further eroding independent artists’ ability to share their own work.
So… as an academic, a tech employee, but perhaps most importantly a musician: Article 13 is a disastrous piece of law, and should be scrapped.
Back in May I wrote about the data subject access request I had submitted to the Home Office, and how they required a ‘written confirmation of likeness’ signed by a very particular list of people before providing any information. This is purportedly to verify your identity, but as I noted at the time, the requirements are stricter than those that the same organisation sets for processing passport applications. One may reasonably surmise that this could be an attempt to put people off from making these requests.
I am following up with this post to document what happened after I submitted the request, for those interested in the reach and limits of data protection law.
Objection to the Home Office’s disproportionate requirements
At the time, I objected to the unusually stringent verification requirements, as well as the fact that this verification would not be accepted online. Extract below:
As you will be aware, data controllers are required to undertake ‘reasonable measures’ to verify the identity of the person making the Data Subject Access Request. I submit that by providing a copy of my passport, and the passport number, that this more than satisfies the legal requirement.
Further, I submit that since the list of those who are considered appropriate to provide this written confirmation is less extensive than those who can act as a counter-signatory for a passport application in the first place, that this requirement is demonstrably disproportionate, and as such not required to respond to my request.
The Home Office responded simply to reiterate that the verification must be done via post:
We require that you send in a copy of your ID via the post, please have your photographic ID certified and sent to us at the address below.
We request certified ID in this method for security to reduce the chances of fraudulent data requests.
‘To reduce the chances of fraudulent data requests’? Aye, right. They did not address my questions about inconsistency.
I responded to press them on this:
I understand that you are obliged to take ‘reasonable measures’ to verify the identity of the person making the subject access request. However,
1. I do not see how requiring this to come via the post makes any difference whatsoever.
2. The requirements for certification are far stricter in terms of who can make such a certification than those who can counter-sign a passport (https://www.gov.uk/countersigning-passport-applications). This is not ‘reasonable’, or ‘proportionate’ within the meaning of the relevant law.
I am prepared to send in a certified copy of my ID to verify my identity, but I reject the requirement to have the certification made by one of the following:
* a legal representative, registered with the Office for the Immigration Services Commissioner (OISC)
* a solicitor, barrister or chartered legal executive
* a commissioner for oaths
* a registered charity
Instead, I ask you again to confirm that you will accept a ‘written confirmation of true likeness’ from someone on the same list that you accept for passport counter-signatories (detailed in the URL above).
If you refuse this, then your requirements would appear designed solely to prevent people from getting access to their data by implementing unreasonable stipulations, and I will be making a formal complaint to the ICO.
They did not respond to this, or my follow up e-mail a few weeks later, so on the 20th of June I reported them to the UK’s Information Commissioner’s Office (ICO).
Specifically, I drew attention to the inconsistency in the listed requirements for ID verification when it came to passport applications versus data subject access requests, and that it appeared that those that related to the latter were therefore disproportionate.
They replied in just over a week:
The DPA 1998 and DPA 2018 do not state what identification or verification data controllers may request. Data controllers must be satisfied as to the identity of the requester to ensure personal data is not inappropriately disclosed. This also helps prevent fraud. The ICO therefore reviews concerns regarding this matter on a case-by-case basis.
The ICO is satisfied that generally, the level of identification and verification requested by the HO for SARs is both reasonable and proportionate. This is because the HO must be certain of a requester’s identity before releasing any personal information.
In light of the above, we would advise that you provide the HO with the requested documents and verification of these documents to allow the organisation to process your SAR.
Basically, they just reiterated that data controllers have to take steps to verify the identity of those requesting data before processing a subject access request – choosing not to address the specific questions I had raised around proportionality.
I pressed them on this, and after about a month the ICO responded:
I understand that you are concerned about the level of identification requested by the Home Office for subject access requests, as it requires more identification for this than for passport applications.
As stated previously, this is not a matter that is of concern to the ICO at this time. I understand that it appears there is inconsistency within the Home Office in regards to identification requested. However, due to the nature of information held by the Home Office, it must satisfy itself as to the identity of a requester before disclosing personal data.
As it is not up to the ICO as to what the Home Office requests for different applications, and if you are concerned about inconsistencies within the Home Office, we suggest you raise this with the organisation.
I apologise the ICO can’t be of further assistance at this time. However, please note that the concerns you have raised will be kept on file. This will help us over time to build a picture of the Home Office’s information rights practices.
What this tells us
This process was informative as it demonstrates the barriers that organisations such as the Home Office will place in the way of those who seek to exercise their rights under data protection law. By making the process as difficult and cumbersome as possible, it locks out all but the most determined and able.
It also tells us a bit about the ICO’s role and reach in these cases: Namely, that it is extremely limited – at least when it comes to making assessments of proportionality. Rather than taking a holistic view of the data protection practices and requirements of an organisation, the ICO simply looks at each portion in isolation. In other words, it doesn’t matter whether the Home Office’s approach is entirely inconsistent, and demonstrates a clear lack of proportionality on any reasonable assessment of all the facts. The ICO only has to be satisfied that the requirements relating to a very narrow and immediate situation are proportionate, irrespective of the wider context.
This makes no sense except in the most literal of readings, and makes a mockery of the spirit of data protection legislation. We shouldn’t be too surprised that this is the approach of the Home Office though, given the appalling state of the UK’s immigration law.
I am currently debating whether or not to proceed with the formal ID verification process to see what they will provide once you get through the barriers. Watch this space.
The Commodore 64 is a classic, and it has played an especially important role for me. It was the first computer I ever owned; given to me by my parents as a fifth birthday present, and my granda used to spend hours showing me how to program it from a big purple book he had gotten from a magazine. I credit this introduction with piquing my interest in technology early on. Staring at the rainbow loading screen, waiting for games to load from the cassette deck is also probably to blame for my terrible eyesight. I rediscovered the C64 as a teenager when I learned of its coveted SID sound chip, and I’ve been making music with it ever since under the guise of unexpected bowtie.
One of the things that always stood out in my mind was the C64’s keyboard, with its thick brown keys and symbols that I never really understood. If I could, I would use it all the time, but that was never really practical. Recently though, I fell down the rabbit hole of ‘mechanical keyboards’ online, where people build and use special keyboards with custom switches, sizes, and layouts. One of the projects I found included a re-working of the C64’s keyboard which brought it into the modern world, and I felt inspired to do something similar.
Sourcing a C64
I originally planned to source a broken Commodore online and use the keys from it, rather than defiling my beloved console. Truth be told though, mine was in a bit of a shabby state, and even broken C64s go for a pretty penny in the UK. I decided to just make use of what I had, with the belief that I could always pop the keycaps back on if I changed my mind later.
Purists, look away now.
Removing the keys
Removing the keys initially proved a bit trickier than I had anticipated… partly because I was trying to get them off while the thing was fully assembled. What you need to do is open up the breadbin itself, and then detach the keyboard from the chassis by taking out some screws. It was pretty easily done once I realised this was what was required – especially since my case had already been taken apart a fair few times to get at the SID.
Popping the keys off can be tricky, as they are far stiffer than one would expect from a more modern device. Keycap removers didn’t really work, and so I resorted to using a set of pliers, being careful to pull straight up to avoid damaging the posts underneath. Each key has a large spring underneath which can ping off easily if you move too quickly, so watch out for that.
Underneath was filthy, and it seemed like a good opportunity to give things a clean even if I didn’t end up building anything. I left both the spacebar and the Shift Lock keys in place, as I suspected they would be a bit more fiddly to deal with, and I wasn’t planning on using them anyway.
Finally, I gave the keycaps a clean with some soapy water and left them to dry.
The new board
There are lots of mechanical keyboard designs to choose from. For this project, I went with the Preonic, and got a partially built kit as part of a group buy from Massdrop, which came with a fetching orange aluminium case. It is a compact, ‘ortho-linear’ keyboard – which means the keys are arranged in a grid-like pattern, rather than staggered as you would see more commonly. The idea is that your fingers have to move less, and in a more natural way when typing – which reduces strain. I opted for the Preonic (rather than the Planck, which has fewer keys), as I wanted to make the most of the C64 keycaps.
One of the main reasons people like mechanical keyboards is the quality and range of switches available. The switches are the bits underneath the plastic caps with the letters on them (or not, depending on what you prefer). Unlike the squishy keys you find on laptops and other modern computers that are so unsatisfying to use, mechanical keyboards feel great. They can either be smooth (linear), have a bump (tactile), or have a bump and an audible noise (clicky). Most people probably mentally associate typing experiences from yesteryear with the clicky type – epitomised by the Cherry MX Blue switch – though the Commodore 64 actually had smooth keys. For this build I opted for the Cherry MX White (aka milky). They are clicky keys, but with a far less sharp and pronounced click than the Cherry MX Blues. This means it’s a bit more socially acceptable when typing around other people (!).
Fitting the keycaps
The biggest issue with this project was that the keycaps from the Commodore 64 aren’t compatible with any of the switches that are commonly used today, and so wouldn’t just snap on. However, some wonderful person has designed a C64 to Cherry MX adaptor that can be 3d printed and shared it for free (open source is wonderful).
I don’t have a 3d printer, so obviously had to outsource this. Getting a decent price in the UK was tricky at first, as nobody would take up an order that small, but eventually I got 80 caps for £22.98 including delivery from 3DPrintDirect.co.uk. That would be more than enough to cover the 60 keys on the Preonic. The material was carbon-reinforced plastic, printed via SLS (selective laser sintering).
It took about two weeks for the adaptors to arrive. I had read that some people had trouble with their adaptors, especially if the finish on the 3d printing was rough, but mine worked out pretty well. At first I thought I might have to disassemble the switches to install them as they were very tight, but in the end I could just press them against a flat surface and push hard with the keycaps on. My fingers hurt after doing a bunch, so I did them in batches. When the board arrived, I realised that it actually made much more sense to put the adaptors into the caps first, and hit them gently with a small hammer, before pressing the whole thing onto the switch. Getting them at the right angle could be tricky, but they all turned out fine in the end.
It’s worth noting that I also discovered that removing an adaptor from a keycap isn’t really possible without destroying the soft plastic and leaving the inside of the cap gunked up, which means these caps are now committed to the project, and won’t be reusable on the C64 itself as I had hoped.
Building the Preonic
The Preonic kit I got came from Massdrop in a bundle, and it was packed and presented beautifully. The instructions however, weren’t exactly n00b friendly, and it took me a bit to work out exactly where the spacers and screws were meant to go; it seemed like there were extra unnecessary holes in different places, which threw me. Eventually it came together though, and things began to take shape.
I’ve done a fair amount of soldering in my time modifying Game Boys etc, so it wasn’t a difficult task to deal with the through-hole switches. I did discover that one of them was bust after putting it all together though, which meant I had to de-solder and replace it, which was a bit of a pain.
When I started looking at the actual layout of the board, I ran into a couple of issues with my plan to use the Preonic. Firstly, I had overlooked the fact that the board is built around a grid of single keys, and doesn’t really support anything larger except in the middle of the bottom row. That meant that I didn’t have enough keys from my C64 to cover the full thing, and no perfect option for the space bar.
Secondly, my choice was further restricted by the design of the C64’s keys, as the caps are ‘sculpted’ depending on where they sit, so I couldn’t just take one from the top row and put it on the bottom, even if it would work better for my purposes.
In the end, I managed to source some extra single keys online, and did the best with what I had. I had feared that it would be wildly off, or that I would need to use really inappropriate keys, but it actually worked out not too bad at all.
Some folks have commented that the layout doesn’t make much sense, and I should say that I am not religiously sticking to what is printed on the keycaps. For example, I am using the ‘Return’ key in the place of a space bar, as that was the only key from the original board which seemed appropriate and fit the slot on the PCB. I’m using an equals key for Enter. The @ key brings up a list of my bookmarks with Shiori; the £ key activates my Alfred snippets; RUN STOP is my Hyper Key; < and > activate the ‘Lower’ and ‘Raise’ layers of the Preonic, and so forth. The beauty of the software which powers these keyboards (QMK) is that you can map and re-map the layout to whatever makes sense for your own needs.
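To give a flavour of how that remapping works, here’s a minimal QMK keymap sketch. This is not my actual layout – the key positions, layer names and most keycodes here are illustrative assumptions – but it shows the idea: whatever is printed on the cap, the firmware decides what the key actually sends.

```c
// Hypothetical QMK keymap fragment for a Preonic-style 5x12 ortholinear board.
// Illustrative only: it mirrors a few of the remappings described above --
// the RETURN cap in the spacebar slot sends space, RUN STOP acts as a Hyper
// key, and two thumb keys activate the Lower/Raise layers.
#include QMK_KEYBOARD_H

enum layers { _BASE, _LOWER, _RAISE };

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    [_BASE] = LAYOUT_preonic_grid(
        KC_ESC,  KC_1,    KC_2,    KC_3,    KC_4,       KC_5,   KC_6,   KC_7,       KC_8,    KC_9,    KC_0,    KC_BSPC,
        KC_TAB,  KC_Q,    KC_W,    KC_E,    KC_R,       KC_T,   KC_Y,   KC_U,       KC_I,    KC_O,    KC_P,    KC_ENT,  // '=' cap sends Enter
        KC_LCTL, KC_A,    KC_S,    KC_D,    KC_F,       KC_G,   KC_H,   KC_J,       KC_K,    KC_L,    KC_SCLN, KC_QUOT,
        KC_LSFT, KC_Z,    KC_X,    KC_C,    KC_V,       KC_B,   KC_N,   KC_M,       KC_COMM, KC_DOT,  KC_SLSH, KC_RSFT,
        ALL_T(KC_NO),     KC_LALT, KC_LGUI, MO(_LOWER), KC_SPC, KC_SPC, MO(_RAISE), KC_LEFT, KC_DOWN, KC_UP,   KC_RGHT
        // ^ RUN STOP cap as Hyper; RETURN cap over the two centre keys sends space
    ),
    // _LOWER and _RAISE layers omitted for brevity.
};
```

The point is that the physical cap and the logical keycode are entirely decoupled, so a sculpted C64 cap can sit anywhere the PCB allows while the firmware makes it do something sensible.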
As a finishing touch, I got a special rainbow coloured USB cable made up from coolcable.co.uk as a nod to Commodore. Special thanks to them for putting up with my last minute changes to the connectors, as I ordered the wrong thing.
I still need to set up the Preonic’s layers to suit my own custom layout, but I really like the board. It’s nice, solid and relatively compact, and now I have a personal homage to my granda and first computer. The keys feel pretty great to type on, though they could potentially have done with some heavier switches, as the cap and adaptor combo means that you end up putting more force on the keys than you usually would.
Edit: This was featured on Hackaday, which is awesome.
For the past few months I’ve been bombarded with adverts for some supposedly next generation headphone technology which adapts to your individual hearing, the ‘Nuraphone’. Various industry professionals were shown in the accompanying videos, reacting with apparent amazement at the sound quality. As a bit of an audio geek, I wanted to see for myself whether or not there was anything to the claims, and ended up picking up a pair recently – something that probably also partly confirms that online marketing campaigns really do work.
One thing that’s worth noting is that Nura released a software upgrade for the headphones at the end of July which added a bunch of features, and corrected some issues that people had reported. They even added in active noise cancellation – which wasn’t present before. That is particularly cool, and something that it’s good to see a company do. Newer stock of the Nuraphone come pre-loaded with this update.
Custom Hearing Profiles
The main principle behind the Nuraphone is relatively simple, in theory at least. You first connect them to a smartphone app, where some quick tests result in a custom ‘hearing profile’ being created. This profile is then stored in the headphones, and playback is tailored accordingly – theoretically resulting in sound that is far more pleasing to your ears than a generic approach would provide.
Over and In-Ear Design
The other main defining characteristic of the Nuraphone is their unique over/in-ear design. The outer silicone cup encloses your ears (and doesn’t irritate any of my many cartilage piercings), and delivers bass frequencies, whereas the inner ear portion sits just inside your outer ear, delivering the mid and treble frequencies. This helps provide a nice separation, and a much fuller sound. It works very well, though does feel pretty weird at first.
Immersion mode uses drivers in the outer cups to provide ‘feeling of a live performance’. This basically makes the bass much fuller and deeper, without compromising on the quality of the mid or treble frequencies.
What Other People Say
I was intrigued by the Nuraphone, but also pretty skeptical. It didn’t help that the reviews online ranged wildly from the bizarrely enthusiastic to the dismissive. There were many puff pieces from sites which were clearly just happy to get a free pair, but there were also users who claimed the quality was so good that they and their friends had actually been moved to tears on their first listen. Others weren’t as enamoured, angrily dismissing the Nuraphone as nothing but marketing spin, or even suggesting that £30 generic earbuds sounded just as good. Ouch.
In general, many of the criticisms seemed to focus on the following:
The design was too heavy and uncomfortable – especially if you have glasses.
The sound wasn’t that impressive.
The setup was complicated and confusing, resulting in different profiles every time.
The ‘generic’ profile that you compare your personalised profile with sounded worse than it should, and that something shady must be going on.
I don’t think I’ve ever seen such conflicting experiences with a single product, which only pushed me to try them out for myself.
Setting up the headphones was as simple as downloading the app, connecting via Bluetooth, and following the on-screen instructions. I had no bother getting the cups in the right place, and despite trying the calibration process a few times, my profiles were pretty much the same each time. It couldn’t really have been much easier.
The one real annoyance is that once setup is complete, you’re encouraged to switch between a generic ‘flat’ profile and your own personalised one to hear the difference. As many people have commented online, the problem is that the generic profile sounds pretty terrible on its own – far worse than you would expect even as a baseline from far cheaper headphones. It’s as if it has been heavily compressed. As a result it all seems a bit disingenuous, and Nura have probably not done themselves many favours by including this ‘feature’.
After reading all of the negative reviews, I expected the Nuraphone to be extremely heavy and uncomfortable, but in actual fact, they slipped on with ease; the silicone of both the ear-cups and headband feeling both soft and comfortable. They weren’t particularly heavy – at least not markedly more so than other cans I’ve worn, and even the in-ear protrusions didn’t feel all that bad – going less deep than I expected. As for my glasses, they didn’t seem to make the slightest bit of difference to proceedings.
I wore the headphones for about seven to eight hours on the first day (with breaks), and my ears definitely got a bit sore by the end, though I knew to expect this from the Nura support team, who claimed that the tips of the in-ear portions would soften up with use. Despite this physical discomfort, my ears didn’t ever get hot or sweaty, which seems to be thanks to the use of the ‘TeslaValve’ air-flow technology. Not just a gimmick after all!
After listening to feedback on the issue of comfort, Nura have started to include different tip sizes with new purchases, including small and medium. There are also some third party options out there that are compatible, though whether they have a significant impact on the sound (negatively) is up for debate.
Many of the fundamental controls of the Nuraphone are contained within the accompanying smartphone app. Some of these can be mapped to a couple of touch sensitive ‘pads’ on the side of the cans – such as volume up/down, play/pause, etc. Both single and double tap gestures are supported, allowing you to choose four controls to make use of.
I personally have volume up and down configured for the single taps, with social mode and play/pause for the double taps. At first I thought this was a bit awkward, but after a day or so I found it really natural, and have even found myself trying to tap on the side of my other headphones out of a force of habit. After using the Nuraphone controls, the physical buttons on my Sennheisers feel a bit clunky and unintuitive.
Speaking of social mode, this is a nifty feature that turns off the active noise cancellation, and turns on microphones to allow you to hear what’s going on without having to take off the headphones. I’ve found this really useful in practice, though it can be a bit disconcerting to have certain ‘exterior’ sounds suddenly amplified louder than they actually are with the headphones off.
It’s not all great news though. The controls themselves are extremely sensitive, with no adjustment possible. I would often end up accidentally triggering the pads when adjusting my glasses or just moving my arms about, and had to continually turn the headphones back down. There are also some functions that can only be accessed via the app, which is a bit annoying. For example, at the time of writing you can’t adjust the level of Immersion mode without being connected to the phone app – something that would be handy when switching between genres. I’ve heard from Nura that they are working on a desktop app, which would help alleviate this for those of us who regularly listen on our computers.
Finally, the controls don’t work when a cable is connected, which is understandable, but a bit irritating, breaking the continuity of experience.
I was dubious of wireless audio technology for a long time, given the historically pretty crappy quality of even higher-end headphones, and only recently bought into the whole arena – at which point I was pleasantly surprised. Even so, I was put off at first by the prospect of having to buy a separate, expensive analogue cable, but realised that I don’t actually need one for everyday listening.
In terms of the quality of the Bluetooth connection itself, it was generally pretty good. For some reason my first pair worked fine with my laptop, but dropped out constantly when connected to my phone in my pocket. That made them totally useless for walking about, and until they were replaced I was restricted to using them when stationary at a desk or on the couch. Once this was resolved, the connectivity was good, and I only had the odd blip here and there – definitely not something that would be irritating or especially notable.
This all leads nicely to my biggest issue with the design of the Nuraphone: the proprietary nature of the cable connection. Apparently, in order to allow compatibility with a bunch of separate devices, they had to create a new single port that is used for everything, including charging. I’m not really convinced by this argument, and it means yet another cable to carry when travelling – one that is much harder and pricier to replace. The nightmare scenario would be misplacing it while on a trip.
In the past I have played with various special music EQ apps that provide extra stereo spacing and increased ‘ambience’, but always returned to listening flat, as they never applied equally well across genres. In other words, what sounded good with electronica made grunge sound like utter garbage. I suspected that the Nuraphone would be much the same, and I was pretty disappointed when that seemed to be the case on first listen. Techno sounded pretty good, but generally the sound was too coloured, with vocals often lost in the mix. It wasn’t until I realised that an additional EQ already set on my Mac was interfering with things that I actually heard what the Nuraphone could do, and my perception completely changed. Why bother bringing this up? To highlight that it’s important to start with a flat EQ – which is easily overlooked – or you’ll get a distorted impression of the sound.
While I wasn’t exactly reduced to tears, the sound from the Nuraphone really was pretty impressive, bringing a whole new feeling of dynamism to music that I knew well. It was as if my favourite tracks had been given a personal re-mastering, and it was amazing. I scoured my library to find old songs and re-discover them in a different light. Despite an overall perceivable increase in quality, a lot of this seemed to come down to the ‘Immersion’ mode. When I first read about it, I was dubious: cranking up the bass so it rattles against your ears doesn’t really ‘capture the feeling of being at a live performance’. However, I honestly have to say that there is something in that claim. With the slider at 75% I could feel every kick of the bass drum in early Green Day albums as if Tre Cool himself was in the same room, and there were times during some songs that genuinely reminded me of being at a music festival, due to the interaction of instrumentation.
There have been some comments that the Immersion mode is too heavy handed, such as:
The Immersion effect might suit bassheads or movie-watchers, but you’ll get the most balanced sound with it on very low – or turned off.
That hasn’t really been my experience (though maybe I’m a basshead?). Even at a fairly high level (70%), the Immersion mode only serves to add a bit of greater depth to the tracks, and doesn’t result in distorted or farty bass – though there is a definite cut off where this changes. To be honest though, if I wanted a flatter, more ‘balanced’ sound, these are not the headphones I personally would be using in the first place.
How do they compare to my other headphones though? I don’t really have technical details here, but there’s no doubt in my mind that they offer a much more dynamic and engaging sound. My other headphones still sound and feel great, but if I want to really get lost in the music, I’ll go with the Nuraphone every time.
Portability: Unfortunately, despite having pretty great passive and active noise cancellation, the Nuraphone don’t seem especially well suited to travelling. They don’t fold down, and the protective case is pretty chunky. I still need to decide whether they are worth sacrificing precious carry-on space for.
Audio fade in: The Nuraphone have a built-in ‘fade in’ when audio starts playing back from silence. That means that if you fire up Spotify, stick on the headphones, and start playing a track with a banging intro, it ends up lacking punch. I can see the benefit of this feature when you put the headphones back on – to avoid getting blasted mid-song at full volume – but it should absolutely be optional, not something that can’t be turned off.
Noise cancellation: The passive noise cancellation of these headphones is pretty decent already, but the active noise cancellation is especially good. The combination seems to block out more ambient noise than my Sennheiser HD 4.50 BTNC cans, and it’s pretty amazing to hear the difference toggling ‘social mode’ on and off makes.
Battery life: The Nuraphone boast an impressive 20 hour battery life. I haven’t tested them continuously, but what I do know is that the headphones arrived on a Monday; I used them heavily, and by Friday morning they were only down to 40%. In practice this means I’d only really need to charge them about once a week, which is pretty great.
I really didn’t want to like these. Despite having a glimmer of hope that they might be decent, I planned to try them out for a few weeks then send them back after discovering that they were mostly just marketing puff after all. However, that isn’t how things worked out.
The truth is that I really like the Nuraphone. The sound is different to anything I’ve heard before in a set of headphones, and the Immersion mode really helps bring certain kinds of music alive. I’m excited about listening to music again, and for that reason alone they are well worth the money for me. If they only took standard cables, and folded down to be a bit more portable, it’d be tough to find any big flaws.
They retail for £349, but you can often find 20% off discount codes on Reddit.
With the news that the United States is to withdraw from the UN’s Human Rights Council, it seemed timely to highlight one of the recently published Special Rapporteur’s reports, which looked at the state of online ‘content regulation’ and its impact on freedom of expression.
[It] examines the role of States and social media companies in providing an enabling environment for freedom of expression and access to information online.
The report itself is one of the better publications from an official entity, and talks about a lot of important issues that other bodies tend to ignore (willingly or otherwise). As a result, the whole thing is worth reading, but a few portions in particular stood out for me, and are worth sharing:
One of the current major questions in the realm of intermediary liability is how platforms should deal with ‘extremist’ content. In an attempt to find a compromise between ‘doing nothing’, and total removal of anything questionable (with all of the resultant implications for freedom of expression), the concept of ‘counter speech’ is often brought up as a solution. In principle the idea is that instead of silencing disagreeable expression, people should instead seek to directly counter the ideas. This avoids the problem of subjective censorship, protecting free speech, and also ‘shines light into the dark’, rather than driving people underground where there is little or no critical dissent.
As well intentioned as this approach may be, it is one that is now unfortunately being misconstrued as an obligation for platforms to incorporate, rather than interested individuals or groups. For example, there are suggestions that the likes of YouTube should place an interstitial banner on disputed content to warn viewers of its nature. In the case of pro-ISIS videos, this notice would include links to anti-extremism programs, or counter narratives. As the report wisely notes:
While the promotion of counter-narratives may be attractive in the face of “extremist” or “terrorist” content, pressure for such approaches runs the risk of transforming platforms into carriers of propaganda well beyond established areas of legitimate concern.
Despite the fact that there is little evidence that such an approach would do anything but bolster the already established beliefs of those viewing the content in question, there would inevitably be calls for it to be extended to any particularly contentious content. Ostensibly, pro-choice campaign websites could be overlaid with arguments from conservative religious groups; McDonalds.com with a link to the Vegan association. This may seem far-fetched, but the danger is clear: as soon as we replace our own critical faculties with an obligation on intermediaries to provide ‘balance’ (even with the most extreme of content), we open the door to the normalisation of the practice. There is scant analysis of this particular issue out there at the moment, and I’m especially pleased to see it highlighted by the UNHRC.
Many companies have developed specialized rosters of “trusted” flaggers, typically experts, high-impact users and, reportedly, sometimes government flaggers. There is little or no public information explaining the selection of specialized flaggers, their interpretations of legal or community standards or their influence over company decisions.
Lack of definition of terms
You can’t adequately address challenges if the terms aren’t defined. For that reason, crusades against vague concepts such as ‘hate speech’, ‘fake news’, etc. are at best doomed to failure, and at worst a serious threat to freedom of expression. This isn’t a problem limited to the issues surrounding intermediary liability, but one made more visible by the globalised, cross-jurisdictional nature of the Internet.
The commitment to legal compliance can be complicated when relevant State law is vague, subject to varying interpretations or inconsistent with human rights law. For instance, laws against “extremism” which leave the key term undefined provide discretion to government authorities to pressure companies to remove content on questionable grounds.
This is pretty self-explanatory, but something which is often overlooked in discussions around the responsibilities of intermediaries in relation to content regulation. We should not accept the use of terms which have not been properly defined, as this allows any actor to co-opt them for their own purposes. Tackling ‘online abuse’, for example, is a grand aim which can easily garner much support, but which remains empty and meaningless without further explanation – and thus open to abuse in and of itself.
Following on from the previous section, platforms (perhaps partly as a direct result of the contemporary political rhetoric) adopt vague descriptors of the kind of content and/or behaviour which is unacceptable, in order to cover a variety of circumstances.
Company prohibitions of threatening or promoting terrorism, supporting or praising leaders of dangerous organizations and content that promotes terrorist acts or incites violence are, like counter-terrorism legislation, excessively vague. Company policies on hate, harassment and abuse also do not clearly indicate what constitutes an offence. Twitter’s prohibition of “behavior that incites fear about a protected group” and Facebook’s distinction between “direct attacks” on protected characteristics and merely “distasteful or offensive content” are subjective and unstable bases for content moderation.
Freedom of expression laws (generally) do not apply to private entities. In other words, Facebook et al are more or less free to decide on the rules of engagement for their platforms. However, as these intermediaries increasingly control the spaces in which we as a society engage, they have a responsibility to ensure that their rules are at least transparent. The increasing multi-jurisdictional legal burdens and political pressures placed upon them to moderate content significantly reduce the likelihood of this. They also provide little to no stability or protection for those who hold views outside of the generally accepted cultural norms – a category that includes political activists and dissidents. In many parts of the world, having a homosexual relationship is considered ‘distasteful’ and ‘offensive’, as are the words of the current President of the United States – which demonstrates the problem with allowing (or expecting) a technology company to make such distinctions.
‘Real name’ policies
For those not familiar, this refers to the requirement from certain platforms that you must use your actual, legal name on their service – as opposed to a username, pseudonym, nickname, or anonymity. Officially the reason is that if someone is required to use their ‘real’ name, then they are less likely to engage in abusive behaviour online. We can speculate as to the real motives for such policies, but it seems undeniable that they are often linked to more accurate (aggressive) marketing to a platform’s user base. Either way, the report notes:
The effectiveness of real-name requirements as safeguards against online abuse is questionable. Indeed, strict insistence on real names has unmasked bloggers and activists using pseudonyms to protect themselves, exposing them to grave physical danger. It has also blocked the accounts of lesbian, gay, bisexual, transgender and queer users and activists, drag performers and users with non-English or unconventional names. Since online anonymity is often necessary for the physical safety of vulnerable users, human rights principles default to the protection of anonymity, subject only to limitations that would protect their identities.
Within traditional digital rights circles (if there is such a thing), there appears to be a growing belief that anonymity is a bad thing. I’ve even heard suggestions that the government should require some kind of official identification system before people can interact online. This is clearly a terrible idea, and may seem utterly laughable, but when you consider that this is exactly what will be law for adult websites in the UK later this year, it seems like it might not be completely out of the realms of possibility after all. We need to better educate ourselves and others on the issues before the drips become a wave.
Automated decision making
Automated tools scanning music and video for copyright infringement at the point of upload have raised concerns of overblocking, and calls to expand upload filtering to terrorist-related and other areas of content threaten to establish comprehensive and disproportionate regimes of pre-publication censorship.
Artificial intelligence and ‘machine learning’ are increasingly seen as some kind of silver bullet for the problem of moderating content at scale, despite the many and varied issues with the technology. Bots do not understand context or the legal concept of ‘fair use’; they frequently misidentify content; and they are generally ineffective. Yet the European Union is pressing ahead with encouraging platforms to adopt automated mechanisms in its proposed Copyright Directive. Rather than just trying to placate lawmakers, intermediaries need to recognise the problems with such an approach and resist it more vigorously, instead of treating it as a purely technological challenge to overcome.
Companies should recognize that the authoritative global standard for ensuring freedom of expression on their platforms is human rights law, not the varying laws of States or their own private interests, and they should re-evaluate their content…
This is a pretty strong statement to make, and it describes an approach that strongly resonates with me – in principle, at least. In practice, however, companies are obliged to follow the legal obligations of the jurisdictions in which they are based (and sometimes even beyond, given the perceived reach of the GDPR). The extent and application of ‘human rights law’ varies significantly, and there are no protections for intermediaries that rely on mythical ‘global standards’ – not even the UN Declaration of Human Rights.
The latest threat to both freedom of expression and the neutrality of the Internet is the proposed European ‘Copyright Directive’, and in particular, Article 13.
Much has been written on the dangers of Article 13, so I won’t repeat it here. Needless to say, if implemented, there would be serious consequences for how we interact online. It would be far easier for people to have content taken down from the Internet, or to prevent you from posting certain things, even without any real legal justification. In other words, you’d better get used to seeing this:
You can (and should) write to your MEP to express concerns about the upcoming law. You can do so using sites such as saveyourinternet.eu, but I didn’t think their template letter or MEP search was particularly good, so I wrote my own. Feel free to modify and use the below language. You can find and contact your MEPs using https://www.writetothem.com/.
David Martin MEP
David Coburn MEP
Catherine Stihler MEP
Nosheena Mobarik MEP
Ian Hudghton MEP
Alyn Smith MEP
Thursday 21 June 2018
Stephen McLeod Blythe
Dear Catherine Stihler, Alyn Smith, Ian Hudghton, Nosheena Mobarik, David Martin and David Coburn,
I am a legal academic and digital rights advocate from Glasgow, Scotland. I write with respect to the so-called ‘Copyright Directive’, and ask that you stand up against the proposal.
My main area of concern regarding the proposed Directive lies in Article 13. While it does not specifically impose a requirement on intermediaries to introduce pre-screening mechanisms, the language does explicitly refer to ‘the use of effective content recognition technologies’. As a result, this approach is clearly seen as an appropriate norm.
There are many problems with content recognition technologies, which I will not waste your time with by reciting in full. However, the bottom line is that they are expensive to implement; ineffective; easily defeated; frequently mis-identify content; and do not understand context, or the concept of ‘fair use’. In my work I already see significant abuse of copyright laws by complainants who wish to silence critics, and any kind of automated system will simply compound this problem.
Should Article 13 go ahead unchanged, intermediaries will inevitably adopt ‘dumb’ filtering systems in order to reduce their liability, and the result will be a significant chilling effect on both freedom of expression, and free enterprise. The consequences will impact heavily both on individual rights, and the economy.