I have had this project in the works for a while now, but only just got around to finishing it when I realised that all of my other mechanical keyboards had loud-as-hell clicky key switches. That was always fine when I worked from home in a tiny cupboard and could disturb nobody, but lately I’ve been sharing an office with my wife, who is on video calls pretty constantly, and my delightfully clickety-clackety Ergodox keyboard with Cherry MX Blues suddenly wasn’t as charming as it once was (well, it still was for me, but probably for nobody else).
Rather than bore you with all the geeky build details, here are the salient points:
1. What are the colours all about?
I had originally wanted to do one of these cool blue to pink gradients for the keycaps… but realised that the set I ordered didn’t have enough single squares to cover the full grid required – and I didn’t fancy having to get a full new set just for a few extra keys. The other problem is that while gradients look cool, they also make it a bit of a nightmare to find specific keys that you need at a glance. In the end, I decided to go with something a bit more practical. The yellow keys are modifiers like Escape, Enter, space, etc. The pink and blue rows are the letters, and the green keys are reminders of where specific keys I need for work shortcuts are.
2. Wait, why are all the keys square?
Aesthetic, innit.
This kind of grid layout is known as an ‘ortho-linear’ keyboard. There are a bunch of reasons people like this system… with the theory being that it keeps your fingers in a more natural typing position than the standard setup. To be honest though, I just think they look cool, and wanted to try out something a bit different (though this isn’t my first grid rodeo…)
3. But there’s only four rows! How does that work?
Err, yes, there are. In the mechanical keyboard world there is often a bit of an obsession with seeing how many keys you can strip out and still type just as fast as you would on a full-size board. The Planck is the smallest board I have tried so far, with just 48 keys in total. The sharp-eyed amongst you will probably have worked out that this means there isn’t enough room for a number row… and there isn’t. So how do you get access to all those keys that are missing?
The idea is pretty straightforward: Rather than have just one ‘shift’ layer which gives you capital letters and exclamation marks and all that good stuff, you have multiple ones. The blue keys to either side of the yellow space bar(s) on the bottom row let you ‘shift’ into completely different layers which have all the other keys – which you can program however you want.
For reference, here is my top layer, and then a couple of my additional ‘shifted’ layers.
So if I want to get to the number row, I press and hold down the blue key to the right of the space bar. Simple.
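The layer idea can be sketched in a few lines of Python. This is a toy model only (the real board runs the QMK firmware, which is written in C, and the key-position names here are made up for illustration):

```python
# Toy model of keyboard layers: each layer maps a physical key
# position to what it produces. Holding a layer key swaps the map.
BASE  = {"q_pos": "q", "w_pos": "w", "e_pos": "e"}
LOWER = {"q_pos": "1", "w_pos": "2", "e_pos": "3"}  # the 'missing' number row

def resolve(key, lower_held=False):
    """Return what a physical key produces, given whether the layer key is held."""
    layer = LOWER if lower_held else BASE
    return layer[key]

print(resolve("q_pos"))                   # q
print(resolve("q_pos", lower_held=True))  # 1
```

The same position yields a different key depending on which layer is active, which is all a ‘shift into another layer’ really means.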
I am still figuring out what the perfect layout for me is (ignore that rogue right arrow on the top layer… I’m not sure what is going to end up in that space just yet) – but I already really like this board. It’s neat, and I have space for all of the weird custom modifier shortcuts I’ve set up for work. The keys I use most are on the top layer, and anything I use less is just an extra press away. Of course it takes a bit of getting used to, but then all keyboard changes do – and I’ve adapted to the Planck far quicker than I have others in the past.
4. What kind of switches are in that bad boy?
Those would be the Outemu Sky 68g switches. They are ultra-tactile without being too loud to use around other folks.
5. What’s with this obsession with weird keyboards?
When you spend most of your life using one specific device, it’s good to explore different ways of interacting with it. Plus, the MacBook Pro keyboards are now so shockingly bad that I will do almost anything to avoid having to use one. If you know, you know.
6. Nice USB cable.
Why, thanks for asking. It’s a custom-made one from CoolCable.co.uk.
My colleague Bryan is a productivity whizz. So much so that we often question whether he is actually human, and whether or not he would pass a Turing Test. I too am partial to finding ways to improve things that I have to do every day, and so when he gave a passionate recommendation for the To Do list app ‘Things’ from Cultured Code, I wanted to dive in headfirst, and I loved it straightaway.
No Android App
The problem with Things 3, however, is that it runs entirely within the Apple ecosystem. That means there’s no web interface, and crucially… no Android application. Having ditched the iPhone a while ago, I was left with no easy way to quickly add items to my To Do list while out and about. There is a way to send tasks via e-mail, but having to open up my mailbox, find the contact, and so on felt like too much friction for what should be much simpler.
Telegram and ifttt
What I do use all the time is the secure messaging app Telegram, and my dream was that I could just fire off a quick message and somehow have that shoot off an e-mail which would add the task to the Things inbox. It seemed like ifttt.com would make this simple, but it was actually much harder than expected. GMail’s ‘send’ integration no longer seems to work, and the built in ‘e-mail’ service only allows you to have one address associated with your ifttt account at any one time – restricting my workflow options as a result. This really should not be that complicated!
Integromat
I came across Integromat, which is essentially a much more powerful version of ifttt. The premise is the same though: you connect up a bunch of services and tell them to do various tasks based on different circumstances. Unlike ifttt though, you can delve pretty deeply into the automations. It’s a bit trickier to pick up at first – especially if you aren’t familiar with programming – but it gives a far greater degree of customisation.
To get my messages from Telegram into Things, I created the following ‘scenario’:
The way it works is by having a dedicated Telegram bot watch out for messages and send them via my GMail account to the special e-mail address for the Things inbox.
I decided that I might want to use this virtual helper for other things though, and didn’t want every single command I sent it to end up in Things as a To Do list item. To avoid that, I set up a filter on the scenario so that it would only send e-mails if the message began with ‘todo’ or ‘/todo’. Additionally, I used a text parser to take out those trigger words, and to add in a prefix of ‘via Telegram:’, so that when I look back on my outstanding tasks later, I have a bit of context about where they came from. In other words, if I add some bizarre things to my To Do list when intoxicated, at least I’ll know that it was down to Telegram.
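The filter-and-parse step is simple enough to sketch in code. Integromat does this with its built-in filter and text parser modules rather than anything hand-written, so treat this as a rough Python equivalent of the logic, with hypothetical names throughout:

```python
# Rough sketch of the scenario's filter + text parser step.
# Only messages starting with a trigger word become To Do items.
TRIGGERS = ("todo", "/todo")

def parse_message(text):
    """Return the formatted task if the message is a To Do command, else None."""
    stripped = text.strip()
    for trigger in TRIGGERS:
        if stripped.lower().startswith(trigger + " "):
            # Remove the trigger word and add context about the source.
            task = stripped[len(trigger):].strip()
            return "via Telegram: " + task
    return None  # not a To Do command; the scenario stops here

print(parse_message("todo buy milk"))  # via Telegram: buy milk
print(parse_message("hello there"))    # None
```

Anything that passes the filter is then e-mailed on to the Things inbox address; anything else is ignored, leaving the bot free for other commands later.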
For the final bit of the puzzle, I added in a step for the bot to reply when the workflow was processed successfully – including a copy of what was sent to Things:
In Telegram, that looks like this:
Finally, here it is, magically appearing in my Things inbox for parsing later:
p.s. You might be wondering what that reference to ‘operations’ at the end is all about. With Integromat, you get a certain number of resources allocated per month, depending on what kind of account you have. A free user gets about 1,000 operations per month, and each time I add a To Do list item, it takes up about 5 operations. With my awful maths that works out at about 200 To Do list items per month… which should be way more than I ever need, but I wanted to have some kind of visual indicator, just in case things started re-routing to a digital black hole somewhere.
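For what it’s worth, the back-of-envelope maths checks out (the figures are the approximate ones above, not exact plan limits):

```python
# Rough budget check for the Integromat free tier.
free_operations_per_month = 1000  # approximate free-plan allowance
operations_per_todo_item = 5      # approximate cost of one scenario run

items_per_month = free_operations_per_month // operations_per_todo_item
print(items_per_month)  # 200
```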
The End?
So there you have it: How I got around the problem of adding tasks to my Things 3 To Do list when I’m not near my computer. Integromat looks very cool, and I’m going to have to think up some other commands for my bot to respond to… but really, this would be much simpler if Cultured Code would release an Android app.
There has been a recent trend of seemingly well-intentioned musicians taking to Twitter to engage with critics of the seriously flawed Copyright Directive, and in particular Article 13. Whatever the content of their arguments, it almost inevitably boils down to some kind of accusation that whoever disagrees with them is ‘just an academic’, a ‘big tech apologist’, or someone that doesn’t understand or appreciate what it’s like to be an independent musician.
I’ve been on the receiving end of these kinds of claims, to the point that engaging any further became fruitless. Simply by dint of my position as a legal academic/employee of a tech company, the assumption is that I must have an inherent bias that clouds my ability to critically analyse how copyright law will impact artists, because I am not a musician.
My Credentials
The thing is, I am a musician, and have been for almost 20 years. I sing and play guitar in a grunge band called Closet Organ, who successfully crowdfunded our last album, which included a vinyl LP release. I make chip-music and have played live as unexpected bowtie in places as far-flung as London and Osaka. There are also innumerable other projects, including the ‘bizarre and disturbing’ electronica of cup fungus, the scuzzy pop of Hog Wild, and the chilled-out samplewave of ease and desist. I’ve personally put on a pile of gigs, been on tour as a music photographer more than a few times, was Review Editor of a fairly significant indie zine, and currently run my own underground tape label Cow Tongue Taco Records. I loved and played music long before I ever took a law class, or was employed by… well, anybody.
Safe to say, I have some investment in independent music.
Me, playing to a rapt audience
Why ContentID doesn’t work for independent artists
For those not familiar with Article 13 of the proposed EU Copyright Directive, the long and short is that it will effectively require service providers such as Facebook to implement content filtering systems to detect and remove/prevent the upload of material that belongs to another party. YouTube already has a similar system in place – by far the largest and most complicated of its kind in the world – but the Directive would massively extend its reach.
There are numerous and detailed criticisms of Article 13, but all of them seem to fall on deaf ears as they come from the perceived position of a ‘corporate shill’, so here I want to briefly outline just one major issue that independent artists experience with the current ContentID system – and why any kind of expansion will inevitably be damaging rather than of benefit.
If an independent artist wants to get their music out there into the world, to the most popular music sharing sites, they need to use some kind of recognised distributor – as direct submissions are either impossible, or extremely restricted. A pile of these have sprung up, including Amuse, RouteNote, DistroKid, etc. Some charge a subscription fee per year, some take a cut of any revenue generated, and some of them don’t even have a website – operating just from an app. The concept is simple: You send your music to them, and they distribute it digitally to the various partners. One of these partners is YouTube.
What isn’t made clear by these distribution networks is that by submitting your music to YouTube, you essentially give the distributor a licence to enforce your copyright on the platform using the ContentID system. This automatically detects any music uploaded along with a YouTube video (including short clips), and flags it up as unauthorised. To many this might sound great. Stop people stealing your stuff!
The problem of course is that there is very often no way to denote authorised uses or channels with these common distribution services. Let’s consider the following two scenarios:
Scenario A: a young singer songwriter starts to build up a decent following online, by sharing clips on SoundCloud and YouTube. With the money they’ve made from the ads on their DIY videos, they put together a full-length album and use one of the most popular distribution services to make it available on Spotify, Apple Music, Amazon, YouTube etc. As they get more and more well known, they dig deep and fund a really flashy music video to promote the album. After teasing it on Facebook and Twitter, they upload it only to find that it has immediately been flagged for a copyright violation – on behalf of the distributor. The video won’t necessarily come down, but it does mean that they won’t be able to monetise it – and will lose out on the ad revenue they were expecting to recoup the cost of the production. Panicked, they dispute the claim using YouTube’s resolution procedure, but there’s no indication of how long that might take, and it has thrown off all of the promotions they were planning. There’s no explanation of this anywhere in the distributor’s app that they used, and they can’t get a hold of anybody who understands the issue and has access to release the video for commercial use.
Scenario B: An artist (A) is asked by a fellow musician (B) if they would be interested in a collaboration. The process is simple: B will supply A with some vocal samples that A can then chop up and use however they wish. A gladly accepts, and comes up with a whole electronic composition that brings the vocals to life. B loves the track, and asks if they can use it on their upcoming DIY release. A agrees. B’s friend runs a small label who agrees to put out the album, and they use a distribution service which sends the album to all the major partners automatically – including YouTube’s ContentID system. A few years later, A is producing short video blogs and decides to use one of their old tracks as background music. It gets flagged up as a copyright violation automatically, which A disputes – but the appeal is rejected by the distributor, who has no knowledge of how the track came about in the first place.
Both of these scenarios are common, and a version of Scenario B actually happened to me personally. There are plenty of other similar situations, which are easily discoverable with a bit of Googling.
There are a few takeaways here:
Independent musicians are at the mercy of a system which locks them out from negotiating their own contracts without major label backing, and they therefore have to rely on gatekeepers which provide an inadequate level of information and control over their own music.
Artists who are starting out lack the information required in order to make informed decisions about their interaction with such services, and can inadvertently give away their ability to exploit their creations commercially due to how the systems are constructed.
The ContentID approach to copyright enforcement gives huge clout to the first entity to register a piece of work within their system – which is rarely going to be the artist themselves.
This model has no room for the ad-hoc, informal, and varying ways in which independent musicians create and share their works online.
In Summary
The current ContentID system works on a first-come, first-served basis. It puts huge power in the hands of intermediary distribution services which cannot give artists the level of control over their licences that they would need to fully exploit their creations. The nature of the beast means that informal collaborations between like-minded folks can unexpectedly tie up their creative expression years down the road. Article 13 will only expand these systems – which on other platforms will inevitably be less sophisticated than ContentID – further eroding independent artists’ ability to share their work.
So… as an academic, a tech employee, but perhaps most importantly a musician: Article 13 is a disastrous piece of law, and should be scrapped.
The Commodore 64 is a classic, and it has played an especially important role for me. It was the first computer I ever owned; given to me by my parents as a fifth birthday present, and my granda used to spend hours showing me how to program it from a big purple book he had gotten from a magazine. I credit this introduction with piquing my interest in technology early on. Staring at the rainbow loading screen, waiting for games to load from the cassette deck is also probably to blame for my terrible eyesight. I rediscovered the C64 as a teenager when I learned of its coveted SID sound chip, and I’ve been making music with it ever since under the guise of unexpected bowtie.
One of the things that always stood out in my mind was the C64’s keyboard, with its thick brown keys and symbols that I never really understood. If I could, I would use it all the time, but that was never really practical. Recently though, I fell down the rabbit hole of ‘mechanical keyboards’ online, where people build and use special keyboards with custom switches, sizes, and layouts. One of the projects I found included a re-working of the C64’s keyboard which brought it into the modern world, and I felt inspired to do something similar.
Sourcing a C64
I originally planned to source a broken Commodore online and use the keys from it, rather than defiling my beloved console. Truth be told though, mine was in a bit of a shabby state, and even broken C64s go for a pretty penny in the UK. I decided to just make use of what I had, with the belief that I could always pop the keycaps back on if I changed my mind later.
Purists, look away now.
Removing the keys
Removing the keys initially proved a bit trickier than I had anticipated… partly because I was trying to get them off while the thing was fully assembled. What you need to do is open up the breadbin itself, and then detach the keyboard from the chassis by taking out some screws. It was pretty easily done once I realised this was what was required – especially since my case had already been taken apart a fair few times to get at the SID.
Popping the keys off can be tricky, as they are far stiffer than one would expect from a more modern device. Keycap removers didn’t really work, and so I resorted to using a set of pliers, being careful to pull straight up to avoid damaging the posts underneath. Each key has a large spring underneath which can ping off easily if you move too quickly, so watch out for that.
Underneath was filthy, and it seemed like a good opportunity to give things a clean even if I didn’t end up building anything. I left both the spacebar and the Shift Lock keys in place, as I suspected they would be a bit more fiddly to deal with, and I wasn’t planning on using them anyway.
Finally, I gave the keycaps a clean with some soapy water and left them to dry.
The new board
There are lots of mechanical keyboard designs to choose from. For this project, I went with the Preonic, and got a partially built kit as part of a group buy from Massdrop, which came with a fetching orange aluminium case. It is a compact, ‘ortho-linear’ keyboard – which means the keys are arranged in a grid-like pattern, rather than staggered as you would see more commonly. The idea is that your fingers have to move less, and in a more natural way when typing – which reduces strain. I opted for the Preonic (rather than the Planck, which has fewer keys), as I wanted to make the most of the C64 keycaps.
The switches
One of the main reasons people like mechanical keyboards is the quality and range of switches available. The switches are the bits underneath the plastic caps with the letters on them (or not, depending on what you prefer). Unlike the squishy keys you find on laptops and other modern computers that are so unsatisfying to use, mechanical keyboards feel great. They can either be smooth (linear), have a bump (tactile), or have a bump and an audible noise (clicky). Most people probably mentally associate typing experiences from yesteryear with the clicky type – epitomised by the Cherry MX Blue switch – though the Commodore 64 actually had smooth keys. For this build I opted for the Cherry MX White (aka ‘milky’). They are clicky keys, but with a far less sharp and pronounced click than the Cherry MX Blues. This means it’s a bit more socially acceptable when typing around other people (!).
Fitting the keycaps
The biggest issue with this project was that the keycaps from the Commodore 64 aren’t compatible with any of the switches that are commonly used today, and so wouldn’t just snap on. However, some wonderful person has designed a C64 to Cherry MX adaptor that can be 3D printed, and shared it for free (open source is wonderful).
Preview of the adaptor showing the Cherry MX side
I don’t have a 3d printer, so obviously had to outsource this. Getting a decent price in the UK was tricky at first, as nobody would take up an order that small, but eventually I got 80 caps for £22.98 including delivery from 3DPrintDirect.co.uk. That would be more than enough to cover the 60 keys on the Preonic. The material was SLS – carbon reinforced plastic.
It took about two weeks for the adaptors to arrive. I had read that some people had trouble with their adaptors, especially if the finish on the 3d printing was rough, but mine worked out pretty well. At first I thought I might have to disassemble the switches to install them as they were very tight, but in the end I could just press them against a flat surface and push hard with the keycaps on. My fingers hurt after doing a bunch, so I did them in batches. When the board arrived, I realised that it actually made much more sense to put the adaptors into the caps first, and hit them gently with a small hammer, before pressing the whole thing onto the switch. Getting them at the right angle could be tricky, but they all turned out fine in the end.
It’s worth noting that I did also discover that removing an adaptor from a keycap isn’t really possible without destroying the soft plastic and leaving the inside of the cap gunked up, so that means that these caps are now committed to the project, and won’t be reusable on the C64 itself like I had hoped.
Building the Preonic
The Preonic kit I got came from Massdrop in a bundle, and it was packed and presented beautifully. The instructions however, weren’t exactly n00b friendly, and it took me a bit to work out exactly where the spacers and screws were meant to go; it seemed like there were extra unnecessary holes in different places, which threw me. Eventually it came together though, and things began to take shape.
I’ve done a fair amount of soldering in my time modifying Game Boys etc, so it wasn’t a difficult task to deal with the through-hole switches. I did discover that one of them was bust after putting it all together though, which meant I had to de-solder and replace it, which was a bit of a pain.
The layout
When I started looking at the actual layout of the board, I ran into a couple of issues with my plan to use the Preonic. Firstly, I had overlooked the fact that the board is built around a grid of single keys, and doesn’t really support anything larger except in the middle of the bottom row. That meant that I didn’t have enough keys from my C64 to cover the full thing, and no perfect option for the space bar.
Secondly, my choice was further restricted by the design of the C64’s keys, as the caps are ‘sculpted’ depending on where they sit, so I couldn’t just take one from the top row and put it on the bottom, even if it would work better for my purposes.
In the end, I managed to source some extra single keys online, and did the best with what I had. I had feared that it would be wildly off, or that I would need to use really inappropriate keys, but it actually worked out not too bad at all.
Some folks have commented that the layout doesn’t make much sense, and I should say that I am not religiously sticking to what is printed on the keycaps. For example, I am using the ‘Return’ key in the place of a space bar, as that was the only key from the original board which seemed appropriate and fit the slot on the PCB. I’m using an equals key for Enter. The @ key brings up a list of my bookmarks with Shiori; the £ key activates my Alfred snippets; RUN STOP is my Hyper Key; < and > activate the ‘Lower’ and ‘Raise’ layers of the Preonic, and so forth. The beauty of the software which powers these keyboards (QMK) is that you can map and re-map the layout to whatever makes sense for your own needs.
As a finishing touch, I got a special rainbow-coloured USB cable made up from coolcable.co.uk as a nod to Commodore. Special thanks to them for putting up with my last-minute changes to the connectors after I ordered the wrong thing.
I still need to set up the Preonic’s layers to suit my own custom layout, but I really like the board. It’s nice, solid and relatively compact, and now I have a personal homage to my granda and first computer. The keys feel pretty great to type on, though they could potentially have done with some heavier switches, as the cap-and-adaptor combo means that you end up putting more force on the keys than you usually would.
Edit: This was featured on Hackaday, which is awesome.
For the past few months I’ve been bombarded with adverts for some supposedly next generation headphone technology which adapts to your individual hearing, the ‘Nuraphone’. Various industry professionals were shown in the accompanying videos, reacting with apparent amazement at the sound quality. As a bit of an audio geek, I wanted to see for myself whether or not there was anything to the claims, and ended up picking up a pair recently – something that probably also partly confirms that online marketing campaigns really do work.
One thing that’s worth noting is that Nura released a software upgrade for the headphones at the end of July which added a bunch of features, and corrected some issues that people had reported. They even added in active noise cancellation – which wasn’t present before. That is particularly cool, and something that it’s good to see a company do. Newer stock of the Nuraphone come pre-loaded with this update.
The Concept
Custom Hearing Profiles
The main principle behind the Nuraphone is relatively simple, in theory at least. You first connect them to a smartphone app, where some quick tests result in a custom ‘hearing profile’ being created. This profile is then stored in the headphones, and playback is tailored accordingly – theoretically resulting in sound that is far more pleasing to your ears than a generic approach would provide.
Over and In-Ear Design
The other main defining characteristic of the Nuraphone is their unique over/in-ear design. The outer silicone cup encloses your ears (and doesn’t irritate any of my many cartilage piercings), and delivers bass frequencies, whereas the inner ear portion sits just inside your outer ear, delivering the mid and treble frequencies. This helps provide a nice separation, and a much fuller sound. It works very well, though does feel pretty weird at first.
Immersion Mode
Immersion mode uses drivers in the outer cups to provide the ‘feeling of a live performance’. This basically makes the bass much fuller and deeper, without compromising on the quality of the mid or treble frequencies.
What Other People Say
I was intrigued by the Nuraphone, but also pretty skeptical. It didn’t help that the reviews online ranged wildly from the bizarrely enthusiastic to the dismissive. There were many puff pieces from sites who were clearly just happy to get a free pair, but there were also users who claimed that the quality was so good that they and their friends had actually been moved to tears on their first listen. Others weren’t as enamoured, angrily dismissing the Nuraphone as nothing but marketing spin, or even suggesting that £30 generic earbuds sounded just as good. Ouch.
In general, many of the criticisms seemed to focus on the following:
The design was too heavy and uncomfortable – especially if you have glasses.
The sound wasn’t that impressive.
The setup was complicated and confusing, resulting in different profiles every time.
The ‘generic’ profile that you compare your personalised profile with sounded worse than it should, and that something shady must be going on.
I don’t think I’ve ever seen such conflicting experiences with a single product, which only pushed me to try them out for myself.
My Experience
Setup
Setting up the headphones was as simple as downloading the app, connecting via Bluetooth, and following the on-screen instructions. I had no bother getting the cups in the right place, and despite trying the calibration process a few times, my profiles were pretty much the same each time. It couldn’t really have been much easier.
Brand new Nuraphone with the plastic wrap still intact.
The one real annoyance is that once setup is complete, you’re encouraged to switch between a generic ‘flat’ profile and your own personalised one to hear the difference. As many people have commented online, the problem is that the generic profile sounds pretty terrible on its own – far worse than you would expect even as a baseline from far cheaper headphones. It’s as if it has been heavily compressed. As a result it all seems a bit disingenuous, and Nura have probably not done themselves many favours by including this ‘feature’.
Comfort
After reading all of the negative reviews, I expected the Nuraphone to be extremely heavy and uncomfortable, but in actual fact, they slipped on with ease; the silicone of both the ear-cups and headband feeling both soft and comfortable. They weren’t particularly heavy – at least not markedly more so than other cans I’ve worn, and even the in-ear protrusions didn’t feel all that bad – going less deep than I expected. As for my glasses, they didn’t seem to make the slightest bit of difference to proceedings.
I wore the headphones for about seven to eight hours on the first day (with breaks), and my ears definitely got a bit sore by the end, though I knew to expect this from the Nura support team, who claimed that the tips of the in-ear portions would soften up with use. Despite this physical discomfort, my ears didn’t ever get hot or sweaty, which seems to be thanks to the use of the ‘TeslaValve’ air-flow technology. Not just a gimmick after all!
After listening to feedback on the issue of comfort, Nura have started to include different tip sizes with new purchases, including small and medium. There are also some third party options out there that are compatible, though whether they have a significant impact on the sound (negatively) is up for debate.
Controls
Many of the fundamental controls of the Nuraphone are contained within the accompanying smartphone app. Some of these can be mapped to a couple of touch sensitive ‘pads’ on the side of the cans – such as volume up/down, play/pause, etc. Both single and double tap gestures are supported, allowing you to choose four controls to make use of.
I personally have volume up and down configured for the single taps, with social mode and play/pause for the double taps. At first I thought this was a bit awkward, but after a day or so I found it really natural, and have even found myself trying to tap on the side of my other headphones out of a force of habit. After using the Nuraphone controls, the physical buttons on my Sennheisers feel a bit clunky and unintuitive.
Speaking of social mode, this is a nifty feature that turns off the active noise cancellation, and turns on microphones to allow you to hear what’s going on without having to take off the headphones. I’ve found this really useful in practice, though it can be a bit disconcerting to have certain ‘exterior’ sounds suddenly amplified louder than they actually are with the headphones off.
It’s not all great news though. The controls themselves are extremely sensitive, with no adjustment possible. Often I would end up accidentally triggering the pads when adjusting my glasses, or just moving my arms about, and I had to continually turn the headphones back down. There are also some functions that can only be accessed via the app, which is a bit annoying. For example, at the time of writing you can’t adjust the level of Immersion mode without being connected to the phone app – which would be handy for switching between genres. I’ve heard from Nura that they are working on a desktop app which would help alleviate this issue for those of us who regularly listen on our computers.
Finally, the controls don’t work when a cable is connected, which is understandable, but a bit irritating, breaking the continuity of experience.
Connectivity
I was dubious of wireless technology for a long time, given the historically pretty crappy quality of even higher end headphones. As a result, I only recently bought into the whole arena, and was pleasantly surprised. Still clinging to a little of that scepticism, I was put off at first by the prospect of having to buy a separate, expensive analogue cable, but realised that I don’t actually need one for everyday listening.
In terms of the quality of the Bluetooth connection itself, it was generally pretty good. For some reason my first pair worked fine with my laptop, but dropped out constantly when connected to the phone in my pocket. That meant they were totally useless for walking about, and until they were replaced I was restricted to using them when stationary at a desk or on the couch. Once this was resolved, the connectivity was good, with only the odd blip here and there – definitely not something that was irritating or especially notable.
This all leads nicely to my biggest issue with the design of the Nuraphone: the proprietary nature of the cable connection. Apparently, in order to allow compatibility with a bunch of separate devices, they had to create a single new port that is used for everything, including charging. I’m not really convinced by this argument, and it means yet another cable to carry when travelling – one that is much harder and pricier to replace. The nightmare scenario would be misplacing one while on a trip.
Sound
In the past I have played with various special music EQ apps that provide extra stereo spacing and increased ‘ambience’, but always returned to listening flat, as they never applied the effect equally well across genres. In other words, what sounded good with electronica would make grunge sound like utter garbage. I suspected that this would be much the same with the Nuraphone, and I was pretty disappointed when it seemed to be the case upon first listen. Techno sounded pretty good, but generally the sound was too coloured, with vocals often lost in the mix. It wasn’t until I realised that I had an additional EQ already set on my Mac that was interfering with things that I actually heard what the Nuraphone could do, and my perception completely changed. Why bother even bringing this up? To highlight that it’s important to start with a flat EQ, or you’re going to get a distorted impression of the sound – which is easily done.
While I wasn’t exactly reduced to tears, the sound from the Nuraphone really was pretty impressive, bringing a whole new feeling of dynamism to music that I knew well. It was as if my favourite tracks had been given a personal re-mastering, and it was amazing. I scoured my library to find old songs and re-discover them in a different light. Despite an overall perceivable increase in quality, a lot of this seemed to come down to the ‘Immersion’ mode. When I first read about it, I was dubious. Cranking up the bass so it rattles against your ears doesn’t really ‘capture the feeling of being at a live performance’. However, I have to honestly say that there is something in that claim. With the slider at 75% I could feel every kick of the bass drum in early Green Day albums as if Tre Cool himself was in the same room, and there were times during some songs that genuinely reminded me of being at a music festival due to the interaction of the instrumentation.
There have been some comments that the Immersion mode is too heavy handed, such as:
The Immersion effect might suit bassheads or movie-watchers, but you’ll get the most balanced sound with it on very low – or turned off.
That hasn’t really been my experience (though maybe I’m a basshead?). Even at a fairly high level (70%), the Immersion mode only serves to add a bit of greater depth to the tracks, and doesn’t result in distorted or farty bass – though there is a definite cut off where this changes. To be honest though, if I wanted a flatter, more ‘balanced’ sound, these are not the headphones I personally would be using in the first place.
How do they compare to my other headphones though? I don’t really have technical details here, but there’s no doubt in my mind that they offer a much more dynamic and engaging sound. My other headphones still sound and feel great, but if I want to really get lost in the music, I’ll go with the Nuraphone every time.
Other Stuff
Portability: Unfortunately, despite having pretty great passive and active noise cancellation, the Nuraphone don’t seem especially great for travelling. They don’t fold down, and the protective case is pretty chunky. I still need to decide whether or not they are worth sacrificing precious space for in my carry on.
Audio fade in: The Nuraphone have a built in ‘fade in’ to the audio when they start playing back from silence. That means that if you fire up Spotify, stick on the headphones, and start playing a track with a banging intro, it ends up lacking punch. I can see the benefit of this feature when you put the headphones back on – to avoid getting blasted mid-song at full volume, but this should 100% be an optional feature, not something that can’t be turned off.
Noise cancellation: The passive noise cancellation of these headphones is pretty decent already, but the active noise cancellation is especially good. The combination seems to block out more ambient noise than my Sennheiser HD 4.50 BTNC cans, and it’s pretty amazing to hear the difference toggling ‘social mode’ on and off makes.
Battery life: The Nuraphone boast an impressive 20 hour battery life. I haven’t tested them continuously, but what I do know is that the headphones arrived on a Monday; I used them heavily, and by Friday morning they were only down to 40%. In practice this means I’d only really need to charge them about once a week, which is pretty great.
Conclusion
I really didn’t want to like these. Despite having a glimmer of hope that they might be decent, I planned to try them out for a few weeks then send them back after discovering that they were mostly just marketing puff after all. However, that isn’t how things worked out.
The truth is that I really like the Nuraphone. The sound is different to anything I’ve heard before in a set of headphones, and the Immersion mode really helps bring certain kinds of music alive. I’m excited about listening to music again, and for that reason alone they are well worth the money for me. If they only took standard cables, and folded down to be a bit more portable, it’d be tough to find any big flaws.
They retail for £349, but you can often find 20% off discount codes on Reddit.
I’ve had this article on the back burner for almost three years now, but for the next thrilling instalment of my productivity app blogs, I’ll be turning to look at Keyboard Maestro.
Don’t let the somewhat dated website put you off; the app itself is unbelievably powerful. I have to admit to being wary when I first tried it out. The learning curve is steep, and the documentation pretty unclear – especially when compared to the other productivity apps that are available. However, after years of sustained use, my feelings towards Keyboard Maestro have completely changed. It’s tough to get into, but so worth it. I honestly don’t know what I would do without it at this point.
So if Keyboard Maestro is so great, why did it take me so long to publish this? Well, there’s a few reasons. Firstly, there aren’t so many general use cases for Keyboard Maestro – at least not for me. Instead, it’s an app that’s best for repetitive tasks that are very specific to each user’s needs, which makes it difficult to give good examples. Secondly, it’s an app that you tend to set up and forget… before rediscovering it later on when your needs have changed, and you realise: “Oh! Keyboard Maestro could make this way easier!”. I’ve gone through that cycle a number of times, and after rediscovering just how awesome it is, I decided to finally complete this post.
What does it do?
Okay, okay, so Keyboard Maestro is great, but what does it actually do? This is a good question, as it isn’t immediately obvious. Essentially, Keyboard Maestro allows you to take any task that you have to repeat, and automate it. If you’re familiar with Alfred, think of Alfred workflows, but on steroids. The key difference is that instead of having to write AppleScript for every action you want to complete (which is still an option, by the way), there are a whole bunch of options baked in. Whether that’s telling the mouse to move and click on a certain point, displaying a popup message, getting an image size, filling in a field on a website, or whatever – you get a lot of control from the get go.
Some of the ‘actions’ available.
The sheer power of Keyboard Maestro is also its undoing in a way. It’s easy to look at the list of actions and wonder when you will ever use any of them. The UI is not the most intuitive, and you’d be forgiven for giving up at the beginning purely on that basis alone.
If you want to carry out simple, general tasks, then there may well be a nicer app that lets you do those things. However, that isn’t the point of Keyboard Maestro. Keyboard Maestro is there to help you automate pretty much any task that you can think of.
In addition to the automation, there is a whole host of other cool features that you can do a deep dive into – such as an extensive multi clipboard manager, application switcher, and others – but for me the real glory lies in the macros.
What can it do for me?
One of the biggest hurdles to starting off with Keyboard Maestro is working out exactly what you’ll use it for. It takes a conscious effort to work out what tasks you could automate – which isn’t necessarily something you thought was possible beforehand. Once you do sit down and give it some attention though, you’ll soon come up with plenty. Do you have to fill out specific fields on a website more than once? Use a macro. Do you need to convert HTML to markdown? Use a macro. Need to extract URLs from a big block of text? Macro. The possibilities are endless.
As part of my job, I regularly have to review and respond to reports about different websites using a helpdesk system. Each one (generally) requires me to:
Find the website URL in the e-mail and open it.
Decide what to do.
Note down the action taken in certain circumstances.
Reply by copying a specific part of the original message, and quoting it back in a certain format before providing an appropriate response.
Select a certain option to mark the issue as ‘Resolved’ or ‘On Hold’.
All of these steps are fairly straightforward, but a lot of time is taken up by clicking through the same tasks for each one – even when I use a text expander or snippet manager like Alfred. Sometimes the URLs are jumbled up and I need to fix them before opening or responding, or they are buried in huge blocks of text… etc. However, with Keyboard Maestro, I can reduce this all to a couple of key presses, with a couple of macros doing all of the following:
Extracting all of the URLs from the messages, and opening them in new windows.
Pasting the URLs in the correct quoted format at the top of the reply, along with the appropriate response.
Adding whatever notes needed to track the action taken in a specific field.
Marking the issue Resolved or On Hold as appropriate.
The only thing Keyboard Maestro doesn’t do is decide what action to take – which is just as well really, for a variety of reasons!
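To give a flavour of what the extraction step involves, here is a hypothetical sketch of it as a plain shell snippet – the sort of thing Keyboard Maestro’s ‘Execute Shell Script’ action can run over a message for you. The example message is made up for illustration; my actual macro uses Keyboard Maestro’s built-in actions rather than this exact script.

```shell
# Hypothetical sketch: extract every URL from a block of message text.
# Keyboard Maestro can feed text into a script action; here we use a
# stand-in message so the snippet is self-contained.
message='Hi, please review http://example.com/page and https://foo.bar/x?id=1 thanks'

# Pull out each URL, one per line, de-duplicated.
printf '%s\n' "$message" | grep -Eo 'https?://[^[:space:]<>"]+' | sort -u
```

From there, the macro opens each URL and pastes them back in the quoted reply format.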
Like many of the examples, that one is very specific, but it demonstrates a bit of how granular and useful Keyboard Maestro macros can be – and will hopefully get you thinking about your own use cases. Here are some other more general tasks I regularly deploy macros for:
Inserting a URL wrapped in <a href> tags.
Pasting text with different styles of quotes depending on the situation.
Parsing blocks of text to extract URLs and/or e-mail addresses.
Getting ID numbers from long URLs.
Pasting items in a bulleted or numbered list automatically.
Filling out forms online.
Copying the current URL from my browser window (and doing stuff with it).
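A couple of the items in that list translate neatly into one-liners. The following is a hypothetical sketch (the URL is a made-up stand-in for whatever is on the clipboard), showing the sort of text transformation a macro performs behind the scenes:

```shell
# Hypothetical sketches of two of the tasks above, as plain shell.
# 'url' stands in for whatever text the macro receives.
url='https://example.com/tickets/48213/some-long-slug'

# Getting an ID number from a long URL: grab the first run of digits.
id=$(printf '%s' "$url" | grep -Eo '[0-9]+' | head -n 1)
echo "$id"

# Inserting the URL wrapped in <a href> tags, ready for pasting.
echo "<a href=\"$url\">$url</a>"
```

In Keyboard Maestro itself you would wire something like this up to a hotkey, with the clipboard as input and ‘paste results’ as output.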
The most powerful and useful ones are those that have very specific, work related use cases. With a bit of imagination, you’ll come up with your own, so I’d encourage you to give it a bash.
Triggers
To wrap this up, I wanted to highlight one more feature of Keyboard Maestro that makes it stand out from other productivity apps. For those veterans amongst us who regularly make use of workflow improvements, it’s easy to run out of hotkey assignments, and Keyboard Maestro has a bunch of different ways to solve that problem. First off is the use of ‘palettes’, which lets you assign the same hotkey to different macros – and then select them from a menu – or to activate different hotkey sets depending on what you’re working on that day.
If you already use Alfred, Keyboard Maestro is a brilliant complement, rather than a replacement in this way too. There is a specific Alfred workflow that lets you search and trigger Keyboard Maestro macros from the Alfred search bar, which is incredibly useful for those that you may use occasionally, but don’t want to dedicate a precious hotkey to: Alfred Maestro.
Finally, triggers aren’t just confined to mere hotkeys. Oh no. Pretty much any event you can think of can kick off a macro. If you want certain changes to happen when you connect to a particular WiFi, you can make that happen. Execute commands remotely by running Keyboard Maestro on a server? Why not. Run certain checks when a USB device is plugged in? Easy. You can even have Keyboard Maestro react to MIDI notes and values, which opens up a whole world of interesting hardware controllers aside from the keyboard… something I’ll be exploring in the next post.
Tonight I came across an article on TechCrunch responding to an open letter from Tobias Lütke, CEO of e-commerce platform Shopify, in which he defends the company’s decision to continue hosting Breitbart’s online shop. Breitbart being the infamous far-right publication with which Steve Bannon was heavily involved.
After sustained criticism, Lütke explains in the post entitled ‘In Support of Free Speech’ that based upon a belief that ‘commerce is a powerful, underestimated form of expression’, it would be wrong to effectively censor merchants by shutting down their shops as the result of differing political views.
Reporting on the letter, TechCrunch shared their post to Facebook with the text: ‘Shopify’s CEO thinks his platform has a responsibility to continue hosting Breitbart’s store – here’s why he’s wrong.’
I was curious to see the arguments that would be proffered as to why the decision was wrong, but was ultimately left wanting. Here are the reasons given, as far as I could make out:
Lütke is grossly overestimating the role of a private e-commerce platform in providing and protecting freedom of expression.
Shopify cannot ‘censor’ anybody, as they are not an emanation of the State.
Justifying the continued hosting of merchants who have extreme views for freedom of speech reasons is wrong, as freedom of speech does not apply to private organisations.
As a private company, Shopify are not legally required to provide a platform to anybody.
Shopify’s Terms of Service allow them to terminate the account of any user at any time.
In response, here’s why TechCrunch are wrong:
None of the reasons given actually explain why Shopify shouldn’t continue to host Breitbart.
Read over them again, then check out the full article here. Despite heavily criticising Shopify, and stating that Lütke is ‘wrong’, TechCrunch don’t engage at all with the heart of the issue. No, Shopify are not legally required to host the Breitbart shop, and yes, their Terms of Service are quite obviously worded in such a way as to give them that discretion in the event of any legal challenge, but that’s hardly a surprise.
Here’s the big question that went unanswered: why should Shopify not host Breitbart? Lütke hits the nail on the head with the following challenge, which the TechCrunch article completely fails to even acknowledge:
When we kick off a merchant, we’re asserting our own moral code as the superior one. But who gets to define that moral code? Where would it begin and end? Who gets to decide what can be sold and what can’t?
Rather than attempt to address this fundamental issue, TechCrunch essentially just argue that Shopify should kick Breitbart off of their platform because, er, well, legally there’s nothing to stop them. A pretty poor argument at best.
Protecting freedom of speech isn’t just down to the State.
Firstly, I’m not sure where this idea comes from that censorship is something only the State can give effect to. To censor means to forbid or ban something; to suppress speech. The source of that suppression has nothing to do with it.
Secondly, there is a lot of confusion surrounding freedom of speech and the relation to the State, even from those who purport to understand the dynamic. To clear some things up, the following are true:
Freedom of speech law (generally) only protects citizens from the acts of State actors.
Private online service providers (generally) have no obligation to protect the freedom of speech rights of their users, or to give them a platform for expression.
However, to assert on the basis of the above that a platform cannot justify its actions on freedom of speech grounds, or willingly strive to uphold those principles, is a non sequitur. Additionally, just because you can’t threaten Facebook with legal action on a freedom of speech argument when they take down your status update, that doesn’t mean it is wrong to argue that Facebook should be doing more to consider and protect those values.
Just as we would not expect a hotel owner to be able to refuse to allow a same sex couple to share a bed, or a pub to knock back someone based purely on the colour of their skin, it is nonsense to pretend that we have no expectations of private organisations to abide by certain shared societal values.
Without touching on the claims around the importance of e-commerce as a vehicle for expression, it seems that in a world where we are increasingly reliant on private entities to provide our virtual town square equivalents, and where we expect certain values to be upheld, arguably platforms such as Shopify have an increasing moral obligation to protect (as far as is possible) the principles that are the cornerstone of our Democracies.
Today the big story on the web is that a story about Russia blackmailing Donald Trump – leaked by a ‘British intelligence officer’, published by BuzzFeed, and then dutifully re-posted by other major established media outlets – was allegedly made up by posters on 4chan.
Whilst the articles state that the claims are ‘unverified’, and ‘contain errors’, it appears that there has been very little in the way of fact checking or corroboration of sources going on. Indeed, publishing allegations without due diligence is exactly the operational basis of other sites that don’t fall under the banner of ‘credible’ media. The fact is that the outcome in either case is the same: either willingly or blindly (through a desire to publish content first to drive advertising revenue), these sites are spreading misinformation. Looking at the Mirror’s coverage, one would be forgiven for thinking that the info was at least partially credible:
It’s all too easy to scoff at the Mirror, or BuzzFeed. Nobody takes them seriously after all; everybody knows that! That clearly isn’t actually the case, and it demonstrates the problem with the reactionary drive towards ‘banning’ or filtering sites that publish fake news from online platforms.
Of course, these claims to have made up the story could very well be made up themselves… but that doesn’t invalidate the criticism. If anything, it highlights the issue with asking or expecting third parties such as online service providers to filter out untrue content.
To echo the questions I raised in my previous post on this topic: Exactly what constitutes fake news, where do we draw the line, at what point do ‘credible’ news sources lose that credibility, and who makes those determinations? Should BuzzFeed articles be removed from Facebook? What about The Mirror? What about CNN? Maybe only articles claiming to have made up fake news should be treated as fake news. Where does it stop?
For an interesting read on this that was shared by my colleague Davide recently, check out this page:
It only gets worse when charges of fake news come from the media, which, due to the dismal economics of digital publishing, regularly run dubious “news” of their own. Take the Washington Post, that rare paper that claims to be profitable these days. What it has gained in profitability, it seems to have lost in credibility.
Edit: I published this earlier today before Trump’s press conference, and felt compelled to update it as a result of what he said. Responding to questions from the media, he apparently decided to pick up the ‘fake news’ mantle:
When Jim Acosta, Senior White House Correspondent for CNN, attempted to ask Trump a question, the President-elect refused to answer. “Not you. Your organization is terrible,” Trump said. “I’m not going to give you a question, you are fake news.”
So now Trump has appropriated the term ‘fake news’ to thwart off any criticism without response. That’s what happens when you set up an empty vessel as something that is inherently wrong with no real definition. This should have been easy to avoid. – (source)
This is precisely why setting up a straw man term such as ‘fake news’ is so dangerous: an empty vessel that is inherently bad, with no clear definition, leaves the power in the hands of those who want to wield it for their own ends. If we want to try and combat ‘fake news’, we first need to understand what it is we are fighting against. Otherwise, the question becomes whether it is our version of fake news that is bad, or Donald Trump’s.
As the results of the US Presidential election began to sink in, the finger of blame swung around to focus on ‘fake news’ websites, that publish factually incorrect articles with snappy headlines that are ripe for social media dissemination.
Ironically, the age of propaganda was previously thought to have died out with the proliferation of easy access to the Internet, with people able to cross-reference and fact check claims from their bedroom, rather than relying on a single domestic point of information. Instead, it appears we are seeing the opposite: people congregating around a single funnel of sources (Facebook), which filters to the top the most widely shared (read: most attention grabbing) articles.
Almost immediately, the socially liberal-leaning technology giants Google and Facebook announced that they would be taking steps to prevent such websites from making use of their services. This has sparked a ream of discussion about the ‘responsibility’ of other online platforms to take steps to prevent the spread of these so-called ‘fake news’ sites on their networks.
Here, probably for the first time I can remember, I find myself in agreement with what Zuckerberg has (reportedly) said in response:
The suggestion that online platforms should unilaterally act to restrict ‘fake news’ websites is one of the biggest threats to free speech to face the Internet.
Those are my words, not his – just to be clear. Click through to see what he actually said (well, as long as the source can be trusted).
It is unclear exactly what ‘fake news’ is supposed to be. Some sites ‘outing’ publishers that engage in this sort of activity have included The Onion in their lists, which in and of itself demonstrates the problem of singling out websites that publish ‘fake’ news.
Where is the line drawn between ‘fake news’ and satire?
At what point do factually incorrect articles become ‘fake news’?
At what point do ‘trade puffs’ and campaign claims become ‘fake news’ rather than just passionate advocacy?
If the defining factor is intent, rather than content, who makes that determination, and based on what set of values?
It is not the job of online platforms to make determinations on the truth of the articles that their users share, or of the content that they themselves publish. There is no moral obligation or imperative on them to editorialise and ensure that only particular messages reach their networks. In fact, it is arguably the complete opposite: they have an ethical obligation to ensure that they do not interfere in the free speech of their users, and the free dissemination of ideas and information, irrespective of their own views on the ‘truth’ or otherwise of them.
The real challenge to free speech isn’t fake news; it’s the suggestion that we should ban it.
Misinformation is a real issue, and the culture of lazy reliance facilitated by networks such as Facebook and Google – where any article with a catchy headline is taken at face value – is a huge problem. But the answer is not for these networks to take things into their own hands and decide which set of truths is acceptable for us to see, and which is not.
We have reached a position where half of our societies are voting one way, whilst the other half can’t believe that anybody would ever make such a decision, precisely because we have retreated into our own echo chambers – both in the physical world as well as the virtual. The solution to the political struggles we on the left face is not to further restrict the gamut of speech that is open to us in our shared online spaces, or to expect service providers to step up and act as over-arching publishers; it is to get out there and effectively challenge those ideas with people that we would normally avoid engaging with. Curtailing the free speech of others through the arbitrary definition of ‘fake news’ is not only not the answer; it is a terrifying prospect for the very freedoms that we are arguing to protect.
The real challenge to free speech isn’t fake news; it’s the suggestion that we should ban it.
Disclaimer: It should go without saying that these are my views, and not necessarily those of WordPress.com, or anybody else.
In the past I’ve mentioned how I have streamlined a lot of the everyday tasks I have to do through the use of various keyboard-centric apps such as Alfred and Keyboard Maestro. One of the linchpins of my setup is the use of something called the ‘Hyper Key’, which is essentially re-mapping the fairly useless Caps Lock to act as a super-function key, letting you trigger all sorts of shortcuts and different macros.
This particular configuration relied on two bits of software, called Karabiner and Seil. However, earlier today I was forced into upgrading from OS X El Capitan to macOS Sierra to fix an issue I was having with some other apps. Of course, upon upgrading, I discovered that the Karabiner/Seil combination no longer functioned properly, and there was no real solution using the same tools. Sigh.
After a bit of digging, I discovered a way to re-enable the same functionality, albeit with a bit of jiggery pokery. Here’s how I did it:
Install Hammerspoon. This is a piece of software that allows for automation, acting as an interface between a scripting language called Lua and the OS itself.
Install Karabiner Elements. This is a version of Karabiner that works with macOS Sierra. The latest DMG is available here.
Under the Keyboard pane in System Preferences, change the Caps Lock action to ‘None’, to allow Karabiner to control it.
Set up Karabiner Elements to map caps_lock to F18. You can also do this by adding a config file at ~/.karabiner.d/configuration/karabiner.json, but it’s so easy to do manually that it seems overkill to go that route.
How Karabiner Elements should look
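For reference, the file-based route looks something like the sketch below. Treat it as illustrative only: the karabiner.json schema has changed between Karabiner Elements versions, so check the current documentation before copying it.

```json
{
  "profiles": [
    {
      "name": "Default",
      "selected": true,
      "simple_modifications": {
        "caps_lock": "f18"
      }
    }
  ]
}
```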
Now, load a Lua config file into Hammerspoon by copying it to ~/.hammerspoon/init.lua – see below for examples.
The config file I am using is available over on GitHub here. It re-enables the Hyper Key function for all a–z and 0–9 keys, as well as a couple of miscellaneous ones that I use, and it should be self-explanatory how to add new ones.
One thing to watch out for is that any hotkeys set up in Alfred to launch applications with the Hyper Key no longer seem to work, so one way to get them launching again is to add a specific mapping in the init.lua configuration. Here’s what I’ve done to get 1Password to launch with CAPS+O:
-- Code to launch single apps that Alfred used to handle.
-- Hat-Tip: https://gist.github.com/ttscoff/cce98a711b5476166792d5e6f1ac5907
launch = function(appname)
  hs.application.launchOrFocus(appname)
  k.triggered = true
end
-- Keybindings for specific single apps: {key, application name}.
singleapps = {
  {'o', '1Password 6'},
}
-- Bind each entry inside the Hyper Key modal ('k' in the linked config).
for i, app in ipairs(singleapps) do
  k:bind({}, app[1], function() launch(app[2]) end)
end
As you can see from the above, I obviously didn’t write the code to make all of this work. Credit for that goes to a combination of ttscoff and prenagha; I just tweaked it for my own simple use case and wrote this up in the hope that others might find it easy to follow.