Brian Willis

Observations After a Week With an Apple Watch

I’d forgotten what a giant pain in the ass it is to have a chunk of metal strapped to your body all day. Your wrist’s centre of gravity shifts. Typing is harder. The watch catches door frames as you walk through them. I knew there was a reason that I gave up wearing a watch 10 years ago, and I’ve had to rediscover it the hard way.

Apple Pay is like something out of science fiction. It doesn’t feel like it should work. They let me leave the supermarket with my groceries, but I half expected them to chase me out the door.

The fitness tracking stuff is as compelling as promised. I find myself walking more to close the activity rings. It remains to be seen how long this will stick, but I think it’s more than a gimmick. In particular, heart rate tracking is way more frequent than I’d anticipated. I’m seeing readings every 5-10 minutes, with no meaningful hit to battery life.

Speaking of battery life, I took the watch off the charger this morning at 6:30, I’m typing this at noon, and I’m at 94%. Battery life like this is practically unheard of in the Apple ecosystem.

The wrist detection is unbelievably accurate. Taking the watch off immediately locks it, and raising my wrist to check the time has only failed once.

Third party apps are mostly useless. There’s nothing here that I’d use every day. This is a real concern for the future of the product. To succeed, the watch needs to be useful and necessary, and at the moment it’s just a fun toy for early adopters.

On Humanity

Seth Godin, writing at his blog:

If the boss can write it down, she can find someone cheaper than you to do the work. Probably a robot. The best jobs are jobs where we don’t await instructions, where using good judgment and taking initiative are far more important than obedience.

…but what happens when judgement and initiative become something we can automate too?

I’ve been mulling this over since C.G.P. Grey published his video Humans Need Not Apply. I’m glossing over some of the finer points, but his central argument is that the future of work looks pretty grim, with software and robotics taking over jobs that we’ve traditionally thought only people were capable of.

He’s absolutely right, by the way. I’m a Software Developer, which means I unemploy people for a living. If a piece of work can be automated, it eventually will be, and when that happens yet another person ends up out of work. There’s no limit to this either—people aren’t as special as they think they are. Right now, almost all of us have jobs that can be replaced, in whole or in part, by a machine. This is going to be a problem when we get to the point that jobs are being automated away at a faster pace than new jobs are being created.

So what does a person do to maximise the chance that they’ll stay employable?

A person’s biggest asset in the face of automation is their humanity. It’s the one thing robotics can’t compete on. Any line of work that’s humanised remains valuable work when done by a person.

Consider a stay in hospital. I can see patients accepting a robot surgeon. Fewer mistakes, fewer side effects, faster surgeries, all positive things. But what happens in the recovery ward? It’s one thing to be operated on by a machine when you’re unconscious and unaware of the experience, but can we really expect people to accept care from robot nurses? Nursing is a line of work where humanity counts, and between the uncanny valley, and our general desire for authenticity, I don’t see patients reacting well to being taken care of by Alice from The Jetsons.

We can see inklings of this effect in other industries too. High-end mechanical watches are truly terrible at timekeeping, with even the best models on the market drifting a few seconds each day. By comparison, a quartz watch drifts by maybe half a second a day, and smart watches sync regularly with time servers, effectively eliminating drift. So why would someone buy a mechanical watch? Because its value comes from being handmade by a person, following traditions that are in some cases centuries old. The value of a Portugieser comes from the fact that it didn’t roll off an assembly line to be slapped together by machines.

It’s not just humanity that gives us an edge over automation—it’s authenticity. It’s easy to write off hipster culture as some sort of quirky longing for a world that never really existed, but at its core hipsterdom rose from a lack of authenticity in the world. It was a whole social movement that said the plastic and Formica and corporate sterility of the world was getting to be too much, and that we needed to reclaim some of what we’d lost in our pursuit of efficiency. People care deeply about the substance of the things they buy, and how those things make them feel. Authenticity is why barista-made coffee can be sold for more than coffee out of a machine, why parents consider their children’s artwork priceless, and why Emily Howell doesn’t have many fans.

I know these are transient and superficial reasons to value one kind of work over another, and after writing this I’m having difficulty reconciling my desire to remain employable with the fact that no one describes the software I make as artisanal or hand crafted. I’m not trying to say that we’ll solve the problem of automation and find work for billions of people by creating goods that are meaningfully worse. Instead, I’m suggesting that the economy oftentimes values things in counterintuitive ways, and I think because of that there’s hope for us.

Predictions for 2016

Alright team, this is the fourth time that I’ve done this, so you all know the drill. Go and read last year’s predictions to see how I did (hint: not well), and then let’s dive into what’s going to happen in 2016.

Last year I told you we’d soon see a fatal accident involving a self-driving car. When I wrote that, I was thinking it’d be a Google car, but thanks to Tesla’s batshit crazy autopilot mode, we now know they’ll be the first to kill someone. I understand that self-driving cars are going to be a big part of our future whether we like it or not, but in the race to market, corners are getting cut, testing isn’t as thorough as it needs to be, and drivers need to be retrained for a generation of technology that they don’t really understand. I write software for a living, and I’m telling you that software development is still in the “pouring raw sewage into our drinking water and wondering why everyone has cholera” stage of human progress[1]. We have no idea if this stuff is going to work, and I’m willing to bet that there are edge cases (rain, hail, snow, fog, collisions, roadworks, unsealed roads, startups trying to make roads out of glass[2], etc.) that are hard to test thoroughly but that every driver is expected to handle. Don’t get me wrong—when this tech gets good it’ll be a great day. Human drivers are incredibly unreliable and kill each other in the thousands every year. Putting an end to that will be a net win for humanity, but there are going to be casualties along the way.

In consumer tech it’s going to be same old, same old. Skylake Macs, another round of Chromebooks, a refreshed Surface whatever. Meh. At this point the computer, tablet, and smartphone are mostly solved problems. What’s left is iterating and refining, which is great because Apple’s software quality has been all over the place (photos—excellent; music—train wreck), Microsoft’s UI design has been all over the place (why does Windows have eight types of context menu?), and Google’s whole product strategy has been all over the place (seriously, why do they make two competing operating systems? and why do they make a programming language that can’t build stuff for either one?). I know it’s not glamorous, but everybody needs to slow down, take a chill pill, and spend some time cleaning up their respective messes.

As for wearables, this year is all about the quantified self, with as many new sensors as possible being crammed into devices. Blood oxygen, blood pressure, stress level, sweat gland activity, ambient carbon dioxide level, ambient temperature, you name it—if it can be measured, your watch will start recording it. That was a surprising lesson from the Apple Watch: once you start tracking and measuring these things, people pay attention to them even if they didn’t care before. Filling in those circles becomes a daily habit.

There’s been a rumour floating around that the next iPhone will drop the 3.5mm headphone jack and do everything through the Lightning port. I think that given a long enough timeline we’ll see smartphones without any ports at all, and this is just a natural extension of that. So will iPhone headphones go wireless, or will they plug into the Lightning port? My money is on Lightning. Wireless Bluetooth headphones have never been great; they need batteries (adding cost), and they need ports of their own so they can be charged (adding size and weight).

Lastly, let’s talk about software development. I don’t know how many times I have to say this, but making another social network is a dumb idea (I’m looking at you, Beme). Nevertheless, people keep trying to make them. Mobile app development looks to be a very crowded market too. There are still opportunities there, but if you strike gold you’ll very quickly be surrounded by imitators (case in point: Flappy Bird). So where’s the future of software development? I see it as a mix of the web on one side, and cutting-edge hardware like VR and 3D printers on the other. The web hasn’t gone anywhere, and viable businesses keep springing up to fill niches I wasn’t aware existed. All that regular recurring revenue makes for a nice, sustainable business model. Businesses building themselves around cutting-edge hardware are of course riskier, but the successful companies in that sector will push humanity forward, and make quite a bit of money in the process.

  1. Speaking of which, you should really go watch CGP Grey’s video on plagues.

  2. No really, that’s a thing.

Why You Can’t Make the Single Responsibility Principle Work

Saying a class should “only have one reason to change” is a pretty terrible explanation of what the single responsibility principle is trying to achieve. I’ve encountered one-line, single-variable classes that could change for a dozen different reasons. What counts as a reason? How granular do you go? I don’t have good answers to these questions.
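To make that concrete, here’s a hypothetical example of the kind of class I mean. The class, its name, and its value are all invented for illustration:

    // One line of state, yet plenty of potential "reasons to change":
    // the legislation changes, we expand into a region with a different rate,
    // finance wants more decimal places, someone decides the value belongs in
    // configuration rather than code, and so on.
    public static class SalesTax
    {
        public const decimal Rate = 0.10m;
    }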

What I do know is that despite having a definition that I’ve never really been comfortable with, the way I was writing software before the SRP sucked. Huge monolithic classes that went on for hundreds of lines and required savant-like intelligence to understand were a regular part of my work day. Now I get to glide across the surface of a project, diving deep only when I need to. It’s great, and frees up a lot of cognitive space.

There are still times where I can’t make it work though, and I think I’ve managed to distill these failures down to three big reasons.

1. Your process doesn’t support it.

It’s Friday afternoon. You’re working your way through building something that’s taken up the bulk of your week. It has to be finished by the close of business. You’ve produced a finely-polished piece of code. No rough edges, solves the problem elegantly, something that’ll impress your peer reviewer and give you a feeling of smug satisfaction as the first beer of Friday night begins to take effect.

…and then you spot it.

There’s an edge case that you’ve missed. Dumbass. Haven’t you ever heard of an off-by-one error? Didn’t you figure this stuff out in first year comp sci? Obviously not, because your hoity-toity “finely-polished” holier-than-thou solution doesn’t work, you talentless hack.

Pause. Breathe. Repress the voice in the back of your head.

OK. We can fix this.

There are two options (sketched in code just after this list):

  1. Change the architecture. Instead of using one class use two, both implementing the same interface.

  2. Change the implementation. Slap another method on the bottom of the original class, and call it a day.
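
To make the trade-off concrete, here’s a rough C# sketch of the two approaches. The invoice example and every name in it are invented for illustration; your real code will look different.

    using System.Linq;

    // Shared setup for the sketch: an invoice is just a list of line item amounts.
    public class Invoice
    {
        public decimal[] LineItems { get; set; }
    }

    // Option 1: change the architecture. Two classes implement the same interface,
    // so the edge case gets its own type and callers choose between them.
    public interface IInvoiceCalculator
    {
        decimal Total(Invoice invoice);
    }

    public class StandardInvoiceCalculator : IInvoiceCalculator
    {
        public decimal Total(Invoice invoice) => invoice.LineItems.Sum();
    }

    public class EmptyInvoiceCalculator : IInvoiceCalculator
    {
        // The edge case you just spotted: an invoice with no line items.
        public decimal Total(Invoice invoice) => 0m;
    }

    // Option 2: change the implementation. The original class grows a guard clause
    // (or an extra method) and the published design stays exactly as it was.
    public class InvoiceCalculator
    {
        public decimal Total(Invoice invoice) =>
            invoice.LineItems.Length == 0 ? 0m : invoice.LineItems.Sum();
    }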

Ostensibly, yes, the first solution is the correct one. You’d be separating out responsibilities properly there, and had you anticipated this you’d have used this design in your original plans.

But there’s a problem. Introducing new classes and changing your design like this will require sign-off from the architecture team, and that will mean a two-day turnaround. You don’t have that kind of time. So now we’re talking about missing a deadline (i.e. being bad at your job in a way that others will notice), or delivering a sloppy solution (i.e. also being bad at your job, but in a way that you’ll probably get away with).

What happens next is left as an exercise for the reader.

2. Your tools don’t support it.

Back in the dim dark ages before the enlightenment, Joel Spolsky told us all to use the best tools money can buy, which for most of us meant comically enormous displays attached to the early 21st century version of a supercomputer. This was an awful lot of fun to do on the company credit card.

The thing is though, the software we use hasn’t really caught up. Take a look at this:

Visual Studio displaying Hello World in C#.

That’s Hello World in C#, on a (tiny by today’s standards) 1280×1024 display. Notice the insane amount of white space? We should be doing something better with that.

A byproduct of the SRP is that you end up creating considerably more files, each with significantly less in them. Working with multiple files displayed full screen ends up increasing your cognitive load, forcing you to store more in your short term memory every time you switch between them, which defeats the purpose of having a big display in the first place. Yes, you could split panes and shuffle things around manually, but that approach is slow and assumes you’re prepared to pay a high setup cost to view a class that you might only spend a few seconds reading through.

What we really need is an IDE that capitalises on all that white space automatically. I’m picturing classes printed on a deck of cards here, where drilling into a class causes the previous one to slide down into any available white space, instead of vanishing into the background the way they do now. You’d be able to see more of your work at once, it would be harder to lose context, and you could travel back through the call stack just by glancing your eyes downward.

While we’re on the subject, when did we all decide that one class = one file? Are files even a sensible way of segregating classes any more? Many languages do support writing code with more than one class to a file, but few developers actually embrace that idea. When comparing versions of code where responsibilities have been moved around, it’s often difficult to understand how a change works when a piece of functionality has crossed the barrier from one file to another. Traditional deltas don’t really work well in these situations. I’m not really sure what the fix is here, but I’ve got this feeling stuck in the back of my head that we need tools which don’t force you to bind your architecture quite so closely to the filesystem.
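
For what it’s worth, nothing in the language stops us. Here’s a contrived C# sketch, with invented names, of several small, closely related classes sharing a single file:

    // OrderEvents.cs: one file, three tiny related classes.
    // The compiler is fine with this; our tools and habits are what
    // push us towards one class per file.
    namespace Shop.Events
    {
        public class OrderPlaced
        {
            public int OrderId { get; set; }
        }

        public class OrderShipped
        {
            public int OrderId { get; set; }
        }

        public class OrderCancelled
        {
            public int OrderId { get; set; }
            public string Reason { get; set; }
        }
    }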

3. You’ve taken it way too far.

It should not take an interface, three classes, two DLLs, and (so help me) reflection to load a record from a database.

Yes, I have encountered this. I had to sit quietly and think about my career choices after doing so.
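
For contrast, here’s roughly what the boring version can look like, using plain ADO.NET and a made-up Customers table. I’m not claiming this is the design that codebase needed, just that the simple end of the spectrum exists:

    using System.Data.SqlClient;

    public class CustomerRepository
    {
        private readonly string connectionString;

        public CustomerRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // Load a single customer's name by ID. One class, no interfaces,
        // no reflection, no extra assemblies.
        public string GetCustomerName(int customerId)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name FROM Customers WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }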

It’s natural when falling in love with a new tool to embrace it in all its ways. My high school computer studies teacher called this the “Microsoft Publisher Effect”, where a friend of yours figures out how to work their desktop publishing software and you start receiving party invitations that use seven different borders and twelve different typefaces. People have a tendency to go a little nutty when they come across a tool that’s new and empowering, and I’ve definitely seen that happen with the SRP.

Interestingly, this seems to affect the experienced coders more than the newbies. Almost as if after years of monolithic classes the pendulum swings back too far, or maybe it’s just that the whippersnappers haven’t had the opportunity to build bad habits yet.

Whichever camp you find yourself in, it’s important to remember that the SRP is all about managing and minimising complexity. If your work becomes harder to understand, then you’re doing a disservice to the next person who has to work with it.

Predictions for 2015

This glimpse into the future is an annual tradition. It usually consists of a list of things that I actually think will happen in the upcoming year, followed by a prediction that Half Life 3 will be released. So far Gabe Newell has been dragging down my batting average, but one of these years I’ll be right. Looking at my previous attempts, I have a less than 50% success rate of accurately predicting things, which still strikes me as better than random chance, though maybe that’s just my ego talking. If you’re feeling really enthusiastic, go read last year’s predictions and see how I did, otherwise let’s get started with the crystal ball gazing.

The Apple Watch will not sell particularly well. Even the small model is too big, and without native apps its functionality will be pretty limited. Don’t take this to mean that I think the watch will flop—it won’t; it’s just going to take a few years and a few iterations before it’s a must-have product for a big chunk of the population in the same way that the smartphone is. People forget that it took the iPod three or four years to become a household name, and its time at the top of the pile before being cannibalised by smartphones was about half that long. Apple has done so much right with the watch, and this is an excellent first cut, but the commentary I’ve seen online misses the fact that there’s so much more to do.

Google’s dorky-looking self-driving cars will become a part of everyday life. We won’t see a fatal crash occur in 2015, but rest assured that day is coming. Public acceptance will grow slowly. I don’t think consumers will line up to buy a car that looks like an obese panda bear, but they’ll grow accustomed to driving alongside them. It’ll be interesting to see how Detroit responds to self-driving cars. We’ve seen very little from them on this subject, but they do seem resistant to new ideas and new ways of doing business.

Last year I predicted that Microsoft would put a guy in the CEO’s chair who had an MBA and didn’t have an engineering background. Instead they appointed a guy who has an MBA and an engineering background. Well played, Microsoft. In all seriousness, I think Satya Nadella was an astute choice, but I have yet to see a compelling vision of Microsoft’s future from him. So far we can say he’s not Steve Ballmer, but that’s not enough. While open-sourcing big chunks of .NET and launching Office on iOS were nice and all, neither action made the company a meaningful amount of money. Outside of Azure, Microsoft is still coasting along on its cash cows, and this needs to change before Nadella will be seen as a success.

In the tech startup scene, it’s going to be a cynical year. Competition between Uber and Lyft will get even more ruthless, with more ethical lines getting crossed[1]. Facebook and Twitter will continue to become more hostile towards third-party developers, users, and each other. Like I said—cynical.

Social apps like Tiiny aren’t really finding an audience anymore. We’ve seen a land grab in the last half-decade with products like Facebook and Twitter whose solutions exist entirely in software. Most of the problems in that space are solved[2], so the kind of startups we’ll now see will have a presence that stretches out into the real world. I’m talking about companies like FirstMile and TaskRabbit who have actual human employees who’ll show up at your bricks-and-mortar home to perform real services that exist on more than just a hard drive.

Finally, a personal prediction: I will write more. Seriously, last year I published three posts. That’s just abysmal. It’s difficult to identify the root cause, but part of it has been my inability to make time to write, and part has been a desire to keep the quality bar high and not publish stuff that I’ll regret later. Either way, expect more of my rambling in your feed reader.

  1. I’m waiting for the day when surge pricing is met with a DDoS attack.

  2. Perhaps “claimed” would be a better word than “solved”.