Brian Willis

Why You Can’t Make the Single Responsibility Principle Work

Saying a class should “only have one reason to change” is a pretty terrible explanation for what the single responsibility principle is trying to achieve. I’ve encountered one-line single-variable classes that could be caused to change for a dozen different reasons. What counts as a reason? How granular do you go? I don’t have good answers to these questions.

What I do know is that despite having a definition that I’ve never really been comfortable with, the way I was writing software before the SRP sucked. Huge monolithic classes that went on for hundreds of lines and required savant-like intelligence to understand were a regular part of my work day. Now I get to glide across the surface of a project, diving deep only when I need to. It’s great, and frees up a lot of cognitive space.

There are still times when I can't make it work, though, and I think I've managed to distill these failures down to three big reasons.

1. Your process doesn’t support it.

It’s Friday afternoon. You’re working your way through building something that’s taken up the bulk of your week. It has to be finished by the close of business. You’ve produced a finely-polished piece of code. No rough edges, solves the problem elegantly, something that’ll impress your peer reviewer and give you a feeling of smug satisfaction as the first beer of Friday night begins to take effect.

…and then you spot it.

There’s an edge case that you’ve missed. Dumbass. Haven’t you ever heard of an off-by-one error? Didn’t you figure this stuff out in first year comp sci? Obviously not, because your hoity-toity “finely-polished” holier-than-thou solution doesn’t work, you talentless hack.

Pause. Breathe. Repress the voice in the back of your head.

OK. We can fix this.

There are two options:

  1. Change the architecture. Instead of using one class, use two, both implementing the same interface.

  2. Change the implementation. Slap another method on the bottom of the original class, and call it a day.

Ostensibly, yes, the first solution is the correct one. You’d be separating out responsibilities properly there, and had you anticipated this you’d have used this design in your original plans.
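To make that concrete, here's a minimal sketch of option 1 in C#. The names are hypothetical (your Friday afternoon is presumably not about discounts), but the shape is the point: the polished happy path and the late-breaking edge case each get a small class behind a shared interface.

    // Hypothetical example: the original logic and the newly
    // discovered edge case both implement the same interface.
    public interface IDiscountCalculator
    {
        decimal Calculate(decimal orderTotal);
    }

    public class StandardDiscountCalculator : IDiscountCalculator
    {
        public decimal Calculate(decimal orderTotal)
        {
            // The finely-polished happy path you spent all week on.
            return orderTotal * 0.10m;
        }
    }

    public class EmptyOrderDiscountCalculator : IDiscountCalculator
    {
        public decimal Calculate(decimal orderTotal)
        {
            // The Friday-afternoon edge case, isolated where it can't
            // complicate the original class.
            return 0m;
        }
    }

Callers depend on the interface and never need to know the other implementation exists, which is exactly the property that makes this the "correct" fix.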

But there’s a problem. Introducing new classes and changing your design like this will require sign-off from the architecture team, and that comes with a two-day turnaround. You don’t have that kind of time. So now we’re talking about missing a deadline (i.e. being bad at your job in a way that others will notice), or delivering a sloppy solution (i.e. also being bad at your job, but in a way that you’ll probably get away with).

What happens next is left as an exercise for the reader.

2. Your tools don’t support it.

Back in the dim dark ages before the enlightenment, Joel Spolsky told us all to use the best tools money can buy, which for most of us meant comically enormous displays attached to the early 21st century version of a supercomputer. This was an awful lot of fun to do on the company credit card.

The thing is though, the software we use hasn’t really caught up. Take a look at this:

[Screenshot of Visual Studio displaying Hello World in C#.]

That’s Hello World in C#, on a (tiny by today’s standards) 1280×1024 display. Notice the insane amount of white space? We should be doing something better with that.

A byproduct of the SRP is that you end up creating considerably more files, each with significantly less in them. Working with multiple files displayed full screen increases your cognitive load, forcing you to hold more in your short-term memory every time you switch between them, which defeats the purpose of having a big display in the first place. Yes, you could split panes and shuffle things around manually, but that approach is slow and assumes you’re prepared to pay a high setup cost to view a class that you might only spend a few seconds reading through.

What we really need is an IDE that capitalises on all that white space automatically. I’m picturing classes printed on a deck of cards here, where drilling into a class causes the previous one to slide down into any available white space, instead of vanishing into the background the way they do now. You’d be able to see more of your work at once, it would be harder to lose context, and you could travel back through the call stack just by glancing downward.

While we’re on the subject, when did we all decide that one class = one file? Are files even a sensible way of segregating classes any more? Many languages do support writing code with more than one class to a file, but few developers actually embrace that idea. When comparing versions of code where responsibilities have been moved around, it’s often difficult to understand how a change works when a piece of functionality has crossed the barrier from one file to another. Traditional deltas don’t really work well in these situations. I’m not really sure what the fix is here, but I’ve got this feeling stuck in the back of my head that we need tools which don’t force you to bind your architecture quite so closely to the filesystem.
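For what it's worth, C# is one of those languages: nothing stops you putting several small, closely related classes in a single file. A made-up example:

    // OrderPricing.cs: one file holding two tiny classes that belong together.
    namespace Shop
    {
        public class Money
        {
            public decimal Amount { get; }

            public Money(decimal amount)
            {
                Amount = amount;
            }
        }

        public class LineItem
        {
            public Money Price { get; }
            public int Quantity { get; }

            public LineItem(Money price, int quantity)
            {
                Price = price;
                Quantity = quantity;
            }
        }
    }

The trouble starts when Money outgrows the file and moves out: the diff shows a deletion here and an addition somewhere else, and the reviewer is left to reassemble the story by hand.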

3. You’ve taken it way too far.

It should not take an interface, three classes, two DLLs, and (so help me) reflection to load a record from a database.

Yes, I have encountered this. I had to sit quietly and think about my career choices after doing so.
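For contrast, here's roughly what the boring version looks like: one class, one method, plain ADO.NET. The table and column names are made up, but notice what's absent.

    using System.Data.SqlClient;

    public class CustomerRepository
    {
        private readonly string connectionString;

        public CustomerRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        // Loads a single value for a single record. No interface,
        // no reflection, no second DLL.
        public string LoadCustomerName(int id)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name FROM Customers WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", id);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }

If a second data source ever shows up, that's the moment to extract an interface. Not before.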

It’s natural when falling in love with a new tool to embrace it in all its ways. My high school computer studies teacher called this the “Microsoft Publisher Effect”, where a friend of yours figures out how to work their desktop publishing software and you start receiving party invitations that use seven different borders and twelve different typefaces. People have a tendency to go a little nutty when they come across a tool that’s new and empowering, and I’ve definitely seen that happen with the SRP.

Interestingly, this seems to affect the experienced coders more than the newbies. Almost as if after years of monolithic classes the pendulum swings back too far, or maybe it’s just that the whippersnappers haven’t had the opportunity to build bad habits yet.

Whichever camp you find yourself in, it’s important to remember that the SRP is all about managing and minimising complexity. If your work becomes harder to understand, then you’re doing a disservice to the next person who has to work with it.

Predictions for 2015

This glimpse into the future is an annual tradition. It usually consists of a list of things that I actually think will happen in the upcoming year, followed by a prediction that Half-Life 3 will be released. So far Gabe Newell has been dragging down my batting average, but one of these years I’ll be right. Looking at my previous attempts, I have a less than 50% success rate at accurately predicting things, which still strikes me as better than random chance, though maybe that’s just my ego talking. If you’re feeling really enthusiastic, go read last year’s predictions and see how I did; otherwise, let’s get started with the crystal ball gazing.

The Apple Watch will not sell particularly well. Even the small model is too big, and without native apps its functionality will be pretty limited. Don’t take this to mean that I think the watch will flop—it won’t, it’s just going to take a few years and a few iterations before it’s a must-have product for a big chunk of the population in the same way that the smartphone is. People forget that it took the iPod three or four years to become a household name, and its time at the top of the pile before being cannibalised by smartphones was about half that long. Apple has done so much right with the watch; this is an excellent first cut, but the commentary that I’ve seen online misses the fact that there’s so much more to do.

Google’s dorky-looking self-driving cars will become a part of everyday life. We won’t see a fatal crash occur in 2015, but rest assured that day is coming. Public acceptance will grow slowly. I don’t think consumers will line up to buy a car that looks like an obese panda bear, but they’ll grow accustomed to driving alongside them. It’ll be interesting to see how Detroit responds to self-driving cars. We’ve seen very little from them on this subject, but they do seem resistant to new ideas and new ways of doing business.

Last year I predicted that Microsoft would put a guy in the CEO’s chair who had an MBA and didn’t have an engineering background. Instead they appointed a guy who has an MBA and an engineering background. Well played, Microsoft. In all seriousness I think Satya Nadella was an astute choice, but I have yet to see a compelling vision of Microsoft’s future from him. So far we can say he’s not Steve Ballmer, but that’s not enough. While open sourcing big chunks of .NET and launching Office on iOS were nice and all, neither action made the company a meaningful amount of money. Outside of Azure, Microsoft is still coasting along on its cash cows, and this needs to change before Nadella will be seen as a success.

In the tech startup scene, it’s going to be a cynical year. Competition between Uber and Lyft will get even more ruthless, with more ethical lines getting crossed[1]. Facebook and Twitter will continue to be more hostile towards third-party developers, users, and each other. Like I said—cynical.

Social apps like Tiiny aren’t really finding an audience anymore. We’ve seen a land grab in the last half-decade with products like Facebook and Twitter whose solutions exist entirely in software. Most of the problems in that space are solved[2], so the kind of startups we’ll now see will have a presence that stretches out into the real world. I’m talking about companies like FirstMile and TaskRabbit who have actual human employees who’ll show up at your brick-and-mortar home to perform real services that exist on more than just a hard drive.

Finally, a personal prediction: I will write more. Seriously, last year I published three posts. That’s just abysmal. It’s difficult to identify the root cause, but part of it has been my inability to make time to write, and part has been a desire to keep the quality bar high and not publish stuff that I’ll regret later. Either way, expect more of my rambling in your feed reader.

  [1] I’m waiting for the day when surge pricing is met with a DDoS attack.

  [2] Perhaps “claimed” would be a better word than “solved”.

You Don’t Get to Set the Terms

A few months ago, someone I used to work for died. We’d fallen out of touch, as people tend to do given enough inertia and time. She had motor neurone disease, and over the course of a few months it took her ability to talk, and then her ability to function, and then it took her life.

I had no idea she was even sick.

However, thanks to regular status updates on Facebook, many people at her well-attended funeral did.

That, amongst other things, was the straw that broke the camel’s back and brought me back to Facebook. I created an account a few days ago.

This is actually my second time on Facebook. I signed up years ago, but deleted my account after a couple of weeks. I left because the site struck me as a place that turned procrastination into a group activity, and it didn’t make my life better in any meaningful way. It also became another inbox to check, with all the sense of social obligation that goes along with that.

Over the years since, when I read about Facebook’s creepy social experiments and questionable business practices, I’d roll my eyes and feel good about myself for being above all that. I became like one of those smug people who don’t own a television, confident in my own correct choices, and oblivious to how irritating I was to everyone else.

I’m starting to learn that I don’t really get to set the terms on which my relationships operate. If a friend wants to invite fifty people to a party using Facebook invites, it’s a generous and forgiving person who goes out of their way to invite me over email—and it demonstrates a sense of entitlement on my part to demand they go out of their way to do that. It’s gotten to the point where opting out of Facebook is much like refusing to own a phone. There are some people who might be able to pull it off, but I no longer can.

So far the whole Facebook experience hasn’t been great, but I’m not hating it. It seems like once you’ve signed up the default pattern of events is to have a few moments of nostalgia with everyone you lost touch with from high school, then curate your profile to pick the music and movies that best identify you as a person (i.e. provide targeting information for the Facebook advertising team), and then finally rifle through every snapshot you’ve ever posed for to find the very best one to use as your profile picture. Seriously, looking at some of these profile pictures you’d think my friends and family were the most photogenic people on earth.

So convince me it was worth the trouble and go follow me on Facebook.

The World’s Slowest Live Blogger Reviews the Google I/O 2014 Keynote

So I watched the video of Google’s I/O keynote, which has been sanitised to exclude protestors and failing demos.

It opens with some sort of Incredible Machine-inspired contraption that had very little to do with Google, developers, or the keynote itself. I’m kind of baffled as to why they thought this would be a good idea. While I’m at it, I’ll also throw the techno-backed intro video into the cute-but-pointless pile. When you make a video that’s supposed to highlight how awesome Android is, it’s probably not a good idea to give significant screen time to Monument Valley and Flappy Bird, two games that got their start as iOS exclusives.

Thankfully, the presentation gets a lot better from there.

It’s cool to see Google highlighting the number of women in attendance. After last week’s publication of Yahoo’s diversity stats, I’m sure we’ll see more tech companies showing off these kinds of numbers.

Material design looks beautiful, and I’m glad to see Google actually settle on a single set of design standards. The demos look clear and futuristic, if a little Windows Phone-like. Animated touch feedback on standard UI controls is the standout. You’d think it’d be gimmicky, but having buttons ripple and checkboxes light up when tapped really does look good. iOS’s super-flat borderless buttons look sterile and joyless by comparison.

I enjoyed the demo of personal unlocking, where a phone can automatically unlock without a passcode when it detects the presence of a paired Bluetooth watch nearby. My big concerns here: how does the device determine whether it’s in a trusted environment, and will users understand the difference between the times their phones ask for passcodes and the times they don’t? The presenters made reference to detection using locations, Bluetooth devices, and voice prints. I’m curious to see how that’ll work in practice.

The demo of Chrome tabs displayed in the recents view as if they were individual apps looks great. This is yet another example of Google embracing the web while Apple begrudgingly puts up with it. On iOS the web gets its own little sandboxed corner, whereas on Android (at least from a UI perspective) web apps look to be first class citizens.

There’s a demo of the Unreal Engine running on Android, but they made no reference to what hardware the demo was running on; it could have been an x86-based supercomputer for all we know. In comparison to Apple’s demo of Metal at WWDC, this all seemed a bit suspicious. Having advanced gaming engines run on your platform is great, but it’s all for nothing if the hardware support isn’t there.

There were a few shots across the bow at Apple, aimed squarely at Tim Cook’s remarks about Android at WWDC. “Custom keyboards and widgets—those things happened in Android four to five years ago!”, cue rapturous applause from the crowd. Though “we take security very seriously”, followed by “less than a half a percent of users ever run into any malware issues”, seemed a bit defensive.

They announced an SDK for Android Wear, and a few watches to go with it. Twenty bucks says that Apple has no third-party developer support for the first year of the iWatch (assuming that they announce one, which many people seem to be treating as fact). The LG G Watch and the Samsung Gear Live are a mixture of banal and ugly. The Moto 360 doesn’t look terrible, but it doesn’t look great either.

From here it was demos of Android Auto and Android TV. While all of this looks lovely and vaguely useful, I want to highlight one thing that represents the biggest difference between developing for Android and developing for Apple platforms.

When you’re developing for Apple devices, you make a Mac app, an iPhone app, and an iPad app; and there’s an expectation that you’ll charge for all three (or at least charge separately for Mac/iOS versions). On Android, Google is asking developers to make a single app that works on watches, phones, tablets, Chromebook laptops, cars, and TVs. All for one price.

That’s a big ask, and I’d argue that it’s the central reason why the third-party app ecosystem on Android tablets is so lacklustre. If developers don’t have a financial incentive to make great apps for every form factor, you’ll find that the only apps that do get made are ones by companies that have alternative financial incentives (Facebook, Yelp, et al.). In order for Android to have a worthwhile app ecosystem, Android users will have to start accepting higher prices for apps that run in all places (not likely), or Google has got to start providing tools to dramatically reduce the cost and complexity of targeting different form factors. Material design goes a small way toward this, but it’s not enough.

Congratulations on making it this far. The presentation wraps up with some new tools available for Google’s cloud infrastructure, and some incredibly uninteresting stuff about big data. The last half hour of the video can be comfortably skipped.

So will I be switching to Android? Probably not, but for the first time Android is a platform that looks like something I could use and love, rather than use and tolerate. In particular when it comes to TVs and phones, Google is really giving Apple a run for their money.

Predictions for 2014

I didn’t do too badly with last year’s set of predictions. Tesla really can’t make the Model S fast enough, and Yahoo really did get it together (go Team Marissa!). While I did guess that the Microsoft Surface would do badly, I never in a million years imagined it would fail in a show-Steve-Ballmer-the-door kind of way. That KitKat name for Android also came out of nowhere.

So, here’s what we have to look forward to in 2014:

Firefox OS hits 1.0, gets some attention amongst the Slashdot crowd, and dies fairly quickly. Android has already gained a lot of traction in the developing world, and Google has a mature operating system and a thriving third-party developer community. Sorry Mozilla, but Google wants this more than you do.

A new CEO takes over at Microsoft. I really have no idea who. I’ll tell you this, though: they won’t have an engineering background, they’ll probably have an MBA, and they’ll put the shareholders’ short-term interests first. Take that for what you will.

Google Glass goes the way of the Zune. It just isn’t cool to wear a product that’s the target of late-night comedy punchlines. Don’t worry, though: the underlying technology will stick around, making its way into cars and Android devices.

For some inexplicable reason, Google will continue to push Google Plus. I think at this point we’re really just observing the sunk cost fallacy at work. In many ways it would hurt morale too much to give up on. It’s easier to just keep hoping that it’ll turn into something relevant eventually. Remember how we all thought social services were the next big wave of innovation on the web? Coupled with Facebook’s declining relevance with teenagers, the whole social space seems to be becoming less and less important.

It’s a big year for Apple. They’ll announce something wearable (I don’t want to call it a watch, because telling time will be pretty low on the list of priorities). The Mac will focus on getting smaller, with new MacBook Pro models that are thinner and lighter. Conversely, iOS devices will get bigger, with a 12-inch iPad Pro and a five-inch iPhone.

I finished last year’s post (and the one before that) with a prayer that the folks at Valve would get Half-Life 3 out the door. The Steam Box is great and all, but what’s it worth without great games? I’m keeping the dream alive, but it’s hard to say for how much longer. Any time now, Mr Newell.