Brian Willis

CSS Grid Changes Everything

Here’s a great talk by Morten Rand-Hendriksen from this year’s WordCamp in Paris, entitled CSS Grid Changes Everything. The jokes are a bit much, but stick with it because what he has to say is worth listening to.

I’ve lived through a few “generations” of CSS tech, from gif-spacers to tables to divs to flexbox. Every time we make one of these leaps forward, I feel like we’re rearranging our prejudices, enforcing new tradeoffs in an attempt to reinvent the wheel. I’ve been concerned about this lately, especially since last week’s announcement by Adobe that Flash will be end-of-lifed. I know Flash is a dumpster fire, but it enabled so much creativity back in the early days of the web, and a small part of me is sorry to see it go. More so than any other tool, Flash made me feel like I could turn my ideas into pixels. CSS has never felt like that, especially during the days of divs and floats. Instead CSS feels like one part rigid authoritarian and one part abusive spouse. I’m always one misstep away from having my defenceless HTML get scrambled for reasons that seem downright arbitrary.

My first impression of CSS grid is that it makes layout approximately as intuitive as flexbox, but in a more elegant way. There’s nothing inherently wrong with that, but I can tell you without even using it that there are some things it does badly. Want to stack one div atop another on the z-axis, creating a three dimensional effect with a sense of depth? Not going to happen. Want to have divs that change dimensions in response to mouse over events? Not without breaking your layout.

There’s definitely a particular set of use cases being optimised for here. Namely, static pages with little interactivity, little depth, and a rigid structure [1]. This, more than anything, is what makes CSS grid feel like the anti-Flash.
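If you want a feel for what I mean by rigid structure, here’s a rough sketch of the sort of layout grid is built for (the class and area names are placeholders I’ve made up for illustration):

    /* A fixed-width sidebar next to a fluid content column,
       with a header and footer spanning both. */
    .page {
      display: grid;
      grid-template-columns: 200px 1fr;
      grid-gap: 1em;
      grid-template-areas:
        "header header"
        "nav    main"
        "footer footer";
    }

    /* Every child gets pinned to a named region of the grid. */
    .page > header { grid-area: header; }
    .page > nav    { grid-area: nav; }
    .page > main   { grid-area: main; }
    .page > footer { grid-area: footer; }

Everything lives in a named slot, declared up front, and that’s where it stays. Which is rather the point.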

It might just be time to concede that there’s no way to create a general purpose layout model that covers the whole scope of what we do on the web. I’m disappointed that’s where we’ve ended up, but I guess it’s better than gif-spacers.

  [1] You know, like this one.

The AP Begins Allowing “They” as a Singular Pronoun

Lauren Easton, Director of Media Relations for the Associated Press, quoting the next edition of the AP Stylebook on their blog:

They, them, their — In most cases, a plural pronoun should agree in number with the antecedent: The children love the books their uncle gave them. They/them/their is acceptable in limited cases as a singular and-or gender-neutral pronoun, when alternative wording is overly awkward or clumsy.

Well, it’s about time. English desperately needs a singular gender-neutral pronoun to fill the biggest functional gap in our language. Calling people “it” is offensive and crude, while “he or she” is unwieldy and disregards the existence of people who don’t identify as either.

Predictions for 2017

A short one this year, as we live in uncertain times.

Once again, despite my repeated insistence, one of you went and launched another social service. I warned you all not to do this, but I suppose it was inevitable that eventually someone would try to open-source Twitter. We’ve had paid Twitter, charity Twitter, and GNU Twitter, so this really is the next logical step. This one is doomed to failure for the same reason as all the others—social services aren’t about features, they’re about people, and the people are all using Twitter.

I’m calling it now: the digital crown is coming to the iPhone. It’s such a natural interaction on the Apple Watch, would fit perfectly with where you already place your fingers, and could replace the volume up/down buttons in an obvious way.

In other news: Yahoo goes out of business; Google’s next Pixel phone becomes a meaningful threat to the iPhone; Apple demonstrates that they still care about the Mac by releasing exemplary Kaby Lake desktops; Microsoft flails in all directions, doing a better job of pleasing developers than paying customers; and you still won’t be able to edit a Tweet.

It’s traditional for me to wrap this up by predicting that Half Life 3 will come out this year. While I may have started this post calling these “uncertain times”, now that we’re ten years on from the previous chapter’s release I think we can say with certainty that the Half Life franchise is well and truly dead.

Observations After a Week With an Apple Watch

I’d forgotten what a giant pain in the ass it is to have a chunk of metal strapped to your body all day. Your wrist’s centre of gravity shifts. Typing is harder. The watch catches door frames as you walk through them. I knew there was a reason that I gave up wearing a watch 10 years ago, and I’ve had to rediscover it the hard way.

Apple Pay is like something out of science fiction. It doesn’t feel like it should work. They let me leave the supermarket with my groceries, but I half expected them to chase me out the door.

The fitness tracking stuff is as compelling as promised. I find myself walking more to close the activity rings. It remains to be seen how long this will stick, but I think it’s more than a gimmick. In particular, heart rate tracking is way more frequent than I’d anticipated. I’m seeing readings every 5-10 minutes, with no meaningful hit to battery life.

Speaking of battery life, I took the watch off the charger this morning at 6:30, I’m typing this at noon, and I’m at 94%. Battery life like this is practically unheard of in the Apple ecosystem.

The wrist detection is unbelievably accurate. Taking the watch off immediately locks it, and raising my wrist to check the time has only failed once.

Third party apps are mostly useless. There’s nothing here that I’d use every day. This is a real concern for the future of the product. To succeed, the watch needs to be useful and necessary, and at the moment it’s just a fun toy for early adopters.

On Humanity

Seth Godin, writing at his blog:

If the boss can write it down, she can find someone cheaper than you to do the work. Probably a robot. The best jobs are jobs where we don’t await instructions, where using good judgment and taking initiative are far more important than obedience.

…but what happens when judgement and initiative become something we can automate too?

I’ve been mulling this over since C.G.P. Grey published his video Humans Need Not Apply. I’m glossing over some of the finer points, but his central argument is that the future of work looks pretty grim, with software and robotics taking over jobs that we’ve traditionally thought only people were capable of.

He’s absolutely right, by the way. I’m a Software Developer, which means I unemploy people for a living. If a piece of work can be automated, it eventually will be, and when that happens yet another person ends up out of work. There’s no limit to this either—people aren’t as special as they think they are. Right now, almost all of us have a job that can, in whole or in part, be replaced by a machine. This is going to be a problem when we get to the point that jobs are being automated away at a faster pace than new jobs are being created.

So what does a person do to maximise the chance that they’ll stay employable?

A person’s biggest asset in the face of automation is their humanity. It’s the one thing robotics can’t compete on. Any line of work that’s humanised remains valuable work when done by a person.

Consider a stay in hospital. I can see patients accepting a robot surgeon. Fewer mistakes, fewer side effects, faster surgeries, all positive things. But what happens in the recovery ward? It’s one thing to be operated on by a machine when you’re unconscious and unaware of the experience, but can we really expect people to accept care from robot nurses? Nursing is a line of work where humanity counts, and between the uncanny valley and our general desire for authenticity, I don’t see patients reacting well to being taken care of by Alice from The Jetsons.

We can see inklings of this effect in other industries too. High-end mechanical watches are truly terrible at timekeeping, with even the best models on the market drifting seconds each day. By comparison, a quartz watch will drift around a second a day, and smart watches sync regularly with time servers, effectively eliminating drift. So why would someone buy a mechanical watch? Because its value comes from being handmade by a person, following traditions that are in some cases centuries old. The value in a Portugieser comes from the fact that it didn’t roll off an assembly line, slapped together by machines.

It’s not just humanity that gives us an edge over automation—it’s authenticity. It’s easy to write off hipster culture as some sort of quirky longing for a world that never really existed, but at its core hipsterdom rose from a lack of authenticity in the world. It was a whole social movement that said the plastic and formica and corporate sterility of the world was getting to be too much, and that we needed to reclaim some of what we’d lost in our pursuit of efficiency. People care deeply about the substance of the things they buy, and how those things make them feel. Authenticity is why barista-made coffee can be sold for more than coffee out of a machine, why parents consider their children’s artwork priceless, and why Emily Howell doesn’t have many fans.

I know these are transient and superficial reasons to value one kind of work over another, and after writing this I’m having difficulty reconciling my desire to remain employable with the fact that no one describes the software I make as artisanal or hand crafted. I’m not trying to say that we’ll solve the problem of automation and find work for billions of people by creating goods that are meaningfully worse. Instead, I’m suggesting that the economy oftentimes values things in counterintuitive ways, and I think because of that there’s hope for us.