The Checklist Manifesto: How to Get Things Right

I’ve just finished reading Atul Gawande’s somewhat self-assuredly titled The Checklist Manifesto: How to Get Things Right. Trepidatious about reading an entire book dedicated to the unassuming concept of checklists, I’d let it slip down my reading queue. It turned out, however, to be a pleasantly educational and entertaining surprise – he’s a good writer, with an extensive repertoire of experience. There are even three hours of flight time left for me to knock this post out.

The foundation is simple: we’ve entered an era encumbered by our inability to correctly apply knowledge (ineptitude), as opposed to lacking that knowledge in the first place (ignorance).

Half a century ago, heart attack treatment was non-existent; patients would be given morphine for the pain, some oxygen, then sent home as what Atul describes as “a cardiac cripple”. In contrast, responders are now faced with a wide gamut of therapies and the new challenge of implementing the right one in each scenario. When they fail, beyond the obvious downsides, blame is frequently attributed to the professional who ‘failed’ to apply a body of knowledge they have been given. As flawlessly applying this ever-growing body of knowledge becomes an unachievable task, we need to adopt better solutions.

Without detracting from the value of investing a few hours to read the book yourself, I wanted to tease out a few of the points I found interesting. If you find these even vaguely interesting, I really do suggest that you grab a copy.

An Emphasis on Process

Atul notes that the master builder approach to construction has been replaced with specialized roles to such a degree that we really need to call them super-specializations. It started with dividing the architects from the builders, then splitting off the engineers, and so on and so forth. As a surgeon, he jokes that in the medical world he’s expecting to start seeing left-ear surgeons and right-ear surgeons, and has to keep checking that this isn’t already the case whenever somebody mentions the idea.

With this shift, certification processes have also evolved. Where a building inspector may have historically re-run critical calculations themselves, modern building projects involve too many distinct engineering disciplines, drawing on too many bodies of knowledge, for this to be practical. We could build a team of specialized inspectors, except this rapidly becomes unwieldy itself. Instead, building inspectors have taken to focusing on ensuring that due process has been followed. Has a particular assessment been completed by the relevant parties? Did it have the appropriate information going in? Did it produce a satisfactory outcome? Great, move on.

An almost identical construct exists in Australian employment law. It doesn’t matter if somebody is completely incompetent (or inept?); you still have to follow due process in order to disengage them. Employment courts, despite already being a form of specialization themselves, are not interested in or capable of assessing an employee’s performance. They are however capable of asserting that the correct steps were followed in issuing warnings, conducting performance management, and so forth.

Here was my first face-palm moment: I’d made the mistake of considering a checklist as a list, with checkboxes. There’s a whole set of gate, check and review processes which I’ve always mentally separated from the concept of checklists. Beyond the semantics, I found this to be a valuable light bulb moment when considering some of the other ideas.

Communication

Atul’s passion for checklists comes from leading the World Health Organisation’s Safe Surgery Saves Lives program. In trying to solve the general problem of ‘how do we make surgery safer?’, the program ended up rolling out a 19-point checklist, with amazing results. It’s no small feat to cause behavioural change across literally thousands of hospitals around the world.

There were actually two behavioural changes required. First, they had to get people to actually adopt the checklists as a useful contributor to the surgical process. The lists had to be short, add demonstrable value, and so forth.

The second challenge was getting people to talk to each other. Some of the statistics he quotes about the number of people involved in the surgical environment are amazing. One Boston clinic employs “some six hundred doctors and a thousand other health professionals covering fifty-nine specialties.” The result of this is that operating teams have rarely worked together prior to any particular case. Having clear specialities makes it possible for an unacquainted collection of professionals to achieve an outcome; however, it doesn’t foster teamwork when something goes awry. Instead, these autonomous professionals become focussed on achieving their individual goals.

To combat this, one of the checklist points is actually as simple as making sure everyone in the room knows everyone’s name and role before the surgery begins.

Fly the Airplane

Some Cessna emergency checklists have an obvious first step: fly the airplane. While we wait for evolution to catch up, our brains are still wired for a burst of physical exertion to combat panic. Otherwise common mental processes go by the wayside and we do something stupid.

I like the simplicity of this point, and see it being useful in an operations environment.

Pause Points

In early trials of their new safe surgery checklist, participants found it unclear who was meant to complete the list, and when. A similar problem plagues most development ‘done criteria’ I’ve worked with. Yes, everything is meant to be checked off eventually, but when?

Airline checklists instead occur at distinct pause points. Before starting the engines. Before taxiing. Before takeoff. In each of these scenarios, there’s a clear pause to execute the checklist. The list is kept short (less than a minute) and relevant to that particular pause point.

The next time I work on defining done criteria, I think I’ll try splitting them into distinct lists: these points must be completed before you push the code; these points must be completed before the task is closed.

“Cleared for Takeoff”

Surgical environments have a clear pecking order that starts with the surgeon. Major challenges of the safe surgery campaign were getting everyone to apply the process as a team, and ensuring individual members of the team were empowered enough to call a halt if something was about to be done incorrectly. To achieve this, nurses had to be empowered to stop a surgeon.

In one hospital, a series of metal covers were designed for the scalpels. These were engraved with “Cleared for Takeoff”. The scalpel couldn’t be handed over for an incision until the cover was removed, and that didn’t happen until the checklist was completed. This changed the conversation to again be about the process (‘we haven’t completed the checklist yet’) instead of individual actions (‘you missed a step’).

I think points like this are small but important. And definitely interesting.

Now, go and read the book.

The book is an extension of a 2007 article by Atul, published in The New Yorker. I haven’t read the article, but some Amazon reviews suggest it covers the same concepts with less text. Most of the book is just stories, but I found them all interesting nonetheless.

Code: Request Correlation in ASP.NET

I’ve been involved in some tracing work today where we wanted to make sure each request had a correlation id.

Rather than inventing our own number, I wanted to use the request id that IIS already uses internally. This allows us to correlate across even more log files.

Here’s the totally unintuitive code that you need to use to retrieve this value:

// HttpContext implements IServiceProvider explicitly, hence the cast.
var serviceProvider = (IServiceProvider)HttpContext.Current;
var workerRequest = (HttpWorkerRequest)serviceProvider.GetService(typeof(HttpWorkerRequest));
var traceId = workerRequest.RequestTraceIdentifier;

(A major motivator for this post was to save me having to trawl back to my sent emails from 2009 the next time I need this code.)

Update 28th Feb 2013: Some people have been seeing Guid.Empty when querying this property. The trace identifier is only available if IIS’s ETW mechanism is enabled. See http://www.iis.net/configreference/system.webserver/httptracing for details on how to enable IIS tracing. Thanks to Levi Broderick from the ASP.NET team for adding this.

A Business in a Day: giveusaminute.com

Lately, my business partner and I have wanted to try some shorter ‘build days’. The idea of these is to start with a blank canvas and an idea, then deliver a working product by the end of the day. This is a very different approach to the months of effort that we generally invest to launch something.

Today we undertook our first build day and delivered Give Us A Minute, an iPad-targeted web app for managing wait lists:

image

It was a fun experience trying to achieve everything required in one day, but I think we did pretty well. We managed everything from domain name registration to deployment in just under 9 hours. One of the biggest unplanned tasks was actually building the website to advertise the app; we hadn’t even thought of factoring that in when we started the day. The photography also took up a bit of time, but we needed to do it to tell the story properly on the site. Also, it was nice to be required to go and find a beer garden with a tax-deductible beer each so we could get that bottom-left photo.

As part of staying focussed on the idea of a minimum viable product we dropped the idea of accounts very early on. At some point we’ll have to start charging for the text messages, but that then implies logins, registration, forgotten passwords, account balances and a whole host of other infrastructure pieces. In the meantime we’ll just absorb the cost of message delivery. If it starts to become prohibitive, it’ll be a pretty high quality problem to have.

The next step is to get this out on the road and into some businesses. We’ll start by approaching some businesses directly so we can be part of the on-boarding experience. Based on how that goes, we’ll start scaling out our marketing efforts.

We also need to get ourselves listed in the Apple App Store for the sake of discoverability. The ‘app’ is already designed with PhoneGap in mind, but we’re waiting on our Apple Developer enrolment to come through before we can finalise all of this.

Homomorphic Encryption + Cloud

There’s an interesting article in this month’s MIT Technology Review about ‘homomorphic encryption’. It has been around in principle for some 30 years but is seemingly back in vogue thanks to cloud computing.

The simple run down:

  • You want to use a cloud service to perform some computation (add numbers together)
  • You don’t want to give the cloud compute provider your original data (numbers) though
  • You take your original data (1 and 2), encrypt it locally (33 and 54), then upload it
  • The cloud service performs the computation on the encrypted data (33 + 54 = 87)
  • You download the encrypted result (87) and decrypt it locally to find the answer (3)

Obviously the complexity skyrockets when you start talking about something like full text indexing, document parsing, etc … and it may not even be possible without influencing the encryption process to the point that it becomes predictable … but it’s a fascinating idea nonetheless.
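To make the run down concrete, here’s a toy sketch using the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts produces a ciphertext of the sum of the plaintexts. The numbers won’t match the made-up ones in the bullets above, and the tiny primes make it utterly insecure – it’s purely illustrative.

```python
import math
import random

# Toy Paillier setup. Real deployments use primes hundreds of digits long;
# these tiny ones exist only to show the mechanics.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice for Paillier
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # works because L(g^lam mod n^2) = lam when g = n + 1

def encrypt(m):
    # Pick a random r coprime to n, then blind the message.
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n recovers the message coefficient.
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

a, b = encrypt(1), encrypt(2)
summed = (a * b) % n2          # the "cloud" adds without seeing 1 or 2
print(decrypt(summed))         # -> 3
```

The party holding only `n` and the ciphertexts can perform the addition; only the holder of `lam` can read the result.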

I can see this being useful with something like table storage. If someone like MSR could scale the algorithms sufficiently to handle clustered + non-clustered indexes, you could have Azure table storage with client-side encryption and all the algorithms magically buried away by the fabric. How cool would that be?

The article: http://www.technologyreview.com/computing/37197/

I work on the web.

Readify’s CTO Mitch Denny just announced Damian and me as the first two Readify staff to receive the new title of “Technical Specialist”. This is an additional title to represent technical focus beyond our standard consulting commitments. Over the coming weeks, this will be awarded in a number of key technology areas. Obviously, for Damian and me, it’s the web. 🙂

As part of the position we needed to do a bit of a refresh on our consulting profiles, a component of which is a blurb about why we work in the industry. Having just written mine, I felt like sharing it:

The web was never a platform I explicitly sought out; it was more so somewhere I ended up, but I ended up here for a reason. The sheer breadth and power of the web, from both technical and business perspectives, made it a natural fit as I developed my career and skill sets.

I’ve always been fascinated by how a relatively simple set of building blocks designed through the 70s and 80s now underpin so much of what we do today. Video calling might look all fancy and futuristic, but it’s still sitting on much of that same technology. It’s this ability to foster evolution and innovation in an open, neutral and (mostly) democratic way that makes the web both possible and exciting.

Mass organic adoption of the web has today given us a heterogeneous environment of networks, devices and software clients that can be quite accurately described as somewhat hostile. Navigating these challenges to deliver a robust and compelling solution, while also seeking to drive the web forward, is what I do as a web specialist.

Microsoft’s early forays into web development were designed to make it an easy transition for their existing community of developers. This approach has resulted in a generation of developers who work on the web without necessarily being fully aware of its scope or potential. Microsoft’s current push is to now bring these developers across to the next iteration of the web. Engaging these audiences and encouraging them to take that next step is a key component of what I do as an active community member.

In an ever increasingly connected world, now is the time to work on the web.

Why do you work on the web?

(While you’re thinking about it, check out iworkontheweb.com)

Missed TechEd Australia? Get the content anyway.

Close on the heels of TechEd Australia, Readify have announced their latest Dev Day event. This time, we’ve also tweaked the structure a little bit so that instead of having two tracks at the same time we’ll be running a morning track and an afternoon track. This way you get to see it all, or just pop in for the half day if you want.

Richard Banks will be presenting in the morning on Software Quality and Application Lifecycle Management, split into:

  • Gathering Quality Requirements for Agile Development Teams, and an
  • Introduction to Visual Studio Team System 2010.

In the afternoon, I’ll be covering Building for the Web with .NET through three different presentations:

  • Building Fast, Standards Compliant ASP.NET Websites,
  • ASP.NET MVC: Building for the web, and an
  • Introduction to the ASP.NET Web Forms Model-View-Presenter framework.

For my talks, you can find some teasers between my last two blog posts and CodePlex.

To see it all, you’ll just have to come along though. 🙂

More info at: http://readify.net/training-and-events/rdn-dev-days/

See you there!

Announcing: OpenSearch on ASP.NET made super easy with the OpenSearch Toolkit

OpenSearch is a technology that already has widespread support across the web and is now getting even more relevant with Internet Explorer 8’s Visual Search feature and the Federated Search feature in the upcoming Windows 7 release.

Recently I blogged about some work I’d been doing with OpenSearch and how frustrating the whole process was. By the time you build feeds for IE8, Firefox and Windows 7 you’ve touched on “standards” documented by Amazon, Yahoo, Mozilla and Microsoft. Good luck tracking down all that info and working out the discrepancies!

As a first step to making this easier, I released the OpenSearch Validator so we had a quick way to track down all the potential issues and get a clear indication of whether our OpenSearch implementation was going to work in all environments we wanted it to. I also released the source code for this on CodePlex so that you can run it in your internal dev environments or even integrate it into your build process.

Now it’s time to make it even easier. Ducas Francis, one of the other members of my team, took on the job of building out our JSON feed for Firefox as well as our RSS feed for Windows 7 Federated Search. More formats, more fiddly serialization code. Following this, he started the OpenSearch Toolkit: an open source, drop-in toolkit for ASP.NET developers to use when they want to offer OpenSearch.

Today marks our first release.

Implementing OpenSearch

First up, download the latest release of the project from CodePlex and add it to your project references:

Reference

Next, add a new Generic Handler to your project called OpenSearch.ashx:

GenericHandler

In the code behind for the handler (OpenSearch.ashx.cs), remove the autogenerated code and change the base class to OpenSearchHandler:

BaseClass

Now, just start implementing the abstract properties and methods on the base class.

The first one you’ll want to implement is the Description property. This returns the basic metadata about your provider that will be shown when users choose to add it to their browser’s list of search providers. You also need to specify the SearchPathTemplate, which is a format string that the OpenSearch Toolkit will use to generate links to your site’s search page.

Description

Next, implement each of the data methods. GetResults is the most important one that you need to implement. It should return an array or collection of about 5 to 8 search results, preferably with thumbnail images.

Implementing GetSuggestions is a little bit harder, and we don’t expect everyone to do it. The idea of this method is to return suggestions for other search terms. For example, if the supplied term was “Aus” you might return “Australia” and “Austria” as suggestions. The process of generating these results from your data is naturally a bit harder.

Results

At this point you have a fully functional OpenSearch endpoint.

The last step is to tell the world about it by adding a small snippet of HTML to the <head> section of each of your pages:

<link title="My Site" rel="search" type="application/opensearchdescription+xml" href="~/OpenSearch.ashx" />
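For reference, the document that handler serves follows the OpenSearch 1.1 description format. A minimal one looks something like this – the URLs here are placeholders, and the toolkit generates the real thing for you:

```xml
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>My Site</ShortName>
  <Description>Search My Site</Description>
  <Url type="text/html" template="http://example.com/search?q={searchTerms}"/>
  <Url type="application/x-suggestions+json" template="http://example.com/suggest?q={searchTerms}"/>
</OpenSearchDescription>
```

The `{searchTerms}` token is the spec’s substitution placeholder; browsers swap the user’s query into it when issuing a search.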

Voila! Your users will now get to experience full OpenSearch support from your website:

VisualSearch

What’s wrong with Outlook?

It has been an interesting few weeks in the world of web standards for email.

The boys from Campaign Monitor executed a successful awareness campaign in the form of fixoutlook.org which rapidly racked up over 24,000 Tweets and overtook the Iran Election in Twitter’s trending topics. Unfortunately for all of us, it has been a case of message received – but not understood.

The Core Problem

Back in 2007, Microsoft swapped the Outlook rendering engine from Internet Explorer to Word. This in itself is not a problem at all, and it actually delivered some really good improvements. There was now one-to-one fidelity between the authoring and viewing experiences because they were one and the same.

I like having Word as my authoring tool. I like features such as SmartArt and the context aware picture tools.

In making this switch though, we inherited the woeful CSS support that Word has. Microsoft’s developer documentation lists Word 2007 as supporting “a subset of the standard HTML 4.01 specification, […] the Internet Explorer 6.0 HTML specification [and] a subset of the standard Cascading Stylesheet Specification, Level 1.” That’s even less support than Internet Explorer 5 had.

Why does this matter?

This isn’t just some web standards movement for the fun of it – there is real business impact here. No, it’s not something that end users will bang their head against. It’s something that affects all of us web designers.

Two of the key areas lacking in the rendering engine are support for the float and background-image properties. The former throws us back to the dark ages of table-based layouts and all their inherent accessibility and layout issues. The latter means that there are some designs you just can’t do at all. Try placing today’s date on top of a graphic header in an email and let me know how you go.

In a comment that I consider a bit unfair, Microsoft’s official response referred to Campaign Monitor as makers of “email marketing campaign” software (complete with those quotes). Another thread I stumbled across described the fixoutlook.org campaign as being about the ability to deliver “bloated HTML with pixel trackers, domain redirectors and Google Ads”.

This is not a movement to aid in the delivering of spam. There are legitimate reasons for delivering automated and/or bulk emails to users. Campaign Monitor goes above and beyond the legal requirements to make sure their system is not misused.

This movement is about being better online citizens:

  • Bloated HTML? Float-based layouts are much leaner and faster to render than table-based layouts.
  • Pixel trackers? We can do that in Word already – no change here.
  • Domain redirectors? I don’t even know what they are in this context and Bing doesn’t seem to either.
  • Google Ads? We’re not talking about running scripts at all.

Why does it really matter?

Personally, I think one of the most amusing demonstrations of why this really matters comes from one of Microsoft’s own newsletters:

XBox Newsletter

Notice that message on the top? “Read this issue online if you can’t see the images or are using Outlook 2007.” The authors of this newsletter probably deemed that supporting Outlook 2007’s rendering engine required so much extra work that the business case just didn’t exist.

We’re going through this same experience at the moment for one of the largest online presences in Australia. Having got our templates working in all of the major email clients except Outlook 2007 and Gmail, it was time to see what we could do about these last two stubborn children. In the end, it took twice as long to make the templates Outlook 2007 compatible as it did to build them in the first place. (And no, Gmail is never a pretty story either, but that’s not an excuse Microsoft should be using.)

It’s not all about mass marketing either.

Here’s how one of my opt-in Twitter notifications renders side-by-side in Word and IE:

Side-by-side rendering of Twitter email in Word and IE

(click for full size)

Why now?

There have been some comments floating around asking why we’re only just starting to care now. I think this is a valid question, with two answers.

First and foremost, email has always been a right pain, and thus the Email Standards Project was born in 2007. This project has gone on to make headway with some of the biggest names in the email game. Unfortunately though, there has been a lacklustre response from Microsoft to date (including even to this targeted campaign).

Secondly, while this problem has been present since Outlook 2007, the big concern is that there doesn’t appear to have been any improvement made in Outlook 2010. To be fair, no official builds have been released yet and thus the fixoutlook.org campaign is being driven on evidence gained from a pre-beta build. With all that in mind though, you’d think that Microsoft could have mentioned something in their reply if they were working in this area. They didn’t. Also, now is our last chance to try and make an impact on Outlook 2010 before it gets locked down into the full testing regime.

Standard? What standard?

Microsoft’s official response correctly identifies that “there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability.” They are also correct in identifying that “the Email Standards Project does not represent a sanctioned standard or an industry consensus in this area.”

As I highlighted at the start of this post, Microsoft have explicitly stated that the HTML and CSS support in Word 2007 is but a subset of existing standards. It is also interesting to note that they refer to the Internet Explorer 6.0 HTML Specification, another document which is not a sanctioned standard or an industry consensus in this area (or any, really).

It should be recognised that the Email Standards Project is not about developing a new standard, or even a subset of an existing one. It does not portray itself as a standards organisation at all.

This is demonstrated on their homepage by the clear mission statement:

Our goal is to help designers understand why web standards are so important for email, while working with email client developers to ensure that emails render consistently. This is a community effort to improve the email experience for both designers and readers alike.

In doing so, they have developed an acid test that they can use to measure the relative performance of each of the clients. This test is a subset of the existing standards, one that they have arbitrarily agreed upon; however, it is simply a tool for providing relative comparisons, in the same way that we use the Acid1, Acid2 and Acid3 tests for web browsers. In fact, the IE 8 team considered passing Acid2 to be a milestone for their product’s development.

Meet in the middle?

The rendering comparison provided on fixoutlook.org does include one little morsel of hope. At the top of the Outlook 2010 rendering, you’ll notice an information bar that says “If there are problems with how this message is displayed, click here to view it in a web browser”. We can fairly safely assume that this will either flick the rendering engine to IE for that message only, or save it out to a temporary location and fire up the user’s default browser. The latter will have some challenges around embedded MIME data, however I imagine this is something they would have already solved in the pre-Word days of Outlook’s rendering.

Let’s get back to the original issue for a second: Word is being used to ensure a congruent authoring and rendering experience, and a side effect of this is that emails authored with specific HTML and CSS do not render well.

Sound familiar? Web browsers solved this problem years ago with the introduction of multiple rendering modes driven by doctype switching. This has been adopted by every major browser manufacturer, including Microsoft, as a way of ensuring wide compatibility with varying levels of standards support.

The idea is by no means unique, but what’s stopping us from having Word as the rendering engine for emails received from another Outlook instance and IE as the rendering engine for emails the rest of the time?

  1. There is evidently code there already to detect when an IE-based render would produce better quality results.
  2. MIME already includes enough information for one to determine what application authored a message.
  3. I don’t think anybody is currently too concerned about the amount of rendering that is preserved when one attempts to forward an EDM. This is troublesome ground across every email client out there.

We don’t need to radically change Word to support a whole bunch of new renderings. We don’t need to tear Word out of Outlook (despite what some of the campaign supporters have been saying).

All we’re asking for is a reliable and consistent way for the web developers of the world to deliver styled emails to Outlook, one of the best messaging platforms out there.

Updates

6th July, 15:01: John Liu accurately brought up the anti-trust restrictions around the packaging of Internet Explorer. These restrictions apply to the shipping of Internet Explorer as a product, and do not relate to the underlying rendering engine (mshtml.dll). In fact, the Help & Support interface in Windows 7 relies on this rendering engine itself:

image