A Career in Business

This is the transcript of a presentation I’ll be giving at a joint schools careers night in Sydney tomorrow. The hosting school is my old high school, SHORE.

I’m here this evening to discuss small business.

It’s my personal opinion that a business degree will prepare you for running your own business little better than your HSC already will.

Your prospective universities may disagree.

Running a business is an expression of entrepreneurialism, and is not something that can be taught or learnt in a classroom. They may be able to teach you financial ratios, an obscure set of legal structures and some basic marketing techniques, however each of these is somewhat superfluous to the core of running a business.

Many of the business people I value the most have either never attempted university, or more often, never completed it. The success of these individuals and their lack of (completed) formal tertiary education should not however be interpreted as a free pass for anybody to skip uni, jump in head first and expect an equivalent outcome. Nor should it devalue the relevance of finance, law, marketing and the myriad of other business-related professions that I have skipped over. Instead, they each highlight what is at the very core of a successful business person: the individual themselves. If business success were as simple as following a defined set of parameters that you were taught in a classroom, the free market economy would simply fail to exist in the first place.

Partnerships

Running a business is a tough road. It requires one to act in many roles and demonstrate a diverse range of skills over an extended period, all whilst maintaining balance and focus. Larry Page and Sergey Brin suspended their PhDs to found Google. The pair’s success serves to highlight what I see as a very important aspect of the individual; the partner. All too often, starting a business is seen as something done by an individual.

Running a business demands a set of behaviours that one person will often struggle to deliver consistently. Particularly during the start-up phase, it can be a struggle to maintain focus in the right areas and avoid the wrong areas, especially when you’re so passionate about getting it perfect instead of getting it done. Having a business partner can deliver the balance required, as well as a sounding board for ideas and the always needed set of helping hands. I know that I wouldn’t have achieved even half of what I have today without my business partner Tom.

In our business, Tom also provides a complementary set of skills. He’ll leave a lunch with three business leads and I’ll be asking him what the host’s name was again. Our partnership allows me to focus on building out product, and Tom to focus on the capitalization, two processes that require distinct skill sets.

There’s another description I quite like, which is that if you can’t convince one person to join you, you probably want to revisit your idea. I can’t remember exactly where I heard that, but I suspect it may have actually been one of my teachers here.

Someone who I see as being in dire need of a business partner is Mark Zuckerberg, the founder of Facebook. In 2006 Microsoft tried to buy Facebook and arranged an 8am breakfast catch-up. His office responded that “this simply would not be possible, as Mr Zuckerberg would still be in bed at that time.” Around this same time he was still attending trade fairs handing out a business card that read ‘I’m a CEO … bitch’.

Whilst we can all appreciate the humour, Facebook has failed to achieve anywhere near the profits required to match the US$2.2 billion that Microsoft laid on the table. I have some suspicions that Mark would be better served building the website we know and love and leaving someone else to make the deals.

Risk Capacity

Another business you might be familiar with from your computer lab periods is CollegeHumor.com. Launched by two US college mates who spread fliers around their campus, the site’s traffic grew organically – and quickly. At the three-month mark, the site was attracting 600,000 visitors per week, largely thanks to a video of a man sitting on a tree stump being hit on the back of the head with a shovel.

What started out as a joking search for advertisers to pay for their beer ended up as an online media business attracting advertisers like Coca Cola and Dreamworks. Had the business failed, Josh and Ricky would have lost little more than their initial $200 investment and their $30 per month internet bill. No doubt, they would have held on to their now extensive collection of funny videos.

Josh Abramson, the site’s co-creator once commented that “the greatest thing about starting a business in college is that there is very little risk.”

He’s right, and you’re in that sweet spot right now. You don’t go broke when you’re 18; you just go back to your parents’ place for dinner. You might miss a night out on the town, but you won’t miss a mortgage payment.

You’re all sitting at the peak of your risk capacity, and shouldn’t be afraid to take advantage of that. I’ve certainly had my fair share of failed ideas along the way. Most just lost time, some lost money and I certainly had varying amounts of cash in my wallet along the way. All of them were good fun though, and all of them prepared me better for the subsequent, and successful, attempts.

Skills Crossover

All of the businesses I have mentioned so far have been web based, but this does not make them purely technology businesses.

Lots of people author good content on the web, but that doesn’t make them good at promoting it. Google does that for them.

Plenty of people can make a funny video; YouTube demonstrates that. The simple act of publishing the video doesn’t make it popular with college students though. CollegeHumor does that.

In their simplest sense, each of these businesses operates on the divide between two different skill sets. Generally, I find this to be the space where the most opportunity exists.

Brendan Powning is a 42-year-old rigger on construction sites, with a side hobby of electronics. With 20 years of experience on construction sites, he’s seen countless tools go missing and no great solutions. Using his basic electronics skills he developed an inexpensive motion sensor that could be buried alongside the entrances to construction sites and activated from a mobile phone. Now, every time somebody drives on to one of his construction sites out of hours, he gets a text message and can investigate.

Having been featured on the ABC’s New Inventors, he’s now developing a successful business around this device. Whilst not an overly complex or unique idea, it took the rare crossover of construction site experience and basic electronics skills for the idea to eventuate and be practical.

Think about the activities, sports and hobbies you participate in today, and where the frustrations exist in each of them. At its simplest level, business is about providing solutions, and finding the frustration is generally a good place to start.

Enablement

This can all sound a bit daunting though: finding an idea, finding a business partner, and then (hopefully) finding your first customer. Picking the trifecta is hard, and it should sound at least a little bit daunting, because if you’re successful in doing so you’ve done bloody well.

You don’t always have to do that though. There are plenty of useful ideas out there already; more often than not from people who will fail to execute them. Similarly, there are plenty of people ready to achieve, but still looking for that golden idea.

Think about which group you are in now; then find the other.

One of the businesses I run is based in male fashion PR and export. I don’t know any designers, I don’t know any fashion editors, and I don’t know any fashion trends until they turn up in Chatswood Westfield. I do know how to build a website, my business partner knows how to market it, and our strategic partners know how to make it look good. Our involvement is silent, based around enabling someone else’s idea using our skills.

Closing

Stepping back for a moment, you may have noticed that I haven’t actually used the phrase “small business” since the very beginning of this talk. The techniques, behaviours and skills required in running a small business are equally applicable to operating larger businesses. The experience gained from making those front-line business decisions along the way has given me a respect for the wider organisational context. What may seem right and justified to you at one time is but a single point of view in the tug-of-war that is running an enterprise. Attempting to understand this wider context without having experienced it is an exercise in futility. The consulting role that I am currently engaged in may be one of technical strategy, but I won that role more from my capacity to engage with the business than from technical flair.

Closing on that statement, I’d like you to leave this evening thinking of business as both an end-game, as well as a flexible opportunity to develop a highly valuable set of skills.

If you’d like to review this talk at another time, the transcript is available from tatham.oddie.com.au, and I’ll ask Mr Scouller to circulate that address.

Thank you.

Testing the world (and writing better quality code along the way)

Working with the awesome team on my current project, I’ve come to the realisation that I never really did understand automated testing all that well. Sure, I’d throw around words like “unit test”, then write a method with a [TestMethod] attribute on it and voila, I was done; right? Hell no I wasn’t!

Recently, I challenged myself to write an asynchronous TCP listener, complete with proper tests. This felt like a suitable challenge because it combined the inherent complexities of networking with the fun of multi-threading. This article is about what I learnt. I trust you’ll learn something too.

What type of test is that?

The first key thing to understand is exactly what type of test you are writing. Having fallen into the craze of unit testing as NUnit hit the ground, I naturally thought of everything as a unit test and dismissed the inherent differences of integration tests.

  • A unit test should cover the single, smallest chunk of your logic possible. It must never touch external systems like databases or web services. It should test one scenario, and test it well. Trying to cover too many scenarios in a single test introduces fragility into the test suite, such that one breaking change to your logic could cascade through and cause tens or even hundreds of tests to fail in one go. (There’s a small sketch of what I mean just after this list.)
  • An integration test tests the boundaries and interactions between your logic and its external systems. It depends on those external systems and will be responsible for establishing the required test data, running the test, then cleaning up the target environment. This good citizenship on the test’s behalf allows it to be rerun reliably as many times as you want, a key component of a test being considered valuable.
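
To make the distinction concrete, this is the kind of thing I mean by a unit test. It’s a trivial sketch of my own (DiscountCalculator is a made-up class), exercising pure logic with no external systems involved:

[TestMethod]
public void ShouldApplyTenPercentDiscount()
{
  // Hypothetical class under test: pure logic, no database or web service
  var calculator = new DiscountCalculator();

  var discountedTotal = calculator.ApplyDiscount(100m, 0.1m);

  Assert.AreEqual(90m, discountedTotal);
}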

I always dismissed the differences as being subtle elements of language and left it for the TDD hippies to care about. Unfortunately, they were right – it does matter. Now, let’s spend the rest of the article building a TCP listener that is testable without having to use an integration test. Yes, you heard me right.

The Problem Space

As a quick introduction to networking in .NET, this is how you’d accept a connection on port 25 and write a message back:

var listener = new TcpListener(IPAddress.Any, 25);
listener.Start();

using (var client = listener.AcceptTcpClient())
using (var stream = client.GetStream())
using (var streamWriter = new StreamWriter(stream))
{
   streamWriter.Write("Hello there!");
}

The first two lines bind to the port and start listening, then AcceptTcpClient() blocks until we have a client to talk to.

In our challenge, we want to be able to talk to two clients at once so we need to take it up a notch and accept the connection asynchronously:

static void Main(string[] args)
{
  var listener = new TcpListener(IPAddress.Any, 25);
  listener.Start();
  listener.BeginAcceptTcpClient(new AsyncCallback(AcceptClient), listener);

  Console.ReadLine();
}

static void AcceptClient(IAsyncResult asyncResult)
{
  var listener = (TcpListener)asyncResult.AsyncState; 

  using (var client = listener.EndAcceptTcpClient(asyncResult))
  using (var stream = client.GetStream())
  using (var streamWriter = new StreamWriter(stream))
  {
    streamWriter.Write("Hello there!");
  }
}

If you’ve looked at asynchronous delegates in .NET before, this should all be familiar to you. We’re using a combination of calls to BeginAcceptTcpClient and EndAcceptTcpClient to capture the client asynchronously. The AcceptClient method is passed to the BeginAcceptTcpClient method as our callback delegate, along with an instance of the listener so that we can use it later. When a connection becomes available, the AcceptClient method will be called. It will extract the listener from the async state, then call EndAcceptTcpClient to get the actual client instance.

Already, we’re starting to introduce some relatively complex logic into the process by which we accept new connections. This complexity is exactly why I want to test the logic – so that I can be sure it still works as I continue to add complexity to it over the life of the application.

Split ‘Em Down The Middle

To start cleaning this up, I really need to get my connection logic out of my core application. Keeping the logic separate from the hosting application will allow us to rehost it in other places, like our test harness.

Some basic separation can be introduced using a simple wrapper class:

class Program
{
  static void Main(string[] args)
  {
    var listener = new TcpListener(IPAddress.Any, 25);
    listener.Start();

    var smtpServer = new SmtpServer(listener);
    smtpServer.Start(); 

    Console.ReadLine();
  }
}

class SmtpServer
{
  readonly TcpListener listener; 

  public SmtpServer(TcpListener listener)
  {
    this.listener = listener;
  } 

  public void Start()
  {
    listener.BeginAcceptTcpClient(new AsyncCallback(AcceptClient), listener);
  } 

  static void AcceptClient(IAsyncResult asyncResult)
  {
    var listener = (TcpListener)asyncResult.AsyncState; 

    using (var client = listener.EndAcceptTcpClient(asyncResult))
    using (var stream = client.GetStream())
    using (var streamWriter = new StreamWriter(stream))
    {
      streamWriter.Write("Hello there!");
    }
  }
}

Now that we’ve separated the logic, it’s time to start writing a test!

Faking It ‘Till You Make It

The scenario we need to test is that our logic accepts a connection, and does so asynchronously. For this to happen, we need to make a client connection available for our logic to accept.

Initially this sounds a bit complex. Maybe we could start an instance of the listener on a known port, then have our test connect to that port? The problem with this approach is that we’ve ended up at an integration test and the test is already feeling rather shaky. What happens if that port is in use? How do we know that we’re actually connecting to our app? How do we know that it accepted the connection asynchronously? We don’t.

By faking the scenario we can pretend to have a client available and then watch how our logic reacts. This is called ‘mocking’ and is typically achieved using a ‘mocking framework’. For this article, I’ll be using the wonderful Rhino Mocks framework.

This is how we could mock a data provider that normally calls out to SQL:

var testProducts = new List<Product>
{
  new Product { Title = "Test Product 123" },
  new Product { Title = "Test Product 456" },
  new Product { Title = "Test Product 789" }
}; 

var mockDataProvider = MockRepository.GenerateMock<IDataProvider>();
mockDataProvider.Expect(a => a.LoadAllProducts()).Return(testProducts); 

var products = mockDataProvider.LoadAllProducts();
Assert.AreEqual(3, products.Count());

mockDataProvider.VerifyAllExpectations();

This code doesn’t give any actual test value, but it does demonstrate how a mock works. Using the interface of IDataProvider, we ask the mock repository to produce a concrete class on the fly. Defining an expectation tells the mock how it should react when we call LoadAllProducts. Finally, on the last line of the code we verify that all of our expectations held true.

In this case, we are dynamically creating a class that implements IDataProvider and returns a list of three products when LoadAllProducts is called. On the last line of the code we are verifying that LoadAllProducts has been called as we expected it to be.
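
For reference, the interface being mocked would be something as simple as this (the original listing doesn’t show it, so this is just my assumption of its shape):

public interface IDataProvider
{
  IEnumerable<Product> LoadAllProducts();
}

public class Product
{
  public string Title { get; set; }
}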

Artificial Evolution

Now, this approach is all well and good when you have an interface to work with, but how do we apply that to System.Net.Sockets.TcpListener? We need to modify the structure of the instance such that it implements a known interface; this is exactly what the adapter pattern is for.

First up, we need to define our own interface. Because we need to mock both the listener and the client, we’ll actually define two:

public interface ITcpListener
{
  IAsyncResult BeginAcceptTcpClient(AsyncCallback callback, object state);
  ITcpClient EndAcceptTcpClient(IAsyncResult asyncResult);
} 

public interface ITcpClient
{
  NetworkStream GetStream();
  IPEndPoint RemoteIPEndPoint { get; }
}

To apply these interfaces to the existing .NET Framework implementations, we write some simple adapter classes like so:

public class TcpListenerAdapter : ITcpListener
{
  private TcpListener Target { get; set; } 

  public TcpListenerAdapter(TcpListener target)
  {
    Target = target;
  } 

  public IAsyncResult BeginAcceptTcpClient(AsyncCallback callback, object state)
  {
    return Target.BeginAcceptTcpClient(callback, state);
  } 

  public ITcpClient EndAcceptTcpClient(IAsyncResult asyncResult)
  {
    return new TcpClientAdapter(Target.EndAcceptTcpClient(asyncResult));
  }
}

public class TcpClientAdapter : ITcpClient
{
  private TcpClient Target { get; set; } 

  public TcpClientAdapter(TcpClient target)
  {
    Target = target;
  } 

  public NetworkStream GetStream()
  {
    return Target.GetStream();
  } 

  public IPEndPoint RemoteIPEndPoint
  {
    get { return Target.Client.RemoteEndPoint as IPEndPoint; }
  }
}

These classes are solely responsible for implementing our custom interface and passing the actual work down to an original target instance which we pass in through the constructor. You might notice that the listener adapter’s EndAcceptTcpClient method uses an adapter itself, wrapping the returned TcpClient in a TcpClientAdapter.

With some simple tweaks to our SmtpServer class, and how we call it, our application will continue to run as before. This is how I’m now calling the SmtpServer:

static void Main(string[] args)
{
  var listener = new TcpListener(IPAddress.Any, 25);
  listener.Start();
  var listenerAdapter = new TcpListenerAdapter(listener);

  var smtpServer = new SmtpServer(listenerAdapter);
  smtpServer.Start();

  Console.ReadLine();
}

The key point to note is that once we have created the real listener, we wrap it in an adapter before passing it down to the SmtpServer constructor. This satisfies the SmtpServer, which now expects an ITcpListener instead of a concrete TcpListener as it did before.
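
For completeness, this is roughly what the tweaked SmtpServer looks like once it depends on the new abstraction. This is my sketch rather than the original listing, and I’ve left the greeting write out of the callback to keep the focus on the accept path:

class SmtpServer
{
  readonly ITcpListener listener;

  public SmtpServer(ITcpListener listener)
  {
    this.listener = listener;
  }

  public void Start()
  {
    // The same asynchronous accept as before, just against the interface
    listener.BeginAcceptTcpClient(new AsyncCallback(AcceptClient), listener);
  }

  static void AcceptClient(IAsyncResult asyncResult)
  {
    // Extract the listener from the async state and complete the accept
    var listener = (ITcpListener)asyncResult.AsyncState;
    var client = listener.EndAcceptTcpClient(asyncResult);

    // The greeting write from the earlier listing would go here,
    // using client.GetStream() just as before.
  }
}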

Talking The Talk

At this point in the process we have:

  1. Separated the connection acceptance logic into its own class, outside of the hosting application
  2. Defined an interface for how a TCP listener and client should look, without requiring concrete implementations of either
  3. Learnt how to generate mock instances from an interface

The only part left is the actual test:

[TestMethod]
public void ShouldAcceptConnectionAsynchronously()
{
  var client = MockRepository.GenerateMock<ITcpClient>();
  var listener = MockRepository.GenerateMock<ITcpListener>();
  var asyncResult = MockRepository.GenerateMock<IAsyncResult>();

  listener.Expect(a => a.BeginAcceptTcpClient(null, null)).IgnoreArguments().Return(asyncResult);
  listener.Expect(a => a.EndAcceptTcpClient(asyncResult)).Return(client); 

  var smtpServer = new SmtpServer(listener);
  smtpServer.Start();

  var methodCalls = listener.GetArgumentsForCallsMadeOn(a => a.BeginAcceptTcpClient(null, null));
  var firstMethodCallArguments = methodCalls.Single();
  var callback = firstMethodCallArguments[0] as AsyncCallback;
  var asyncState = firstMethodCallArguments[1];
  asyncResult.Expect(a => a.AsyncState).Return(asyncState);

  callback(asyncResult);

  client.VerifyAllExpectations();
  listener.VerifyAllExpectations();
  asyncResult.VerifyAllExpectations();
}

Ok, let’s break that one down a step at a time, yeah?

The first three lines just generate mocked instances for each of the objects we’re going to need along the way:

var client = MockRepository.GenerateMock<ITcpClient>();
var listener = MockRepository.GenerateMock<ITcpListener>();
var asyncResult = MockRepository.GenerateMock<IAsyncResult>();

Next up, we define how we expect the listener to work. When the BeginAcceptTcpClient method is called, we want to return the mocked async result. Similarly, when EndAcceptTcpClient is called, we want to return the mocked client instance.

listener.Expect(a => a.BeginAcceptTcpClient(null, null)).IgnoreArguments().Return(asyncResult);
listener.Expect(a => a.EndAcceptTcpClient(asyncResult)).Return(client);

Now that we’ve done our setup work, we run our usual logic just like we do in the hosting application:

var smtpServer = new SmtpServer(listener);
smtpServer.Start();

At this point, our logic will have spun up and called the BeginAcceptTcpClient method. Because it is asynchronous, it will be patiently waiting until a client becomes available before it does any more work. To kick it along we need to fire the async callback delegate that is associated with the async action. Being internal to the implementation, we can’t (and shouldn’t!) just grab a reference to it ourselves, but we can ask the mocking framework:

var methodCalls = listener.GetArgumentsForCallsMadeOn(a => a.BeginAcceptTcpClient(null, null));
var firstMethodCallArguments = methodCalls.Single();
var callback = firstMethodCallArguments[0] as AsyncCallback;
var asyncState = firstMethodCallArguments[1];
asyncResult.Expect(a => a.AsyncState).Return(asyncState);

The RhinoMocks framework has kept a recording of all the arguments that have been passed in along the way, and we’re just querying this list to find the first (and only) method call. While we have the chance, we also push our async state from the second argument into the async result instance.

Armed with a reference to the callback, we can fire away and simulate a client becoming available:

callback(asyncResult);

Finally, we ask RhinoMocks to verify that everything happened under the covers just like we expected. For example, if we had defined any expectations that never ended up getting used, RhinoMocks would throw an exception for us during the verification.

client.VerifyAllExpectations();
listener.VerifyAllExpectations();
asyncResult.VerifyAllExpectations();

Are We There Yet?

We are!

Taking a quick score check, we have:

  1. Separated the connection acceptance logic into its own class, outside of the hosting application
  2. Defined an interface for how a TCP listener and client should look, without requiring concrete implementations of either
  3. Used mocking to write a unit test to validate that our logic correctly accepts a new client asynchronously

Having done so, you should now:

  1. Understand the difference between a unit test and an integration test
  2. Understand the importance of separation of concerns and interfaces when it comes to writing testable (and maintainable!) code
  3. Understand how the adapter pattern works, and why it is useful
  4. Understand the role of a mocking framework when writing tests

Was this article useful? Did you learn something? Tell me about it!

Accessing ASP.NET Page Controls During PreInit

If you’ve read my previous post explaining a common pitfall with view state, I’d hope you’re preparing all your controls in the Init event of the page/control lifecycle.

Even if I’m not reusing them through my application much, I like to factor elements like a drop down list of countries into their own control. This centralizes their logic and allows us to write clear, succinct markup like this:

<tat:CountriesDropDownList ID="AddressCountry" runat="server" />

The code for a control like this is quite simple:

[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    protected override void OnInit(EventArgs e)
    {
        DataSource = Countries;
        DataBind();

        base.OnInit(e);
    }
}

The Problem

Once you start using this encapsulation technique, it won’t be long until you want to pass in a parameter that affects the data you load. Before we do, we need to be aware that the Init event is fired in reverse order. That is, the child controls have their Init event fired before that event is fired at the parent. As such, the Page.Init event is too late for us to set any properties on the controls.
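
For example, imagine a hypothetical variation of the control above that takes a region and binds a different set of countries accordingly (the Region property and the helper method here are made up purely for illustration):

[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    // Hypothetical property that changes what gets bound; it needs to be set
    // before this control's Init fires to have any effect
    public string Region { get; set; }

    protected override void OnInit(EventArgs e)
    {
        DataSource = LoadCountriesForRegion(Region);
        DataBind();

        base.OnInit(e);
    }

    // Stand-in for whatever lookup the real control would perform
    static IEnumerable<string> LoadCountriesForRegion(string region)
    {
        return region == "Oceania"
            ? new[] { "Australia", "New Zealand" }
            : new[] { "United Kingdom", "United States" };
    }
}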

The natural solution is to try and use the Page.PreInit event, however when you do you’ll often find that your control references are all null. This happens when your page uses a master page, and it relates to how master pages are implemented. The <asp:ContentPlaceHolder /> controls in a master page use the ITemplate interface to build their contents. This content (the child controls) is not usually prepared until the Init event is called, which means the control references are not available. For us, this represents a problem.

The Solution

The fix is remarkably simple; all we need to do is touch the Master property on our Page and it will cause the controls to become available. If we are using nested master pages, we need to touch each master page in the chain.

I often create a file called PageExtensions.cs in my web project and add this code:

public static class PageExtensions
{
    /// <summary>
    /// Can be called during the Page.PreInit stage to make child controls available.
    /// Needed when a master page is applied.
    /// </summary>
    /// <remarks>
    /// This is needed to fire the getter on the top level Master property, which in turn
    /// causes the ITemplates to be instantiated for the content placeholders, which
    /// in turn makes our controls accessible so that we can make the calls below.
    /// </remarks>
    public static void PrepareChildControlsDuringPreInit(this Page page)
    {
        // Walk up the master page chain and tickle the getter on each one
        MasterPage master = page.Master;
        while (master != null) master = master.Master;
    }
}

This adds an extension method to the Page class, which then allows us to write code like the following:

protected override void OnPreInit(EventArgs e)
{
    this.PrepareChildControlsDuringPreInit();

    MyCustomDropDown.MyProperty = "my value";

    base.OnPreInit(e);
}

Without the call to the extension method, we would have received a NullReferenceException when trying to set the property value on the MyCustomDropDown control.

You now have one less excuse for preparing your controls during the Load event. 🙂

How I Learned to Stop Worrying and Love the View State

(If you don’t get the title reference, Wikipedia can explain. A more direct title could be: Understanding and Respecting the ASP.NET Page Lifecycle.)

This whole article needs a technical review. Parts of it are misleading. I’ll get back to you Barry.

Page lifecycle in ASP.NET is a finicky and rarely understood beast. Unfortunately, it’s something that we all need to get a handle on.

A common mishap that I see is code like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        AddressCountryDropDown.DataSource = CountriesList;
        AddressCountryDropDown.DataBind();
    }
}

The problem here is that we’re clogging our page’s view state. Think of view state as one of a page’s core arteries, then think of data like cholesterol. A little bit is all right, but too much is crippling.

To understand the problem, let’s investigate the lifecycle that’s in play here:

  1. The Page.Init event is being fired, however we are not subscribed to that.
  2. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  3. The Page.Load event is being fired in which we are setting the contents of the drop down list. Because we are doing this after the view state has started tracking, every single entry in the drop down is being written to both the HTML and the view state.

There’s a second problem here as well. By the time the Page.Load event is fired, all of the post back data has already been loaded and processed.

To investigate the second problem, let’s investigate the lifecycle that’s in play during a post back of this same page:

  1. The user triggers the post back from their browser and all of the post back data and view state is uploaded to the server.
  2. The Page.Init event is fired, however we are not subscribed to that.
  3. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  4. The view state data is loaded for all controls. For our drop down list example, this means the Items collection is refilled using the view state that was uploaded from the browser.
  5. Post back data is processed. In our example, this means the selected item is set on the drop down list.
  6. The Page.Load event is fired, however nothing happens because the developer is checking the Page.IsPostBack property. This check is usually described as a “performance improvement”, however it is also required in this scenario, otherwise we would lose the selected item when we rebound the list.
  7. The contents of the drop down list are once again written to both the HTML and the view state.

How do we do this better? Removing the IsPostBack check and placing the binding code into the Init event is all we need to do:

protected override void OnInit(EventArgs e)
{
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();

    base.OnInit(e);
}

What does this achieve?

  • We are filling the contents of the drop down during Init, before view state tracking begins; therefore a redundant copy of its contents is not written to the view state.
  • We are filling the contents of the drop down before the postback data is processed, so our item selection is successfully loaded without it being overridden later.
  • We have significantly reduced the size of the page’s view state.

This simple change is something that all ASP.NET developers need to be aware of. Unfortunately so many developers jumped in and wrote their first ASP.NET page using the Page_Load event (including myself). I think this is largely because it’s the one and only event handler presented to us when we create a new ASPX page in Visual Studio. While this makes the platform appear to work straight away, it produces appalling results.

How to: Decrypt SQL 2005/2008 database master keys on other servers

SQL 2005 and 2008 both have what’s referred to as an encryption hierarchy. The details of this are beyond the scope of this post, but in essence: we encrypt our data using a key. We need to protect our key somehow, and we don’t want to litter our stored procs with key passwords, so we use a certificate. We then protect the certificate with a database master key. This is in turn protected by the service master key which is finally protected by DPAPI, an operating system provided store.
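
As a concrete sketch of the lower layers of that chain (the object names here are mine, purely for illustration), a certificate is created under the protection of the database master key, and a symmetric key used to encrypt the actual data is created under the protection of the certificate:

-- Protected by the database master key
CREATE CERTIFICATE MyCertificate WITH SUBJECT = 'Data protection certificate'

-- Protected by the certificate; used to encrypt the actual data
CREATE SYMMETRIC KEY MyDataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE MyCertificate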

The keys and certificates are stored within the database itself, but when we move the database to another server they can’t be accessed. This is because the new server doesn’t know how to decrypt the database master key and in turn can’t decrypt the keys and certificates we need to use.

Usually…

The database master key is always protected by a password. You would have had to provide this when you first created the key:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'mypassword'

When you create the key like this, SQL generates the key value and encrypts it with the supplied password before storing it. It also makes a second copy which is encrypted using the service master key, and this is the copy that is normally used. When you move your database to another server, the service master key protected copy can’t be loaded but the password protected copy can be.

 OPEN MASTER KEY DECRYPTION BY PASSWORD = 'mypassword' 

With the key now decrypted and loaded into memory, we can ask the new server to make a copy that is protected using the service master key.

 ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY 

Finally, we close the key to take it out of memory.

 CLOSE MASTER KEY 

Voila, you can now access your keys and certificates on the new server with automatic key management (that is, with SQL automatically opening and closing the database master key for you as required).
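
To give a sense of what automatic key management looks like in practice (the key, certificate, table and column names here are illustrative only), you can now use keys protected by the hierarchy without ever opening the master key yourself:

-- No OPEN MASTER KEY required; SQL decrypts it automatically via the service master key
OPEN SYMMETRIC KEY MyDataKey DECRYPTION BY CERTIFICATE MyCertificate

SELECT CONVERT(varchar(100), DecryptByKey(CreditCardNumber))
FROM dbo.Customers

CLOSE SYMMETRIC KEY MyDataKey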

Read-only databases

The approach described above is dependent upon the database being in a writable state as it makes modifications of the database master key itself. What happens when we want to work with a read-only version of a database such as a snapshot or a mirror?

With automatic key management, SQL Server will first attempt to decrypt the database master key using the service master key. In a read-only database, we are unable to create a copy of the key that is protected in this way.

After attempting that, SQL Server will look in the credential store (sys.credentials) for any credentials related to the master key. It will attempt each credential it finds.

Adding our credential to the store is easy:

USE [master]
GO

EXEC sp_control_dbmasterkey_password
    @db_name   = N'mydatabase',
    @password  = N'mypassword',
    @action    = N'add';

You can see evidence of the new credential in both sys.master_key_passwords and sys.credentials:

SELECT  d.name as database_name,
        c.*,
        mkp.family_guid

FROM    master.sys.credentials c

        INNER JOIN master.sys.master_key_passwords mkp
            ON c.credential_id = mkp.credential_id

        INNER JOIN master.sys.database_recovery_status drs
            ON mkp.family_guid = drs.family_guid

        INNER JOIN master.sys.databases d
            ON drs.database_id = d.database_id

Voila, just like above, you can now access your keys and certificates on the new server with automatic key management (that is, with SQL automatically opening and closing the database master key for you as required).

You might also notice that the sys.master_key_passwords view ties a master key password to a family id as opposed to a database id. A family id is assigned when a database is first created and it stays the same even if the database is detached, moved, reattached, mirrored, etc. As a result of this behaviour, you could have multiple databases on the one server that share the same family id. In contrast, a database id is created every time a database is attached and is therefore unique for every database instance on the server.

In the context of our master keys, the outcome is that adding a credential against one database will actually add it for all of the databases which have come from the same original instance. Even if you detach and reattach your databases, or drop and restart replication, the credential will still be kept in the store and work with the new database instance. Attaching any new instances which share the same family id will also automatically inherit the credential. (Unless of course you change the database master key password, in which case the credential will still be attempted but will fail and cause the next one to be attempted instead.)

Why light text on dark background is a bad idea

Update, Oct 2014: This post was written in 2008, based on me scrounging together some complementary links at the time. It’s now 2014, and accessibility is a well thought-out problem, which is generally well solved. Use the colour scheme that makes you happy. I use a black background on my Windows Phone, a dark navy in Sublime Text, a mid-grey chrome around my Office documents, and a bright white background through Outlook and my email.


As this is a suggestion which comes up quite regularly, I felt it valuable to document some of the research I have collected about the readability of light text on dark backgrounds.

The science of readability is by no means new, and some of the best research comes from advertising studies in the early 80s. This information is still relevant today.

First up is this quote from a paper titled “Improving the legibility of visual display units through contrast reversal”. These days we think of contrast reversal as meaning black-on-white, but remember this paper is from 1980, when VDUs (monitors) were green-on-black. This paper formed part of the research that drove the push to change to the screen formats we use today.

However, most studies have shown that dark characters on a light background are superior to light characters on a dark background (when the refresh rate is fairly high). For example, Bauer and Cavonius (1980) found that participants were 26% more accurate in reading text when they read it with dark characters on a light background.

Reference: Bauer, D., & Cavonius, C., R. (1980). Improving the legibility of visual display units through contrast reversal. In E. Grandjean, E. Vigliani (Eds.), Ergonomic Aspects of Visual Display Terminals (pp. 137-142). London: Taylor & Francis

Ok, 26% improvement – but why?

People with astigmatism (approximately 50% of the population) find it harder to read white text on black than black text on white. Part of this has to do with light levels: with a bright display (white background) the iris closes a bit more, decreasing the effect of the “deformed” lens; with a dark display (black background) the iris opens to receive more light and the deformation of the lens creates a much fuzzier focus at the eye.

Jason Harrison – Post Doctoral Fellow, Imager Lab Manager – Sensory Perception and Interaction Research Group, University of British Columbia

The “fuzzing” effect that Jason refers to is known as halation.

It might feel strange basing your primary design goals on the vision impaired, but when 50% of the population have this “impairment” it’s actually closer to being the norm than an impairment.

The web is rife with research on the topic, but I think these two quotes provide a succinct justification for why light text on a dark background is a bad idea.

Tech.Ed AU 08: The Ugly, The Bad, The Good

Bugger it. Despite being ridiculously exhausted, I’m going to write this now because I can’t sleep. I’m also doing it in reverse order to finish up on a vaguely positive incline.

The Ugly

I got rightfully slammed in my presentation evals for ARC402. 44% of the audience were Very Dissatisfied. That made it the 6th worst scoring session at the conference.

The general trends were:

  • I demonstrated a knowledge of the subject (42% very satisfied – a positive here!)
  • My presentation skills were satisfactory (47% satisfied – neutral)
  • The information presented was bad (42% dissatisfied with usefulness)
  • The presentation was ineffective (42% dissatisfied with effectiveness)

Armed with an array of comments to analyse, what did I do wrong? Thinking out loud, this is what I’ve come up with:

  • I was put off by the noise from the neighbouring room and the mobile smackdown. I shouldn’t have been affected by this as much as I was.
  • I rushed the content, when I was by no means under time pressure. I had previously covered this content as a rushed 20 minute segment at the end of a more holistic ASP.NET MVC presentation, but even with the additional content I had added, I had a full session this time and certainly shouldn’t have been rushing here.
  • I lost the structure. I didn’t introduce myself (which people highlighted in the comments) and somehow I even forgot to ask for Q+A at the end, even though there’s a whole slide that prompts me to do just that.
  • I focused my content too much on the blurb which came from Tech.Ed US, instead of thinking myself about the wider architectural considerations. There’s a lot more to it than IoC and some attributes.
  • Despite being crowned the Australian Annual IT Demonstration Champion this same week, my demo crashed and burned. Massive fail here.
  • I’m still not good at dealing with non-developer audiences. This was something that also affected me at Web on the Piste, and is something I need to actively work on. As much as I am a fan of minimal slides + heaps of live code, if the people ask for high level content in an architecture track, it’s what you’ve got to give them.

Summarising:

I failed to identify the key differences between the demands of this session and those of previous talks I had done in this technology space. I was over confident in the content and thus failed to properly prepare and update my content for the latest release, the audience and the timing. I’d like to apologize to those who attended and expected more, the content owners who trusted me to be there and the community who supported me in getting there in the first place.

– Tatham Oddie, not-so-demo-champion

The Bad

I’m forever fighting with a balance between helping and helping too much. I was a key person on the Dev.Garten project this year, having done a significant amount of work pre-event, including meeting with the client and developing infrastructure. Once the event actually started I began to realise the sheer number of things I’d committed to doing throughout the week and that I was being stretched. While there were plenty of great people to keep the project moving, I could have done a better job of documenting the directions I had started and ensuring a smoother handover.

The Good

Despite this post starting on a decidedly (and deservedly) sour note, there were some amazing things that happened during the week.

My other session (TOT352) about Software+Services had a particularly small audience, however came out with 100% of the evals saying the demos were effective and 100% saying the technical content was just right. Ok, so the data is only working off 2 evaluations because there were only about 12 people in the room, but it’s better results than above either way.

I won the national final of the Demos Happen Here comp. Among other things, this means I’m off to Tech.Ed Los Angeles in 2009 and will shortly receive a shiny new Media Centre PC. When I made the original entry video it was an 8 minute demo, however by the national final I had it down to 4 mins 50 seconds which is a real testament to the quality of Windows Server 2008.

I built a Surface application. Amnesia own the only two Surface devices currently outside of the US and were kind enough to let me spend a day and a half playing on it before they took one to the event. It was my first time ever compiling a line of WPF or seeing the Surface SDK but in that 1.5 days I managed to get an application working which would pull session data out of CommNet and display it in response to a conference pass being placed on the table. The Surface team should be really proud of the quality of SDK that they have achieved to make that possible and I look forward to when we finally get to see a widespread public release of the bits.

The table achieved quite a bit of interest throughout the week:

(Photo: Ry Crozier)

On Wednesday I had lunch with Amit Mital who is the GM of Windows Live Mesh. Six of us (him, 2x MS, 2x others, me) spent a good 90 minutes discussing some of the longer term visions for Mesh. The original plan was for us to ask questions and him to answer them, but it became more of a discussion between ourselves about scenarios we wanted to see / achieve and him (relatively) quietly taking notes. In the end this was a better approach because it allowed him to walk away with some real world scenarios and didn’t result in us constantly asking him questions he wasn’t allowed to answer yet. PDC sounds set to deliver some exciting changes as we see the release of the Mesh SDK.

Friday lunchtime I was invited to present with Lawrence Crumpton about open source at Microsoft. We were presenting to a lunch of open source alliance and higher education administrators trying to demonstrate that Microsoft aren’t actually evil. Lawrence’s full time job at Microsoft Australia lies around open source and it was amazing to hear some of the things he’s involved in. I jumped on stage after his talk to demonstrate PHP on IIS7 as a first class citizen and talked about leveraging the platform with functionality like NLB. (This may or may not sound very similar to my DHH demo.)

Tech.Ed week is also a big week for Readify because it’s the only time we get to have almost all of our people in the one place. It’s a strange feeling knowing a whole group of people but then meeting them for the “first” time. It was particularly good meeting our new WA gang (Hadley Willan, Jeremy Thake and Graeme Foster) as well as catching up with the out of towners and management teams again.

Friday night was the Readify Kick-off party followed by a company conference / meeting on Saturday.

Who’d have thought I’d get to see my Principal Consultant gyrating his hips on stage with Kylie? I’ve had a quick look around Flickr and Facebook but I haven’t found any photos of the night online yet. I look forward to our resident photographers catching up on their uploads early this week. Update: Links at end of post.

Rog42 came along as a guest speaker on Saturday and delivered a great presentation about some new approaches for community. In a demonstration of how a little information goes a long way, the pizza thing is now pretty superfluous having seen his presentation but I think we can keep the jokes going for a little bit yet. 😉 It was encouraging to see the level of Readify involvement in Tech.Ed.

Overall it was a great week and another well executed Tech.Ed on Microsoft’s behalf. I was privileged to be invited to participate in lots of different ways, albeit with different qualities of outcome. It’s been an eye opening week which has highlighted work I still need to do, but which has also been rewarding for work I’ve already done. I look forward to the next event, and all of the other things that will need to be tackled between now and then.

Update 7-Sep-08: Photos from Thursday night courtesy of Catherine Eibner:

Update 8-Sep-08:

Solution: IIS7 WebDAV Module Tweaks

I blogged this morning about how I think WebDAV deserves to see some more love.

I found it somewhat surprising that doing a search for iis7 webdav “invalid parameter” only surfaces 6 results, of which none are relevant. I found this particularly surprising considering “invalid parameter” is the generic message you get for most failures in the Windows WebDAV client.

I was searching for this last night after one of my subfolders stopped being returned over WebDAV, but was still browsable over HTTP. After a quick visit to Fiddler, it turned out that someone had created a file on the server with an ampersand in the name and the IIS7 WebDAV module wasn’t encoding this character properly.

It turns out that this issue, along with some other edge cases, has already been fixed. If you’re using the IIS7 WebDAV module, make sure to grab the update:

Update for WebDAV Extension for IIS 7.0 (KB955137) (x86)
Update for WebDAV Extension for IIS 7.0 (KB955137) (x64)

Because the WebDAV module is not a shipping part of Windows, you won’t get this update through Windows Update. I hope they’ll be able to start publishing auto-updates for components like this soon.