How I Learned to Stop Worrying and Love the View State

(If you don’t get the title reference, Wikipedia can explain. A more direct title could be: Understanding and Respecting the ASP.NET Page Lifecycle.)

This whole article needs a technical review. Parts of it are misleading. I’ll get back to you, Barry.

Page lifecycle in ASP.NET is a finicky and rarely understood beast. Unfortunately, it’s something that we all need to get a handle on.

A common mishap that I see is code like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        AddressCountryDropDown.DataSource = CountriesList;
        AddressCountryDropDown.DataBind();
    }
}

The problem here is that we’re clogging our page’s view state. Think of view state as one of a page’s core arteries, then think of data like cholesterol. A little bit is all right, but too much is crippling.

To understand the problem, let’s investigate the lifecycle that’s in play here:

  1. The Page.Init event is fired; however, we are not subscribed to it.
  2. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  3. The Page.Load event is fired, in which we set the contents of the drop down list. Because we are doing this after view state has started tracking, every single entry in the drop down is written to both the HTML and the view state.

There’s another problem here as well: by the time the Page.Load event is fired, all of the post back data has already been loaded and processed.

To see the second problem, let’s walk through the lifecycle that’s in play during a post back of this same page:

  1. The user triggers the post back from their browser and all of the post back data and view state is uploaded to the server.
  2. The Page.Init event is fired; however, we are not subscribed to it.
  3. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  4. The view state data is loaded for all controls. For our drop down list example, this means the Items collection is refilled using the view state that was uploaded from the browser.
  5. Post back data is processed. In our example, this means the selected item is set on the drop down list.
  6. The Page.Load event is fired; however, nothing happens because the developer is checking the Page.IsPostBack property. This check is usually described as a “performance improvement”, but it is also required in this scenario; otherwise we would lose the selected item when we rebound the list.
  7. The contents of the drop down list are once again written to both the HTML and the view state.

How do we do this better? Removing the IsPostBack check and placing the binding code into the Init event is all we need to do:

protected override void OnInit(EventArgs e)
{
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();

    base.OnInit(e);
}

What does this achieve?

  • We are filling the contents of the drop down before the Init event completes, and thus before view state tracking begins; therefore a redundant copy of its contents is not written to the view state.
  • We are filling the contents of the drop down before the postback data is processed, so our item selection is successfully loaded without it being overridden later.
  • We have significantly reduced the size of the page’s view state.
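
If you prefer the event-handler style that Visual Studio generates, the same binding can live in a Page_Init handler instead of an OnInit override. This is just a sketch, assuming the page’s default AutoEventWireup="true" so the handler is wired up by name:

protected void Page_Init(object sender, EventArgs e)
{
    // Runs during the Init phase, before view state tracking begins and
    // before post back data is processed, so it behaves like the override above.
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();
}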

This simple change is something that all ASP.NET developers need to be aware of. Unfortunately, so many developers jumped in and wrote their first ASP.NET page using the Page_Load event (including myself). I think this is largely because it’s the one and only event handler presented to us when we create a new ASPX page in Visual Studio. While this makes the platform appear to work straight away, it produces appalling results.

Why light text on a dark background is a bad idea

Update, Oct 2014: This post was written in 2008, based on me scrounging together some complementary links at the time. It’s now 2014, and accessibility is a well thought-out problem, which is generally well solved. Use the colour scheme that makes you happy. I use a black background on my Windows Phone, a dark navy in Sublime Text, a mid-grey chrome around my Office documents, and a bright white background through Outlook and my email.


As this is a suggestion which comes up quite regularly, I felt it valuable to document some of the research I have collected about the readability of light text on dark backgrounds.

The science of readability is by no means new, and some of the best research comes from advertising work in the early 80s. This information is still relevant today.

First up is this quote from a paper titled “Improving the legibility of visual display units through contrast reversal”. These days we read contrast reversal as meaning black-on-white, but remember this paper is from 1980, when VDUs (monitors) were green-on-black. This paper formed part of the research that drove the push to change to the screen formats we use today.

However, most studies have shown that dark characters on a light background are superior to light characters on a dark background (when the refresh rate is fairly high). For example, Bauer and Cavonius (1980) found that participants were 26% more accurate in reading text when they read it with dark characters on a light background.

Reference: Bauer, D., & Cavonius, C. R. (1980). Improving the legibility of visual display units through contrast reversal. In E. Grandjean & E. Vigliani (Eds.), Ergonomic Aspects of Visual Display Terminals (pp. 137-142). London: Taylor & Francis.

Ok, 26% improvement – but why?

People with astigmatism (approximately 50% of the population) find it harder to read white text on black than black text on white. Part of this has to do with light levels: with a bright display (white background) the iris closes a bit more, decreasing the effect of the “deformed” lens; with a dark display (black background) the iris opens to receive more light and the deformation of the lens creates a much fuzzier focus at the eye.

Jason Harrison – Post Doctoral Fellow, Imager Lab Manager – Sensory Perception and Interaction Research Group, University of British Columbia

The “fuzzing” effect that Jason refers to is known as halation.

It might feel strange basing your primary design goals on the vision impaired, but when 50% of the population have this “impairment”, it’s actually closer to being the norm than an impairment.

The web is rife with research on the topic, but I think these two quotes provide a succinct justification for why light text on a dark background is a bad idea.

Video: The best web stack in the world

This is the demo I made for the Microsoft Demos Happen Here competition. The criteria were that it had to be under 10 minutes and show one or more of the features from the launch wave technologies – Visual Studio 2008, SQL Server 2008 and Windows Server 2008.
I chose to demonstrate PHP running on IIS7 using FastCGI, then integrating this with an ASP.NET application before finally load balancing the whole application between two servers in a high availability, automatic failover cluster.
I managed to do all that with 2 minutes to spare, so I think it’s pretty clear that Windows Server 2008 is the best web stack in the world.

http://vimeo.com/1439786

Update 5-Sep-08: I won. 🙂 

Tearing down the tents (and moving them closer together)

Being fairly focused on Microsoft technologies myself, I see a lot of the “us vs. them” mentality where you either use Microsoft technologies, or you’re part of “the other group”. Seeing Lachlan Hardy at Microsoft Remix was awesome – he was a Java dude talking about web standards at a Microsoft event. The more we can focus on the ideas rather than which camp you’re from, the more we’ll develop the inter-camp relationships and eventually destroy this segmentation. Sure, we’ll still group up and debate the superfluous crap like which language is better (we’re nerds – we’ll always do that) but at least these will be debates between the sub-camps of one big happy web family. (It’s not as cheesy as it sounds – I hope.)

What’s the first step in making this happen? Meet people from “the other group”!

The boys and girls at Gruden and Straker Interactive have put together Web on the Piste for the second year running. It’s a vendor neutral conference about rich internet technologies – so you’ll see presentations about Adobe Flex and Microsoft Silverlight at the same event (among lots of other cool technologies of course). These types of events are a perfect way to meet some really interesting people and cross pollinate some sweet ideas.

It’s coming up at the end of August, and I understand that both conference tickets and accommodation are getting tight so I’d encourage you to get in soon if you’re interested (Queenstown is crazy at this time of year).

And of course, yours truly will be there evangelising the delights of Windows Live as well as ASP.NET AJAX to our Flash using, “fush and chups” eating friends. 🙂

Will you be there?

Video: ASP.NET MVC Preview 3

Last night I gave an introduction to MVC at the Wollongong .NET User Group. We had a bit of time at the end, so I also covered off Inversion of Control (IoC) and how it can be used with the MVC framework.

The talk assumed a working knowledge of ASP.NET, but required no existing knowledge about ASP.NET MVC or IoC.

You can watch it on Vimeo:

Tip: Watching on the actual Vimeo site instead of this embedded player will give you a bigger and clearer video.

Or download it as a WMV:

http://tatham.oddie.com.au/presentations/20080709-WDNUG-AspNetMvcPreview3-TathamOddie.wmv (64MB, 68min)

Video: Architectural Considerations for the ASP.NET MVC framework

Update (16th July 2008): There’s a better version of this presentation available at http://blog.tatham.oddie.com.au/2008/07/10/video-aspnet-mvc-preview-3/

This talk doesn’t actually assume any prior knowledge of the ASP.NET MVC Framework, so it goes through the whole intro at the start (albeit quickly). I then delve into some IoC concepts, and finally mash it all together. The IoC framework used is Castle Windsor.

This was recorded at the VIC.NET usergroup in Melbourne, Australia on 10th June 2008.

You can grab the Windows Media file directly from:

http://tatham.oddie.com.au/presentations/20080610-VICNETMelbourne-ArchitecturalConsiderationsForAspNetMvc-TathamOddie.wmv (48MB, 47min)

Or stream it from Google Video:

The Tech.Ed "Web" Track

Late this afternoon, the Aussie twittersphere had a brief flurry of activity around the “web” track announcement for Tech.Ed this year.

The problem is that there are 12 sessions to cover everything “web” and UX – that’s one session on AJAX, two on Silverlight, nothing on ASP.NET 3.5 SP1 content, one IIS admin talk ….

As a result of this anorexic track size, lots of great sessions missed out. It’s great seeing people like Scott Hanselman and Jefferson Fletcher locked in. Alongside the official IE8 announcements though, it’d be good to see something like Damian and Lachlan’s boots-on-the-ground web standards talk. Jose Fajardo has been doing some really innovative work with Live Mesh, but there’s not even a single session scheduled around the Live platform yet. There *might* be something as a lunchtime chalk talk.

I think the wider issue here is the track design – there are way too many web-related technologies to fit into a single track. As an idea for future years, I don’t think there should be a web track at all – in the same way there’s no explicit “Winforms” track. Web sessions should be distributed between the others. AJAX in developer tools and technologies, MVC in architecture, Silverlight in a new UX track, Dynamic Data in the Database + BI track, IIS in the server track, etc.

To allow “web developers” to find “web” sessions, sessions could be tagged with the technologies they include and ASP.NET would just be a common tag.

I use terms like “web developers” in quotes as I think this is now a very old-world view of developers and the systems they are building. Unfortunately I think it is this view that has driven the problems that we as a community are perceiving this year.

I’m still excited about Tech.Ed this year; I just don’t think they ‘got it’ with this track.

Video: Developing great applications with ASP.NET MVC and ASP.NET AJAX

Here goes my first attempt at putting one of my talks online. This is a screencast from my recent Remix talk in Sydney.

You can grab the Windows Media file directly from:

http://tatham.oddie.com.au/presentations/20080520-RemixSydney-MVC-TathamOddie.wmv (40MB, 42min)

I also experimented with some online services.

Viddler (http://www.viddler.com/explore/Tatham/videos/1/) is ok, except that in the standard player it’s scaling my video down, which makes it very hard to read (not a very good scaling algorithm). If you watch it full screen it’s fine, or you can download the original video from their servers too.

Google Video (http://video.google.com/videoplay?docid=-7676497915408440399&hl=en) has a massive player, but the resolution is crap. The embedded version looks ok:

YouTube was out of the question because they only support videos up to 10 minutes long.

None of the three services seemed that great. Next up I’m going to try Screencast.com. I have a feeling that there’s a reason why they exist …

Update: Screencast seems to be the best (although still not perfect). It kept my video in WMV format, which means there was no scaling or transcoding in play. Unfortunately this also means that you have to have the Windows Media plugin installed. I have a feeling it will host an SWF for me, as long as I make the SWF myself directly from Camtasia. The interface is pretty ugly (it shows you success messages in big, scary, bold, red CAPITALS!).

http://www.screencast.com/t/C5CnoFy4zH2

Location, Location, Location: My plan for location awareness, and the GeographicLocationProvider object

I know where I am. My phone knows where it is. Why doesn’t the webpage know where I am?

Think about these scenarios which will become more and more prominent as “the mobile web” starts to prevail:

  1. I visit a cinema website on my mobile. Rather than having to choose my cinema first, the website already knows which suburb I’m in so it defaults to the nearest cinema first.
  2. I need a taxi. I don’t know where I am, but my phone does. I want to be able to book a taxi and have the website discover my location automatically.

The key idea that I’m exploring here is the ability for a webpage to access this information through the standard browser interface.

I have a plan for making this a reality.

Windows Mobile has already taken a step towards baking location awareness into the OS with their GPS Intermediate Driver. The idea is that the operating system handles communication with the GPS unit, including all the various protocols. Applications then have a unified API for accessing GPS data. This proxy effect also facilitates non-exclusive access to the GPS.

But this doesn’t go far enough. Even with this unified API, very few applications are actually location aware. More importantly, I don’t want to have to download and install a piece of software on my device just to be able to see movie times. It’s just not going to happen.

We’ve also been making the assumption that location data comes from a GPS. Enter GeoPriv.

With the continuing rollout of VOIP, there are obvious challenges about the loss of location awareness. The current analog network makes call tracing relatively easy. It’s a fixed line of copper and the phone company knows where it terminates. This is a legal requirement for emergency call routing, as well as being immensely useful for scenarios such as a national number auto-routing to your nearest local store. Both of these scenarios become immensely difficult when you can’t even rely on there being a physical phone anymore – a piece of software with a network connection is now a fully fledged communication device that needs to support these scenarios somehow.

There’s an IETF working group tasked to solve this exact problem. The privacy impacts of sharing location data are so important that it’s in the name: they are the “Geographic Location/Privacy working group”, or “GeoPriv”. The best part is, they are living in reality and delivering useful technology – and fast.

There are a number of key concepts they have identified:

  • We can’t go jamming a GPS chip in every single device we manufacture. We need to be able to capitalize on the ecosystem surrounding our existing devices to surface the information we already have.
  • Privacy is a critical element of any solution expecting wide spread adoption and trust.
  • There are two possible types of location you could need (both illustrated in the sketch after this list):
    • civic location (level, street number, street, suburb, etc)
    • geographic location (latitude, longitude, elevation)
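
As a rough illustration of the difference, here’s what those two shapes might look like to a script. The field names and values are assumptions based purely on the bullet points above, not part of any schema:

// Assumed example objects, just to show the difference in shape.
var civicLocation = {
    level: "2",
    streetNumber: "123",
    street: "Example St",
    suburb: "Wollongong",
    state: "NSW",
    country: "AU"
};

var geographicLocation = {
    latitude: -34.42,
    longitude: 150.89,
    elevation: 8        // metres
};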

Let’s step away from mobile devices briefly and consider the laptop I’m writing this post on. My laptop doesn’t know where it is. Neither does my WiFi router, or my DSL modem. My ISP does though.

At some stage in the future, my modem will start receiving an extra DHCP option. In the same way that my ISP supplies me with network settings like DNS when I connect, they will also start pushing out the address of their Location Information Server. My DSL modem will then push this setting out across my network. Finally, my laptop will be able to query this service to find out my current civic and/or geographic location. The privacy controls around this are beyond the scope of this post.

By asking the service provider for the information, these same techniques also work for mobile devices, 3G data connections, and all those other wonderful wireless technologies. Cell-based triangulation is already in use by phone companies around the world, including our national carrier here in Australia; however, the interfaces are in no way standardized. The Location Information Server (LIS) and the HTTP Enabled Location Delivery protocol (HELD) solve this problem.

Now that our device is capitalising on the network ecosystem, getting it into the browser is the easy part. All that’s left is a thin veneer of JavaScript.

Demand for location awareness is only increasing. I want to start the process of rolling in the JS layer now, so that as the supporting technologies come to fruition, we have the access layer to make them useful.

In line with the XMLHttpRequest object that we’ve all come to know and love, I’ve started writing a spec for a GeographicLocationProvider object.

With XMLHttpRequest, we can write code like this:

var client = new XMLHttpRequest();
client.onreadystatechange = function()
{
    if(this.readyState == 4 && this.status == 200)
        alert(this.responseXML);
}
client.open("GET", "http://myurl.com/path");
client.send();

I want to be able to write code like this:

var provider = new GeographicLocationProvider();
provider.onreadystatechange = function()
{
    if(this.readyState == 2)
        alert(this.geographic.latitude);
}
provider.start();

Again, usage is conceptually similar to the XMLHttpRequest object:

  1. Initialize an instance of the object
  2. Subscribe to the state change event
  3. Set it free

The potential states are:

  • STOPPED. This is the state that the object is initialized in, and the state that it returns to if stop() is called.
  • RESOLVING. The object has been started, but no location information is available yet. In this state the browser could be:
    • prompting the user for permission,
    • searching for location sources (like GPS hardware, or an LIS endpoint), or
    • waiting for the location source to initialize (like connecting to satellites, or talking to the LIS)
  • TRACKING. A location source has been found and location data is ready to be queried from the provider.
  • UNAVAILABLE. No location data is available, and none is likely to become available. The user may have denied a privacy prompt, their security settings may have automatically denied the request, or there may be no location sources available. It is possible for the provider to return to the RESOLVING state if a location source becomes available later. (A usage sketch covering these states follows this list.)
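
To make those states concrete, here’s a rough usage sketch against the proposed object. The named constants and the page functions (showStatus, showNearestCinema, showSuburbPicker) are assumptions for illustration only – the example above only shows the numeric value 2:

// Assumed mapping of readyState values to the states described above.
var STOPPED = 0, RESOLVING = 1, TRACKING = 2, UNAVAILABLE = 3;

var provider = new GeographicLocationProvider();
provider.onreadystatechange = function()
{
    if (this.readyState == RESOLVING)
        showStatus("Finding your location...");   // prompting the user, or searching for a source
    else if (this.readyState == TRACKING)
        showNearestCinema(this.geographic.latitude, this.geographic.longitude);
    else if (this.readyState == UNAVAILABLE)
        showSuburbPicker();                        // fall back to manual selection
}
provider.start();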

In more complex scenarios, the provider can be primed with a specific request to aid in evaluation of privacy policies and selection of location sources. For example, browsers may choose to hand over state-level civic location data without a privacy prompt. This data could also be obtained from an LIS, without needing to boot up a GPS unit. If the webpage requested highly accurate geographic location data, the browser would generally trigger a privacy prompt and boot up the most accurate location source available.
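
Purely as a thought experiment, a primed request might look something like this. None of these option names exist in the draft spec; they just illustrate the kind of hints a page could supply:

// Illustrative only: these options are not part of any spec.
var provider = new GeographicLocationProvider({
    type: "civic",       // civic vs. geographic location
    accuracy: "state"    // coarse data only, so the browser may be able to skip the privacy prompt
});
provider.start();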

While we’ve now simplified the developer experience, the complexity of the browser implementation has mushroom clouded. How do we rein this in so that it’s attractive and feasible enough for browser implementers? How do we demonstrate value today?

You might have noticed that in my discussion of the JS layer I drifted away from the GeoPriv set of technologies. While any implementation should be harmonious with the concepts developed by the GeoPriv working group, we aren’t dependent upon their technology to start delivering browser-integrated location awareness today.

There are numerous location sources which can be used:

  • Statically configured location – with the network fingerprinting technology already in Vista, it would be relatively easy to prompt users for their civic location the first time location data is needed on a particular network.
  • GPS Intermediate Driver – already rolled into the Windows Mobile platform.
  • Location Information Servers – can be added to the mix later as LIS deployments become prevalent. This is the only one that is GeoPriv dependent.

The civic and geographic schemas have already been delivered by the working group as RFC 4119. There has been an incredible amount of discussion involved in developing a unified schema that can represent civic addresses for anywhere in the world, and this schema should be adopted for consistency. (Do you know the difference between states, regions, provinces, prefectures, counties, parishes, guns, districts, cities, townships, shis, divisions, boroughs, wards, chous and neighbourhoods? They do.)

Who is responsible for delivering this unified location layer?

I keep talking about the browser being responsible for managing all these location sources. Other than the JS layer, all of this infrastructure is client independent, so why don’t we just make the browser a dumb proxy to a unified location service? This service should be a component of the operating system, accessible by software clients (like Skype) and webpages via the browser proxy.

Windows Mobile has already started in the right direction with their GPS Intermediate Driver, however this is only one element of a wider solution.

What do I want?

  1. I want to see a “Location” icon in the settings page of my mobile device, the Control Panel of my Vista laptop and the System Preferences panel of my Mac.
  2. I want the browsers to expose the GeographicLocationProvider object for JS clients. (The start of this specification is online now already.)
  3. I want the browsers to proxy location requests to the operating system store, along with hints like which zone the website is in.
  4. I want the operating system store to be extensible, implementing its own provider model which allows 3rd party developers to supply their own location data from software or hardware services as new discovery mechanisms are developed. We shouldn’t have to wait for widespread adoption before sources are surfaced into the store, and this should be open to software developers as well as driver developers.
  5. I want the operating system store to be accessible by any application, including 3rd party browsers.

How am I going to make this happen?

Dunno.

Right now, I’m documenting the start of what will hopefully be a fruitful conversation. Participate in the conversation.