World Wide Telescope Backchannel Chat

Thanks to an invite from Nick Hodge, I spent an hour this morning in a chat with the creators of WWT. I was broadcasting notes of my chat to the Readify internal tech list, and this is a post-chat dump of that discussion.

10:00am

For those interested, I’ve been sitting on a backchannel chat around WWT.

There are some good questions being raised.

I asked about the similarities between it and TerraServer (which was worked on by the same guys). It’s a similar concept of a tile-based system to pull down the images, but it uses a projection called TOAST instead of Mercator. Mercator doesn’t work properly at the poles, and as you pan around near the poles the north/south axis actually changes direction. (Try looking at the poles in VE and you’ll see this pretty clearly.)

TOAST is basically … Imagine the world as an octahedron. Take a single square sheet of paper and wrap it around that octahedron. Make the middle of the piece of paper the equator. Fold down the corners to the north and south poles. Then tile it for multi-resolution… They use a similar quadtree system for addressing these tiles.
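
For a rough idea of how that addressing works, here’s a sketch of the quadkey scheme Virtual Earth uses – I’m assuming the TOAST tiles are addressed in a similar bit-interleaved way, and this little helper is mine, not theirs:

// A sketch of quadkey-style tile addressing: each level of detail interleaves
// one bit of the tile's x position and one bit of its y position into a digit from 0-3.
static string TileToQuadKey(int tileX, int tileY, int levelOfDetail)
{
    var quadKey = new System.Text.StringBuilder();
    for (int i = levelOfDetail; i > 0; i--)
    {
        int digit = 0;
        int mask = 1 << (i - 1);
        if ((tileX & mask) != 0) digit += 1;
        if ((tileY & mask) != 0) digit += 2;
        quadKey.Append(digit);
    }
    return quadKey.ToString();   // e.g. tile (3, 5) at level 3 becomes "213"
}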

I asked if TOAST was something MSR developed. It’s been around for a while now, to the point that they forgot who first came up with it. 🙂 NASA and Caltech have been developing it further and using it for their full-sky projections. There are 50 full sky surveys in WWT.

Social networking on top of WWT. One guy has suggested a social network on top of it for amateur astronomers to upload and share their own data. There seem to be some plans around this coming which sounds cool.

Indigenous constellation data – American Indian, Aboriginal, etc. You can already create your own star maps with constellation figures – the first step for the above idea. They’re also currently researching and working with indigenous groups to develop new tours. So alongside the current “Visit the centre of the Milky Way” tour and that kinda stuff, you’ll soon be able to do a tour of the Aboriginal Dreaming stories and things like that.

Infrastructure. Tile data is served off the VE servers (there’re already 1,000 servers there – might as well use them). The app servers run separately and are funded by MS External Technology Group (whoever they are).

Still going … will reply with more details in a min.

10:09am

First release of a dev kit in about a month. WTML. Communities (add menu items and data links to websites and stuff), data model, how to present your images, etc.

v2 they might release an object model for driving it yourself, embedding it and writing plugins that do more than just inject web links. “But that’s down the track for now”

Oh cool – you can right-click on any point of the sky surface and copy a link to it. Send it to a friend and when they click it, WWT will open and slew to that point. You can integrate your own links there to your own sites too … with the communities API.

10:21am

Original prototype of the engine developed in 2001. Some parts of the model go as far back as 1993. Client built over the last 18 months by essentially a one-man team. It was done in Managed DirectX for development productivity. He thinks if he’d done it in C++ and OpenGL it would have taken a team of 8 people and 2–2.5 years.

The main digital sky surface is one terapixel – one million pixels by one million pixels. They’re about to load in an IR based surface which is four terapixels!

They have more than a dozen TB of data directly available or connected to their databases. The summer release would add an interface to the Hubble archive (another 400TB). That will still be stored by the Hubble people, but with WWT as the client. They’re plugging in more and more sources, so answering the question of “how much data?” is becoming harder – and if they can answer it accurately, they consider they have failed.

They have two clusters they use for processing data. They had a third, which has now been transitioned into their app tier – they’re serving millions of hits a day, so the third cluster is used exclusively to serve that.

Almost exclusively C# based applications.

Some partner called DigiPede (?) provided their grid solution.

Somebody asked about Easter eggs. Apparently that’s a fireable offence at Microsoft now, so it’s unlikely. 🙂 The list of contributors is very publicly listed instead.

The only secret code is a big ugly Close button on the about dialog which only appears if your username is “dinos”. Apparently a tester wanted a close button and kept reopening the bug so he put it there just for her. 🙂

Some things like the moon are dynamic – if you speed up time you can see it move across the sky. If you speed up more you can see it rotate and move through phases.

Man-made objects like the ISS are hard to track because they move so fast across the sky that users would keep losing them. It’s more appropriate to do point-in-time type solutions. I asked about them providing a feed of man-made objects so that when new shuttles launch we can track them – no answer as yet.

10:20am

Andrew Matthews sent me:

Hi Tatham,

I’d like to see the following on WWT. All of which is based on the idea that WWT is not just a platform for armchair astronomers, but a souped up interactive star chart that can be used by amateur observers.

  • Night sight mode with everything in red. Allows night observers to use it without getting after images
  • Telescope control software
  • CCD image registration software
  • Catalog identification of stars
  • Move to stars by catalog id
  • Ephemeris
  • Observing logs
  • RSS feeds with notable happenings coming up (e.g. transits, meteor showers, newly discovered comets, supernovae, etc.)

10:25am

Ooo – an answer to my man-made object feed question. Every time you click into communities it’s reloaded, so if somebody exposed a community for the ISS, for example, it’d be trackable and up to date. In the 2.0 timeframe communities will be able to describe orbital elements too (so they get animated in real time). A few months until that functionality.

Asking about night sight mode right now…

Telescope control software already in there.

CCD image registration? explain quickly for me

Catalog identification – already there. Right click on any point in the visual and it tells you what it is.

Move to stars by catalog ID – does that too I think.

Ephemeris? explain quickly for me

Observing logs … hmpf – will ask

RSS feeds for notable events – this is covered by the communities system. Communities API uses WTML to describe space stuff and integrate that data into the UI, add menu options to them and that kinda thing.

10:27am

They’re talking about an ISV who builds telescope control software who’s developed something that lets you stream imagery of live events into it! Cool …

Night sight:

“Good suggestion. The way I personally use this – I have many different astronomy apps and all their different night sight modes are unusable. In WWT you can adjust the colours in WWT and Windows to pick ones with decent contrast and stuff. Essentially, that’s the way I use it. I used it a year ago at a star party to do all my imaging and control my telescope, and with a filter over it, it works well.”

(Filters are a UI feature in WWT which allows you to adjust the hue of the whole interface – under the view tab at the top)

10:29am

Andrew Matthews sent me:

Image Registration: Take multiple pictures through CCD (or webcam) and then do image analysis to remove images blurred by atmospheric turbulence

10:32am

> Ephemeris? explain quickly for me

I googled it to find a definition. 🙂

A lot of objects in there already do that. The whole UI is point-in-time based, and everything moves accordingly. It plays out in real time, or you can accelerate it.

Some objects, like the moon, even rotate and move through their phases.

The communities support will be upgraded in v2.0 to include orbital object support. So if you publish a community, you’ll be able to markup your own orbital objects and it’ll position them and animate them accordingly.

10:33am

Missed out on the CCD image registration question … most likely a plug-in you would develop yourself.

WWT isn’t meant to replace software like the $500 planetarium-in-a-box solutions.

[11:45am Updated the username for the easter egg. ;)]

The Tech.Ed "Web" Track

Late this afternoon, the Aussie twittersphere had a brief flurry of activity around the “web” track announcement for Tech.Ed this year.

The problem is that there are 12 sessions to cover everything “web” and UX – that’s one session on AJAX, two on Silverlight, nothing on ASP.NET 3.5 SP1 content, one IIS admin talk ….

As a result of this anorexic track size, lots of great sessions missed out. It’s great seeing people like Scott Hanselman and Jefferson Fletcher locked in. Alongside the official IE8 announcements though, it’d be good to see something like Damian and Lachlan’s boots-on-the-ground web standards talk. Jose Fajardo has been doing some really innovative work with Live Mesh, but there’s not even a single session scheduled around the Live platform yet. There *might* be something as a lunchtime chalk talk.

I think the wider issue here is the track design – there are way too many web-related technologies to fit into a single track. As an idea for future years, I don’t think there should be a web track at all – in the same way there’s no explicit “Winforms” track. Web sessions should be distributed between the others. AJAX in developer tools and technologies, MVC in architecture, Silverlight in a new UX track, Dynamic Data in the Database + BI track, IIS in the server track, etc.

To allow “web developers” to find “web” sessions, sessions could be tagged with the technologies they include and ASP.NET would just be a common tag.

I use terms like “web developers” in quotes as I think this is now a very old-world view of developers and the systems they are building. Unfortunately I think it is this view that has driven the problems that we as a community are perceiving this year.

I’m still excited about Tech.Ed this year, I just don’t think they ‘got it’ with this track.

Video: Developing great applications with ASP.NET MVC and ASP.NET AJAX

Here goes my first attempt at putting one of my talks online. This is a screencast from my recent Remix talk in Sydney.

You can grab the Windows Media file directly from:

http://tatham.oddie.com.au/presentations/20080520-RemixSydney-MVC-TathamOddie.wmv (40MB, 42min)

I also experimented with some online services.

Viddler (http://www.viddler.com/explore/Tatham/videos/1/) is ok, except that in the standard player it’s scaling my video down, which makes it very hard to read (not a very good scaling algorithm). If you watch it full screen it’s fine, or you can download the original video from their servers too.

Google Video (http://video.google.com/videoplay?docid=-7676497915408440399&hl=en) has a massive player, but the resolution is crap. The embedded version looks ok:

YouTube was out of the question because they only support videos up to 10 minutes long.

None of the three services seemed that great. Next up I’m going to try Screencast.com. I have a feeling that there’s a reason why they exist …

Update: Screencast seems to be the best (although still not perfect). It kept my video in WMV format, which means there was no scaling or transcoding in play. Unfortunately this also means that you have to have the Windows Media plugin installed. I have a feeling it will host an SWF for me, as long as I produce the SWF myself directly from Camtasia. The interface is pretty ugly (it shows you success messages in big, scary, bold, red CAPITALS!).

http://www.screencast.com/t/C5CnoFy4zH2

The case of the missing ICS file – Remix for Outlook addicts

Being an Outlook addict, I’m always pleased when I see an “Add to Outlook” or “Download as ICS” button on a website. Everyone from Virgin Blue to Facebook does it.

Unfortunately there was no such file for Remix, so instead of just loading it into my private calendar, I thought I’d make the ICS files myself.

Here they are:

http://tatham.oddie.com.au/files/RemixSydney.ics

http://tatham.oddie.com.au/files/RemixMelbourne.ics

See you tomorrow / Thursday!

Update 21/5/08: Just in time for tomorrow, I’ve added the room assignments to the Melbourne file.

Location, Location, Location: My plan for location awareness, and the GeographicLocationProvider object

I know where I am. My phone knows where it is. Why doesn’t the webpage know where I am?

Think about these scenarios which will become more and more prominent as “the mobile web” starts to prevail:

  1. I visit a cinema website on my mobile. Rather than having to choose my cinema first, the website already knows which suburb I’m in so it defaults to the nearest cinema first.
  2. I need a taxi. I don’t know where I am, but my phone does. I want to be able to book a taxi and have the website discover my location automatically.

The key idea that I’m exploring here is the ability for a webpage to access this information through the standard browser interface.

I have a plan for making this a reality.

Windows Mobile has already taken a step towards baking location awareness into the OS with their GPS Intermediate Driver. The idea is that the operating system handles communication with the GPS unit, including all the various protocols. Applications then have a unified API for accessing GPS data. This proxy effect also facilitates non-exclusive access to the GPS.

But this doesn’t go far enough. Even with this unified API, very few applications are actually location aware. More importantly, I don’t want to have to download and install a piece of software on my device just to be able to see movie times. It’s just not going to happen.

We’ve also been making the assumption that location data comes from a GPS. Enter GeoPriv.

With the continuing rollout of VOIP, there are obvious challenges about the loss of location awareness. The current analog network makes call tracing relatively easy. It’s a fixed line of copper and the phone company knows where it terminates. This is a legal requirement for emergency call routing, as well as being immensely useful for scenarios such as a national number auto-routing to your nearest local store. Both of these scenarios become immensely difficult when you can’t even rely on there being a physical phone anymore – a piece of software with a network connection is now a fully fledged communication device that needs to support these scenarios somehow.

There’s an IETF working group tasked to solve this exact problem. The privacy impacts of sharing location data are so important that it’s in the name: they are the “Geographic Location/Privacy working group”, or “GeoPriv”. The best part is, they are living in reality and delivering useful technology – and fast.

There are a number of key concepts they have identified:

  • We can’t go jamming a GPS chip in every single device we manufacture. We need to be able to capitalize on the ecosystem surrounding our existing devices to surface the information we already have.
  • Privacy is a critical element of any solution expecting wide spread adoption and trust.
  • There are two possible types of location you could need:
    • civic location (level, street number, street, suburb, etc)
    • geographic location (latitude, longitude, elevation)

Let’s step away from mobile devices briefly and consider the laptop I’m writing this post on. My laptop doesn’t know where it is. Neither does my WiFi router, or my DSL modem. My ISP does though.

At some stage in the future, my modem will start receiving an extra DHCP option. In the same way that my ISP supplies me with network settings like DNS when I connect, they will also start pushing out the address of their Location Information Server. My DSL modem will then push this setting out across my network. Finally, my laptop will be able to query this service to find out my current civic and/or geographic location. The privacy controls around this are beyond the scope of this post.

By asking the service provider for the information, these same techniques also work for mobile devices, 3G data connections, and all those other wonderful wireless technologies. Cell-based triangulation is already in use by phone companies around the world, including our national carrier here in Australia, however the interfaces are in no way standardized. The Location Information Server (LIS) and the HTTP Enabled Location Delivery protocol (HELD) solve this problem.

Now that our device is capitalising on the network ecosystem, getting it into the browser is the easy part. All that’s left is a thin veneer of JavaScript.

Demand for location awareness is only going to increase. I want to start the process of rolling in the JS layer now, so that as the supporting technologies come to fruition, we have the access layer to make them useful.

In line with the XMLHttpRequest object that we’ve all come to know and love, I’ve started writing a spec for a GeographicLocationProvider object.

With XMLHttpRequest, we can write code like this:

var client = new XMLHttpRequest();
client.onreadystatechange = function()
{
    if(this.readyState == 4 && this.status == 200)
        alert(this.responseXML);
}
client.open("GET", "http://myurl.com/path");
client.send();

I want to be able to write code like this:

var provider = new GeographicLocationProvider();
provider.onreadystatechange = function()
{
    if(this.readyState == 2)
        alert(this.geographic.latitude);
}
provider.start();

Again, usage is conceptually similar to the XMLHttpRequest object:

  1. Initialize an instance of the object
  2. Subscribe to the state change event
  3. Set it free

The potential states are:

  • STOPPED. This is the state that the object is initialized in, and the state that it returns to if stop() is called.
  • RESOLVING. The object has been started, but no location information is available yet. In this state the browser could be:
    • prompting the user for permission,
    • searching for location sources (like GPS hardware, or an LIS endpoint), or
    • waiting for the location source to initialize (like connecting to satellites, or talking to the LIS)
  • TRACKING. A location source has been found and location data is ready to be queried from the provider.
  • UNAVAILABLE. No location data is available, and none is likely to become available. The user may have denied a privacy prompt, their security settings may have automatically denied the request, or there may be no location sources available. It is possible for the provider to return to the RESOLVING state if a location source becomes available later.

In more complex scenarios, the provider can be primed with a specific request to aid in evaluation of privacy policies and selection of location sources. For example, browsers may choose to hand over state-level civic location data without a privacy prompt. This data could also be obtained from an LIS, without needing to boot up a GPS unit. If the webpage requested highly accurate geographic location data, the browser would generally trigger a privacy prompt and boot up the most accurate location source available.

While we’ve now simplified the developer experience, the complexity of the browser implementation has mushroomed. How do we rein this in so that it’s attractive and feasible enough for browser implementers? How do we demonstrate value today?

You might have noticed that in my discussion of the JS layer I drifted away from the GeoPriv set of technologies. While any implementation should be harmonious with the concepts developed by the GeoPriv working group, we aren’t dependent upon their technology to start delivering browser-integrated location awareness today.

There are numerous location sources which can be used:

  • Statically configured location – with the network fingerprinting technology already in Vista, it would be relatively easy to prompt users for their civic location the first time location data is needed on a particular network.
  • GPS Intermediate Driver – already rolled into the Windows Mobile platform.
  • Location Information Servers – can be added to the mix later as LIS deployments become prevalent. This is the only one that is GeoPriv dependent.

The civic and geographic schemas have already been delivered by the working group as RFC 4119. There has been an incredible amount of discussion involved in developing a unified schema that can represent civic addresses for anywhere in the world, and this schema should be adopted for consistency. (Do you know the difference between states, regions, provinces, prefectures, counties, parishes, guns, districts, cities, townships, shis, divisions, boroughs, wards, chous and neighbourhoods? They do.)

Who is responsible for delivering this unified location layer?

I keep talking about the browser being responsible for managing all these location sources. Other than the JS layer, all of this infrastructure is client independent, so why don’t we just make the browser a dumb proxy to a unified location service? This service should be a component of the operating system, accessible by software clients (like Skype) and webpages via the browser proxy.

Windows Mobile has already started in the right direction with their GPS Intermediate Driver, however this is only one element of a wider solution.

What do I want?

  1. I want to see a “Location” icon in the settings page of my mobile device, the Control Panel of my Vista laptop and the System Preferences panel of my Mac.
  2. I want the browsers to expose the GeographicLocationProvider object for JS clients. (The start of this specification is online now already.)
  3. I want the browsers to proxy location requests to the operating system store, along with hints like which zone the website is in.
  4. I want the operating system store to be extensible, implementing its own provider model which allows 3rd party developers to supply their own location data from software or hardware services as new discovery mechanisms are developed. We shouldn’t have to wait for widespread adoption before sources are surfaced into the store, and this should be open to software developers as well as driver developers.
  5. I want the operating system store to be accessible by any application, including 3rd party browsers.

How am I going to make this happen?

Dunno.

Right now, I’m documenting the start of what will hopefully be a fruitful conversation. Participate in the conversation.

Mark Pesce to keynote Remix Australia 2008

It was great to see this afternoon’s announcement on the Remix website that Mark Pesce will be taking the top presentation slot.

Mark did the locknote at last year’s Web Directions conference in Sydney with his “Mob Rules” talk. He discussed some amazing social aspects of the internet and, in a more general sense, the unprecedented spread of communications in poor communities. Over the last decade we’ve gone from more than 50% of the world’s population never having made a phone call, to over half the population now owning a mobile phone. The social impacts of this are quite surprising, in ways that people never could have predicted.

I highly recommend watching the Mob Rules talk when you get a chance. You can watch it all online at http://www.webdirections.org/resources/mark-pesce.

He’s an amazing presenter, whose presentation technique reminds me a lot of Al Gore (another speaker I have immense respect for).

I look forward to seeing what he’ll deliver at Remix.

Using cookies in ASP.NET

For a comparison of various ASP.NET state management techniques (including cookies) and their inherent advantages and disadvantages, see my previous post – Managing State in ASP.NET – which bag to use and when. This post covers the purely technical side of dealing with cookies in ASP.NET.


I’m glad the ASP.NET team gave us access as raw as they did, but it also means that you need to have an understanding of how cookies work before you use them. As much as it might seem otherwise, you can’t just jump in and use them straight away.

First of all, you need to understand how they are stored and communicated. Cookies are stored on the user’s local machine, actually on their hard-drive. Every time the browser performs a request, it sends the relevant cookies up with the request. The server can then send cookies back with the response, and these are saved or updated on the client accordingly.

Now you need to remember that cookies are uploaded with every request, so don’t go storing anything too big in there, particularly as most users have uplinks which are much slower than their downlinks. With this in mind, you should generally try and just store an id in the cookie and persist the actual data in a database on the server or something like that.

Finally, you only need to send back cookies that have changed with the response, somewhat like a delta. This introduces some intricacies, like “how do you delete a cookie?”, which I’ll discuss in a sec.

Reading Cookies

This is the easy part. Because cookies are uploaded with every request, you’ll find them in the Request.Cookies bag. You can access cookies by a string key (Request.Cookies["mycookie"]) or enumerate them. Cookies can contain either a single string value, or a dictionary of string values all within the one cookie. You can access these by Request.Cookies["mycookie"].Value and Request.Cookies["mycookie"].Values["mysubitem"] respectively.
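
For example (the cookie names here are made up, and you should always check for null because the cookie may simply not have been sent):

// Reading a single-value cookie – it may not exist, so check for null first
HttpCookie userCookie = Request.Cookies["mycookie"];
if (userCookie != null)
{
    string value = userCookie.Value;
}

// Reading a sub-value out of a multi-value cookie
HttpCookie prefsCookie = Request.Cookies["preferences"];
if (prefsCookie != null)
{
    string theme = prefsCookie.Values["theme"];   // null if that sub-item isn't there
}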

Creating Cookies

The Request.Cookies bag is writeable, but don’t let that deceive you. Adding cookies here isn’t going to help you, because this is just the upload side. To create a new cookie, we need to add it to the Response.Cookies bag. This is the bag that is written back to the client with the page, and thus it plays the role of our “diff”.

The HttpCookie constructor exposes very simple name and value properties:

new HttpCookie("mycookie", "myvalue")

To get some real control over your cookies though, take a look at the properties on the HttpCookie before you call Response.Cookies.Add.

  • Domain lets you restrict cookie access to a particular domain.
  • Expires sets the absolute expiry date of the cookie as a point in time. You can’t actually create a cookie which lives indefinitely, so if that’s what you’re trying to achieve just set it to DateTime.UtcNow.AddYears(50).
  • HttpOnly lets you restrict the cookie to server side access only, and thus prevent locally running JavaScript from seeing it.
  • Path lets you restrict cookie access to a particular request path.
  • Secure lets you restrict cookie access to HTTPS requests only.

If a cookie is not accessible to a request, it will just not be uploaded with the request and the server will never know it even existed.
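
Pulling that together, here’s a quick sketch of creating a longer-lived, locked-down cookie (the name, value and path are made up for the example):

HttpCookie cookie = new HttpCookie("mycookie", "myvalue");
cookie.Expires = DateTime.UtcNow.AddYears(1);   // absolute expiry – there's no "forever"
cookie.HttpOnly = true;                         // hide it from client-side JavaScript
cookie.Path = "/members";                       // only send it with requests under /members
Response.Cookies.Add(cookie);                   // queue it to go back with this response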

Updating Cookies

Any cookie that exists in Response.Cookies will get sent back to the client. The browser will then save each of those cookies, overwriting any cookie that has the same key and belongs to the same domain, path, etc.

This means that to update the value of a cookie, you actually have to create a whole new cookie in the Response.Cookies collection that will then overwrite the original value.

Updating the value of cookies is rare though, because as suggested earlier you should only be aiming to save an id in the cookie, and ids don’t generally have to change.

Deleting Cookies

As with updating, this is rare, but sometimes you’ve gotta do it.

We know that we only have to send back cookies that we want to update, so how do we delete a cookie? Not specifying it in the response just tells the browser it hasn’t changed.

Well, this is fun. 🙂

Create a new cookie in the Response.Cookies bag with the same key as the one you are trying to delete, then set the expiry date of this cookie to be in the past. When the client browser attempts to save the cookie it will overwrite what was there with an expired cookie, thus effectively deleting the original cookie.

Remember that your user’s clock may be offset from your server’s, so play it safe and set the expiry date to at least DateTime.Now.AddHours(-24);
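
In code, that looks something like this (again, the cookie name is made up):

// "Delete" a cookie by sending back one with the same key that has already expired
HttpCookie expired = new HttpCookie("mycookie");
expired.Expires = DateTime.Now.AddHours(-24);   // comfortably in the past, allowing for clock skew
Response.Cookies.Add(expired);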

Managing State in ASP.NET – which bag to use and when

There’s been some discussion on the Readify mailing lists lately about all the different types of ASP.NET state mechanisms. There didn’t seem to be a good comparison resource online, so I thought it’d be my turn to write one.

Session State

The most commonly used and understood state bag is the good old Session object. Objects you store here are persisted on the server, and available between requests made by the same user.

  • Security isn’t too much of an issue, because the objects are never sent down to the client. You do need to think about session hijacking though.
  • Depending on how ASP.NET is configured, the objects could get pushed back to a SQL database or an ASP.NET state server which means they’ll need to be serializable.
  • If you’re using the default in-proc storage mode you need to think carefully about the amount of RAM potentially getting used up here.
  • You might lose the session on every request if the user has cookies disabled, and you haven’t enabled cookie-less session support, however that’s incredibly rare in this day and age.

Usage is as simple as:

Session["key"] = "yo!";
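
Just remember that the bag stores plain objects, so you need to cast on the way out and cope with the key not being there. A quick sketch (the key is made up):

Session["basketId"] = 42;                      // needs to be serializable if you're not using in-proc storage
int? basketId = Session["basketId"] as int?;   // everything comes back out as object, so cast it
if (basketId.HasValue)
{
    // use basketId.Value
}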

Application State

Application state is also very commonly used and understood because it too is a hangover from the ASP days. It is very similar to session state, however it is a single state bag shared by all users across all requests for the life of the application.

  • Security isn’t really an issue at all here because once again, the objects are never sent over the wire to the client. With application state, you also don’t have the risk of session hijacking.
  • Everything in the bag is shared by everyone, so don’t put anything user specific here.
  • Anything you put here will hang around in memory like a bad smell until the application is recycled, or you explicitly remove it so be conscious of what you’re jamming in to memory.

I can’t remember ever seeing a legitimate use of application state in ASP.NET. Generally using Cache is a better solution – as described below, it too is shared across all requests, but it does a very good job of managing its content lifecycle.

I’d love to know why the ASP.NET team included application state, other than to pacify ASP developers during their migration to the platform.

Usage is as simple as:

Application["key"] = "yo!";

HttpContext / Request State

Next up we have HttpContext.Current.Items. I haven’t come across a good name for this anywhere, so I generally call it “Request State”. I think that name clearly indicates its longevity – that is, only for the length of the request.

It is designed for passing data between HTTP modules and HTTP handlers. In most applications you wouldn’t use this state bag, but it’s useful to know that it exists. Also, because it doesn’t get persisted anywhere you don’t need to care about serialization at all.

Usage is as simple as:

HttpContext.Current.Items.Add("key", "yo!");
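
As a sketch of the module-to-handler scenario (the key and the stopwatch idea are just examples of mine):

// In an HttpModule, early in the request – stash something for later in the pipeline
HttpContext.Current.Items["requestTimer"] = System.Diagnostics.Stopwatch.StartNew();

// Later, in a page or handler servicing the same request
var timer = HttpContext.Current.Items["requestTimer"] as System.Diagnostics.Stopwatch;
if (timer != null)
{
    // timer.Elapsed shows how long the request has taken so far
}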

View State

Ah … the old view state option that sends chills down the spine of any semantic web developer who longs for the days when the web worked like the web instead of winforms hacked into HTML. (Don’t worry – ASP.NET MVC lets us return to those glory days!) But enough with my whining …

View state is used to store information in a page between requests. For example, I might pull some data into my page the first time it renders, but when a user triggers a postback I want to be able to reuse this same data.

While it makes life easier for us young drag-n-drop developers, it is a force to be reckoned with carefully.

  • View state gets stored into the page, and if you save the wrong content into it you’ll rapidly be in for some big pages. I’ve seen ASP.NET pages with 10KB of HTML and 1.2MB of view state. Have a think about how long that page took to load!
  • It’s generally used for controls to be able to remember things between requests, so that they can rebuild themselves after a postback. It’s not very often that I see developers using view state directly, but there are some legitimate reasons for doing so.
  • Each control has its own isolated view state bag. Remember that pages and master pages each inherit from Control, so they have their own isolated bags too. View state is meant to support the internal plumbing of a control, and thus if you find that the bags being isolated is an issue for you then it’s a pretty good indicator that you’ve taken the wrong approach with your architecture.
  • It can be controlled at a very granular level – right down to enabling or disabling it per control. There’s an EnableViewState property on every server control, every page (in the page directive at the top), every master page (also in the directive at the top), and an application-wide setting in web.config. These are all on by default, but the more places you can disable it in your app, the better.

A full explanation of ViewState is beyond the scope of this article, but I highly recommend that every ASP.NET developer read TRULY Understanding ViewState by Dave Reed.

If you want a simpler discussion, be sure to take a look at my previous post – Writing Good ASP.NET Server Controls.

Usage is as simple as:

ViewState["key"] = "yo!";
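
As an example of using it directly, this is roughly the “load once, reuse after postback” pattern described above (the key is made up, and remember to keep the values small):

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // first render – load the data and remember it in the page
        ViewState["reportDate"] = DateTime.Today;
    }
    else
    {
        // postback – pull the same value back out instead of re-loading it
        DateTime reportDate = (DateTime)ViewState["reportDate"];
    }
}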

Control State

Control state is somewhat similar to view state, except that you can’t turn it off.

The idea here is that some controls need to persist values across requests no matter what (for example, if it’s hard to get the same data a second time ’round).

I’m a bit hesitant about the idea of control state. It was only added in ASP.NET 2.0 and in many ways I wish they hadn’t. Sure, some controls will break completely if you do a postback without view state having been enabled. What if I never expect my page to postback though? Maybe I want to be able to turn it off still. Unfortunately I think this comes from the arrogance that is ASP.NET not trusting the browser to even wipe its own ass … even the most personal of operations must go via a server side event, so you’ll always do a postback – right? Wrong.

If you’re a control developer, please be very very conscious about your usage of control state.

Usage is a bit more complex … you need to override the LoadControlState and SaveControlState methods for your control. MSDN is a good place to find content for this – take a look at their Control State vs. View State Example.
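
If you don’t want to go digging through MSDN right now, the pattern looks roughly like this (a sketch only – the control and field names are made up):

public class MyControl : System.Web.UI.WebControls.WebControl
{
    private string criticalValue;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        Page.RegisterRequiresControlState(this);   // opt this control in to control state
    }

    protected override object SaveControlState()
    {
        // bundle the base class's state together with our own value
        return new System.Web.UI.Pair(base.SaveControlState(), criticalValue);
    }

    protected override void LoadControlState(object savedState)
    {
        var pair = (System.Web.UI.Pair)savedState;
        base.LoadControlState(pair.First);
        criticalValue = (string)pair.Second;
    }
}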

Cache

Cache is cool. As a general rule, it’s what you should be using instead of Application.

Just like application, it’s shared between all requests and all users for the entire life of your application.

What’s cool about Cache is that it actually manages the lifecycle of its contents rather than just letting them linger around in memory for ever ‘n ever. It facilitates this in a number of ways:

  • absolute expiry (“forget this entry 20 minutes from now”)
  • sliding expiry (“forget this entry if it’s not used for more than 5 minutes”)
  • dependencies (“forget this entry when file X changes”)

Even cooler yet, you can:

  • Combine these features to have rules like “forget this entry if it’s not used for more than 5 minutes, or if the file we loaded it from changes”. (Note that a single cache entry accepts either an absolute or a sliding expiry – not both – alongside any dependencies.)
  • Handle an event that tells you when something has been invalidated and is thus about to be removed from the cache. This event is per cache item, so you subscribe to it when you create the item.
  • Set priorities per item so that it can groom the lower priority items from memory first, as memory is needed.
  • With .NET 2.0, you can point a dependency at SQL so when a particular table is updated the cache automatically gets invalidated. If you’re targeting SQL 2005 it maintains this very intelligently through the SQL Service Broker. For SQL 2000 it does some timestamp polling, which is still pretty efficient but not quite as reactive.

Even with all this functionality, it’s still pathetically simple to use.

Check out the overloads available on Cache.Insert().
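
As a taste, here’s a sketch of one of the richer overloads, from a page’s code-behind (the key and file path are made up; assumes using System.IO and System.Web.Caching):

// Cache a file's contents for up to 20 minutes, and drop it immediately if the file changes
string path = Server.MapPath("~/App_Data/products.xml");
Cache.Insert(
    "products",                          // key
    File.ReadAllText(path),              // value
    new CacheDependency(path),           // invalidate when the file changes
    DateTime.UtcNow.AddMinutes(20),      // absolute expiry
    Cache.NoSlidingExpiration,           // a single entry takes absolute *or* sliding expiry
    CacheItemPriority.Normal,            // lower priority items get groomed from memory first
    (key, value, reason) => { /* item was removed – reload or log here */ });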

Profile

I don’t really think of profile as state. It’s like calling your database “state” – it might technically be state, but who actually calls it that?! :p

The idea here is that you can store personalisation data against a user’s profile object in ASP.NET. The built in framework does a nice job of remembering profiles for anonymous users as well as authenticated users, as well as funky things like migrating an anonymous user’s state when they signup, etc.

By default you’d run the SQL script they give you to create a few tables, then just point it at a SQL database and let the framework handle the magic.

I don’t like doing this because it stores all of the profile content in serialized binary objects making them totally opaque in SQL and non-queryable. I like the idea of being able to query out data like which theme users prefer most. There’s a legitimate business value in being able to do so, as trivial as it may sound. (If you think it sounds trivial, go read Super Crunchers – Why Thinking-by-Numbers Is The New Way To Be Smart by Ian Ayres.)

This problem is relatively easily resolved by making your own provider. You still get all the syntactic and IDE sugar that comes with ASP.NET Profiles, but you get to take control of the storage.

Cookies

Cookies are how the web handles state, and can often be quite useful to interact with directly from ASP.NET. ASP.NET uses cookies itself to store values like the session ID (used for session state) and authentication tokens. That doesn’t stop us from using the Request.Cookies and Response.Cookies collections ourselves though.

  • Security is definitely an issue because cookies are stored on the client, and thus can be very easily read and tampered with (they are nothing more than text files).
  • Beware that cookies can often be accessed from JavaScript too, which means that if you’re hosting 3rd party script then it could steal cookie contents directly on the client – a major XSS risk. To avoid this, you can flag your cookies as “HTTP only”.
  • They are uploaded to the server with every request, so don’t go sticking anything of substantial size in there. Even on my broadband connection, my uplink is 1/24th the speed of my downlink. Typically you will just store an id or a token in the cookie, and the actual content back on the server.
  • Cookies can live for months or even years on a user’s machine (assuming they don’t explicitly clear them) meaning they’re a great way of persisting things like shopping carts between user visits.

I’m glad the ASP.NET team gave us access as raw as they did, but it also means that you need to have an understanding of how cookies work before you use them. As much as it might seem otherwise, you can’t just jump in and use them straight away.

For a rather in-depth look at exactly how cookies work, and how to use them in ASP.NET, look at my post: Using cookies in ASP.NET.

Query Strings

The query string is about as simple as you can get for state management. It lets you pass state from one page, to another, even between websites.

I’m sure you’re all familiar with query strings on the end of URLs like ShowProduct.aspx?productId=829.

Usage is as simple as:

string productId = Request.QueryString["productId"];
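
Just remember that query string values arrive as strings and can be tampered with, so parse them defensively. A quick sketch (the page and parameter are from the example above):

// Passing state to another page
Response.Redirect("ShowProduct.aspx?productId=829");

// Reading it on the other side
int productId;
if (!int.TryParse(Request.QueryString["productId"], out productId))
{
    // handle a missing or malformed id
}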


I hope that’s been a useful comparison for you. If you think of any other ways of storing state in ASP.NET that you think I’ve missed, feel free to comment and I’ll add them to the comparison. 🙂

Update 15Apr08: Added cookies and query strings. Hidden form fields still to come.

Remix 2008: Developing great applications using ASP.NET MVC and ASP.NET AJAX

Continuing my recent focus on ASP.NET MVC, I’ll now be presenting it at the upcoming Remix conference.

Microsoft Remix 2008 is being held in Sydney on the 20th May and Melbourne on the 22nd May.

My talk is:

Developing great applications using ASP.NET MVC and ASP.NET AJAX: Learn how to use ASP.NET MVC to take advantage of the model-view-controller (MVC) pattern in your favourite .NET Framework language for writing business logic in a way that is de-coupled from the views of the data. Then add ASP.NET AJAX for a highly interactive front end.

Get all the details from the Remix site and get your ticket soon. ($199 is very cheap for a full copy of Expression Suite!)