Dead vs. Done

Back in 2009 and 2010, Damian Edwards and I worked on a project called Web Forms MVP.

It grew out of a problem that we saw consistently across multiple consulting engagements. We got tired of solving it from scratch each time, so we wrote a framework, released it as open source, and then implemented it on client projects (with the clients' understanding). Clients got free code. We got to use it in the wild, with real load and real challenges. We got to re-use it. The community got to use it too.

It has been downloaded 20k+ times, which is pretty significant considering it was around before NuGet existed. (Although we were also one of the first 25 packages on the feed.)

In the last 12 to 18 months, I’ve started seeing “Is Web Forms MVP dead?” being asked. This blog post answers that question directly in the context of Web Forms MVP, but also discusses the broader idea of dead vs. done.

Here’s a specific question I was asked:

I am a little bit worried about the fact that there have been no code commits since Sept 2011. Will you continue the project, or will it fall in with the forgotten ones?

And here was the answer I wrote:

I have a mixed answer for you here.

On the one hand, we cut seven CTP builds, then a v1.0, then a 1.1, 1.2, 1.3 and 1.4. That means we developed the library, tested hundreds of millions of production web requests through it, reached a feature point we wanted to call 1.0, then iterated on top of it. In the 15 months since cutting 1.0, there are only six issues on the Codeplex site: one is a question, one a misunderstanding, one a demo request, and one a feature request that I don’t really agree with anyway.

At this point, Web Forms MVP 1 is “done”, not “dead”. I’m just slack about closing off issues.

Now, that opens up some new questions:

If you find a bug in 1.4 that you can’t work around, are you left out in the cold? No. First up, you have all the code and build scripts (yay for open source!), so there’s nothing we can do to prevent you from making a fix even if we wanted to (which we never would). Secondly, if you send a pull request via Codeplex, we’ll be happy to accept your contribution and push it to the official package.

Will there be a Web Forms MVP 2? At this time, from my personal perspective, I’ll say ‘highly unlikely’. As a consultant, I haven’t been on a Web Forms engagement in over two years. That’s not to say there isn’t still a place for Web Forms and Web Forms MVP, just that I’m not personally working in that area, so I’m not well placed to innovate on the library. Damian has lots of great ideas for things to do, and since starting Web Forms MVP has actually become the Program Manager for ASP.NET Web Forms at Microsoft. That being said, his open source efforts of late are heavily focussed on SignalR.

Should there be a Web Forms MVP 2? Maybe. It’d be nice to bring it in line with ASP.NET 4.5, but I’m hard pressed to know what’s needed in this area given I’m not on a Web Forms engagement. Without a clear need, I get rather confused by people calling for a new version of something just so they can feel comfortable that the version number incremented.

I hope that gives you some clarity and confidence around what is there today, what will stay, and where we’re going (or not going).

Some projects definitely die. They start out as a great idea, and never make it to a release. I find it a little sad that that’s the only categorisation that seems to be available though.

I hope I’m not just blindly defending my project, but I do genuinely believe that we hit ‘done’.

From here, Web Forms MVP might disappear into the background (it kind of has). The community might kick off a v2. A specific consumer might make their own fork with a bug fix they need. Those are all next steps, now that we’ve done a complete lifecycle.

In the meantime, people are asking if the project is dead, yet not raising any bugs or asking for any features. This just leaves me confused.

New Talks: “Neo4j in a .NET World (Graph DBs)” and “You’re in production. Now what?”

Last week I was lucky enough to join an array of great speakers at the NDC Oslo conference.

The recordings of both of my talks are now online, along with 141 other excellent talks you should watch.

Neo4j in a .NET World (Graph DBs)

This year, a small team of developers delivered an ASP.NET MVC app with a Neo4j back end, all running in Azure. This isn’t a POC; it’s a production system. Also, unlike most graph DB talks, it’s not a social network!

https://vimeo.com/43676873

You’re in production. Now what?

A tiny subset of your users can’t log in: they get no error message, yet have both cookies and JavaScript enabled. They’ve phoned up to report the problem and aren’t capable of getting a Fiddler trace. You’re serving a million hits a day. How do you trace their requests and determine the problem without drowning in logs?

Marketing have requested that the new site section your team has built goes live at the same time as a radio campaign kicks off. This needs to happen simultaneously across all 40 front-end web servers, and you don’t want to break your regular deployment cadence while the campaign gets perpetually delayed. How do you do it?

Users are experiencing 500 errors for a few underlying reasons, some with workarounds and some without. The customer service call centre need to be able to rapidly triage incoming calls and provide the appropriate workaround where possible, without displaying sensitive exception detail to end users or requiring synchronous logging. At the same time, your team needs to prioritise which bugs to fix first. What’s the right balance of logging, error numbers and correlation IDs?

These are all real scenarios that Tatham Oddie and his fellow consultants have solved on large scale, public websites. The lessons though are applicable to websites of all sizes and audiences.

https://vimeo.com/43624434

5 Minute Screencast: Stop helping your users. Help yourself.

A few weeks ago I spoke at Web Directions’ What Do You Know event. The event consisted of 10 speakers each doing a 5 minute presentation about some technique or idea that they find useful in web development.

Here’s my talk:

Ros Hodgekis won the night with her awesome email-related talk (all delivered with champagne glass still in hand):

.NET Rocks! #687: ‘Tatham Oddie Makes HTML 5 and Silverlight Play Nice Together’

I spoke to Carl + Richard on .NET Rocks! last week about using HTML5 and Silverlight together. We also covered a bit of Azure toward the end.

The episode is now live here:

http://www.dotnetrocks.com/default.aspx?showNum=687

JT – Sorry, I referred to you as “the other guy I presented with” and never intro-ed you. 😦

Everyone else – JT is awesome.

Released: FormsAuthenticationExtensions

What it does

Think about a common user table. You probably have a GUID for each user, but you want to show their full name and maybe their email address in the header of each page. This commonly ends up being an extra DB hit (albeit hopefully cached).

There is a better way though! A little known gem of the forms authentication infrastructure in .NET is that it lets you embed your own arbitrary data in the ticket. Unfortunately, setting this is quite hard – upwards of 15 lines of rather undiscoverable code.
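For context, here’s roughly what you’re in for if you do it by hand (a sketch only; the cookie settings and expiry would need to match your own configuration, and user is a hypothetical object standing in for your own user type):

 // Build, encrypt and attach the ticket manually.
 var userData = "name=Jane Smith&emailAddress=jane@example.com";
 var ticket = new FormsAuthenticationTicket(
     2,                           // version
     user.UserId.ToString(),      // name stored in the ticket
     DateTime.Now,                // issue time
     DateTime.Now.AddMonths(1),   // expiry
     true,                        // persistent
     userData);                   // our arbitrary payload
 var encryptedTicket = FormsAuthentication.Encrypt(ticket);
 var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket)
 {
     HttpOnly = true,
     Expires = ticket.Expiration
 };
 Response.Cookies.Add(cookie);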

Sounds like a perfect opportunity for another NuGet package.

How to get it

Library: Install-Package FormsAuthenticationExtensions

(if you’re not using NuGet already, start today)

Source code: formsauthext.codeplex.com

How to use it

Using this library, all you need to do is add:

 using FormsAuthenticationExtensions; 

then change:

 FormsAuthentication.SetAuthCookie(user.UserId, true); 

to:

 // NameValueCollection lives in System.Collections.Specialized
 var ticketData = new NameValueCollection {
     { "name", user.FullName },
     { "emailAddress", user.EmailAddress }
 };
 new FormsAuthentication().SetAuthCookie(user.UserId, true, ticketData);

Those values will now be encoded and persisted into the authentication ticket itself. No need for any form of session state, custom cookies or extra DB calls.

To read the data out at a later time:

 var ticketData = ((FormsIdentity) User.Identity).Ticket.GetStructuredUserData();
var name = ticketData["name"];
var emailAddress = ticketData["emailAddress"];

If you want something even simpler, you can also just pass a string in:

 new FormsAuthentication().SetAuthCookie(user.UserId, true, "arbitrary string here"); 

and read it back via:

 var userData = ((FormsIdentity) User.Identity).Ticket.UserData; 

Things to Consider

Any information you store this way will live for as long as the ticket.

That can be quite a while if users are active on your application for long periods of time, or if you give out long-term persistent sessions.

Whenever one of the values stored in the ticket needs to change, all you need to do is call SetAuthCookie again with the new data, and the cookie will be updated accordingly. In our user name / email address example, this is actually quite advantageous: if the user were to update their display name or email address, we’d just update the ticket with the new values, and the updated ticket would then be supplied on future requests.

In web farm environments this is about as good as it gets – we don’t need to go back to the DB to load this information for each request, yet we don’t need to worry about invalidating a cache across machines. (Any form of shared, invalidatable cache in a web farm is generally bad.)
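In practice that looks something like this (a sketch; ProfileViewModel and userRepository are hypothetical stand-ins for your own types):

 [HttpPost]
 public ActionResult UpdateProfile(ProfileViewModel model)
 {
     // Persist the change as normal...
     var user = userRepository.Update(
         User.Identity.Name, model.FullName, model.EmailAddress);

     // ...then re-issue the ticket so future requests carry the new values.
     var ticketData = new NameValueCollection {
         { "name", user.FullName },
         { "emailAddress", user.EmailAddress }
     };
     new FormsAuthentication().SetAuthCookie(user.UserId, true, ticketData);

     return RedirectToAction("Index");
 }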

Size always matters.

The information you store this way is embedded in the forms ticket, which is then encrypted and sent back to the user’s browser. On every single request after that, the entire cookie gets sent back up the wire and decrypted. Storing any significant amount of data here is obviously going to be an issue, so keep it to absolutely no more than a few simple values.
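If you want a rough feel for the overhead, you can encrypt a representative ticket and measure it (my own sanity check, not part of the library):

 var ticket = new FormsAuthenticationTicket(
     2, "user-id", DateTime.Now, DateTime.Now.AddMonths(1),
     true, "name=Jane Smith&emailAddress=jane@example.com");

 // FormsAuthentication.Encrypt returns the string that ends up in the
 // cookie, so its length approximates the per-request overhead.
 var payloadLength = FormsAuthentication.Encrypt(ticket).Length;
 Debug.WriteLine("Ticket payload: " + payloadLength + " characters");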

Twavatar – coming to a NuGet server near you

Yet another little micro-library designed to do one thing, and do it well:

twavatar.codeplex.com

Install-Package twavatar

I’ve recently been working on a personal project that lets me bookmark physical places.

To avoid having to build any of the authentication infrastructure, I decided to build on top of Twitter’s identity ecosystem. Any user on my system has a one-to-one mapping back to a Twitter account. Twitter get to deal with all the infrastructure around sign ups, forgotten passwords and so forth. I get to focus on features.

The other benefit I get is being able to easily grab an avatar image and display it on the ‘mark’ page like this:

[Screenshot: a ‘mark’ page showing the owner’s Twitter avatar]

(Sidenote: You might also notice why I recently built relativetime and crockford-base32.)

Well, it turns out that grabbing somebody’s Twitter avatar isn’t actually as easy as one might hope. The images are stored on Amazon S3 under a URL structure that requires you to know the user’s Twitter Id (the numeric one) and the original file name of the image they uploaded. To throw another spanner in the works, if the user uploads a new profile image, the URL changes and the old one stops working.

For most Twitter clients this isn’t an issue because the image URL is returned as part of the JSON blob for each status. In our case, it’s a bit annoying though.

Joe Stump set out to solve this problem by launching tweetimag.es. This service lets you use a nice URL like http://img.tweetimag.es/i/tathamoddie_n and leaves them to worry about all the plumbing that makes it work. Thanks Joe!

There’s a risk though … This is a free service, with no guarantees about its longevity. As such, I didn’t want to hardcode too many dependencies on it into my website.

This is where we introduce Twavatar. Here’s what my MVC view looks like:

 @Html.TwitterAvatar(Model.OwnerHandle) 

Ain’t that pretty?

We can also ask for a specific size:

 @Html.TwitterAvatar(Model.OwnerHandle, Twavatar.Size.Bigger) 

The big advantage here is that if / when tweetimag.es disappears, I can just push an updated version of Twavatar to NuGet and everybody’s site can keep working. We’ve cleanly isolated the current implementation into its own library.
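If you’re wondering what that isolation looks like, the general shape is a plain HTML helper along these lines (a sketch from the outside, not Twavatar’s actual source):

 public static class TwitterAvatarExtensions
 {
     // All knowledge of the current avatar provider lives in this one
     // method. If tweetimag.es disappears, this changes and callers don't.
     public static IHtmlString TwitterAvatar(this HtmlHelper html, string handle)
     {
         var url = string.Format("http://img.tweetimag.es/i/{0}_n",
             HttpUtility.UrlEncode(handle));
         var tag = new TagBuilder("img");
         tag.MergeAttribute("src", url);
         tag.MergeAttribute("alt", handle);
         return new HtmlString(tag.ToString(TagRenderMode.SelfClosing));
     }
 }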

It’s scenarios like this where NuGet really shines.

Update 1: Paul Jenkins pointed out a reasonably sane API endpoint offered by Twitter in the form of http://api.twitter.com/1/users/profile_image/tathamoddie?size=bigger. There are two problems with this API. First up, it issues a 302 redirect to the image resource rather than returning the data itself. This adds an extra DNS resolution and HTTP round trip to the page load. Second, the documentation for it states that it “must not be used as the image source URL presented to users of your application” (complete with the bold). To meet this requirement you’d need to call it from your application server-side, implement your own caching and so forth.
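For the record, consuming that endpoint server-side would look something like this (a sketch only; error handling and a sensible cache policy are omitted):

 // Resolve the real image URL without following the redirect ourselves.
 var request = (HttpWebRequest)WebRequest.Create(
     "http://api.twitter.com/1/users/profile_image/tathamoddie?size=bigger");
 request.AllowAutoRedirect = false;

 string imageUrl;
 using (var response = (HttpWebResponse)request.GetResponse())
 {
     // The 302 response carries the actual image location.
     imageUrl = response.Headers["Location"];
 }

 // Cache against the handle so we only pay the round trip occasionally.
 HttpRuntime.Cache.Insert("avatar:tathamoddie", imageUrl, null,
     DateTime.UtcNow.AddHours(1), Cache.NoSlidingExpiration);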

The tweetimag.es service most likely uses this API under the covers, but they do a good job of abstracting all the mess away from us. If the tweetimag.es service was ever to be discontinued, I imagine I’d update Twavatar to use this API directly.

Node.js on Windows

Thanks to Sharkie’s ongoing organisation efforts, SydJS is a thriving monthly JavaScript meeting here in Sydney. This evening they welcomed me along to talk about Node.js on Windows. Afraid of a mostly non-Microsoft crowd, I rocked up armed with all the anti-Unix jokes I had, but they turned out to be quite friendly and it was a fun little talk.

Here’s what I ran through…

Update 18th July 2011: The latest official builds of node.js now come with a Windows executable. This is thanks to support from Microsoft.

Cygwin

Cygwin gives you a full POSIX environment on Windows. It’s great for running apps designed for Unix, but it’s pretty heavy and not very … Windows-ey. It’d be like creating a “My Documents” folder on Ubuntu.

All that being said, it’s the simplest and most reliable way of getting node running on Windows.

Works for 0.2.6 -> 0.3.1 and 0.4.0+. Anything between 0.3.1 and 0.4.0 won’t compile.

The steps (and common pitfalls) are well documented at https://github.com/joyent/node/wiki/Building-node.js-on-Cygwin-(Windows)

Once you’ve got it running in Cygwin, if you jump out to a standard Windows command prompt and run c:\Cygwin\usr\local\bin\node.exe, you’ll get a nice big error. More on this later.

MinGW

The next step up from Cygwin is to compile node under MinGW. MinGW (Minimalist GNU for Windows) provides the bare minimum set of libraries required to compile Unix-y apps on Windows, avoiding the full POSIX stranglehold that Cygwin brings along.

Works for 0.3.6+.

Again, there are well documented steps for this: https://github.com/joyent/node/wiki/Building-node.js-on-mingw

Once you’ve got it running in MinGW, jump out to a standard Windows command prompt again and run c:\wherever-you-put-your-git-clone\node.exe. You’ll get a nice big error again.

Standalone

Now that we’ve compiled it with MinGW (you did that in the last step, right?) we’re ready to run it on Windows natively.

From a native Windows command prompt:

  1. Create a new folder (mkdir node-standalone)
  2. Copy in the node.exe you compiled in MinGW (xcopy c:\wherever-you-put-your-git-clone\node.exe node-standalone)
  3. Copy in the MinGW helper libraries (xcopy c:\mingw\bin\lib*.dll node-standalone)
  4. Run node-standalone\node
  5. Voila! It works!

Running as a Service

Next up, I wanted to host node as a service, just like IIS. This way it’d start up with my machine, run in the background, restart automatically if it crashes and so forth.

This is where nssm, the non-sucking service manager, enters the picture. This tool lets you host a normal .exe as a Windows service.

Here are the commands I used to setup an instance of the SydJS website as a service:

nssm.exe install sydjs-node c:\where-i-put-node-standalone\node.exe c:\code\SydJS\server.js
net start sydjs-node

What We Achieved

We now have node.js, running natively on Windows, as a service. Enjoy!

(Just please don’t use this in production – it’s really not ready for that yet.)