Podcast: Graph databases and Neo4j, with Richard and Carl from .NET Rocks!

Listen to .NET Rocks! #949

This is what happens when you phone me up at 6am on a Saturday morning: you get a surprisingly energetic Tatham babbling about Neo4j and graph databases for an hour. There are even some nice bird sounds in the background towards the end, as they finally woke up too.

We discussed what graph databases are (as opposed to charts!), some domains where you might use one, how they relate to other stores like document and relational databases, query performance, bad database jokes, my obsession with health data (go.tath.am/weight, go.tath.am/fitbit, go.tath.am/cycling, go.tath.am/blood), the Cypher query language, ASCII art as a job, Readify/Neo4jClient for the .NET folks, and some Neo4j+NewRelic.

Useful? Help the Children.

If this podcast was useful to you, perhaps you could donate to my next charity cycle? I’ll be cycling the first leg of the 26 day, 3,716km Variety cycle. Variety, the Children’s Charity is a national not-for-profit organisation dedicated to transforming the lives of children in need. Variety’s vision is for all children to attain their full potential, regardless of ability or background – and their mission is to enrich, improve, and transform the lives of seriously ill, disadvantaged and special needs children.

httpstat.us, and good ways to learn new tech stacks

Besides just being a cool domain name, it’s actually quite a useful tool. Wondering how your code responds to an HTTP 418 response? Point it at httpstat.us/418 and find out.
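As a quick sketch, here’s how you might poke it from .NET code. This assumes .NET 4.5’s HttpClient; any HTTP client (or even curl) works just as well:

```csharp
// Quick sketch: exercising an error-handling path by requesting a
// specific status code from httpstat.us. Assumes .NET 4.5's HttpClient.
using System;
using System.Net.Http;

class StatusCodeProbe
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask httpstat.us to respond with whatever status code we name
            var response = client.GetAsync("http://httpstat.us/418").Result;
            Console.WriteLine((int)response.StatusCode); // prints 418
        }
    }
}
```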

Aaron and I originally built this as a way of learning Ruby, Sinatra, HAML, Sass, Git, Heroku and other funky tool chains. Several years on, left unmaintained, something about our app finally became incompatible with Heroku’s hosting layer and it all shut down. Our singular name really rang true here: every request returned a 503. Oops. We already had an ASP.NET-based rewrite waiting in the wings, so we pushed that up to Windows Azure Websites and everything came humming back to life.

Want to learn a new tech stack or tool chain? Skip the contrived examples and actually build something simple. It’s amazing how often these turn into cool little tools that stick around for time to come.

Web Forms MVP: Now with less cobwebs

TL;DR: http://webformsmvp.com is now just a redirect to https://github.com/webformsmvp/webformsmvp.

Back in early 2009, Damian and I released our first builds of Web Forms MVP.

GitHub and Bitbucket were each less than six months old. CodePlex was the place to be for .NET devs, and I think our code was originally in either Subversion or TFS. We needed a wiki, but CodePlex was pretty clunky for that, so we set up a MediaWiki instance on a tiny VM somewhere, running inside Microsoft Virtual Server 2005. Funnily enough, this was all getting pretty unstable: the wiki had been down almost 50% of the time in recent weeks.

Personally, I was actually a little bit surprised about how many people cared that the wiki was unavailable. This was a promising sign, so we needed to fix the problem.

Today, we migrated to GitHub.

  1. Code was converted from Hg to Git, then pushed to GitHub
  2. Wiki content was converted from MediaWiki to Markdown, then pushed to GitHub wiki
  3. Release notes were migrated to GitHub releases, against the same tags
  4. http://webformsmvp.com was redirected to https://github.com/webformsmvp/webformsmvp
  5. CodePlex was gutted of content wherever possible, and changed to link to GitHub

The project is still classed as “done” for Damian and me (see my Dead vs. Done post). While we’re not actively investing time in any further versions, publishing it to GitHub gives more reliable access to the content, and makes it easier for the community to fork the project as they see fit.

An Approach for More Structured Enums

The Need

I encountered a scenario today where a team needed to increase the structure of their logging data. Currently, logging is unstructured text – log.Error("something broke") – whereas the operations team would like clearer information about error codes, descriptions and accompanying guidance.

The first proposed solution was a fairly typical one: we would define error codes, use them in the code, then document them in a spreadsheet somewhere. This is a very common solution, and demonstrably works, but I wanted to table an alternative.

This blog post is written in the context of logging, but you can potentially extend this idea to anywhere that you’re using an enum right now.

My Goals

I wanted to:

  • support the operations team with clear guidance
  • keep the guidance in the codebase, so that it ages at the same rate as the code
  • keep it easy for developers to write log entries
  • make it easy for developers to invent new codes, so that we don’t just re-use previous ones

A Proposed Solution

Instead of an enum, let’s define our logging events like this:

public static class LogEvents
{
    public const long ExpiredAuthenticationContext = 1234;
    public const long CorruptAuthenticationContext = 5678;
}

So far, we haven’t added any value with this approach, but now let’s change the type and add some more information:

public static class LogEvents
{
    public static readonly LogEvent ExpiredAuthenticationContext = new LogEvent
    {
        EventId = 1234,
        ShortDescription = "The authentication context is beyond its expiry date and can't be used.",
        OperationalGuidance = "Check the time coordination between the front-end web servers and the authentication tier."
    };

    public static readonly LogEvent CorruptAuthenticationContext = new LogEvent
    {
        EventId = 5678,
        ShortDescription = "The authentication token failed checksum prior to decryption.",
        OperationalGuidance = "Use the authentication test helper script to validate the raw tokens being returned by the authentication tier."
    };
}

From a consumer perspective, we can still refer to these individual items much as we would enum members – logger.Error(LogEvents.CorruptAuthenticationContext) – however we can now get more detail with simple calls like LogEvents.CorruptAuthenticationContext.EventId and LogEvents.CorruptAuthenticationContext.OperationalGuidance.
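For completeness, the LogEvent type itself doesn’t need to be anything fancy. A minimal sketch (the exact definition in the sample code may differ slightly):

```csharp
// Minimal sketch of the LogEvent type: just a bag of properties that
// the LogEvents fields above can populate via object initializers.
public class LogEvent
{
    public long EventId { get; set; }
    public string ShortDescription { get; set; }
    public string OperationalGuidance { get; set; }
}
```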

More Opportunities

Adding some simple reflection code, we can expose a LogEvents.AllEvents property:

public static IEnumerable<LogEvent> AllEvents
{
    get
    {
        return typeof(LogEvents)
            .GetFields(BindingFlags.Static | BindingFlags.Public | BindingFlags.DeclaredOnly)
            .Where(f => f.FieldType == typeof(LogEvent))
            .Select(f => (LogEvent)f.GetValue(null));
    }
}

This then allows us to enforce conventions as unit tests, like saying that all of our log events should have at least a sentence or so of operational guidance:

[TestCaseSource(typeof(LogEvents), "AllEvents")]
public void AllEventsShouldHaveAtLeast50CharactersOfOperationalGuidance(LogEvents.LogEvent logEvent)
{
    Assert.IsTrue(logEvent.OperationalGuidance.Length >= 50);
}

Finally, it’s incredibly easy to either list the guidance on an admin page, or generate it to static documentation during build: just enumerate the LogEvents.AllEvents property.
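For example, a build step could render the events out to a Markdown table. This is a hypothetical sketch – the class and method names here are illustrative, not from the sample code:

```csharp
// Hypothetical sketch: render all log events to a Markdown table during
// build. Assumes the LogEvents.AllEvents property shown earlier.
using System.Linq;
using System.Text;

static class DocumentationGenerator
{
    public static string GenerateDocumentation()
    {
        var sb = new StringBuilder();
        sb.AppendLine("| Event Id | Description | Operational Guidance |");
        sb.AppendLine("| --- | --- | --- |");
        foreach (var e in LogEvents.AllEvents.OrderBy(e => e.EventId))
        {
            sb.AppendFormat("| {0} | {1} | {2} |", e.EventId, e.ShortDescription, e.OperationalGuidance);
            sb.AppendLine();
        }
        return sb.ToString();
    }
}
```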

The Code

I’ve posted some sample code to https://github.com/tathamoddie/LoggingPoc

Some interesting things in that repository are:

  • I’ve split the ‘framework’ code like the AllEvents property into a partial class so that LogEvents.cs stays cleaner.
  • I’ve written some convention tests that cover uniqueness of ids and validation of operational guidance.
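The uniqueness convention, for example, comes out to just a few lines. A sketch (the actual test in the repository may differ):

```csharp
// Sketch of a convention test: no two log events may share an event id.
[Test]
public void AllEventIdsShouldBeUnique()
{
    var duplicateIds = LogEvents.AllEvents
        .GroupBy(e => e.EventId)
        .Where(g => g.Count() > 1)
        .Select(g => g.Key)
        .ToList();

    CollectionAssert.IsEmpty(duplicateIds, "Duplicate event ids: " + string.Join(", ", duplicateIds));
}
```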

Wrap Up

There’s absolutely nothing about this solution that is technically interesting. It’s flat out boring, but sometimes those are the most elegant solutions. Jimmy already wrote about enumeration classes 5 years ago.

Dead vs. Done

Back through 2009 and 2010, Damian Edwards and I worked on a project called Web Forms MVP.

It grew out of a consistent problem that we saw across multiple consulting engagements. We got tired of solving it multiple times, so we wrote it as a framework, released it open source, and then implemented it on client projects (with client understanding). Clients got free code. We got to use it in the wild, with real load and challenges. We got to re-use it. The community got to use it too.

It has been downloaded 20k+ times, which is pretty big considering it predates NuGet. (Although, we were one of the first 25 packages on the feed too.)

In the last 12 – 18 months, I’ve started seeing “Is Web Forms MVP dead?” being asked. This blog post answers that question directly in the context of Web Forms MVP, and also discusses the idea of dead vs. done.

Here’s a specific question I was asked:

I am a little bit worried about the fact there is not code commit since sept 2011. Will you continue the project or will it fall in the forgotten ones?

And here was the answer I wrote:

I have a mixed answer for you here.

On the one hand, we cut seven CTP builds, then a v1.0, then a 1.1, 1.2, 1.3 and 1.4. That means we developed the library, tested hundreds of millions of production web requests through it, reached a feature point we wanted to call 1.0, then iterated on top of it. In the 15 months since cutting 1.0, there are only six issues on the CodePlex site: one is a question, one a misunderstanding, one a demo request, and one a feature request that I don’t really agree with anyway.

At this point, Web Forms MVP 1 is “done”, not “dead”. I’m just slack about closing off issues.

Now, that opens up some new questions:

If you find a bug with 1.4 that you can’t work around, are you left out in the cold? No. First up, you have all the code and build scripts (yay for open source!) so there’s nothing we can do to prevent you from making a fix even if we wanted to (which we never would). Secondly, if you send a pull request via CodePlex we’ll be happy to accept your contribution and push it to the official package.

Will there be a Web Forms MVP 2? At this time, from my personal perspective, I’ll say ‘highly unlikely’. As a consultant, I haven’t been on a Web Forms engagement in over 2 years. That’s not to say there isn’t still a place for Web Forms and Web Forms MVP, but that I’m just not personally working in that area so I’m not well placed to innovate on the library. Damian has lots of great ideas of things to do, and since starting Web Forms MVP has actually become the Program Manager for ASP.NET Web Forms at Microsoft. That being said, his open source efforts of late are heavily focussed on SignalR.

Should there be a Web Forms MVP 2? Maybe. It’d be nice to bring it in line with ASP.NET 4.5, but I’m hard pressed to know what is needed in this area considering I’m not on a Web Forms engagement. Without a clear need, I get rather confused by people calling for a new version of something just so they can feel comfortable that the version number incremented.

I hope that gives you some clarity and confidence around what is there today, what will stay, and where we’re going (or not going).

Some projects definitely die. They start out as a great idea, and never make it to a release. I find it a little sad that that’s the only categorisation that seems to be available though.

I hope I’m not just blindly defending my project, but I do genuinely believe that we hit ‘done’.

From here, Web Forms MVP might disappear into the background (it kind of has). The community might kick off a v2. A specific consumer might make their own fork with a bug fix they need. Those are all next steps, now that we’ve done a complete lifecycle.

In the meantime, people are asking if the project is dead, yet not raising any bugs or asking for any features. This just leaves me confused.

Upcoming Manila Presentation: Your website is in production. It’s broken. Now what?

Next month I’ll be spending some time in the Readify Manila office catching up with some local colleagues. While there, I’ll be presenting a user group session at the local Microsoft office. If you’re in the area, I’d love to see you there. (Did you know that we’re hiring in Manila too?)

Manila, Philippines, Wed 12th Dec 2012. Free. Register now.

Big websites fail in spectacular ways. In this session, five-time awarded ASP.NET/IIS MVP and ASP Insider Tatham Oddie will share the problems that he and fellow Readify consultants have solved on large scale, public websites. The lessons are applicable to websites of all sizes and audiences, and also include some funny stories. (Not so funny at the time.)

A tiny subset of your users can’t log in: they get no error message yet have both cookies and JavaScript enabled. They’ve phoned up to report the problem and aren’t capable of getting a Fiddler trace. You’re serving a million hits a day. How do you trace their requests and determine the problem without drowning in logs?

Marketing have requested that the new site section your team has built goes live at the same time as a radio campaign kicks off. This needs to happen simultaneously across all 40 front-end web servers, and you don’t want to break your regular deployment cycle while the marketing campaign gets perpetually delayed. How do you do it?

Users are experiencing HTTP 500 responses for a few underlying reasons, some with workarounds and some without. The customer service call centre need to be able to rapidly evaluate incoming calls and provide the appropriate workaround where possible, without displaying sensitive exception detail to end users. At the same time, your team needs to prioritize which bugs to fix first. What’s the right balance of logging, error numbers and correlation ids?

Your application is running slow in production, causing major delays for all users. You don’t have any tools on the production servers, and aren’t allowed to install any. How do you get to the root of the problem?