Peer Code Reviews in a Mercurial World

Mercurial, or Hg, is a brilliant DVCS (Distributed Version Control System). Personally I think it’s much better than Git, but that’s a whole religious war in itself. If you’re not familiar with at least one of these systems, do yourself a favour and take a read of hginit.com.

Massive disclaimer: This worked for our team. Your team is different. Be intelligent. Use what works for you.

The Need for Peer Code Reviews

I’ve previously worked in a number of environments which required peer code reviews before check-ins. For those not familiar, the principle is simple – get somebody else on the team to come and validate your work before you hit the check-in button. Now, before anybody jumps up and says this is too controlling, let me highlight that this was back in the unenlightened days of centralised VCSs like Subversion and TFS.

This technique is just another tool in the toolbox for finding problems early. The earlier you find them, the quicker and cheaper they are to fix.

  • If you’ve completely misinterpreted the task, the reviewer is likely to pick up on this. If they’ve completely misinterpreted the task, it spurs a discussion between the two of you that’s likely to help them down the track.
  • Smaller issues like typos can be found and fixed immediately, rather than being relegated to P4 bug status and left festering for months on end.
  • Even if there aren’t any issues, it’s still useful as a way of sharing knowledge around the team about how various features have been implemented.

On my current project we’d started to encounter all three of these issues – we were reworking code to fit the originally intended task, letting small issues creep into our codebase and using techniques that not everybody understood. We raised these in our sprint retrospective and identified the introduction of peer code reviews as one of the techniques we’d use to counter them.

Peer Code Reviews in a DVCS World

One of the most frequently touted benefits of DVCS is that you can check in anywhere, anytime, irrespective of network access. Whilst you definitely can, and this is pretty cool, it’s less applicable for collocated teams.

Instead, the biggest benefit I perceive is how frictionless committing enables smaller but more frequent commits. Smaller commits provide a clearer history trail, easier merging, easier reviews, and countless other benefits. That’s a story for a whole other post though. If you don’t already agree, just take my word for it that smaller commits are a good idea.

Introducing a requirement for peer review before each check-in would counteract these benefits by introducing friction back into the check-in process. This was definitely not an idea we were going to entertain.

The solution? We now perform peer reviews prior to pushing. Developers still experience frictionless commits, and can pull and merge as often as possible (also a good thing), yet we’ve been able to bring in the benefits of peer reviews. This approach has been working well for us for 3 weeks so far (1.5 sprints).

It’s a DVCS. Why Not Forks?

We’ve modelled our source control as a hub and spoke pattern. BitBucket has been nominated as our ‘central’ repository that is the source of truth. Generally, we all push and pull from this one central repository. Because our team is collocated, it’s easy enough to just grab the person next to you to perform the review before you push to the central repository.

Forks do have their place though. One team member worked from home this week to avoid infecting us all. He quickly spun up a private fork on BitBucket and started pushing there instead. At regular intervals he’d ask one of us in the office for a review via Skype. Even just using the BitBucket website, it was trivial to review his pending changesets.

The forking approach could also be applied in the office. On the surface it looks like a nice idea because it means you’re not blocked waiting on a review. In practice though, it just becomes another queue of work which the other developer is unlikely to get to in a timely manner. “Sure, I’ll take a look just after I finish this.” Two hours later, the code still hasn’t hit the central repository. The original developer has moved on to other tasks. By the time a CI build picks up any issues, ownership and focus have long since moved on. An out-of-band review also misses the ‘let’s sit and have a chat’ mentality and knowledge sharing we were looking for.

What We Tried

To kick things off, we started with hg out. The outgoing command lists all of the changesets that would be pushed if you ran hg push right now. By default it only lists the header detail of each changeset, so we’d then run through hg exp 1234, hg exp 1235, hg exp 1236, etc. to review each one. The downsides to this approach were that we didn’t get coloured diff output, we had to review the changesets one at a time, and it didn’t exclude things like merge changesets.
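
For reference, a review session with this approach looked something like this (the changeset numbers are simply whatever hg out reported):

hg out
hg exp 1234
hg exp 1235
hg exp 1236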

Next we tried hg out -p. This lists all of the outgoing changesets, in order, with their patches and full colouring. This is good progress, but we still wanted to filter out merges.

One of the cooler things about Mercurial is revsets. If you’re not familiar with them, it’d pay to take a look at hg help revsets. This allows us to use the hg log command, but pass in a query that describes which changesets we want to see listed: hg log -pr "outgoing() and not merge()".

Finally, we added a cls to the start of the command so that it was easy to just scroll back and see exactly what was in the review. This took the full command to cls && hg log -pr "outgoing() and not merge()". It’d be nice to be able to do hg log -pr "outgoing() and not merge()" | more but the more command drops the ANSI escape codes used for coloring.

What We Do Now

To save everybody from having to remember and type this command, we added a file called review.cmd to the root of our repository. It just contains this one command.
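
For reference, the entire contents of review.cmd at this point is:

cls && hg log -pr "outgoing() and not merge()"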

Whenever we want a review we just type review and press enter. Too easy!

One Final Tweak

When dealing with multiple repositories, you need to specify which path outgoing() applies to in the revset. We updated the contents of review.cmd to cls && hg log -pr "outgoing(%1) and not merge()". If review.cmd is called with an argument, %1 carries it through to the revset. That way we can run review or review myfork as required.
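
With the argument carried through, the two invocations run these commands:

review             cls && hg log -pr "outgoing() and not merge()"
review myfork      cls && hg log -pr "outgoing(myfork) and not merge()"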

Accessing ASP.NET Page Controls During PreInit

If you’ve read my previous post explaining a common pitfall with view state, I’d hope you’re preparing all your controls in the Init event of the page/control lifecycle.

Even if I’m not reusing them through my application much, I like to factor elements like a drop down list of countries into their own control. This centralizes their logic and allows us to write clear, succinct markup like this:

<tat:CountriesDropDownList ID="AddressCountry" runat="server" />

The code for a control like this is quite simple:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

[ToolboxData("<{0}:CountriesDropDownList runat=\"server\" />")]
public class CountriesDropDownList : DropDownList
{
    protected override void OnInit(EventArgs e)
    {
        // Countries is assumed to be defined elsewhere in the control,
        // e.g. a static list of country names.
        DataSource = Countries;
        DataBind();

        base.OnInit(e);
    }
}

The Problem

Once you start using this encapsulation technique, it won’t be long until you want to pass in a parameter that affects the data you load. Before we do, we need to be aware that the Init event is fired in reverse order: child controls have their Init event fired before that event is fired at the parent. As such, the Page.Init event is too late for us to set any properties on the controls.
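
To make the scenario concrete, here’s a minimal sketch of such a parameterised control. Everything in it (StatesDropDownList, the Country property, and the LoadStatesFor helper with its data) is a hypothetical illustration rather than code from the original post:

using System;
using System.Web.UI.WebControls;

public class StatesDropDownList : DropDownList
{
    // This must be set before this control's Init event fires,
    // which means setting it during the page's PreInit stage.
    public string Country { get; set; }

    protected override void OnInit(EventArgs e)
    {
        DataSource = LoadStatesFor(Country);
        DataBind();

        base.OnInit(e);
    }

    // Hypothetical lookup; real code would hit a database or a cache.
    private static string[] LoadStatesFor(string country)
    {
        return country == "Australia"
            ? new[] { "NSW", "NT", "QLD", "SA", "TAS", "VIC", "WA" }
            : new string[0];
    }
}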

The natural solution is to try the Page.PreInit event; however, when you do, you’ll often find that your control references are all null. This happens when your page uses a master page, and it relates to how master pages are implemented. The <asp:ContentPlaceHolder /> controls in a master page use the ITemplate interface to build their contents. This content (the child controls) is not usually prepared until the Init event is called, which means the control references are not available. For us, this represents a problem.

The Solution

The fix is remarkably simple; all we need to do is touch the Master property on our Page and it will cause the controls to become available. If we are using nested master pages, we need to touch each master page in the chain.

I often create a file called PageExtensions.cs in my web project and add this code:

using System.Web.UI;

public static class PageExtensions
{
    /// <summary>
    /// Can be called during the Page.PreInit stage to make child controls available.
    /// Needed when a master page is applied.
    /// </summary>
    /// <remarks>
    /// This is needed to fire the getter on the top level Master property, which in turn
    /// causes the ITemplates to be instantiated for the content placeholders, which
    /// in turn makes our controls accessible so that we can make the calls below.
    /// </remarks>
    public static void PrepareChildControlsDuringPreInit(this Page page)
    {
        // Walk up the master page chain and tickle the getter on each one
        MasterPage master = page.Master;
        while (master != null) master = master.Master;
    }
}

This adds an extension method to the Page class, which then allows us to write code like the following:

protected override void OnPreInit(EventArgs e)
{
    this.PrepareChildControlsDuringPreInit();

    MyCustomDropDown.MyProperty = "my value";

    base.OnPreInit(e);
}

Without the call to the extension method, we would have received a NullReferenceException when trying to set the property value on the MyCustomDropDown control.

You now have one less excuse for preparing your controls during the Load event. 🙂

How I Learned to Stop Worrying and Love the View State

(If you don’t get the title reference, Wikipedia can explain. A more direct title could be: Understanding and Respecting the ASP.NET Page Lifecycle.)

This whole article needs a technical review. Parts of it are misleading. I’ll get back to you, Barry.

Page lifecycle in ASP.NET is a finicky and rarely understood beast. Unfortunately, it’s something that we all need to get a handle on.

A common mishap that I see is code like this:

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        AddressCountryDropDown.DataSource = CountriesList;
        AddressCountryDropDown.DataBind();
    }
}

The problem here is that we’re clogging our page’s view state. Think of view state as one of a page’s core arteries, then think of data like cholesterol. A little bit is all right, but too much is crippling.

To understand the problem, let’s investigate the lifecycle that’s in play here:

  1. The Page.Init event is being fired, however we are not subscribed to that.
  2. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  3. The Page.Load event is fired, and in its handler we set the contents of the drop down list. Because we are doing this after view state has started tracking, every single entry in the drop down is written to both the HTML and the view state.

There’s a second problem here as well: by the time the Page.Load event is fired, all of the post back data has already been loaded and processed.

To see that second problem in action, let’s walk through the lifecycle that’s in play during a post back of this same page:

  1. The user triggers the post back from their browser and all of the post back data and view state is uploaded to the server.
  2. The Page.Init event is fired, however we are not subscribed to that.
  3. Immediately after the Init event has fired, view state starts tracking. This means that any changes we make from now on will be saved down to the browser and re-uploaded on the next post back.
  4. The view state data is loaded for all controls. For our drop down list example, this means the Items collection is refilled using the view state that was uploaded from the browser.
  5. Post back data is processed. In our example, this means the selected item is set on the drop down list.
  6. The Page.Load event is fired, however nothing happens because the developer is checking the Page.IsPostBack property. This check is usually justified as a “performance improvement”, however in this scenario it is also required: without it, we would lose the selected item when we rebound the list.
  7. The contents of the drop down list are once again written to both the HTML and the view state.

How do we do this better? Removing the IsPostBack check and placing the binding code into the Init event is all we need to do:

protected override void OnInit(EventArgs e)
{
    AddressCountryDropDown.DataSource = CountriesList;
    AddressCountryDropDown.DataBind();

    base.OnInit(e);
}

What does this achieve?

  • We are filling the contents of the drop down before view state tracking begins (it starts only after the Init event); therefore a redundant copy of its contents is not written to the view state.
  • We are filling the contents of the drop down before the postback data is processed, so our item selection is successfully loaded without being overwritten later.
  • We have significantly reduced the size of the page’s view state.

This simple change is something that all ASP.NET developers need to be aware of. Unfortunately, so many developers (myself included) wrote their first ASP.NET page using the Page_Load event. I think this is largely because it’s the one and only event handler presented to us when we create a new ASPX page in Visual Studio. While this makes the platform appear to work straight away, it produces appalling results.

Solution: IIS7 WebDAV Module Tweaks

I blogged this morning about how I think WebDAV deserves to see some more love.

I found it somewhat surprising that doing a search for iis7 webdav “invalid parameter” surfaces only 6 results, of which none are relevant. That’s particularly odd considering “invalid parameter” is the generic message you get for most failures in the Windows WebDAV client.

I was searching for this last night after one of my subfolders stopped being returned over WebDAV, but was still browsable over HTTP. After a quick visit to Fiddler, it turned out that someone had created a file on the server with an ampersand in the name and the IIS7 WebDAV module wasn’t encoding this character properly.

It turns out that this issue, along with some other edge cases, has already been fixed. If you’re using the IIS7 WebDAV module, make sure to grab the update:

Update for WebDAV Extension for IIS 7.0 (KB955137) (x86)
Update for WebDAV Extension for IIS 7.0 (KB955137) (x64)

Because the WebDAV module is not a shipping part of Windows, you won’t get this update through Windows Update. I hope they’ll be able to start publishing auto-updates for components like this soon.

Shout Out: WebDAV – a protocol that deserves more love.

I’m a massive fan of WebDAV.

At Fuel Advance (the parent company behind projects like Tixi), we operate a small but highly mobile work force. We don’t have an office, and we need 24/7 access to our business systems from any Internet connection. VPN links are not an option for us – they suck over 3G and don’t work through most public networks.

Enter WebDAV. It’s a set of HTTP verbs which give you read/write access to a remote folder and its files, all over standard HTTP. The best part is that Windows has native support for connecting to these shares. Now, we all have drive letter access to our corporate data over the public Internet. It’s slim and fast without all the management overheads that something like SharePoint would have dealt us. It’s also cross platform, allowing us to open the same file shares from our machines running Mac OS X.
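
For what it’s worth, mapping a drive letter to a WebDAV share from the Windows command line looks something like this (the URL is a placeholder, not one of ours):

net use W: https://files.example.com/shared /persistent:yes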

IIS6 had reasonable support for WebDAV, but for various (and good!) reasons, this was dropped from the version that shipped as IIS7. In March this year, the team published a brand new WebDAV module as a separate download. This module is built using the new integrated pipeline in IIS7 and is much more nicely integrated into the management tool.

Kudos to Keith Moore, Robert McMurray and Marchel Cohn (no blog) for delivering this high quality release!

Tip: Learn through sharing

Finula sent out an email yesterday asking a group of us to supply some tips for being a successful developer. The short versions will get included in this afternoon’s MSDN Flash newsletter, and we’re each blogging our full response.

My personal advice is to recognize the power of learning through sharing. Coatesy touched on the idea at the end of his own tip.

Attending user groups is one thing, but getting up and contributing your own knowledge is what really drives these groups. You’ll be pleasantly surprised at how much you’ll actually learn about a technology if you step back and prepare a presentation for your peers which explains what that technology is, and how it works.

One of my work mates, Steve Godbold, recently delivered a presentation about LINQ. In prepping for the talk he knew he’d need to explain expression trees, and he guessed he’d get some questions about LINQ to SQL vs LINQ to Entities. Pleasantly shocked at how much more there was to know about LINQ, he’s now turned this prep into a series of blog posts too.

Presenting isn’t for everyone though, and this is where blogging steps in. It might seem a bit egotistical at first to think that people want to read what you have to say, but the reality is people genuinely do! Think about the number of times you’ve ended up reading somebody’s blog post to help you solve a problem. Posts don’t need to be technical wizardry to warrant publishing either – it’s often the simple little tricks that people find real value in. One of my more popular posts describes how to do a hover effect in CSS. Blogging also opens up your ideas for others to comment on, which might prompt something you’d never thought of before.

Hopefully this will give you a bit of inspiration to get up and really participate in the vibrant technical community around you.

What now? If you haven’t already, subscribe to MSDN Flash, submit your 10 minute demo for the chance to win big prizes, contribute to your local user group, and start a blog.

What’s your tip for being a great developer?

Sorting TimeZoneInfo.GetSystemTimeZones() properly

The documentation for System.TimeZoneInfo.GetSystemTimeZones() states:

Returns a sorted collection of all the time zones about which information is available on the local system.

Unfortunately, while the result is technically sorted, it’s a silly sort which shows GMT, then the eastern hemisphere, then the western hemisphere. It seems to be sorted by TimeZoneInfo.DisplayName instead of TimeZoneInfo.BaseUtcOffset.

Luckily this is easily fixed (and concise too thanks to anonymous methods!).

List<TimeZoneInfo> timeZones = new List<TimeZoneInfo>(TimeZoneInfo.GetSystemTimeZones());
timeZones.Sort(delegate(TimeZoneInfo left, TimeZoneInfo right) {
    int comparison = left.BaseUtcOffset.CompareTo(right.BaseUtcOffset);
    return comparison == 0 ? string.CompareOrdinal(left.DisplayName, right.DisplayName) : comparison;
});

In the devil’s language (VB.NET) you can achieve it using some slightly different code (no anonymous methods in VB.NET):

Dim timeZones As List(Of TimeZoneInfo) = New List(Of TimeZoneInfo)(TimeZoneInfo.GetSystemTimeZones())
timeZones.Sort(New Comparison(Of TimeZoneInfo)(AddressOf CompareTimeZones))

Private Function CompareTimeZones(ByVal left As TimeZoneInfo, ByVal right As TimeZoneInfo) As Integer
    Dim comparison As Integer = left.BaseUtcOffset.CompareTo(right.BaseUtcOffset)
    Return IIf(comparison = 0, String.CompareOrdinal(left.DisplayName, right.DisplayName), comparison)
End Function

This will result in the list being sorted in the same order that you see under “Adjust Date/Time” in Windows.

Maybe they could fix this in the next release. 🙂 To help make that happen, vote for the issue on Microsoft Connect.

Update – 5 Feb 08: Daniella asked for a VB.NET version, so I’ve updated the post with one. Because VB.NET doesn’t support anonymous methods, you need an extra function somewhere to do the comparison.

Update – 12 Feb 08: Whitney correctly pointed out that the order still wasn’t quite right for time zones that shared the same offset. We need to sort by name as well. I’ve updated the C# and VB.NET version accordingly. I changed Whitney’s version a bit to use the conditional operator so that I could avoid two return statements, as well as translating it to VB.NET. Thanks Whitney!

Update – 15 Feb 08: Added Microsoft Connect link.

Writing Good ASP.NET Server Controls

I’ve been playing around with an XHTML WYSIWYG editor called XStandard lately. Their actual product is awesome, but before I jumped in and used their supplied ASP.NET wrapper I thought I’d just take a quick look at it in Reflector. Unfortunately, like many redistributed controls, there were some issues that jumped out at me. (This post isn’t a dig at them – they just kicked off the writing idea in my head, although I’d love it if they implemented the relevant changes.)

This post is designed as a quick guide around some of these issues, and represents the generally accepted best practice approach for control development. These were just the first issues that came to mind – in another post I’ll cover rendering best practices, as well as incorporate any suggestions from you.

Survive the postback – use ViewState

After a page has finished rendering, it is mostly destroyed in memory. When a postback occurs, the ASP.NET runtime has to rebuild all of the objects in memory. If we use fields as the backing stores for our properties, those values won’t survive the postback, and your control will render differently the second time around.

A traditional .NET property might look like this:

private string _spellCheckerURL;

public string SpellCheckerURL
{
    get { return _spellCheckerURL; }
    set { _spellCheckerURL = value; }
}

But in ASP.NET server controls, we need to write them like this:

public string SpellCheckerUrl
{
    get { return (string)ViewState["SpellCheckerUrl"] ?? string.Empty; }
    set { ViewState["SpellCheckerUrl"] = value; }
}

You might notice my use of the C# coalesce operator (??). Until we store anything in the property, requesting it from view state is going to return null. Using the coalesce operator allows us to specify a default value for our property, which we might normally have specified on the field like so:

private string _spellCheckerURL = string.Empty;

Respect the ViewState (and thus the user)

The rule above (store everything in the ViewState) is nice and reliable, but it can also be a bit nasty to the end user – the person who has to suck down your behemoth chunk of encoded ViewState.

For some properties, you can safely lose them and just work them out again. Our rich text editor control gives a perfect example – we don’t need to store the text content in the ViewState because it’s going to be written into our markup and passed back on the postback. We can (and should) be storing this data in a backing field for the first render, then obtaining it from the request for postbacks.
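
Here’s a rough sketch of that pattern. To be clear, this is my illustration under stated assumptions (a control that renders a single form field named with its UniqueID), not XStandard’s actual wrapper:

using System.Collections.Specialized;
using System.Web.UI;

public class SimpleEditor : Control, IPostBackDataHandler
{
    // Backing field only; deliberately not stored in ViewState.
    private string _text = string.Empty;

    public string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    // ASP.NET calls this while processing the postback because the
    // rendered form field's name matches our UniqueID.
    public bool LoadPostData(string postDataKey, NameValueCollection postCollection)
    {
        _text = postCollection[postDataKey] ?? string.Empty;
        return false; // this sketch raises no changed event
    }

    public void RaisePostDataChangedEvent()
    {
    }

    protected override void Render(HtmlTextWriter writer)
    {
        writer.AddAttribute(HtmlTextWriterAttribute.Name, UniqueID);
        writer.RenderBeginTag(HtmlTextWriterTag.Textarea);
        writer.WriteEncodedText(_text);
        writer.RenderEndTag();
    }
}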

Build your control as the abstraction layer that it’s meant to be

ASP.NET server controls exist to provide a level of abstraction between the developer and the actual workings of a control (the markup generated, the postback process, etc). Build your control with that in mind.

The XStandard control is a perfect example of a server control being implemented as more of a thin wrapper than an actual abstraction layer – you need to understand the underlying API to be able to use the control.

Their Mode property looks like this:

public string Mode
{
    get { return this.mode; }
    set { this.mode = value; }
}

For me as a developer, if the property accepted an enumeration I wouldn’t need to find out the possible values – I could just choose from one of the values shown in the designer or suggested by IntelliSense.

public enum EditorMode
{
    Wysiwyg,
    SourceView,
    Preview,
    ScreenReaderPreview
}

public EditorMode Mode
{
    get { return (EditorMode)(ViewState["Mode"] ?? EditorMode.Wysiwyg); }
    set { ViewState["Mode"] = value; }
}

This adds some complexity to our control’s rendering – the names of the enumeration items don’t match what we need to render to the page (eg. EditorMode.ScreenReaderPreview needs to be rendered as screen-reader). This is easily rectified by decorating the enumeration, as per my previous post on that topic.
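
That post isn’t reproduced here, but one way to do the decoration is with the framework’s DescriptionAttribute and a little reflection. Note the rendered strings below are my guesses, except screen-reader which we know the editor expects:

using System;
using System.ComponentModel;
using System.Reflection;

public enum EditorMode
{
    [Description("wysiwyg")]
    Wysiwyg,

    [Description("source-view")]
    SourceView,

    [Description("preview")]
    Preview,

    [Description("screen-reader")]
    ScreenReaderPreview
}

public static class EnumExtensions
{
    // Pulls the Description attribute off an enum value, falling back
    // to the value's name if it isn't decorated.
    public static string GetRenderedValue(this Enum value)
    {
        FieldInfo field = value.GetType().GetField(value.ToString());
        DescriptionAttribute attribute = (DescriptionAttribute)
            Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute));
        return attribute != null ? attribute.Description : value.ToString();
    }
}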

You might also have noticed that the casing of SpellCheckerUrl is different between my two examples above. The .NET naming guidelines indicate that the name should be SpellCheckerUrl, however the XStandard control names the property SpellCheckerURL because that’s what the rendered property needs to be called. The control’s API surface (the properties) should be driven by .NET guidelines, not by rendered output. This comes back to the idea of the control being an abstraction layer – it should be responsible for the “translation”, not the developer using the control.

Support app-relative URLs

App-relative URLs in ASP.NET (eg. ~/Services/SpellChecker.asmx) really do make things easy. Particularly when working with combinations of master pages and user controls, relative URLs (eg. ../../Services/SpellChecker.asmx) can be very different throughout your site, and absolute URLs (http://mysite.com/Services/SpellChecker.asmx) are never a good idea.

The XStandard control uses a number of web services and other resources that it needs to know the URLs for. Their properties look like this:

[Description("Absolute URL to a Spell Checker Web Service.")]
public string SpellCheckerURL
{
    get { return this.spellCheckerURL; }
    set { this.spellCheckerURL = value; }
}

This made their life as developers easy, but to populate the property I need to write code in my page’s code-behind like this:

editor.SpellCheckerURL = new Uri(Page.Request.Url, Page.ResolveUrl("~/Services/SpellChecker.asmx")).AbsoluteUri;

This renders my design-time experience useless, and splits my configuration between the .aspx file and the .aspx.cs file.

All properties that accept URLs should support app-relative URLs. It is the control’s responsibility to resolve these during the render process.

Generally, the resolution would be as simple as:

Page.ResolveUrl(SpellCheckerUrl)

which returns a relative URL usable from the client page. However, if your client-side code really does need the absolute URL, you can resolve it like this:

new Uri(Page.Request.Url, Page.ResolveUrl(SpellCheckerUrl)).AbsoluteUri

Because we’re using the powerful URI support built in to .NET, we don’t even need to worry about whether we were supplied an absolute URL, relative URL or file path … the framework just works it out for us.

Either way, it’s your responsibility as the control developer to handle the resolution.
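
To make that concrete, here’s roughly where the resolution would live in the control. This is a sketch only; the rendered tag and attribute name are illustrative:

protected override void Render(HtmlTextWriter writer)
{
    // Resolve at render time so consumers can hand us "~/..." URLs.
    string resolvedUrl = new Uri(Page.Request.Url, Page.ResolveUrl(SpellCheckerUrl)).AbsoluteUri;

    writer.AddAttribute("SpellCheckerURL", resolvedUrl);
    writer.RenderBeginTag(HtmlTextWriterTag.Div);
    // ... the rest of the control's markup ...
    writer.RenderEndTag();
}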

Use the URL editor

Now that we’ve made it really easy for developers to specify URLs as page-relative, app-relative or absolute, let’s make the designer experience really sweet with this editor (notice the […] button on our property):

[Screenshot: the […] button on the URL property in the property grid]

[Screenshot: the URL editor dialog]

That’s as simple as adding two attributes to the property:

[Editor("System.Web.UI.Design.UrlEditor, System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", typeof(UITypeEditor))]
[UrlProperty]
public string SpellCheckerUrl
{
    get { return ViewState["SpellCheckerUrl"] as string ?? string.Empty; }
    set { ViewState["SpellCheckerUrl"] = value; }
}

Document with XML and attributes

IntelliSense reads the XML comments, but the designer reads attributes. Document your control’s API surface using both of them. This is a bit annoying, but really not that hard and well worth it. Make sure to specify the summary in XML, as well as the Category and Description attributes.

Our typical property now looks something like this:

/// <summary>
/// Gets or sets the URL of the return handler (a handler inheriting from <see cref="PermissionResponseHandler"/>).
/// This should be a URL to an HTTPS resource to avoid a scary warning being shown to the end user.
/// </summary>
[Category("Behavior")]
[Description("The URL of the return handler (a handler inheriting from PermissionRequestReturnHandler).")]
public string ReturnUrl
{
    get { return ViewState["ReturnUrl"] as string ?? string.Empty; }
    set { ViewState["ReturnUrl"] = value; }
}

If you look closely, you’ll notice the documentation isn’t actually duplicated … the messaging is slightly different between the XML comment and the attribute, as they are used in different contexts.

Provide lots of designer hints

In the spirit of attributes, let’s add some more to really help the designer do what we want.

For properties that aren’t affected by localization, mark them as such to reduce the clutter in resource files:

[Localizable(false)]

Define the best way to serialize the data in the markup. This ensures a nice experience for developers who do most of their setup in the designer, but then want to tweak things in the markup directly.

[PersistenceMode(PersistenceMode.Attribute)]

Declare your two-way binding support. If you mark a property as being bindable (which you should) then you also need to implement INotifyPropertyChanged (which you should):

[Bindable(true)]
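
Put together, a bindable property might look something like this sketch (the class and property here are illustrative):

using System.ComponentModel;
using System.Web.UI;

public class XhtmlEditor : Control, INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    [Bindable(true)]
    public string Text
    {
        get { return ViewState["Text"] as string ?? string.Empty; }
        set
        {
            ViewState["Text"] = value;
            OnPropertyChanged("Text");
        }
    }

    // Raises PropertyChanged for designers and binding infrastructure.
    protected virtual void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}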

Declare default values. Doing so means that only properties that are explicitly different will be serialized into the markup, keeping your markup clean.

[DefaultValue("")]

Clearly indicate properties that are read-only. Doing so will make them read-only in the designer, rather than throwing an error when the user tries to set them:

[ReadOnly(true)]

If a property isn’t relevant to the design-time experience, hide it.

[Browsable(false)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]

At the class level, it’s also good to define your default property so that it can be auto-selected first in the property grid.

[DefaultProperty("Text")]

Don’t get creative with IDs

ASP.NET has a great system for managing client side IDs and avoiding any conflicts – use it.

Use the ClientID property, and where needed, the INamingContainer interface. Don’t create IDs yourself.
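
For example, inside a control’s rendering code (a sketch; the div is arbitrary):

protected override void Render(HtmlTextWriter writer)
{
    // Let ASP.NET generate the client-side ID; never invent your own.
    writer.AddAttribute(HtmlTextWriterAttribute.Id, ClientID);
    writer.RenderBeginTag(HtmlTextWriterTag.Div);
    writer.RenderEndTag();
}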

Define your tag template

Add a ToolboxData attribute to your control class so that users get valid code when they drag it on from the toolbox:

[ToolboxData("<{0}:XhtmlEditor runat=\"server\"></{0}:XhtmlEditor>")]

Define your tag prefix

Define a tag prefix so that your controls have a consistent appearance in markup between projects.

[assembly: TagPrefix("FuelAdvance.Components.Web", "fa")]

You only need to do this once per assembly, per namespace. Your AssemblyInfo.cs file is generally a good place to put it.

Update 6-Nov-07: Clarified some of the writing. Added another screenshot.