Koen about .Net

February 15, 2011

Comparison of Web UI Testing toolkits

Filed under: Development, dotnetmag, Testing — koenwillemse @ 16:00

I wanted to start using web UI tests, because the current project I’m working on is lacking them and I also need them for my personal web shop application. So I had to choose a testing framework. I already knew of Coded UI Tests in Visual Studio 2010 and of Selenium. A colleague on my current project also mentioned Watin, so I decided to do a quick test of the three to see which one matches my expectations.

Coded UI Tests in Visual Studio

This was my first choice, since it’s integrated in Visual Studio, which makes it easier to test my deployed applications from a build server. So I started by creating a test in the Visual Studio IDE: I clicked through some pages, added a few assertions and was done. Then I generated the coded test and looked at the code that came out. That was a bit of a shock :-(. The code is very unreadable, which actually shouldn’t matter since it’s generated, but when you want to tweak or edit it a bit (like I do) that’s not pleasant. I wanted to edit the tests so I could use parameters and such to make them more robust.
There is also the option of creating tests using the Test Manager in Visual Studio 2010, but I haven’t tried that yet. It’s something I still want to do, to see if it makes things easier or better. For now, I’m a bit disappointed in the coded UI tests.


Selenium

Selenium is a tool I’ve heard of several times, so I wanted to give it a try myself. First you’ll have to install some stuff. I installed the following:

After installing everything (which was more work than I wanted ;-)) I started creating my first test. Clicking through the test and the verifications is pretty easy with the Selenium IDE integration in Firefox. Then came generating the code. There are a few formatters available, one of them for C#, so I generated the code, but unfortunately it was based on NUnit and contained some stuff I didn’t like. Before giving up, I looked a bit further in the Selenium IDE and saw that you can also create your own formatter, so I decided to give that a try. It took some time, and I made some mistakes along the way, but eventually I had a working formatter which created the C# code the way I wanted it. So I ran the tests in Visual Studio (after starting the java application for the Selenium server) and it worked :-).


Watin

Because my colleague mentioned this framework, I wanted to give it a try as well. Unfortunately there is no IDE or similar tool to create the tests. Since I was already more convinced of the two other frameworks, and working in code-only mode was more difficult, I didn’t want to spend more time on this one.


So a small comparison:

Coded UI Tests
  • Integrated in Visual Studio, so no installation required
  • Ugly generated code

Selenium
  • Control over the generated code
  • Code easy to modify / extend
  • IDE is a bit buggy
  • Custom installation required

Watin
  • No IDE
  • Difficult to work with

So for now I chose Selenium, since some other people on my current project are also trying it out and were positive about it. I’ll write another blog post about the custom formatter I’ve been working on. It generates code for the MSTest framework and uses FluentAssertions to make the tests more readable.
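To give an idea of the direction, the tests the formatter generates look roughly like this sketch (the page title, URLs and class names here are made up for illustration, and it assumes the Selenium RC server is running on its default port):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using FluentAssertions;
using Selenium;

[TestClass]
public class HomePageTests
{
    private ISelenium selenium;

    [TestInitialize]
    public void SetUp()
    {
        // Assumes the Selenium RC server (the java application) listens on port 4444.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/");
        selenium.Start();
    }

    [TestCleanup]
    public void TearDown()
    {
        selenium.Stop();
    }

    [TestMethod]
    public void HomePage_ShouldShowTitle()
    {
        selenium.Open("/");
        // FluentAssertions makes the intent read a lot better than a bare Assert.
        selenium.GetTitle().Should().Be("My web shop");
    }
}
```

The FluentAssertions call in the last line is exactly the kind of readability improvement I was after compared to the default NUnit output.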

February 8, 2011

Crystal reports runtime for .NET 4.0

Filed under: Development, dotnetmag — koenwillemse @ 00:00

A few years ago I created an application which is used by a tutoring institute. It’s not the best application I’ve ever created (and that’s a bit of an understatement), but it works. There are some bugs now and then and some new feature requests, which I handle when I’ve got some spare time. One part of the application generates invoices, for which I used Crystal Reports (first in VS 2005, later in VS 2008).

A while ago I upgraded my solution to Visual Studio 2010 and then the problems began. Crystal Reports is no longer included in Visual Studio but needs to be downloaded separately from the SAP site. Problem one: it took a long time before the final version for .NET 4.0 and Visual Studio 2010 was available. I started using the beta version when I upgraded and it worked OK. The problem however was that there was no decent runtime installation available. I found a blog post (unfortunately I don’t have the link anymore) which indicated that you could redirect the newer assemblies to the ‘old’ VS 2008 runtime. That worked out well for me, so I was OK.
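For reference, such a redirect is done with a bindingRedirect in the application configuration. A minimal sketch (the version numbers below are placeholders for the Crystal Reports assembly versions involved, so check the ones actually installed on your machine):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="CrystalDecisions.CrystalReports.Engine"
                        publicKeyToken="692fbea5521e1304" culture="neutral" />
      <!-- Placeholder versions: redirect the new assembly version
           to the 'old' VS 2008 runtime that is actually installed. -->
      <bindingRedirect oldVersion="13.0.2000.0" newVersion="10.5.3700.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```

A redirect like this is needed per Crystal Decisions assembly that your application references.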

In November last year, the production release of Crystal Reports for Visual Studio 2010 was ready, so when I got my new work laptop in December, I installed that version. But then, last week, I fixed a few minor bugs and made a small improvement, and I got in trouble with the deployment, which complained about the Crystal Reports references. So I wanted to quickly download the runtime for the newer Crystal Reports version and install it on the client computer. That, however, was not done quickly. It took me a while to finally find the links to the redistributables and merge modules of the new runtime. One thing became clear: I really dislike the SAP web site.

I finally found the links, so I’m posting them here, so that other people hopefully don’t have to spend the same amount of time finding the installers as I did (with thanks to Coy Yonce):

  • Standard EXE installation package which installs SAP Crystal Reports for Visual Studio into the Visual Studio 2010 IDE can be found here.
  • Click-Once installation package used to create self-updating Windows-based applications which can be installed and run with minimal user interaction can be found here.
  • Merge Modules installation package used to install components which are shared by multiple applications can be found here.
  • Redistributable installation (32 bit) can be found here.
  • Redistributable installation (64 bit) can be found here.

I hope this saves some time for you.

January 22, 2011

Culture specific website in ASP.NET MVC

Filed under: Development — koenwillemse @ 00:36

In my personal time I’m working on a web shop application. It’s an application that is really going to be used, so not like the usual throw-away home projects. I could of course just grab an existing application, but I’m too geeky for that ;-). I also want to use it to learn a lot about working with ASP.NET MVC and jQuery.

One of the requirements is that it has to support multiple languages. I wanted to do this the way you see it at, for instance, http://msdn.microsoft.com/nl-nl/: with the culture code in the url. So far so good. I started out with an approach for ASP.NET MVC that uses a FilterAttribute in combination with an ActionFilter, which I found here: http://helios.ca/2009/05/27/aspnet-mvc-and-localization/. A nice solution, I thought, so I took that code, modified it a bit to match my scenario, and it worked.

But then when I added more views to the application, I started to notice that it was not a good solution at all, because of the following reasons:

  1. I had to change my routes so they would work correctly with the culture code in the url (urls look like http://www.pastechi.nl/nl-NL/Products/Index etc). Maybe this wasn’t necessary and it’s just because I’m not completely familiar with the routing in ASP.NET, but still, it was not what I wanted.
  2. Every action on my controllers now had a cultureCode parameter which was not used in the method itself, because the culture code was set in the FilterAttribute code.
  3. It just felt wrong ;-)

So I started thinking about it and came to the conclusion that I had brought a knife to a gun fight. The solution works, but it is not very useful. So what would be better? Well, the new solution I’ve implemented uses url rewriting. The rewriting is done in the Application_BeginRequest event and takes place before the routing engine of ASP.NET MVC does its magic. So I created a simple HttpModule which rewrites the url by removing the culture code from it and placing the culture code in a suitable location.

This is the code I wrote (keep in mind that this is POC code, so it should be rewritten a bit to be unit testable etc.):

using System;
using System.Text.RegularExpressions;
using System.Web;

namespace HttpModules
{
    public class CulturePathRewriteModule : IHttpModule
    {
        private static readonly Regex CultureRegEx =
            new Regex(@"^/[a-zA-Z]{2}-[a-zA-Z]{2}(/.*)?$", RegexOptions.Compiled);

        public void Init(HttpApplication context)
        {
            context.BeginRequest += OnBeginRequest;
        }

        static void OnBeginRequest(object sender, EventArgs e)
        {
            var request = HttpContext.Current.Request;

            if (CultureRegEx.IsMatch(request.Url.AbsolutePath))
            {
                // "/nl-NL/Products/Index" -> culture code "nl-NL", new path "/Products/Index"
                string cultureCode = request.Url.AbsolutePath.Remove(0, 1).Substring(0, 5);
                string newAbsolutePath = request.Url.AbsolutePath.Remove(0, 6);
                string newUrl = "~" + newAbsolutePath;

                // Store the culture code for later use and rewrite the url without it,
                // so the MVC routing engine never sees the culture segment.
                HttpContext.Current.Items["currentCultureCode"] = cultureCode.ToLower();
                HttpContext.Current.RewritePath(newUrl);
            }
        }

        public void Dispose()
        {
        }
    }
}
You probably noticed that I didn’t set the CurrentCulture / CurrentUICulture on the current thread. It’s not that I don’t know it exists, but I can’t use it: one of the languages I want to support doesn’t have an official culture code. Of course you can create a custom culture if you want, but because the site will be running at a shared hosting provider, that is not an option. The eventual code won’t place the culture code in the context using a hardcoded key; it will be done via a custom, typed wrapper around the context, but you get the idea.
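For completeness, the module still has to be registered in web.config. A minimal sketch for IIS 7 integrated mode (the assembly name "HttpModules" is just taken from the namespace in the code above, so adjust it to your project):

```xml
<system.webServer>
  <modules>
    <!-- Runs the culture rewrite on BeginRequest, before MVC routing kicks in. -->
    <add name="CulturePathRewriteModule"
         type="HttpModules.CulturePathRewriteModule, HttpModules" />
  </modules>
</system.webServer>
```

In classic mode the same entry goes into the httpModules section under system.web instead.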

January 7, 2011

Unit Test Adapter threw exception: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. – Part 2

Filed under: Development — koenwillemse @ 09:46

In a previous post I wrote about a problem I had with running unit tests, which resulted in an exception with the message ‘Unit Test Adapter threw exception: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.’.

This morning, on my new project, I ran into the same problem after I had deleted my local TFS workspace and got everything fresh from the server in a different location (a new workspace). So I looked back at my previous post and tried what I wrote there, and….. still the exception :-(. To see if it would lead to more information, I started the tests in debug mode. And… all tests passed as usual. Strange…
I found the problem eventually. The assembly being tested is strongly named. Because we use code coverage for the unit tests, the tested assembly has to be re-signed after instrumentation, which we set up in the Code Coverage section of the .testrunconfig file. Now what was the problem? The path to the key file was wrong! The location pointed to there was an absolute path, instead of a path relative to my .testrunconfig. So I changed this, cleaned my solution, rebuilt it and ran the tests. And all tests passed again.

Another lesson learned: watch out that you don’t accidentally use absolute paths when referencing files that are located in your source control tree.
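To illustrate, the relevant bit of the .testrunconfig looked roughly like this (element and attribute names reproduced from memory, so treat this as a sketch rather than the exact schema):

```xml
<TestRunConfiguration>
  <!-- The keyFile used to re-sign instrumented assemblies must be a path
       relative to this .testrunconfig, not an absolute path like C:\src\... -->
  <CodeCoverage enabled="true" keyFile="MySolution\MyKey.snk" />
</TestRunConfiguration>
```

With the relative path, the setting keeps working no matter where the workspace is mapped on disk.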

September 2, 2010

Configuration of WIF

Filed under: Development, dotnetmag — koenwillemse @ 16:15

The current project I’m working on is an Identity Management solution for a client. We’re working with WIF (as you might have noticed in my previous posts) and SPML v2. I’ve been beating my head against a wall for the last few days because we had all kinds of problems getting the Identity Delegation scenario working. Eventually it was a small thing that caused all the problems, but I’ll elaborate on that in a different post.

One thing that I found frustrating is the lack of documentation for WIF, which makes it difficult to configure the identity related stuff correctly. We have now figured out what all the configuration items in the microsoft.identityModel section mean, so I’m sharing them here so that other people starting with WIF don’t face the same giant learning curve we had ;-).

For a consuming application, the following is a common configuration when using an active STS (note that we use the .NET 3.5 targeted assemblies):

    <microsoft.identityModel>
      <service saveBootstrapTokens="true">
        <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
          <trustedIssuers>
            <add thumbprint="9dbc8c485022a10788832ab285a6281fe18a22de" name="CN=sts" />
          </trustedIssuers>
        </issuerNameRegistry>
        <audienceUris>
          <add value="http://frontend" />
        </audienceUris>
        <federatedAuthentication>
          <wsFederation passiveRedirectEnabled="false" issuer="https://sts/SecurityTokenService.svc" realm="http://frontend" requireHttps="false" />
          <cookieHandler requireSsl="false" />
        </federatedAuthentication>
        <serviceCertificate>
          <certificateReference x509FindType="FindByThumbprint" findValue="4fa9361d1ddda6e8847313a56ab96412dd40f13b" storeLocation="LocalMachine" storeName="My"/>
        </serviceCertificate>
      </service>
    </microsoft.identityModel>

Now what does all this mean and what is it for?

  1. saveBootstrapTokens=”true”
    This means that when WIF creates a ClaimsIdentity from a received SecurityToken, the BootstrapToken property of that ClaimsIdentity will contain the actual token received. I wish I had found out about this earlier.
  2. issuerNameRegistry
    This section indicates which STS you trust. Here you add the certificate(s) used to sign the tokens you receive from the STS.
  3. audienceUris
    This section is used to check whether the information received from the STS is applicable to you as the calling application. It should match the AppliesTo property which you set when you issue a RequestSecurityToken to your STS.
  4. securityTokenHandlers
    In this section you can remove the default handlers and add your own token handlers. Note 1: Don’t clear the collection, because most of the defaults are necessary. If you want to use a custom handler, remove only the default handler of the type you want to replace. Note 2: Be very careful with what you do here. One wrong decision here cost us a lot of hours of bug hunting.
  5. federatedAuthentication
    This section contains the information that determines whether you are using an active or a passive scenario. In a passive scenario (when the browser handles the redirects etcetera for you) you set passiveRedirectEnabled=”true” and make sure that the correct issuer, realm and other related attributes are set.
    The requireSsl attribute of the cookieHandler indicates whether the written session cookie requires SSL. When you are working on a non-SSL connection and forget to set this to false, your cookies won’t be preserved over postbacks.
  6. serviceCertificate
    This section was the least clear to me when starting out. It defines which certificate should be used to decrypt the incoming SecurityToken.
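As a quick sanity check that the configuration works end-to-end, you can dump the claims of the current principal somewhere. A small sketch using the .NET 3.5 targeted Microsoft.IdentityModel types (the helper class itself is made up for illustration):

```csharp
using System.Text;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsDumper
{
    // Writes every claim of the current principal to a string,
    // e.g. for a debug page or log, so you can verify what the STS issued.
    public static string DumpClaims()
    {
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return "Current identity is not a claims identity.";
        }

        var builder = new StringBuilder();
        foreach (Claim claim in identity.Claims)
        {
            builder.AppendLine(claim.ClaimType + ": " + claim.Value);
        }
        return builder.ToString();
    }
}
```

If the audienceUris or issuerNameRegistry settings are wrong you never get this far, which makes a dump like this a useful first diagnostic.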

When you set up these values correctly, you should be up and running pretty quickly. This post just explains what each section of the configuration is for. For an overview of all the steps to get up and running with an active STS, please read my previous post about making a web application use an active STS.

I hope this helps some of you to save some time when configuring WIF.

August 19, 2010

If I ran DevDiv

Filed under: General — koenwillemse @ 00:16

I read this post by Ayende today and started thinking about this myself. So, what is it all about: “What would you do if you were running the developer division of Microsoft (or… what would you do if you were the Gu ;-))?”.

If I would run DevDiv I would:

  1. embrace successful open source projects instead of trying to build our own.
    I was very pleased to see that Microsoft stopped working on its own ASP.NET Ajax library and is instead helping to make jQuery better. jQuery has proved to be a very good JavaScript library which makes the life of a web developer so much easier.
    On the other side, however, you’ve got Entity Framework opposed to NHibernate. NHibernate has proved to be a very good open source ORM with a very active community. Several very good and influential developers contribute to it and make it a great framework. In my opinion, it would have been great if Microsoft had made the same decision here as they did with ASP.NET AJAX and had assigned a few dedicated developers to help improve and extend NHibernate into an even better open source project with dedicated support. Instead they are working very hard on Entity Framework to try to compete with NHibernate. Why?
  2. encourage my developers to write blog posts with more ‘real life’ code examples.
    A lot of times, when a new technology or framework is announced by Microsoft, a lot of Microsoft people start blogging about it with examples etc. That’s great and helpful. However, the examples in those blog posts are often written as throw-away code. For ASP.NET MVC, for example, you would see examples of actions that execute SQL queries directly. Code that you would be ashamed of if it were in your enterprise applications. And that’s my frustration sometimes. In my opinion, Microsoft (and its employees) should advocate better structured code that follows good practices like SOLID, TDD, etc.
  3. make sure that when new products are announced (lately you’ve got LightSwitch, WebMatrix, etc…) the marketing about the audience is done right.
    A lot of blog posts have been written lately about LightSwitch, WebMatrix, etc. The problem I have with them is that the target audience is not made clear in most posts. I read those posts when they pass by in my RSS reader, but they are absolutely not aimed at me. Those new products are suitable for entry-level developers who want to start playing with MS technology or want to create some temporary application, not for experienced developers who are writing enterprise applications. When you are working with things like Dependency Injection, task-based user interfaces, CQRS, etc., these new products are of no use.
  4. give every Microsoft developer 1 day a week to contribute to open source project(s)
    A lot of great open source projects are out there on CodePlex, GitHub or SourceForge, but many of them would really benefit from having more involved developers. A lot of developers would be willing to spend (some of) their time on them, when they have it, but many of us also have more to do than just work (at least I have ;-)). I try to contribute to projects like Fluent Assertions, but it’s very difficult to find the time. If (big) companies gave their developers time to contribute to open source projects (and not only in their personal time), we could make those projects a lot better, which eventually leads to better applications built on top of them.
  5. try to get influential developers from the community to talk to the product teams to improve the quality of Microsoft products
    There are some very influential developers in the community. Some of them are MVPs and are involved in decisions made by the product teams at Microsoft. But there are also a lot of other people who are pretty critical (and a lot of the time they have some very interesting points) when it comes to the Microsoft frameworks. Just to name two: Ayende and Jeremy D. Miller. I would try to get those people (and others who have proved to know what they are talking about and have good ideas) to Redmond to talk to the product teams and share ideas, to eventually improve the frameworks we are all building with. There are some very smart people out there who can really improve things for everyone.
  6. try to get Silverlight working on as many of the mobile platforms as possible
    A point of frustration for me is that when you want to develop an application for mobile clients, there are two options: create a great mobile web application, or create native applications. Some recent studies show that most users prefer native applications. I was pleased to see that Windows Phone 7 will support Silverlight for building applications. This makes it possible to keep the advantages of simple deployment and all the other good stuff it offers, like the known .NET Framework, while giving users the look and feel of a native application. The problem, however, is that when you want to create an application for the masses, you also have to consider the iPhone, Android and Blackberry phones. If you want to do that, you have to build the same application in several different programming languages for the different platforms. This is inefficient, time consuming and error prone. If we could use Silverlight on the other platforms as well, it would help the adoption of Silverlight and it would make the life of us developers a lot easier.
  7. make sure that the WIF team would do a lot of work to improve the usability of the framework
    This last one comes from a bit of frustration on our current project. WIF makes developing claims-enabled applications easier, because you don’t have to worry about all the WS-Trust and SAML related stuff that is required. However, the quality of the framework is not what we’re used to. There is almost no decent documentation available, there are some tricks you just have to know, and the API is not always easy. Since claims based security is a very promising way of arranging the security of your application, it is essential to have a good, easy to use framework to help you with this.

So, what would YOU do when you ran the DevDiv?

August 2, 2010

Making a web application use an active STS

Filed under: Development, dotnetmag — koenwillemse @ 19:46

At my current assignment we’re working on a solution which, among other things, consists of a custom Security Token Service that is used for the authentication of users in a web portal. In our case we’ve created an active STS using Windows Identity Foundation and WCF. We’re now working on a demo web application which uses the STS (and some other functionality we’re building) to show the team that creates the portal how they should consume the STS.

Since there is almost NO decent documentation about WIF, I started googling. The problem I ran into was that the information I found was almost all about scenarios with a passive STS. I found some information about consuming an active STS, but none of it was complete, so I’ll put all the steps I had to take right here, so it can help others.

First of all, I eventually found this blog post, which almost made it work for me. There was just some information missing, which caused me to search for another 1 – 2 hours.

These are the steps to take when you want to consume an active STS from a web application:

  1. Add a reference to the Microsoft.IdentityModel assembly (WIF).
  2. Add the definition of the microsoft.IdentityModel config section to your config like this (check the correct version of the dll of course):
    <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  3. Add the following two HttpModules to your config (when using IIS7, add them to your system.webServer section, otherwise to your system.web section):
    <add name="WSFederationAuthenticationModule" type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    <add name="SessionAuthenticationModule" type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  4. The authentication mode should be set to None:
    <authentication mode="None" />
  5. Add the configuration for the microsoft.identityModel section, for instance:
    <microsoft.identityModel>
      <service>
        <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
          <trustedIssuers>
            <add thumbprint="{Add the thumbprint of the certificate used by your STS, for instance: 80481e4041bd6758400c62e2c811831b98eed561}" name="{Add the name of the certificate, for instance: CN=devsts}" />
          </trustedIssuers>
        </issuerNameRegistry>
        <audienceUris>
          <add value="{Add the applies to url of your web application}"/>
        </audienceUris>
        <federatedAuthentication>
          <wsFederation passiveRedirectEnabled="false" issuer="{The address of the STS, for instance: https://devsts/mySts.svc}" realm="{The applies to address of your web application, for instance: http://myrelyingparty.nl}" persistentCookiesOnPassiveRedirects="true" />
          <cookieHandler requireSsl="false" />
        </federatedAuthentication>
        <serviceCertificate>
          <certificateReference x509FindType="FindByThumbprint" findValue="{The thumbprint of the certificate used by your STS, for instance: 80481e4041bd6758400c62e2c811831b98eed561}" storeLocation="LocalMachine" storeName="My"/>
        </serviceCertificate>
      </service>
    </microsoft.identityModel>

    As you can see, you register the information about the certificate used by your STS and the information about your application, the relying party.
    The line with the cookieHandler is the one that caused me problems, because I didn’t have it at first. My local site was running on http, not https, but the created cookies required https, which had the effect that the session cookie was not maintained over postbacks.

  6. After you’ve configured everything, you can use the following code to consume your STS and get an IClaimsIdentity:
    // authenticate with the WS-Trust endpoint
    var factory = new WSTrustChannelFactory(
        new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
        new EndpointAddress("https://devsts/MySts.svc"));
    factory.Credentials.UserName.UserName = usernameField.Text;
    factory.Credentials.UserName.Password = passwordField.Text;

    var channel = factory.CreateChannel();
    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        AppliesTo = new EndpointAddress("http://myrelyingparty.nl/"),
        KeyType = KeyTypes.Bearer
    };
    var genericToken = channel.Issue(rst) as GenericXmlSecurityToken;

    // Now parse and validate the token, which results in a claims identity
    var handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
    var token = handlers.ReadToken(new XmlTextReader(new StringReader(genericToken.TokenXml.OuterXml)));
    var identity = handlers.ValidateToken(token).First();

    // Create the session token using WIF and write it to a cookie
    var sessionToken = new SessionSecurityToken(ClaimsPrincipal.CreateFromIdentity(identity));
    FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(sessionToken);

    // Perform some redirect
In our situation this was not complete, since we need the SAML token received from the STS further on, to authenticate to WCF services which we consume. After using Reflector and trying some things out, it turned out this can be done by changing just two lines of code:

var identity = handlers.ValidateToken(token).First();
Thread.CurrentPrincipal = new ClaimsPrincipal(new IClaimsIdentity [] { new ClaimsIdentity(identity.Claims, token) });

The code itself probably looks a bit strange, since we get a ClaimsIdentity from ValidateToken and then create another ClaimsIdentity. I had hoped that WIF would construct the identity this way itself, or at least provide an overload to do it. I created the identity like this because the security token is then available in the BootstrapToken property of the ClaimsIdentity. At first we thought we had to keep the security token in session state, but that is not necessary when you do it like this. Now we can access it simply with the following lines of code:

var identity = Thread.CurrentPrincipal.Identity as ClaimsIdentity;
var theOriginalSecurityToken = identity.BootstrapToken;

I hope this helps somebody else. It would have saved me a lot of time if I could have found this information somewhere.
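To sketch where that bootstrap token then goes: WIF ships extension methods on ChannelFactory&lt;T&gt; for calling a backend service with an issued token. The contract name and endpoint configuration name below are made up for illustration:

```csharp
using System.ServiceModel;
using System.Threading;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;

// Hypothetical backend contract; "backendEndpoint" must exist in your config.
var factory = new ChannelFactory<IBackendService>("backendEndpoint");

// WIF extension method that enables federation on this factory.
factory.ConfigureChannelFactory();

var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

// Authenticate to the backend with the SAML token we kept as bootstrap token.
var channel = factory.CreateChannelWithIssuedToken(identity.BootstrapToken);
```

Treat this as a direction rather than a finished recipe; as the update below this post shows, getting the right token type into BootstrapToken took us some extra configuration.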

Update (17-8-2010)

We’ve been using the code as listed above, but we ran into some problems, because the retrieved token is a GenericXmlSecurityToken, which caused problems when supplying it to our backend services.

After some searching I found that it is possible to have WIF fill the BootstrapToken property, but you need to set a configuration switch (some decent documentation would really be helpful). All you have to do is add the saveBootstrapTokens attribute in your web application configuration:

<service saveBootstrapTokens="true">
<issuerNameRegistry ……

The code to create your principal then becomes:

var identity = handlers.ValidateToken(token).First();
Thread.CurrentPrincipal = new ClaimsPrincipal(new [] { identity });

Now the BootstrapToken is a SamlSecurityToken, which is exactly what we need to be able to authenticate to the backend services. I’ll show in a new post how we tied all this together.

Experiences using SCRUM in project

Filed under: Projects — koenwillemse @ 18:20

The current project I’m working on is a pretty big project. I’m working in a team of four people and we’re building an Identity Management solution which is a small part of the overall project. The overall project is unfortunately not run in a SCRUM way: no product owners, no product backlogs, etc. However, as a team we have decided to do our work using as many SCRUM principles as possible.


So, how do we achieve this?
First of all, we have a very specific workload which is not really linked to that of any of the other teams. This makes it a lot easier to get this working. So we defined our Product Backlog items and estimated them.
But then there was our first obstacle: the items had to be placed in order of business value. Since we don’t have a product owner, we ordered the items ourselves. Because it is clear what we have to do (a lot about the how is not clear, but that’s a whole other story ;-)), we could give them a pretty good order, I guess.
So, we’re up and running. We have daily scrum meetings to keep everything within the team synchronized, and the team lead attends the stand-ups of the other teams to keep things synchronized with them.


We are using sprints of 2 weeks. We start each sprint with the planning sessions, as we should. Even though they don’t always go exactly the way SCRUM prescribes, eventually we get a sprint backlog and we can get started.
We’re using TFS, so we have all our product and sprint backlog items registered there. Next to that, we also use post-its on a large piece of brown paper to get a quick visual of the state of the sprint. Unfortunately it sometimes happens that people forget to update TFS according to the post-its, or the other way around.
We also have a hand-drawn burndown chart, which contains both the burndown of completed work and the burndown of hours.

These few things together make it already a lot better to manage and keep an eye on the progress we’re making.

Excel sprint workbooks

Everybody who has worked with the agile template in TFS 2010 will probably have seen the iteration workbooks that are supplied with it. I really like those, since they provide a lot of information and insight. Unfortunately, this project still uses TFS 2008, so we don’t have access to those workbooks. I disliked that so much that I started to recreate those workbooks against the TFS 2008 server. It actually wasn’t very difficult to get to what I wanted. I’ll try to write a blog post soon to show what I did and what extra information I added to the sheets.


Looking at the burndown charts of the last sprints, we noticed some interesting things. Here are the screenshots of the hours burndown of the last 3 sprints:

[Screenshots: hours burndown charts of sprints 2, 3 and 4]

These burndowns show a few things.

  1. The estimates in the second sprint (actually our third) were not very good. The yellow line (the total of the estimates) shows a few big gaps compared to the actual total of hours.
  2. The last burndown chart has a big bump in it. What happened was that we missed some work when creating our sprint backlog, and a few days later several items were removed because some functionality had to be different :(. The last part of the graph shows that we underestimated a few items: the total hours went up while the remaining hours stayed almost the same.
  3. A last point is that these charts look pretty good: all work was completed in time. However, we didn't do so well when looking at the work-done burndown.

Let's look at the work-done burndown.

[Screenshots: work-done burndown charts of sprints 2, 3 and 4]

So, what’s not good then?

  1. In several places, the graph of the work left goes up! This should not be the case. What were the causes?
    1. We didn't do our sprint planning 2 very well, so we missed items that we had to do.
    2. A few times, at the end of a day, we noticed that an item that was ‘done’ actually wasn't completely done.
  2. The total of the charts goes up in the first two sprints. This was caused by our focus factor being a lot higher than we expected: we expected an average focus factor of about 70%, but we managed to get it to 80%-90%, so we had more hours available than we thought. Next to that, we underestimated a few items.
  3. There are a few places where the remaining graph goes down pretty steeply. This was caused by the fact that we didn't review items as soon as development finished. We defined the following states for our items: Not Done, In Progress, On Hold, Ready for Test (Review), Done.
    It happened a few times that when development of an item was finished, we just started on a new item instead of reviewing the items that were ready for review. At times we had 8 items waiting to be reviewed / tested.

Conclusion & lessons learned

There are a few things we have to be aware of for the next sprints:

  1. We have to make sure that we keep our scrum board on the wall (the big brown paper) in sync with what we have in TFS.
  2. We have to be more strict in the way we organize the sprint planning 2 session. We have to make sure that we identify all tasks related to a product backlog item. Next to that, we've learned more about the time that certain types of tasks take, so we can make better estimates.
  3. Don't look at just an hours burndown chart or a work burndown chart. Together, they provide a lot more information than either on its own.

We've learned a lot from the information we have gathered in the last three sprints, and hopefully some of you find it interesting too.

July 26, 2010

Code quality of WIF

Filed under: Development — Tags: , — koenwillemse @ 16:30

The first thing you notice when you start working with the library is that there is no xml documentation whatsoever. This makes it difficult to work with without reading a lot of other documentation, especially when the names of methods don't make clear what exactly they do.

Let me give an example of unclear naming. Take the method Saml11SecurityTokenHandler.ProcessStatement. Just seeing this method name, you'd probably think it's a method which does something with a single statement. Wrong… The first parameter it takes is a list of SamlStatements. Then why is the method not named ProcessStatements? It's just a small thing, but it's annoying when you've read the book Clean Code and you're trying to write clean code yourself ;-).

I just had to get that off my chest. The library doesn't seem to have the quality I'm used to from libraries supplied by Microsoft. I hope that a newer version at least delivers some good documentation on the methods etc., because it's not very intuitive to work with. Besides, let's be honest, identity management and authentication are a very important part of an application and absolutely not trivial. The library is very helpful in making it easier to work with a claims-based scenario; however, life could be a lot easier with some good documentation ;-).

July 25, 2010

First steps with ASP.NET MVC and ASP.NET routes

Filed under: Development — Tags: — koenwillemse @ 00:31

I’ve started working on an ASP.NET MVC 2 project recently and that means that there is a lot of new stuff to learn ;-).

Previous ASP.NET projects I've worked on were all based on WebForms, and almost all ‘AJAX’ stuff we used consisted of UpdatePanels and the AJAX Control Toolkit. I must say that I'm starting to like ASP.NET MVC better than WebForms. It's a different way of working, but when you get used to it, it works great. I also started to work with jQuery, and it's actually very easy. It's been a very interesting few days, that's for sure.

But the main reason I'm writing this blog post is the routing stuff. I created some routes for my project and they were not working the way I wanted; strange things were happening. Thankfully I remembered Scott Hanselman's talk at the DevDays 2010 about ASP.NET MVC, in which he mentioned the routing debugger written by Phil Haack. I downloaded the dll, added it as a reference, fired up my application, and within several minutes I had found my problem and fixed it. Very useful tool, that's for sure!
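For anyone who wants to try it: hooking the route debugger up is a one-liner in Application_Start, after your routes are registered. A sketch of roughly how that looks in Global.asax.cs (the route itself is just the standard MVC 2 default route, used here as an example):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The standard default route; replace with your own routes.
        routes.MapRoute(
            "Default",                    // route name
            "{controller}/{action}/{id}", // URL pattern
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);

        // Phil Haack's route debugger (RouteDebug.dll): rewrites the route
        // table so every request shows which routes match the URL and which
        // don't. Remove this line again before deploying!
        RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
    }
}
```

With that in place, every page you request shows a diagnostics table of all registered routes instead of the normal response, which makes it easy to spot a route that matches too eagerly or not at all.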


