Blog is moving!

I’m moving my blog to my own domain, for various reasons, which you can read about on my new blog 😉

Here is the new location: kevinpelgrims.com/blog


Blog has moved! You should check this post out on kevinpelgrims.com

As I’m looking at CoffeeScript in my spare time, I wanted a fast way to build the scripts. Since I work with Sublime Text 2 for all my scripting, it seemed like the right tool for the job.

Sublime is a text editor that has a lot in common with TextMate for OS X, but it has the huge advantage of working cross-platform. Sublime can also be extended using plug-ins, like TextMate, and a lot of TextMate bundles are compatible with it.


Now, there are several ways to compile CoffeeScript locally (e.g. from .NET or using the built-in watch option). We are going to use Node.js and npm (the Node Package Manager). Node.js works on Windows these days, and the .msi installer also makes sure npm is on your system. You will need to add the folder that contains node.exe and npm.cmd (default is C:\Program Files (x86)\nodejs) to your environment variables to make sure you can easily run them from anywhere. Here’s a nice guide to adding, removing and editing environment variables in Windows 7. The normal thing to do here is to add the folder to the “path” variable; if it’s not there already, feel free to create it.

CoffeeScript compiler

Installing the CoffeeScript compiler and Node.js tools is pretty straightforward when we open up a command window and use npm:

npm install coffee-script

Warning: be sure to open the command window in the same folder as your Node.js installation, meaning where node.exe lives (default is C:\Program Files (x86)\nodejs). You will need to open the command window as administrator to do this. npm will not use an absolute path to create new folders, but will use ./node_modules, so it will create a new folder in a location you don’t want it to be if you don’t pay attention.

This installs the CoffeeScript package for Node.js, and on Windows it includes .cmd files to run the coffee tools through Node.js automatically. The default folder for these .cmd files is C:\Program Files (x86)\nodejs\node_modules\.bin and you should add this to your environment variables too (preferably to the existing “path” variable). After doing all that, you should be able to run “coffee” in your command window.

CoffeeScript in Sublime

To get CoffeeScript support in Sublime, we’re going to download the CoffeeScript package. First off, it’ll be useful to install Sublime Package Control, to make it easier to install packages. Detailed instructions can be found here: http://wbond.net/sublime_packages/package_control/installation

Now, when we restart Sublime, we can access package control through the Command Palette using Ctrl+Shift+P. As you start typing commands, you get an overview of the available options. If you execute “discover packages” a browser pops up giving you a nice overview of all available packages and a search bar. To install a package, we need the command “install package”. After pressing enter we get a list of all available packages. We will install “CoffeeScript”.

Building it

After doing all that we already have some support for the language, i.e. syntax highlighting. But building the scripts will not yet work on Windows. To fix this, click Preferences –> Browse Packages… in Sublime. This will open an explorer window where you can see all the packages. Navigate to CoffeeScript\Commands and open the file CoffeeScript.sublime-build. Remove the line that has the path variable and change the cmd line to have coffee.cmd instead of coffee.

Original version:

"path": "$HOME/bin:/usr/local/bin:$PATH",
"cmd": ["coffee","-c","$file"],
"file_regex": "^(...*?):([0-9]*):?([0-9]*)",
"selector": "source.coffee"

Edited version:

"cmd": ["coffee.cmd","-c","$file"],
"file_regex": "^(...*?):([0-9]*):?([0-9]*)",
"selector": "source.coffee"

Now create a file, save it as test.coffee, add some CoffeeScript, press Ctrl+B and Sublime creates a file called test.js in the same folder.

CoffeeScript file:

console.log "Hello world!"

JavaScript file:

(function() {

  console.log("Hello world!");

}).call(this);


You can adjust the build settings to your own needs, e.g. change the output directory or include JSLint. But for getting to know CoffeeScript and playing with it locally, this will work. And it makes things a lot easier! We could, for example, integrate building LESS files to CSS in the same way. That would provide a uniform way of building different scripts and styles in a project and prevent us from having to use the command line to do everything manually, which would slow us down.
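For example, assuming the coffee compiler’s -o (output directory) option, a build config that puts the compiled JavaScript in a js subfolder next to your source could look something like this (an untested sketch, same shape as the build file we edited above):

```json
"cmd": ["coffee.cmd","-c","-o","js","$file"],
"file_regex": "^(...*?):([0-9]*):?([0-9]*)",
"selector": "source.coffee"
```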



Now that we’ve covered the Task Parallel Library, it’s time to move on.

What is PLINQ?

PLINQ stands for Parallel LINQ and is simply the parallel version of LINQ to Objects. Just like LINQ you can use it on any IEnumerable and there’s also deferred execution. Using PLINQ is even easier than using the Task Parallel Library!

Regular for loop and LINQ compared to PLINQ (with time in seconds)

How do we use PLINQ?

You can even make existing LINQ queries parallel simply by adding the AsParallel() method. That’s how easy it is! This makes it easy to use the power of parallelization, while enjoying the readability of LINQ. Isn’t that great?

var employees = GetEmployees();

// Regular LINQ
var query = employees.Select(e => e.Skills.Contains("C#"));

// Extension method style PLINQ
var queryParallel1 = employees.AsParallel()
                              .Select(e => e.Skills.Contains("C#"));

// Query expression style PLINQ
var queryParallel2 = from e in employees.AsParallel()
                     where e.Skills.Contains("C#")
                     select e;
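To make the deferred execution mentioned above concrete, here is a small standalone sketch (the sample numbers and the counter are just illustration, not from the queries above): nothing runs when the query is defined, only when it is enumerated.

```csharp
using System;
using System.Linq;

var calls = 0;
var numbers = new[] { 1, 2, 3 };

// Defining the query does not execute it yet (deferred execution)
var doubled = numbers.AsParallel().Select(n => { calls++; return n * 2; });
Console.WriteLine(calls); // prints 0: nothing has run yet

// Only when we enumerate does the work actually happen
var results = doubled.ToArray();
Console.WriteLine(results.Sum()); // prints 12
```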

Important fact: PLINQ uses all the processors of your system by default, with a maximum of 64. In some cases you might want to limit this, to give the machine some room to take care of other tasks. Everybody deserves some CPU time! So don’t be greedy and use WithDegreeOfParallelism() on heavy queries. The following example uses a maximum of 3 processors, even if there are 16 available.

var queryDegree = employees.AsParallel()
                           .WithDegreeOfParallelism(3)
                           .Select(e => e.Skills.Contains("C#"));

By default, PLINQ doesn’t preserve the order of your output relative to the input. This is because order preservation costs more time. You can enable order preservation, though, again in a very simple way, by using the AsOrdered() method. It’s good to know that OrderBy() will also take care of order preservation.

var employees = GetEmployeesOrderedByName();

var queryOrdered = employees.AsParallel()
                            .AsOrdered()
                            .Select(e => e.Skills.Contains("C#"));

We want more!

PLINQ has a lot more to offer than what we talked about here, so be sure to use Google and MSDN if you want to know more. Check out this “old” (2007) yet interesting article on PLINQ from MSDN magazine. An important read is Understanding Speedup in PLINQ on MSDN, which explains a bit more of how PLINQ works and why it sometimes defaults to sequential mode anyway.


I have talked about parallel programming in .NET before, very briefly: Parallel programming in .NET – Introduction. This follow-up post is long overdue 🙂

What is the TPL?

The Task Parallel Library is a set of APIs present in the System.Threading and System.Threading.Tasks namespaces. The point of these APIs is to make parallel programming easier to read and code. The library exposes the Parallel.For and Parallel.ForEach methods to enable parallel execution of loops and takes care of spawning and terminating threads, as well as scaling to multiple processors.

How do we use the TPL?

The following code uses the sequential and the parallel approach to go over a for-loop with some heavy calculations. I use the Stopwatch class to compare the results in a command window.

// DoHeavyCalculation stands in for whatever heavy work the loop performs
var watch = Stopwatch.StartNew();
for (int i = 0; i < 20000; i++)
{
    DoHeavyCalculation(i);
}
Console.WriteLine("Sequential Time: " + watch.Elapsed.Seconds.ToString());

watch = Stopwatch.StartNew();
System.Threading.Tasks.Parallel.For(0, 20000, i =>
{
    DoHeavyCalculation(i);
});
Console.WriteLine("Parallel Time: " + watch.Elapsed.Seconds.ToString());

The result of running this on my laptop (with multiple cores) looks like this:

Result of comparison sequential - parallel

As you can see, the parallel for-loop runs A LOT faster than the sequential version. By using all the available processing power, we can speed up loops significantly!

Below is a screenshot of the task manager keeping track of what’s happening while executing the sequential and the parallel versions. What we can see here is that at first (where the red arrow is pointing) we only use 1 core heavily. When the parallel code kicks in, all cores peak.

Task manager during comparison sequential - parallel

So, looking at the above code, implementing all this parallelism doesn’t seem to be that hard. The TPL makes it pretty easy to make use of all the processors in a machine.

Creating and running tasks

It’s possible to run a task implicitly by using the Parallel.Invoke method.

Parallel.Invoke(() => DoSomething());
Parallel.Invoke(() => DoSomething(), () => DoSomethingElse());

All you need to do is pass in a delegate; using lambda expressions makes this easy. You can call a named method or have some inline code. If you want to start more tasks concurrently, you can just add more delegates to the same Parallel.Invoke call.

If you want more control over what’s happening, you’ll need to use a Task object, though. The task object has some interesting methods and properties that we can use to control the flow of our parallel code.

It is possible to use new Task() to create a new task object, but it’s a best practice to use the task factory. (Note that you can’t use the task factory if you want to separate the creation and the scheduling of the task.)

// Create a task and start it
var task1 = new Task(() => Console.WriteLine("Task1 says hi!"));
task1.Start();

// Create and start a task using the task factory
var task2 = Task.Factory.StartNew(() => Console.WriteLine("Task2 says hi!"));

You can also get results from a task, by accessing the Result property. If you access it before the task is completed, the thread will be blocked until the result is available.

Task<int> taskReturn = Task.Factory.StartNew(() =>
{
    int calc = 3 + 3;
    return calc;
});
int result = taskReturn.Result;

To be continued..

You can chain tasks by using the Task.ContinueWith method. It’s also possible to access the result of the preceding task in the next one, using the Result property.

// Regular continuation
Task<int> task1 = Task.Factory.StartNew(() => 5);
Task<string> task2 = task1.ContinueWith(x => PrintInt(x.Result));

// Chained continuation
Task<string> task3 = Task.Factory.StartNew(() => 5)
                     .ContinueWith(x => PrintInt(x.Result));

The methods ContinueWhenAll() and ContinueWhenAny() make it possible to continue from multiple tasks by taking in an array of tasks to wait on and the action to be undertaken when those have finished. More about those functions can be found on MSDN.
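As a small sketch of ContinueWhenAll (the values and the summing continuation are just illustration): the continuation receives the array of completed tasks once they have all finished.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Two independent tasks producing values
Task<int> first = Task.Factory.StartNew(() => 2);
Task<int> second = Task.Factory.StartNew(() => 3);

// The continuation runs when both tasks have finished
// and receives the array of completed tasks
Task<int> sum = Task.Factory.ContinueWhenAll(
    new[] { first, second },
    tasks => tasks.Sum(t => t.Result));

Console.WriteLine(sum.Result); // prints 5
```

ContinueWhenAny works the same way, except the continuation receives only the first task to complete.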

The force is strong with this one

We only looked at a few functions of the TPL, and I think it’s clear this is a very powerful library. When working on applications that need a lot of processing power, parallel programming in .NET can make it a lot easier to improve performance.


Of course there is a lot more to TPL than covered in this small introduction, so go ahead and explore!


Yesterday I attended a session on unit testing by Roy Osherove in Copenhagen. As I am trying to learn more about unit testing and TDD by applying it in a pet project, it was very interesting to see what a veteran like Roy had to say about the subject of unit testing. Also very interesting was his approach in this session, as he tried to teach us about good habits by showing us bad (real world) examples.

He also pointed out that anyone interested in writing unit tests and working test driven should do test reviews. They can be used as a learning tool, for example by reviewing the tests of open source projects. But they can also be used internally, almost as a replacement for code reviews, because reviewing tests takes a lot less time and should give you a good idea of what the code is supposed to do (when working test driven).

I took some notes during the session that I would like to share – and keep here for my own reference 😉 I wrote down most of his tips, so to the unit testing experts out there some of it might seem really basic. But I thought it was interesting to have it all written down.

Three important words

The basic, yet very important requirements for tests:

  • Readable
  • Maintainable
  • Trustworthy

Unit test VS integration test

Unit tests are used for testing stuff in memory. The tests don’t change and they’re static. They don’t depend on other things.

Integration tests would be used when there is a dependency on the filesystem, a database, a Sharepoint server, etc.

Unit tests and integration tests each get their own test project!


  • Avoid test logic: too complicated
    • Ifs, switches, for loops, ..
  • No multiple asserts
    • This can be okay when you’re asserting using the same object
  • Avoid “magic numbers”
    • Using the number 42 somewhere raises the question whether it is important that the number is equal to 42; a good idea would be to use a variable with a descriptive name
  • Don’t assert on calculations or concatenations
    • Assert(“user,password”, Bleh()) is better than Assert(user + “,” + password, Bleh())
  • Don’t change or remove tests!
  • DateTime.Now (or friends like Random) –> NOT okay! These values change every time
  • Test only publics
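One common way to deal with the DateTime.Now point above is to hide the clock behind an abstraction you control. The interface and class names below are my own invention, not something from the session:

```csharp
using System;

// Demo: a fake clock fixed at 09:00 gives a deterministic result
var greeter = new Greeter(new FakeClock(new DateTime(2011, 1, 1, 9, 0, 0)));
Console.WriteLine(greeter.Greet()); // prints "Good morning"

// Production code depends on this instead of calling DateTime.Now directly
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// In a test, a fixed fake clock makes the result deterministic
public class FakeClock : IClock
{
    private readonly DateTime _fixedTime;
    public FakeClock(DateTime fixedTime) { _fixedTime = fixedTime; }
    public DateTime Now { get { return _fixedTime; } }
}

public class Greeter
{
    private readonly IClock _clock;
    public Greeter(IClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

A test can now assert on “Good morning” and the assertion will never change depending on when the test runs.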


  • Factory methods (usually in the same class as the tests using them)
    • make_xx
  • Configure initial state
    • init_xx
  • Common tests in common methods
    • verify_xx

Tests are isolated

  • Don’t call other tests in a test
  • No shared state, have cleanup code for shared objects

Mock != Stub (in short)

  • Mock = used for asserts
  • Stub = used to help the test
  • Fake = can be both


If you need to test things related to a database, that would be an integration test, and it’s a good idea to use the TransactionScope class in .NET so you can roll back everything when the test is done.
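A minimal sketch of that pattern; the repository here is an empty stand-in for real database code, and all the names are mine:

```csharp
using System;
using System.Transactions;

// Run the "test" once to show the pattern
new EmployeeRepositoryTests().Add_InsertsRow_ButRollsBackAfterTheTest();

// Stand-in for a class doing real database work (hypothetical)
public class EmployeeRepository
{
    public void Add(string name)
    {
        // imagine an INSERT happening here, enlisted in the ambient transaction
    }
}

public class EmployeeRepositoryTests
{
    // your test framework's [Test] attribute would go here
    public void Add_InsertsRow_ButRollsBackAfterTheTest()
    {
        using (var scope = new TransactionScope())
        {
            var repository = new EmployeeRepository();
            repository.Add("Kevin");

            // assertions against the database would go here

            // scope.Complete() is never called, so disposing the scope
            // rolls the transaction back and leaves the database clean
        }
    }
}
```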


Since I had fun using the Bing Maps API in C# (see Bing Maps – Geocoding and Imagery), I decided to try and do the same in PowerShell. This didn’t seem to be very hard either, as it’s possible to use the same objects as you would in a regular .NET application. Go PowerShell!
The only difference is the web service trick on the second line.

$key = "apikey";
$ws = New-WebServiceProxy -uri http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl;
$wsgr = New-Object Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy1ervice_geocodeservice_svc_wsdl.GeocodeRequest;

$wsc = New-Object Microsoft.PowerShell.Commands.NewWebserviceProxy.AutogeneratedTypes.WebServiceProxy1ervice_geocodeservice_svc_wsdl.Credentials;
$wsc.ApplicationId = $key;
$wsgr.Credentials = $wsc;

$wsgr.Query = 'Brussels, Belgium';
$wsr = $ws.Geocode($wsgr);

$wsr.Results[0] | select {$_.Address.FormattedAddress}, {$_.Locations[0].Longitude}, {$_.Locations[0].Latitude};

When you execute this in PowerShell, it looks like this:

Geocoding in PowerShell

Things like this make you think about the power of PowerShell. If you can do it in .NET, you can do it in PS!


Since I did a lot of GIS related stuff recently for work, I decided I’d have some fun with the Bing Maps API. I’ve been using Bing to display maps as a base for multiple layers of data in combination with MapGuide OS and needed to convert addresses to geocodes (coordinates). Afterwards I decided to play with it some more and I created a little app in C# that makes more use of Bing Maps.

If you want to get started with the Bing API, you’ll first need to get a key. More info on that can be found on MSDN.

Alright, now that you have your key, let’s get started!

The first thing you need to do to use the Bing Maps service is add a service reference. We’ll start out with some geocoding, so we need the geocode service. The addresses of the available services can be found here.

Click "Add Service Reference..." to open the service reference dialog

Then we can use the following code to “geocode” an address (get the coordinates):

private String Geocode(string address)
{
    GeocodeRequest geocodeRequest = new GeocodeRequest();

    // Set credentials using a Bing Maps key
    geocodeRequest.Credentials = new GeocodeService.Credentials();
    geocodeRequest.Credentials.ApplicationId = key;

    // Set the address
    geocodeRequest.Query = address;

    // Make the geocode request
    GeocodeServiceClient geocodeService = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");
    GeocodeResponse geocodeResponse = geocodeService.Geocode(geocodeRequest);

    return GetGeocodeResults(geocodeResponse);
}

GetGeocodeResults is just a function that I made to print out the response on the screen. As seen here:

Get the coordinates for an address

(There are some extra options available. You could, for example, tell the service to only return “high confidence” results. But I’m not going to talk about that here.)

Because getting the coordinates was so easy, I decided to also implement reverse geocoding. Which is (as you would expect) converting coordinates to an address.

private String ReverseGeocode(double latitude, double longitude)
{
    ReverseGeocodeRequest reverseGeocodeRequest = new ReverseGeocodeRequest();

    // Set credentials using a Bing Maps key
    reverseGeocodeRequest.Credentials = new GeocodeService.Credentials();
    reverseGeocodeRequest.Credentials.ApplicationId = key;

    // Set the coordinates
    reverseGeocodeRequest.Location = new BingMapsSoap.GeocodeService.GeocodeLocation() { Latitude = latitude, Longitude = longitude };

    // Make the reverse geocode request
    GeocodeServiceClient geocodeService = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");
    GeocodeResponse geocodeResponse = geocodeService.ReverseGeocode(reverseGeocodeRequest);

    return GetGeocodeResults(geocodeResponse);
}

Converting coordinates to addresses is easy!

After getting this far in about 15 minutes of figuring it out and coding, I couldn’t stop there! I decided to add some basic imagery for the address/coordinates that are converted. For imagery you need to add a reference to the imagery service first. Writing code for this is also pretty easy, as there are plenty of examples on MSDN that can be useful. It seems Microsoft really put some effort into documenting this right 🙂

private void GetImagery(double latitude, double longitude)
{
    MapUriRequest mapUriRequest = new MapUriRequest();

    // Set credentials using a Bing Maps key
    mapUriRequest.Credentials = new ImageryService.Credentials();
    mapUriRequest.Credentials.ApplicationId = key;

    // Set the location of the image
    mapUriRequest.Center = new ImageryService.Location();
    mapUriRequest.Center.Latitude = latitude;
    mapUriRequest.Center.Longitude = longitude;

    // Set map style and zoom level
    MapUriOptions mapUriOptions = new MapUriOptions();
    mapUriOptions.Style = MapStyle.AerialWithLabels;
    mapUriOptions.ZoomLevel = 17;

    // Set size of the image to match the size of the image control
    mapUriOptions.ImageSize = new ImageryService.SizeOfint();
    mapUriOptions.ImageSize.Height = 160;
    mapUriOptions.ImageSize.Width = 160;

    mapUriRequest.Options = mapUriOptions;

    ImageryServiceClient imageryService = new ImageryServiceClient("BasicHttpBinding_IImageryService");
    MapUriResponse mapUriResponse = imageryService.GetMapUri(mapUriRequest);
    BitmapImage bmpImg = new BitmapImage(new Uri(mapUriResponse.Uri));
    bingImage.Source = bmpImg;
}

That code gives us this result:

Looks pretty good for a small app that took almost no time to make. The Bing Maps API is pretty straightforward to work with and MSDN has some good samples to get started. So if you’re interested in working with Bing Maps, be sure to check out the documentation.

Now go and have fun with this!