Friday, 21 October 2016

The self-evident things we do or sometimes don't

This week I had quite a bit of frustration at work. This post is meant as therapy.

Here's the cause of my troubles.
For a project my team inherited we used a client-server architecture where communication was handled with .NET remoting and binary serialization (old school, I know). The DTO project was referenced in both the client and the server solution as a project reference, not as an assembly (for all the wrong reasons). All worked well in all deployment stages.

We needed to make changes to the client. We had been using Visual Studio 2015 for a couple of months, but until now we had never changed this client or server in a way that caused the sln file's Visual Studio version to change.

When running the server and client locally (from VS 2015) everything worked. But once we deployed using the build server, deserialization broke, with a useless exception, as is often the case with .NET binary serialization.

To summarize a couple of days of debugging and testing:
The sln file of the server mentioned VS 2013 while that of the client mentioned VS 2015. When building on the build server (with MSBuild), the compiler was selected according to the Visual Studio version specified in the sln file. When running locally, the compiler was always the one from VS 2015, as both solutions were opened in it.

Apparently the built DTO assemblies were different enough to cause serialization exceptions. Built from exactly the same code, but with a different compiler, that is. We came to this conclusion by comparing the assembly sizes and using a binary compare tool.
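For reference, the Visual Studio version an sln file advertises sits right in its header. The build numbers below are illustrative, but the mismatch looked roughly like this:

```text
-- server.sln (still on Visual Studio 2013) --
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 2013
VisualStudioVersion = 12.0.21005.1

-- client.sln (migrated to Visual Studio 2015) --
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 14
VisualStudioVersion = 14.0.23107.0
```

Aligning the two headers, or pinning the toolset explicitly on the build server, makes MSBuild pick the same compiler for both solutions.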

There are things you normally do without thinking, like packaging your DTO project and building it only once per release, or using something less brittle than binary serialization.

These kinds of issues let you feel the pain that can be caused by not doing those obvious things.

Thursday, 12 March 2015

Using my iPad as second screen for my Windows laptop

The other day I was watching some hands-on Pluralsight videos (which I can highly recommend, by the way) on my laptop. I had to constantly switch between my source editor and the browser where the video was playing, as my laptop screen is too small to keep both windows visible and still be practical.

Then I noticed my iPad lying right beside me and started the Pluralsight app to watch the video on it. Et voilà, problem solved.

Although the app is very user-friendly, I found it annoying to have to take my hands off the mouse and keyboard whenever I wanted to pause or restart the video on the iPad.

A quick search led me to iOS apps that could extend my screen to my tablet over a Wi-Fi connection.

First in line was the Air Display app by Avatron, which most people are really positive about. My efforts were nonetheless fruitless. I disabled my firewall, checked my router, inspected the traffic with Wireshark; everything looked OK, but still no connection. Even with the very helpful support of Avatron I wasn't able to set up the connection between my Windows 8.1 laptop and iOS 8 iPad. At least not within the time I had allotted myself.

Second in line was the iDisplay app from SHAPE. I installed the app, installed the client on Windows, the client found my iPad on the network, I clicked connect... Boom, everything worked.

The app costs 9,99€, which is a bargain for this productivity gain in my case.

Thursday, 27 September 2012

Unittesting third-party libraries

In Uncle Bob's Clean Code, which I can dearly recommend, he writes about unit tests and TDD. Nothing new under the sun here.
What was new for me was the part where he points out that it can be a good idea to unit test a third-party library. As I'm used to only putting the public methods under test and mocking out third-party calls, the idea made me frown at first.
"It's not our job to test the third-party code, but it may be in our best interest to write tests for the third-party code we use."
Of course one shouldn't adopt everything he reads without giving it some thought first, even when it comes from a renowned author such as Robert Martin.
Nonetheless there are some strong arguments in favor:
  • When you have to learn a new library you have to play with it to get acquainted with it; doing this in unit tests instead of production code lets you focus on the library itself.
  • When upgrading the library to a new version you at least know the methods under test still behave the same way.
  • You get documentation that's always up to date as well.
And all this comes at the same cost you already had to pay to learn the library.

Recently I had to implement a report feature that generated a PDF. The choice was made to use the iTextSharp library, so this was an ideal moment to put theory into practice.
The first problem I encountered was the assertion part of my tests. How could I verify that a correct PDF was created? I needed to be able to parse a PDF file to achieve this. Looking further into the iTextSharp library I discovered this was more or less possible using that same library.
But would that make any sense? Although the assert logic wouldn't use the same methods of the library, the question arose whether it was a good idea to write tests where the assert uses the same library as the act part.

Writing the following kind of test, where the assert reuses the very logic it is supposed to verify, would be quite ridiculous.
[TestFixture]
public class WhenSomeAction
{
    private int result;

    [SetUp]
    public void SetUp()
    {
        var sut = new SystemUnderTest();
        result = sut.SomeAction();
    }

    [Test]
    public void ShouldResultIntoSomething()
    {
        var sut = new SystemUnderTest();
        var expectedResult = sut.SomeAction();
        Assert.AreEqual(expectedResult, result);
    }
}
Given that my asserts didn't use the same logic of the library, I proceeded this way. In the end I wasn't going to pull in another PDF library just for testing purposes, and I definitely wasn't going to write one myself.
I finally ended up with a couple of unit tests (the PDFs that had to be created were rather simple) for creating a document, a footer, a header and a table.
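A minimal sketch of such a test, assuming iTextSharp 5 and NUnit (the document contents and test names here are illustrative, not the actual project code), would look like this:

```csharp
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;
using NUnit.Framework;

[TestFixture]
public class WhenCreatingASimpleDocument
{
    [Test]
    public void ShouldContainTheAddedParagraph()
    {
        // Act: build a one-paragraph pdf entirely in memory.
        byte[] pdfBytes;
        using (var stream = new MemoryStream())
        {
            var document = new Document();
            PdfWriter.GetInstance(document, stream);
            document.Open();
            document.Add(new Paragraph("Hello iTextSharp"));
            document.Close();
            pdfBytes = stream.ToArray();
        }

        // Assert: read the pdf back with the same library.
        var reader = new PdfReader(pdfBytes);
        var text = PdfTextExtractor.GetTextFromPage(reader, 1);
        Assert.IsTrue(text.Contains("Hello iTextSharp"));
    }
}
```

The act and the assert touch different corners of the library (writing versus parsing), which is what made this acceptable to me.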

I'm curious about the day we decide to upgrade the iTextSharp library. Unfortunately it's probable that I'll have to modify the assert logic of my tests once the library changes.

Saturday, 25 August 2012

Silverlight datepicker in a MVVM master-detail setup

Working on my pet project, where the UI is developed in Silverlight 4, I ran into a bit of a problem the other day. The setup was simple: a master-detail where the master is a listbox and the detail a bunch of textboxes. To maintain the separation of concerns we use the MVVM pattern. The problem occurred when adding a Silverlight datepicker control to the detail control. Nothing special, one would think. Here's my simplified xaml.
<Grid x:Name="LayoutRoot" Background="White">
    <StackPanel Width="300" Margin="25">
        <ListBox x:Name="theListBox" ItemsSource="{Binding Dates}" SelectedItem="{Binding SelectedDate, Mode=TwoWay}">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Date}"/>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>
        <StackPanel DataContext="{Binding SelectedDate, Mode=TwoWay}">
            <sdk:DatePicker x:Name="theDatePicker" SelectedDate="{Binding Date, Mode=TwoWay}" />
        </StackPanel>
    </StackPanel>
</Grid>
The master viewmodel contains a list of child viewmodels, called "Dates", and a selected child viewmodel, called "SelectedDate", as you may deduce from the xaml. The child viewmodel contains one property, called "Date", which returns a DateTime.

To reproduce the problem: select a date in the listbox, change the value in the datepicker by typing the new value (not with the calendar control) and then, before triggering a lostfocus event on the datepicker, select another date in the listbox. Much to my surprise I noticed it was not the first selected listbox item but the second one that was changed.

To illustrate the problem more vividly, you can have a try here.

The problem seemed to be that the binding of the datepicker only triggers, on losing focus, after the selected item of the listbox has already changed, thus updating the newly selected item and not the previous one.

After trying a lot of possible solutions I figured: if Silverlight won't do the binding update correctly, I'd have to do it myself. This implied setting the UpdateSourceTrigger of my datepicker and listbox bindings to Explicit, so I could decide myself when to trigger the binding operation. Furthermore I had to create a custom datepicker and a custom listbox so that I could attach the appropriate handlers to the events.
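The downloadable source contains the full custom controls; the core of the explicit update boils down to something like this sketch (the handler wiring is simplified, and the control names are taken from the xaml above):

```csharp
// With UpdateSourceTrigger=Explicit on the DatePicker's SelectedDate binding,
// nothing is pushed to the viewmodel until we say so. Before the ListBox
// selection moves on, we flush the pending edit into the *previous* item.
private void FlushPendingDateEdit()
{
    var expression = theDatePicker.GetBindingExpression(DatePicker.SelectedDateProperty);
    if (expression != null)
        expression.UpdateSource(); // writes the typed value to the old SelectedDate
}
```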

You can download the source here.

I find it peculiar that such a simple thing requires this much work. Or perhaps I'm missing something; no, actually, I hope I'm missing something. I'm calling out to anyone who has a better solution here. I got it to work, but I'm not happy with the (imo) hacky solution. Luckily the "dirt" stays in the code-behind of the view.

One of the strengths of Silverlight and WPF is the ease of binding your view to your viewmodel. Losing that makes it a much less attractive choice.

Friday, 23 March 2012

Listing files without server side code

The other day I had a request from a client to make a webpage for his company. No big deal, 5 or 6 static pages with a little css would do the trick. Then he asked me if it would be possible to create a side menu containing links to files in PDF format. He also wanted to be able to manage those PDFs independently.

A whole other story for me at that point. I needed server-side code to create the links to the PDFs and also a private "members" area with authentication for managing the files. Still no problem, but I wondered if I really needed server-side code.

Applying the most valuable programming skill (KISS) I came up with a solution where I
  • enabled folder browsing on the folder that contained the PDFs
  • fetched the page that my webserver (IIS in my case) generates when browsing to that folder, using a jQuery ajax call
  • distilled the links from the <a> tags with an href ending in ".pdf"
  • parsed them into a format I liked and added them to an existing div
  • created an ftp user for the client so he could manage the PDFs

Here's the javascript code

$.ajax({
    url: "./newsletters/",
    success: function(data, textStatus, jqXHR) {
        // pick the pdf links out of the directory listing IIS returns
        $(data).find("a[href$='.pdf']").each(function() {
            // "#menu" stands in for whatever existing div holds the links
            $("<a/>").attr("href", this.href).text($(this).text()).appendTo("#menu");
        });
    }
});

Anything that solves my problem in about 2 lines of code deserves a post.

Sunday, 12 February 2012

Lazy loading WCF services.

The other day I had a discussion with Peter, who works on an application that uses NHibernate as ORM and WCF as the means of communication between client and server. The problem he was confronted with, in short, was a parent-child relation in his domain where the NHibernate mapping for the parent had a one-to-many relation to its children, but the child mapping didn't reference the parent. Now he needed to fetch all the children for a certain parent id, in a separate WCF call for that matter, without first fetching the parent. If you want the more elaborate version see here, and for the solution he used see here and here.

Although he solved it quite nicely, a different discussion sprouted from it. How would you develop a WCF service that returns a datacontract on which some members are lazy loaded over WCF, and would you actually do this?
I remembered working on a project that had some kind of mechanism resembling it, but the experience was rather unpleasant, as they used their entities as DTOs, which was ugly and caused all kinds of problems. Ah... I think I just found the topic of my next post.

To answer the first question, how would you do it: well, here's how I would do it.
The gist of it is to have your datacontract use a service locator, or perhaps better an IoC container, to resolve the interface of the service contract that fetches the children. On the server, the implementation of this interface could for example fetch the children from the database. On the client, it would call the service responsible for fetching the children.

So let's have a look at the datacontract from the parent.

public class Parent
{
    private IList<Child> _children;

    public IChildServices ChildService
    {
        get { return ServiceLocator.SillyServiceLocator.GetInstance<IChildServices>(); }
    }

    public int Id { get; set; }

    public String Name { get; set; }

    public IList<Child> Children
    {
        get
        {
            if (_children == null)
                _children = ChildService.GetChildren(Id);
            return _children;
        }
    }
}
Note the "Children" getter, where we check if the children still have to be loaded; if so, the service locator is queried for the implementation of IChildServices and the GetChildren method is executed on it.

The implementation of IChildServices on the client would look as follows.

public class ChildServicesProxy : ClientBase<IChildServices>, IChildServices
{
    public ChildServicesProxy() : base("ChildService")
    {
    }

    public IList<Child> GetChildren(int parentId)
    {
        return Channel.GetChildren(parentId);
    }
}

Here we see that the proxy's GetChildren method simply forwards to the GetChildren WCF call.

The IChildServices interface is also implemented on the server.

public class ChildServices : IChildServices
{
    public IList<Child> GetChildren(int parentId)
    {
        // Hardcoded here, but this could fetch the children from the database.
        if (parentId == 1)
            return new List<Child>
                       {
                           new Child { Name = "Parent 1 - Child 1" },
                           new Child { Name = "Parent 1 - Child 2" }
                       };
        if (parentId == 3)
            return new List<Child>
                       {
                           new Child { Name = "Parent 3 - Child 1" }
                       };
        return new List<Child>();
    }
}

Here the actual fetching of the data happens. In this case the data is hardcoded, but a call to some persistent store could be made here.

The only thing left to do is register the implementation of IChildServices with the service locator of the client and of the server at application startup.
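The stripped-down sample uses its own SillyServiceLocator, whose registration API isn't shown in this post; assuming it exposes a simple Register method to mirror the GetInstance<T> call used in Parent, the startup wiring would be along these lines:

```csharp
// Hypothetical registration calls; Register<T> is an assumed method name.
// On the server: hand out the implementation that hits the (hardcoded) store.
ServiceLocator.SillyServiceLocator.Register<IChildServices>(new ChildServices());

// On the client: hand out the WCF proxy instead.
ServiceLocator.SillyServiceLocator.Register<IChildServices>(new ChildServicesProxy());
```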

Perhaps more interesting, at least for me, are the pros and cons of using such an approach.
I definitely wouldn't use it by default, and it should be very obvious to the consumer that he's using a lazy-loaded contract, as it implies a major performance hit that one would not immediately associate with dereferencing a property.
On the other hand, if at the client side you map your datacontract onto a client domain, I guess it makes more sense to fetch all the data you need at once, before mapping.

Anyway, my conclusion is that I would only use this in very specific cases, while making it more than obvious that the datacontract is a lazy-loaded one.

You can find a working version of it here. Please note that it has been stripped down for simplicity's sake.

What are the pros and cons according to you? Would you use this approach and if so in which cases?

Tuesday, 13 December 2011

"is" operator confusion

A while back I stumbled upon the following code.
public void SomeMethod(object param)
{
    if (!(param is DateTime?))
        return;

    var d = (DateTime?) param;
    if (d.HasValue)
    {
        //do some stuff
    }
}
Apart from the fact that you shouldn't pass an object to a method when you actually want it to be a "DateTime?" (the code comes from a wpf converter, where the parameter is passed as an object) and that the "as" operator would make for a cleaner solution, the thing that triggered my interest were the Resharper squiggles on the HasValue check.

It told me "Expression is always true".
How can this always be true? The only thing I know for sure is that the reference holds a nullable DateTime once I get to that part of the code. But what makes it think it could not hold a null value?

As is often the case, the answer was simpler than the question... The "is" operator checks the type, but it also returns false when the value is null. It doesn't seem logical to me, but on the other hand maybe it's a way of forcing you to use the "as" operator when it comes to reference types.
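A quick illustration of that behavior (not from the original converter, just a minimal sketch):

```csharp
object boxed = null;
bool check = boxed is DateTime?;   // false: "is" returns false for null
boxed = (DateTime?) DateTime.Now;  // a DateTime? with a value boxes as a plain DateTime
check = boxed is DateTime?;        // true
```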

And indeed, when looking at the IL I get the following code.

IL_0000: nop
IL_0001: ldarg.0
IL_0002: isinst valuetype [mscorlib]System.Nullable`1
IL_0007: ldnull
IL_0008: cgt.un
IL_000a: stloc.1
IL_000b: ldloc.1
IL_000c: brtrue.s IL_0011
IL_000e: nop
IL_000f: br.s IL_0028
IL_0011: ldarg.0
IL_0012: unbox.any valuetype [mscorlib]System.Nullable`1
IL_0017: stloc.0
IL_0018: ldloca.s d
IL_001a: call instance bool valuetype [mscorlib]System.Nullable`1::get_HasValue()
IL_001f: ldc.i4.0
IL_0020: ceq
IL_0022: stloc.1
IL_0023: ldloc.1
IL_0024: brtrue.s IL_0028
IL_0026: nop
IL_0027: nop
IL_0028: ret

So, starting from the top and supposing null is passed as the argument:
IL_0001: ldarg.0
the parameter with value null is loaded on the stack
IL_0002: isinst valuetype [mscorlib]System.Nullable`1
the value null is popped from the stack and checked to see whether it is an instance of DateTime?; it is not, so null is pushed onto the stack
IL_0007: ldnull
null value is loaded on the stack
IL_0008: cgt.un
the stack is popped twice, in our case meaning null is popped twice. The two values are compared and, as they are equal, 0 is pushed onto the stack
IL_000a: stloc.1
pops the value 0 from the stack into variable 1
IL_000b: ldloc.1
pushes the value 0 in variable 1 on to the stack
IL_000c: brtrue.s IL_0011
pops value 0 from the stack and jumps to IL_0011 (to continue the program flow) if the value is true, which is not the case
IL_000f: br.s IL_0028
jumps to IL_0028 and the method is exited

So this brings us back to our C# code and explains why we know for sure that, once we get past the "is" check, param is different from null, so HasValue always results in true. Nice job, Resharper.

A genuine case of RTFM I suppose. Where in this case M stands for MSDN.