Thursday, December 31, 2009

An IoC-Agnostic Prism Bootstrapper

Prism from Microsoft Patterns & Practices leans on an Inversion of Control (IoC) container. Although Prism is nominally indifferent to your choice of container, you have to work a bit to extricate yourself from its default choice, Microsoft Unity … more so with some containers than others.

The sweat and tears flow in the “Bootstrapper” class with which you launch your application. Most Prism samples feature a Bootstrapper that inherits from “UnityBootstrapper”. “UnityBootstrapper” busies itself with registering and configuring a dozen or so Prism components in a manner characteristic of Unity, and it unapologetically takes a hard dependency on Unity itself. You are welcome to write your own Bootstrapper base class from scratch; of course this entails reading and understanding what “UnityBootstrapper” does so you can reassure yourself that your replacement achieves the same effects. Who has time for that?

Almost a year ago I found myself writing a single application with three different client technologies, each dependent upon its own container. The three diverged in minor details – each was a different incarnation of a Container + ObjectBuilder – but the differences were sufficiently great that I felt compelled to unleash the Prism base bootstrapper class from its Unity chains.

Preoccupied with more important business, I took the most expeditious route. I salvaged as much of Prism’s UnityBootstrapper as I could while replacing its direct dependency on Unity library references.

Prism initialization is not something I want to own. When P&P updates its UnityBootstrapper, I want a quick, no-thought pathway to update my IoC-agnostic version.

The result was Cal.Bootstrapper (“CAL” is short for “Composite Application Library”, Prism’s official name) and some supporting types. Cal.Bootstrapper looks like a cleaned-up UnityBootstrapper. It goes through the same motions of registering types and configuring instances albeit with reference to a denuded IoC, shorn of all but the most minimal functionality. This, dear reader, will be my gift to you.

Recently, a few folks have wondered about replacing Unity with a completely different IoC container. The one I hear most often is StructureMap. Its author, Jeremy Miller, has several times entertained the notion of replacing “UnityBootstrapper” with something that reflects glory on StructureMap. Apparently he has better things to do. I guess I’ll tackle it.

The Goods

I’ve bundled everything described in this post into a 2 meg zip, PrismRI-Ioc.zip, downloadable from my company website. Do with it as you will.

Please note: I have no intention of maintaining this; don’t even ask.

No, This is Not the Right Way

My knowledge of StructureMap is deeply impoverished but, from the little I do know, I’m certain that the “proper” course is to abandon “UnityBootstrapper” altogether. But there’s a reason Jeremy hasn’t tackled this and no one seems to have done so for other IoC products either.

Nicholas Blumhardt, author of Autofac, took a crack at integrating Prism with his IoC back in August, 2009. He wanted to do it “the right way,” completely rethinking how a Prism app is bootstrapped and how application modules are written. He describes it here as a “work-in-progress”. I guess it took a little more work than he thought it would and he must have found something better to do because it doesn’t look like he finished.

I hoped to avoid this “road to good intentions” by doing it the “the easy way” which may be the “wrong way” but it works … which beats something that doesn’t.

I decided to change as little of the Unity-oriented Prism app approach as I could. I want to get something done. I want you to be able to convert your running Prism app with minimal effort.

Accordingly, my “Cal.Bootstrapper” mimics the “UnityBootstrapper” and it wallows in the style and ceremony that other IoC products despise.

Is it all that bad? Why am I changing my IoC container anyway? I ought to have a clear idea about what I want from an alternative to Unity. I’m wasting my time if I’m only replacing the UnityBootstrapper with something “better”. Frankly, I don’t care how Prism configures its minimal requirements. If I want StructureMap (or Autofac, Castle Windsor, Ninject, etc.), it is because I prefer that IoC for registering and configuring my own application components. I may replace the Prism configuration eventually but right now that is neither my motivation nor my first step.

Therefore, although my Cal.Bootstrapper is ugly as sin, it will serve me for the nonce as I hope it serves you and I shall proceed to describe it without further apology. I will let it configure Prism the ugly way using my IoC container in place of Unity after which I am free in my own code to drive my IoC container as it was meant to be driven.

Battle Plan

A Prism application has a Bootstrapper class to configure the container, create and configure the main UI view called the “shell”, acquire application modules, and launch the application.

You are expected to write a custom Bootstrapper that inherits from a base Bootstrapper class provided by Prism (the aforementioned UnityBootstrapper). This base class handles the above inventory of start-up tasks. Your derived Bootstrapper is supposed to extend the base class by overriding certain abstract and virtual methods.

In other words, the bootstrapper is launch karaoke. You add your voice at predefined moments. It’s great if the music suits you. If you’re singing off key, you should stop faking it and write your own song.

Tempted as I was, I chose not to quarrel with this approach. In fact, I took a deep breath and then embraced it … for the reasons discussed above.

I decided to replace the UnityBootstrapper with an alternative base class (Cal.Bootstrapper) that does not know about Unity.

Your derived Bootstrapper provides an IoC container to this base class, wrapped in a vendor-neutral “ContainerAdapter” and delivered by overriding the abstract “CreateContainerAdapter” method.
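
To see the shape of it, here is a minimal sketch of such a derived Bootstrapper. The base class name (“CalBootstrapper”), the ContainerAdapter property, and the StructureMapContainerAdapter type are my stand-ins for the pieces described in this post, not verbatim code from the download:

using System.Windows;
using Cal.Bootstrapper;

// Hypothetical derived Bootstrapper; "Shell" is your main Window.
public class MyBootstrapper : CalBootstrapper {

  protected override IContainerAdapter CreateContainerAdapter() {
    // Swap in UnityContainerAdapter, AutofacContainerAdapter, etc.
    return new StructureMapContainerAdapter();
  }

  protected override DependencyObject CreateShell() {
    var shell = ContainerAdapter.Resolve<Shell>();
    shell.Show(); // desktop flavor; a Silverlight app sets RootVisual instead
    return shell;
  }
}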

I have a few additional goals:

  • “Improve” the UnityBootstrapper so I can figure out what it does.
  • Make it easy to update my base class when (if) Microsoft updates the UnityBootstrapper.
  • Minimize the effort to get my existing app bootstrapper deriving from the IoC-agnostic base class.
  • Minimize the changes necessary to convert my existing Unity-based Prism app.
  • “Prove it” with an app Prism-aficionados know: the Prism Reference Implementation.

The original UnityBootstrapper is code only a mother could love. Long methods; much repetition; mysterious names. I can’t replace it until I understand what it does. My first step is to refactor it for readability. This is somewhat a matter of taste but I hope you’ll agree that my revision, called “RefinedUnityBootstrapper”, is an improvement.

I mustn’t get carried away if I am to satisfy my second subsidiary goal. Too many improvements would hinder re-synchronization with Microsoft’s changes.

I want to prove that my approach works on an “application” that anyone can examine and that would be familiar to Prism developers. The Prism RI is the obvious candidate; that’s what I tackled.

I haven’t looked at the RI in a long time. The version on my disk dates from February 2009. That’s pre-Silverlight 3 and, although I had patched it for SL 3, it’s still old and funky. The UI doesn’t work the way it should: stock positions don’t change when you buy and sell, the “Watch List” feature is broken, the Silverlight version throws a recoverable exception when you sell more stock than you own.

But I didn’t have time to freshen it up, it was good enough for my purposes, and it’s all I’m going to give you.

Swappable Versions

This is a tutorial. I want you to be able to switch among different container implementations without changing the body of the application. I don’t have the time or the interest to do anything fancy.  In the “real” world, I’d make the change and discard the original.

The version of the RI you run depends entirely upon which version of the Prism RI Bootstrapper class is executing. The class is divided into two partial class files. One of them is constant. The other holds just the code that varies from container to container. The file with variations is called StockTraderRIBootstrapper.Ioc.Desktop or StockTraderRIBootstrapper.Ioc.Silverlight.

The variations are governed by compiler directives at the top. You uncomment ONE of them, recompile, and let ‘er rip. The example below shows the bootstrapper readied for StructureMap.

//***************************************************************************
// *** PICK YOUR BOOTSTRAPPER AND IOC CONTAINER WITH _ONE_ OF THESE DEFINES

//#define ORIGINAL_UNITYBOOTSTRAPPER
//#define REFINED_UNITYBOOTSTRAPPER
//#define UNITY_CONTAINER_ADAPTER
#define STRUCTUREMAP_CONTAINER_ADAPTER
//#define AUTOFAC_CONTAINER_ADAPTER

//***************************************************************************

Each variation takes only a few lines. I set the title text so you can see which version is running on screen (the original three-letter “CFI” company name changes to one of “OUB”, “RUB”, “UCA”, “SMA”, or “ACA” to match your selection). For the IoC-agnostic variations, there is an override of the method that creates the selected IoC ContainerAdapter. The two versions pinned to the UnityBootstrapper (remember that I “refined” the UnityBootstrapper first simply to understand it better) have a shim method instead.
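
Concretely, each variation in the partial class file looks roughly like this (a sketch; the member names are illustrative, not copied from the download):

#if STRUCTUREMAP_CONTAINER_ADAPTER
public partial class StockTraderRIBootstrapper {
  // On-screen marker so you can tell which version is running.
  private const string CompanyCode = "SMA";

  protected override IContainerAdapter CreateContainerAdapter() {
    return new StructureMapContainerAdapter();
  }
}
#endif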

Incredibly stupid, I know. I’d never ship anything like this. I live with it because it’s a throwaway demo and doesn’t merit the effort or complexity of proper decoupling. Or at least it doesn’t merit that attention yet.

The rest of the Prism RI stays constant. It took some work to keep it that way. Not a lot, fortunately; the Patterns and Practices people did a pretty good job of keeping the RI IoC-neutral.

I won’t bore you with the details of my journey to this blissful place. They are chronicled in “PrismRIChanges.txt”, located in the PrismRI folder next to the Visual Studio Solution file, “StockTraderRI-IoC.sln”. The gist of it is:

  • Add the IoC assemblies in a common directory structure.
  • Add the PrismBootstrapper solution with its IoC-agnostic Cal.Bootstrapper and the IoC-specific ContainerAdapters.
  • Revise the start-up project, StockTraderRI, to access the above.
  • Replace references to “IUnityContainer” with “IContainerAdapter”.
  • Remove module references to Unity.

You’ll do pretty much the same if you convert your existing Prism application to another IoC container.
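
A typical module-level change looks like the following sketch. IMarketFeedService and its implementation come from the RI; the shape of the class is my reconstruction of the post-conversion result, not the literal file:

using Cal.Bootstrapper;

public class MarketModule { // implements Prism's IModule in the real RI
  private readonly IContainerAdapter container;

  // Formerly: public MarketModule(IUnityContainer container)
  public MarketModule(IContainerAdapter container) {
    this.container = container;
  }

  public void Initialize() {
    // Register module-level services through the vendor-neutral adapter.
    container.RegisterType<IMarketFeedService, MarketFeedService>(
        ContainerRegistrationScope.Singleton);
  }
}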

ContainerAdapters

The fun is in the ContainerAdapters which implement the following interface:

namespace Cal.Bootstrapper {
  public interface IContainerAdapter {

    IContainerAdapter RegisterInstance<T>(T instance);

    IContainerAdapter RegisterType<TFrom, TTo>(
        ContainerRegistrationScope scope) where TTo : TFrom;

    IContainerAdapter RegisterType(
        Type fromType, Type toType, ContainerRegistrationScope scope);

    IContainerAdapter RegisterTypeIfMissing<TFrom, TTo>(
        ContainerRegistrationScope scope) where TTo : TFrom;

    IContainerAdapter RegisterTypeIfMissing(
        Type fromType, Type toType, ContainerRegistrationScope scope);

    T Resolve<T>() where T : class;

    object Resolve(Type type);

    T TryResolve<T>() where T : class;

    object TryResolve(Type type);
  }

  public enum ContainerRegistrationScope {
    Singleton,
    Instance,
  }
}

Not so bad if you know your container. Clearly IContainerAdapter must be so dumb that any IoC Container vendor can support it. It needs to be just smart enough for the base Bootstrapper class to configure Prism. Fortunately, this challenge is easily met.

The code tends to be boilerplate. Autofac took a little more work (see below) but still weighs in at only 180 lines.

You also have to write an adapter that implements Microsoft.Practices.ServiceLocation.IServiceLocator; virtually everyone has done this already.
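
Here, for flavor, is a trimmed-down sketch of what a Unity-flavored adapter might look like. It is my reconstruction, not the code in the zip: TryResolve swallows Unity’s ResolutionFailedException, and RegisterTypeIfMissing tracks registrations in a local HashSet (a shortcut; Unity 1.2 offers no first-class “is registered” query that I know of).

using System;
using System.Collections.Generic;
using Cal.Bootstrapper;
using Microsoft.Practices.Unity;

public class UnityContainerAdapter : IContainerAdapter {
  private readonly IUnityContainer unity = new UnityContainer();
  private readonly HashSet<Type> registered = new HashSet<Type>();

  public IContainerAdapter RegisterInstance<T>(T instance) {
    unity.RegisterInstance<T>(instance);
    registered.Add(typeof(T));
    return this;
  }

  public IContainerAdapter RegisterType<TFrom, TTo>(
      ContainerRegistrationScope scope) where TTo : TFrom {
    return RegisterType(typeof(TFrom), typeof(TTo), scope);
  }

  public IContainerAdapter RegisterType(
      Type fromType, Type toType, ContainerRegistrationScope scope) {
    // Map the adapter's two scopes onto Unity lifetime managers.
    LifetimeManager lifetime =
        (scope == ContainerRegistrationScope.Singleton)
            ? (LifetimeManager)new ContainerControlledLifetimeManager()
            : new TransientLifetimeManager();
    unity.RegisterType(fromType, toType, lifetime);
    registered.Add(fromType);
    return this;
  }

  public IContainerAdapter RegisterTypeIfMissing<TFrom, TTo>(
      ContainerRegistrationScope scope) where TTo : TFrom {
    return RegisterTypeIfMissing(typeof(TFrom), typeof(TTo), scope);
  }

  public IContainerAdapter RegisterTypeIfMissing(
      Type fromType, Type toType, ContainerRegistrationScope scope) {
    if (!registered.Contains(fromType)) {
      RegisterType(fromType, toType, scope);
    }
    return this;
  }

  public T Resolve<T>() where T : class { return (T)Resolve(typeof(T)); }

  public object Resolve(Type type) { return unity.Resolve(type); }

  public T TryResolve<T>() where T : class { return (T)TryResolve(typeof(T)); }

  public object TryResolve(Type type) {
    try { return unity.Resolve(type); }
    catch (ResolutionFailedException) { return null; }
  }
}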

IoC Containers Demo’d

I’ve tried this approach with three IoC Containers: Unity (Unity v.1.2.0.0, ObjectBuilder2 v.2.2.2.0), StructureMap (v.2.5.4.0), and Autofac (Desktop: v.1.4.5.676, Silverlight: 1.4.4.572).

You should know that I am not an expert in any of these containers. In fact, I just winged it, especially with StructureMap and Autofac about which I know squat. I figured I could download them, skim whatever documentation they offered, and get something to work adequately for Shmoes like me.

It’s not quite that easy and I have low confidence in my use of these containers. Their authors are invited to lend a hand improving matters … after they’re done laughing.

I wanted to show at least one non-Unity container for both desktop and Silverlight applications. StructureMap doesn’t have a Silverlight version yet (come on, Jeremy). I tried Autofac because it does have a Silverlight version.

I’m glad I did. The StructureMap implementation was a breeze. Autofac was much harder. I’m not trashing Autofac. It just happened to be harder for me, partly due to my ignorance and mostly because of the position Autofac takes on resolving unregistered types.

Major Container “Gotcha”

Container differences matter! One pervasive Prism practice in particular relies on a Unity default behavior that other containers do not observe. It’s serious enough to influence your choice of containers or make your conversion substantially more difficult.

Prism instantiates a lot of its own components with unregistered public concrete classes. For example, somewhere in the bowels of RegionManager initialization, it resolves “DelayedRegionCreationBehavior”. This class is never registered with the container.

Unity is ok with this. When asked to resolve an unregistered public concrete type, it treats the type as if it had been registered and scoped as an instance (i.e., you get a fresh instance each time). Both TryResolve and Resolve behave this same way.

Prism components count on this default Unity container behavior. What if you change containers?

StructureMap has the same default behavior when resolving an unregistered public concrete type. However, its TryResolve equivalent, “TryGetInstance”, returns null. As I write, I have not detected a consequence of this difference; but I know it’s waiting in the grass somewhere.

You could argue all day about whether a container should return null or return an instance the way Unity does. You could argue whether it should default to Singleton or Instance scope.

Autofac punts. If you attempt to Resolve an unregistered type, it throws an Autofac.ComponentNotRegisteredException with a clear message. “TryResolve” returns null, which is the sensible thing to do.

Aside: the three ContainerAdapter files in my sample implement a “ContainerPlay” class that reveals these differences.
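
If you’d rather run it than read about it, a probe along these lines (my paraphrase of what ContainerPlay does, not its actual code) exposes the difference:

using System;
using Cal.Bootstrapper;

public class DummyService { } // public, concrete, never registered

public static class ContainerProbe {
  public static void Run(IContainerAdapter adapter) {
    var maybe = adapter.TryResolve<DummyService>();
    Console.WriteLine(maybe == null
        ? "TryResolve returned null (StructureMap's TryGetInstance, Autofac)"
        : "TryResolve returned a fresh instance (Unity)");

    try {
      adapter.Resolve<DummyService>();
      Console.WriteLine("Resolve returned a fresh instance (Unity, StructureMap)");
    } catch (Exception e) { // e.g., Autofac.ComponentNotRegisteredException
      Console.WriteLine("Resolve threw " + e.GetType().Name + " (Autofac)");
    }
  }
}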

I can’t quarrel with Autofac’s choice but I sure paid the price for purity. I had to hunt down all resolutions of concrete types and manually register them during AutofacContainerAdapter initialization. Unfortunately even that is insufficient. Many class constructors take parameters defined by concrete types. The container will attempt to resolve these and will throw at runtime. There aren’t integration tests to detect these automatically so I have to wander through the app under the debugger. I still haven’t got it to work.

It was easier to get the Prism RI working with StructureMap. I have assumed that the only critical difference is with explicit “TryResolve” calls. I haven’t encountered any optional constructor parameters so I expect StructureMap and Unity to resolve these in the same way.

I searched the Prism and RI source code for “TryResolve” calls involving concrete types. There weren’t many of them. I only found one of potential consequence. The base bootstrapper calls “TryResolve<RegionAdapterMappings>”. Fortunately, it also registers RegionAdapterMappings explicitly so resolution is always successful. Perhaps I was lucky. Remember to be careful how you use TryResolve in your application.

The net of it is that containers differ. Prism should not have relied on a Unity-specific type resolution feature. If you have done so in your Unity-based application (as I have), you too are vulnerable.

Where Did The Tests Go?

I removed the test projects from the Prism RI and the Composite Application Library projects that go with it. Why did I do that? Laziness?

I really thought this simple task would take maybe a long day at most. Stripping the tests away from the start was supposed to help me stay in the time box. It was going to be non-trivial to set up Prism’s test environment, convert the tests, and repackage for distribution to you. I didn’t think I could afford it.

The point of the exercise was to demonstrate how to liberate your application to use an alternative container. I wasn’t trying to deliver a product and there is limited value in tests for a version of the Prism RI that is (a) obsolete and (b) irrelevant to your application.

Unfortunately, this “simple” task kept expanding in scope; it took four long days. Geez, like that never happens.

Now that I’m “done” I wish I had those tests back, the UnityBootstrapperFixture in particular. Debugger testing is painful as we all know. Had I committed to automated tests, I would have written something better than the idiotic compiler-directive-based approach to swapping containers. Sure it would have cost me something – a day? – to get it right, a day I didn’t want to spend. But if someone wants to take this further, the tests would accelerate his progress greatly and imbue his efforts with confidence. Mea culpa.

I wonder if some kind soul will restore them?

Happy New Year, everyone!

Monday, November 23, 2009

PDC 2009 Session Links

I extracted the session titles and links into this PDF to make it easier for me (and for you) to find and navigate to the PDC 2009 session materials. Hope it works for you.

Tuesday, November 3, 2009

Easing into Prism

I received a familiar email today:

Is it possible to start slowly with this Prism stuff and work my way into it gradually? Can I get something running quickly with DevForce as I know it now? My client has seen a prototype and I need to construct it soon. Then can I come back later and modularize the whole thing behind the scenes?

Here’s my reply:

You do not need Prism to get started!  You can introduce it into your application as you see opportunities that make sense to you. Prism is not a monolithic framework. It is a small collection of free-standing components that also work well together. You can adopt it a piece at a time; some of it you may comfortably ignore.

It’s definitely worth reading about. It’s worth looking at the Reference Implementation and the Quick Starts. Get the Composite Application Guidance book from Patterns and Practices, look at what folks are doing with it and saying about it.

Don’t get hung up on Prism itself. It is guidance first and implementation second. Poke around. See how other people tackle some of the same concerns. You might look at Caliburn, for example.

In other words, take your time and educate yourself.  You don’t have to dive into the deep end.

However, I feel strongly that you should start wrapping your head around Dependency Injection (DI) and Inversion of Control (IoC) containers as soon as you can.  They will completely transform the way you write your application. They will position you for future modularity and make your code ever so much easier to maintain.

Learning IoC is also a great career move.

When you see yourself writing “new XXX(…)” you will learn to stop and ask yourself, “Do I really want to lock myself into the XXX type or would I be better off if XXX were delivered to me instead?”  In recent years my answer has increasingly been to avoid “new” and let the IoC construct and inject the instance I need.
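
In code, the shift is small but fundamental. Here is a generic illustration (all of the types are made up for the example):

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

// Before: "new" locks this class to one concrete repository.
public class BeforeOrderViewModel {
  private readonly SqlOrderRepository repository = new SqlOrderRepository();
}

// After: the dependency is delivered; the container (or a test)
// decides which IOrderRepository implementation shows up.
public class OrderViewModel {
  private readonly IOrderRepository repository;

  public OrderViewModel(IOrderRepository repository) {
    this.repository = repository;
  }
}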

Don’t get carried away now. It’s a shiny new toy. Take it easy. But get used to it.

I know you asked about Prism and here I am waxing on about IoC. I only go on about this because your decision to use or not use DI/IoC is fundamental. Prism (and its ilk) will come along naturally if you use DI; they won’t seem so natural if your application is not DI-ready.

I promise the effort of learning DI (and it does take some effort) will pay off quickly … and for the rest of your career. The principles are fundamental to emerging Microsoft technologies (see the Managed Extensibility Framework – MEF). Dependency Injection is the primary vehicle for implementation of most modern application development patterns.

You have plenty of IoC containers to consider. Microsoft Unity is one choice but by no means the only one: Autofac, Castle Windsor, Ninject, and StructureMap are popular (apologies to any I omitted).

Do not be overwhelmed! These alternatives are vying for your affections with tons of features and you may easily become convinced that DI is super-complicated.

You can ease into DI. Take a look at our “Prism Explorer” (which uses Unity); you’ll hardly notice IoC but it’s there. I limited myself to the simplest Unity techniques. If I remember correctly, I only use constructor injection, I register very few interface types, and I don’t use any of the attribute markup. You can go amazingly far without any advanced IoC features.
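
In Unity terms, the “simplest techniques” amount to little more than this (reusing the made-up types from the sketch above):

using Microsoft.Practices.Unity;

public static class Program {
  public static void Main() {
    var container = new UnityContainer();

    // Register the few interface types that need mapping ...
    container.RegisterType<IOrderRepository, SqlOrderRepository>();

    // ... and let constructor injection assemble everything else.
    // OrderViewModel is an unregistered concrete type; Unity builds it
    // and injects the IOrderRepository for us.
    var viewModel = container.Resolve<OrderViewModel>();
  }
}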

In sum, get going on your application and do not let these technologies distract you. They are important and valuable but you don’t want to destroy your business relationship by drowning in technologies that you know nothing about.

Get acquainted with Dependency Injection first and start using it experimentally. Learn about Prism in all that spare time.

Try partnering with someone who has been down this road before. If that is not possible, find a local users group and latch on to other like-minded developers. It’s much more fun when you play with others.

Sunday, November 1, 2009

Young @ Heart: The (2007) Movie

Stumbled into “Young @ Heart”, a documentary about a New England choral group performing punk, rock, and R&B covers; the hook is that the members are well over 70 (high of 92).

Sounds like an opportunity for terminal cuteness … “oh those feisty seniors … isn’t it amazing that they can sort of sing at all at their age? … it’s so inspiring!”.  That’s exactly the sense you get from the trailer which is delivered with the typical phone-it-in Fox Searchlight style. Exactly the kind of movie I avoid.

But this one caught me completely by surprise and I’m still roiling with emotions a day later. I’m trying to figure out why.

It should have been utter treacle like one of those sentimental movies about animals, or kids, or illness, or the mentally infirm. Instead it’s a rich character study in which each member of the cast takes you somewhere you’re likely to go … somewhere you want to go … if you’re lucky enough to live that well.

Singing is the glue and reason enough to watch. The songs – well-known pop tunes of youthful love, rage, despair, and hope – confess new moods and meanings as sung by performers far from adolescence. The juxtapositions can be simple fun as in the raucous “I Wanna Be Sedated”, given literal treatment by a cast for whom the wheelchair is a serious threat. Or it can rip you open as when Fred Knittle’s rich baritone pleads “I will try to fix you” to the barely perceptible beat of his oxygen machine, counting down toward the inevitable. It would be contrived if it didn’t actually happen, inadvertently, before a packed house of loving fans.

The heart of the movie is the interviews and the lives of the cast as they orbit the rehearsal hall. The authenticity is striking. At a jail house concert the camera lingers on the prisoners’ faces each surrendering a thought of present loss or worrisome future. They are not being entertained (although they think they are); they are transported to the company of someone they miss, of someone they may become.

Death is a looming presence. Close as he is, the cast are no more ready for him than we are.

It’s not great filmmaking. It’s great human material triumphing over the pedestrian. It’s people being themselves, striving for meaning and community, just a few feet ahead of us down the road we travel together.

Friday, October 16, 2009

Patterns and Frameworks

I just returned from the Microsoft Patterns and Practices Summit 2009 which may be my favorite Microsoft conference. Great for exposure to customers with real applications and to the wonderful, dedicated Microsoft developers who aspire to serve them. More on this in another post. I have another point to make here, which is …

Patterns are not Frameworks. We should draw a bright line between Patterns and their Implementations. A pattern only becomes a pattern if and when there are multiple implementations. You don't know a pattern well until you've seen more than one implementation. If there is a single, canonical implementation, the thing is not a pattern, it’s a framework.

“Duh” you say. We know this. And yet it eludes us in practice.

I had numerous conversations about this with folks who work in PnP. The conversation goes something like this:

The Conversation

“We are not a product group. We don’t want to be a product group.

“We’re trying to provide guidance to consumers of Microsoft development products. We are reluctant even to say ‘best practices’ anymore. It’s just the best guidance we can give today, built with time, and commitment, and our access to experts inside and outside Microsoft.

“We’re answering the customer call for Microsoft to demonstrate how to use its own products in real applications.

“This bleeds inevitably into more general questions about how to build applications and soon (and properly) we investigate and describe general design and implementation patterns that go beyond the particulars of Microsoft products.

“It’s not enough to talk about the patterns. We need to demonstrate those patterns … express them in real code that works with Microsoft technology.

“That is when it starts to get away from us. Our guidance, as it becomes code, starts to cover more and more customer cases. It begins to harden and the pattern becomes harder to see behind the bells and whistles. Customers say ‘wow, can I use that in my application?’ and we say ‘sure’. They say ‘it would really help our particular situation if it also did this’ and we say ‘sure’ again.

“Then they want us to give it a name, package it up, support it, develop a roadmap. They want us to productize it … and we start to do that because serving customers is also our mission. And, in our (not so) secret hearts, we all want to ship products … we’re wired that way.

“Then we start worrying about backward compatibility … because customers get angry at us if we change anything.

“Then others in the community, especially the Open Source community, get on our case for having sucked the oxygen away from alternative implementations that may be better in many scenarios. They say we have imposed, de facto, a particular implementation of the pattern; people can’t tell the difference between the pattern and what Microsoft did to it. And I see their point.

“And now we have old code (e.g., CAB) that no longer represents our current thinking … that, in fact, embodies practices we would counsel against today … and we cannot walk away from it.

--

I don’t know that there is an easy answer to this one. I’m not willing to place all the blame on PnP either. The customer should know better.

“Best Practices” do not sit still. They evolve, leaving behind a trail of “formerly best practices” to which millions of dollars and entire careers are firmly attached. PnP is pulled in one direction by its responsibility to offer the best current guidance … and by its responsibility to customers who have placed big bets on its past guidance.

Perhaps we need some different rules of engagement.

Explicit Shelf Life

What if PnP put a “Sell By” date on their code? What if it had a shelf life and warning label: “Use with caution. Do not expect support beyond 3 years”. Stamp this at the top of every piece of code and every web page.

Then mean it. If you need to break, you break. Customers need to know that you are serious about your commitment to giving them the best advice you can and you do them no favors by clinging to outdated code.

This happens in OSS all of the time. Dead bodies are strewn everywhere.

You can do this. Microsoft can’t but you can. I think you can wean your customers away from thinking of you as a product group. If that means fewer customers served … that may be for the best … for everyone: your customers, you, and Microsoft itself.

Play Well With Others

My thesis again: a pattern is independent of any one implementation. You don't know a pattern well until you've seen at least two implementations.

What if PnP routinely called out the other implementations of the patterns they describe? Front and center, not as an afterthought. Openly encourage customers to look at these alternatives … if only to gain a contrasting perspective on the PnP guidance.

One of the best ways to learn about the Composite Application Block (CAB), for example, was Jeremy Miller’s “Build your own CAB” series. I assure you I had no intention of following Jeremy’s advice and building my own CAB. It didn’t even occur to me. But that series was the best (unintentional) documentation of CAB I ever found. It introduced me to different ways of thinking about each issue (e.g., Event Aggregation and Dependency Injection). I learned indirectly what CAB was doing and the consequences of my choice. I’m not saying the consequences were all bad either; but there were consequences and I only recognized them in the oblique light cast by Jeremy’s series.

PnP might do more to highlight these alternatives and to encourage discussion about the contrasting choices. People who come to Prism should learn … right there on the home page … that Caliburn exists. When they come to the Unity home page, they could learn that there are 5 to 10 other popular IoC containers to choose from and find links to learning more about IoC practices.

[I was just on the Unity page. It’s a fine product home page; it is not fine for guidance about Dependency Injection and IoC in general.]

I don’t think that PnP should endorse Caliburn or anything else for that matter. I’m not suggesting that PnP’s output is inferior or that PnP should express less than abundant enthusiasm for its own work.

I am saying that PnP is presenting itself as a product purveyor in a manner that conflicts with its role as a trusted source of guidance. This ensures that it will be trapped into long term support commitments, the kinds of commitments that some people there tell me they do not want.

PnP could say in some unambiguous fashion, “my dear customer, we offer a wonderful implementation of this pattern. HOWEVER … you ought to look at some alternatives, not only when first choosing your approach but also throughout your application development; there’s gold in seeing how others approach these same problems.”

True Open Source

What if PnP could accept contributions from the field? I don’t mean through “Contrib”; I mean directly into the guidance itself.

What if PnP developers actually read some of the Open Source code out there?

I’m pretty sure the legal guardians won’t permit it … in which case “Contrib” is the best they can do.

Conclusion

They call it "Microsoft Patterns and Practices", not "Microsoft Tool and Die".

It is time for customers to pull back from unrealistic expectations of Patterns and Practices. They are not your application infrastructure outsourcer. They are a trusted source of guidance. Rejoice if they provide more but do not expect or demand it.

Patterns and Practices doesn’t need to change what they build. I do think PnP can rethink its commitments to prior output. It can do a much better job of promoting the spirit of exploration and directing its community to external resources that offer contrasting implementations and points of view.

And now I descend from this soap box … in search of another.

Wednesday, September 16, 2009

Man and Superman

I was reading Jeremy Miller’s musings on “Two Types of Developers”.

Developers fall into two different extremes … Some developers can jump right into any technology and find out how best to use it and new ways to twist it and apply it without really questioning the “goodness” of that technology.  Others look at a technology with preconceived notions of how it should work and are quick to drop a technology when it doesn’t fit their vision of “good.”

I am reminded that there are two kinds of people in this world: the people who divide the world into two kinds of people and the people who don’t. I am one of those two kinds of people.

What Jeremy really means is that there are two developer mindsets. Actually-existing developers indulge both modalities, as he also observes in his second paragraph. Still, the thrust of his argument is that developers habituate to one or the other extreme.

Jeremy aligns himself mostly with the second. He describes an encounter with someone who, as Jeremy would have it, is inclined to the first.

I happen to have read the blog post on exhibit, know both parties (call the other “B”), and participated in some of the exchange that ensued. It got me wondering whether I buy Jeremy’s dichotomy.

Jeremy strives to be non-judgmental, averring potential advantages on both sides. I suggest that his phrase “without really questioning the ‘goodness’ of that technology” gives the game away.

I know “B” and he is a curious fellow (double entendre intended). His curiosity is evident in the post and manifold in conversation. “B” never takes a technology at face value, certainly not “Prism” which was the technology at issue in this example.

It may be relevant to note that “B” was deeply involved in Prism development. He is obviously comfortable with it. But “B” was as deeply involved in the debates, disappointments, and compromises that are inevitable when one works collaboratively on a framework. He may be blind to some of its faults but he can’t be accused of being uncritical by nature.

What happened here is that “B” chose to implement a solution to a problem using technology that Jeremy dislikes. One may say so. One may ponder the reasons. But to leap to the conclusion that “B” (or anyone who chooses similarly) is simply “unquestioning” …  that is unjust.

Worse, it is lazy.

It forecloses a genuine attempt to investigate how developers actually make choices in the real world of customers and deadlines.

Are there drudge journeyman programmers who paint by numbers? Sure … but they aren’t relevant to this discussion and that is not what Jeremy thinks of “B”.

Do we often cheerily follow the easy, well-trod path? Of course. Do we challenge our every decision to be the best possible in the abstract? No. None of us do. Could “B” have made a better choice than Prism … even in his particular circumstances? Maybe … which makes it worth pursuing with “B”.

But Jeremy’s point isn’t that we are prone to be the willing captives of our familiar tools. No, he claims we are by nature either slaves or freemen, the “last man” or the “Übermensch”. That is the true import of his “interesting taxonomy of developers that I’ve observed for years”.

The earth has become small, and on it hops the last man, who makes everything small. … One still works, for work is a form of entertainment. But one is careful lest the entertainment be too harrowing. One no longer becomes poor or rich: both require too much exertion. … 'We have invented happiness,' say the last men, and they blink.

Behold, I teach you the overman. The overman is the meaning of the earth. Let your will say: the overman shall be the meaning of the earth! I beseech you, my brothers, remain faithful to the earth, and do not believe those who speak to you of otherworldly hopes!

– Nietzsche, Thus Spoke Zarathustra

It’s thrilling to hear … but I am not buying it.

I’m a huge Jeremy Miller fan; this was not his best day.

Saturday, September 12, 2009

Silverlight Spy 3 – First Look

Silverlight Spy 3 was just released. I think you’ll want this in your tool bag right away. Why? Because it is a great way of seeing what’s going on in your Silverlight application … and twiddling its appearance … at runtime.

It is especially helpful if you’re building a compositional application of many, independently-designed views; you want to see them playing together and understand what’s happening.

Quick Tour

I’m looking at my Prism Explorer demonstration app right now. This demo shows our DevForce Silverlight Data Services product working with Prism.

Fired it up in Visual Studio (to get the Cassini server side going), then pasted the URL (http://localhost:9006/) into Spy’s address bar. Spy fires up the client in its browser just fine.

Spy drew the red highlight box, not me. I held down the ctrl + shift keys and moved my mouse around the browsed page; as I did, Spy highlighted each element in this manner; and then I clicked on the address to set the focus as you see it.

Now I swing over to the Explorer view and I see that the address is a TextBlock in the middle of a DataGridCell … deep in the visual (or logical) tree that begins with my Prism shell. I love being able to go straight from something I see on the page to the corresponding element in the object tree.

I get the same effect in the other direction by navigating from the tree: select a node and watch Spy highlight the corresponding visual real estate in its browser.

I can see the XAML for this TextBlock in Spy’s View window:

Suppose I want to see what would happen if I gave the address a little more air. I open Spy’s Properties window, locate the Padding property and try “20” all around. I see the effect immediately:

Peeking at and tweaking the look on the fly delivers value that can hardly be overstated, especially in a composite app. Sometimes you just have to see the app run in a production context with real data behind it to understand what is going on. Adjusting some values to visualize how it might look after correction … that’s worth something to me.

Spy integrates seamlessly with Reflector so I can poke around in the client-side source code while in Spy. First, I tell Spy where to find Reflector through Tools / Options / Reflector. Then, in the Explorer window, “Object Browser” node, I navigate the assemblies in the XAP:

The View window exposes the disassembled code using Reflector:

I can also spy on other people’s applications. Now, when I see an effect I like, I’ll have a fighting chance to figure out how it’s done. Nothing evil in that.

Perhaps more productively, Spy can help me better support my customers. I don’t have their code. I don’t want their code! Now I can fire up their app in Spy from out here in Stinson Beach (yes, kicking back with the blog) and gain immediate insight. Having Reflector baked in makes it easy to walk their code too.

Helpful Touches

Spy lets me monitor mouse and key events, which I can filter. In this snapshot, clicking on the event also takes me to the associated visual element.

I like to know when my app is visiting the server. Fiddler, Firebug (FireFox), and Web Development Helper (IE) are superior tools for the in-depth analysis. But if I just need to know that something went over the wire and peek at the headers, Spy does nicely. In this snap I just asked for a projection over all customers and their orders:

The top window is a grid of the traffic; double clicking a row reveals the headers panels in the lower half.

There’s a Performance Monitor with the visual familiarity of the one in Windows Task Manager.

The processor metric seems parallel to the Task Manager’s. The memory metric probably tracks just your app’s usage, although how, and how precisely, I can’t tell. I noticed unpredictable figures simply by refreshing the application address to restart the app. I think of it as a rough gauge of relative behavior. You’ll get a general feel and you might be able to detect some memory leaks this way.

There are other goodies. These are just the features that attracted my immediate attention and made me want to buy.

Price

Spy was free in earlier incarnations. It now sells for roughly US$100 for a personal copy. You can download an evaluation copy that’s good for 27 days.

Frankly, I’m glad to see a price. I love “free” as much as the next guy but nothing is truly free. I want Spy’s author to keep improving this tool and he has to make a living at it to do it well. I’m bummed when a great free tool comes along and then it just sits there, waiting upon the author’s free time to answer questions, fix bugs, and take the next giant leap.

The biggest shortfall (as I will repeat) is the utter absence of documentation. With a little dough, Koen Zwikstra – the man behind Spy – can afford to hire a tech writer and cut some videos.

Full disclosure: I got a free copy as an MVP. This review, however, is entirely my idea with no promise of reward.

Wrap Up

Silverlight Spy 3 is a super tool and a fine piece of work. Runs like a top. No install issues, no runtime glitches so far.

No documentation! None that I could find. This is a huge weakness. Thank goodness for Koen’s Channel 9 video with Adam Kinney recorded at MIX in March 2009; it opened my eyes to the potential and gave me valuable operating clues. How else would I discover “Ctrl+Shift+Mouse+Click”? Web search revealed nothing else (there is material on Spy 2).

Fortunately, it is pretty intuitive. Just bang around on it awhile and it starts to make sense. But some kind of guide needs to come out soon.

I’m hopeful this introduction piques your interest and gets you started.

A few minutes with Spy and you remember in a hurry never to put anything secret in your Silverlight client; that’s the wrong place for the Coca Cola recipe. Sure other tools can do it … but Spy makes it so easy.

Go Spy!

Tuesday, September 8, 2009

AdventureWorks 2000 tinyint nightmare

You’re going to love this one!

Some of our integration tests use AdventureWorks 2000. One developer’s test updated a Sales.SalesOrderDetail. The test suddenly blew up with a SQL Server 2005 error that complained about exceeding the size limit of a TinyInt.

The error didn’t say where this TinyInt lived or what the column was called. The best we could tell was that the exception occurred while updating Sales.SalesOrderDetail.

SalesOrderDetail has a TinyInt column, “LineNumber”, which was not jeopardized by the update.

Aside: I have a different AdventureWorks db on my machine that doesn’t have “LineNumber” or a TinyInt, but it does have “OrderQty”, which is a SmallInt. I would have suspected that (“SmallInt, TinyInt, what’s the difference?”); there would be no problem there either.

No other developer could reproduce the SQL exception.

After hours of hair-pulling, the developer found that there is an update trigger on SalesOrderDetail that updates the parent Sales.SalesOrderHeader. SalesOrderHeader has an update trigger that increments SalesOrderHeader.RevisionNumber … which is a TinyInt.

Over the course of many tests, that RevisionNumber kept creeping up … until it hit 255. Each developer has his own AdventureWorks copy so no surprise no one else could reproduce it.

You can blame the victim if you wish; should have had adequate rollback logic all along.

But I have to admire the genius who decided that you’d never exceed 255 updates of a SalesOrderHeader in a test database.

Let’s thank SQL Server for throwing an exception on exceeding the max for a TinyInt without telling us the name of the column or table involved.

Kudos also to the other evil genius in this mess who wrote a trigger on one table that fires a trigger on another table to update the RevisionNumber … and didn’t bother to check the type or implement exception logic that would help clarify the problem.

There must be an award for this kind of thing.

Lesson #1: If you are using AdventureWorks 2000, be sure to reset all SalesOrderHeader.RevisionNumber values to 0. It’s too late to mess with the schema.

Lesson #2: Hey, SQL Server folks, how about a little more help in exception messages?

Lesson #3: Please write good exception messages yourself … messages that would give the poor sod who received it a fighting chance of understanding the problem. Extra credit if you suggest how to fix it.

Lesson #4: Business logic in the database is killing us out here. You DBAs have got to be more helpful.

Lesson #5: Stop trying to squeeze every last byte out of the schema. TinyInt should be used only in the most extreme circumstances and never where it could be incremented.

Friday, September 4, 2009

On Software Warning Labels

"The only demand I make of my reader is that he should devote his whole life to reading my works." – James Joyce to an interviewer.

I’ve been weighing in on Colin Blair’s blog recently with concerns I have about the RIA Services DomainDataSource (DDS) … concerns that apply equally to our own DevForce ObjectDataSource.

Colin’s blog on RIA Services is worth watching; his is a thoughtful voice. He likes RIA Services a lot … but not uncritically.

His post is a response to Jeff Handley’s Twitter poll: “Do you consider the .NET RIA Services DomainDataSource to be ‘Data Access Logic’ in the UI?”.

Colin said “Yes”, then did the responsible thing and posted his reasons. That set in motion a small to-and-fro with Jeff, who is maintaining and evolving the DDS for Microsoft.

What is going on there is some sensible thinking. There is no Microsoft bashing. I suppose some posturing is inevitable (at least where I am involved) but I believe that matters of real substance are poking through. We’re not just talking about “good design”; we’re talking about consequences.

Which brings me to something that has been gnawing at me since I fulminated against Drag-and-Drop back in August.

I received feedback, not only from Microsoft friends, that I had been uncharacteristically caustic and univocal in my opinion. I seemed to be sliding into the camp that would “shun” people who might use a DDS control; the camp that would “think those people are stupid.”

I recall too well my Berkeley days of smug “Us versus Them”-dom. Appears my capacity for self-righteousness is undiminished.

Timeout! I do not believe that the people who use the DDS are stupid. On the other hand, I do meet a great number of developers who are overworked, undertrained, and uncurious. They constitute the majority of developers … as they do in every profession.

I look in the mirror and can say the same of myself with respect to many topics. There isn’t time to understand the implications and nuances of everything I do. I can muster little interest in certain areas of software development. I simply trust that the technologies work and are appropriate … and I move on with my busy day. Our love of our craft is not equally distributed over the entirety of our domain.

So when I … or anyone … get amped up on a topic like DDS or drag-and-drop or ViewModel … what should I expect? Is this stuff really that bad for you? Are they clubbing baby seals?

No. It’s only software. We are not geniuses. We are not James Joyce and we certainly don’t deserve that kind of devotion.

I’m not backing off my “concern” about drag-and-drop. I’m not recanting my critique of the DDS (and our own ObjectDataSource). I just want to position myself differently … a bit more sympathetically … perhaps suggest an approach that may benefit the developer community and lower the temperature.

What if Microsoft (and IdeaBlade) were more upfront about the consequences and alternatives of some of these dubious controls and practices?

There is the sense among many that Microsoft doesn’t talk as openly as it should about technical debt.

Taking on technical debt can be a wise investment. I use the DataForm with auto-generation in my demos. Incredibly convenient. Could probably get away with it for some production screens that have only a few fields and don’t need much management; it will save tons of time that would be utterly wasted if I did a better job of customizing the user interaction … because nobody wants or needs “a better job”.

I incur that technical debt with gratitude. I choose what I’m doing.

There is no problem with the DataForm, or the DDS, or drag-and-drop. The problem is that many developers are insufficiently aware of the debt they are incurring. I use my credit card knowing that I’ve got it covered; too many developers are using the “credit card” thinking it’s free money.

Alan Stevens suggested something like the following in the Herding Code podcast I wrote about. He said it would be ok to do demos with drag-and-drop if the presenter stopped for a moment and said:

“This approach has utility under specific circumstances … but watch out because it has costs too. I strongly recommend that you read up on it over here … to appreciate the limits of what I’m showing you … and to learn about alternative paths that may be more appropriate now or in the future.”

That speech is a responsibility we have as presenters at conferences.

Those of us who make and sell a product bear an additional responsibility, especially when we target the broad market of developers who, let’s face it, don’t try very hard to master the craft.

It isn’t enough to squirrel the discussion away in the corner of some blog. That’s the equivalent of hiding in the small print.

I think we (IdeaBlade, Microsoft) should put a warning sticker right on the control itself. Put it in the XAML doc. Teach Intellisense to show the “warning” hyperlink.

Smack me as a hypocrite if we neglect to put a warning on our own ObjectDataSource when we ship it.

Saturday, August 29, 2009

Fiddler + WCF SOAP + Cassini = Ooof!

What a PITA trying to observe my DevForce Silverlight application running in Cassini using Fiddler2. Thanks to John Papa I’ve got that working now.

Here’s the situation:

  • DevForce communicates using WCF SOAP service
  • I’m running both the client and the service in Cassini, not IIS
  • Tried all the recommended configurations (see below)

The problem is the WCF SOAP service. There is no issue with REST.

What Finally Worked

Can’t make it work with IE at all. Haven’t found anyone who knows how.

But I can make it work with FireFox … and I am fine with running in Firefox. Here ya go:

  1. Install Firefox (duh)
  2. In Fiddler: Tools / Fiddler Options / General / disable IPv6 [may not be necessary]
  3. In Fiddler: Tools / Fiddler Options / Connections. Click “Copy Browser Proxy Configuration URL” … which puts the right phrase in your clipboard
  4. In Firefox: Tools / Options / Network / Settings. Click “Automatic proxy” radio button which enables the text box. Paste script address from your clipboard.
    On Vista/Win 7, the address looks like
    “file:///C:/Users/YOUR_LOGIN_NAME/Documents/Fiddler2/Scripts/BrowserPAC.js”
  5. Remember to “OK” your way out of the dialogs (yes, I forgot this and hit cancel)
  6. Close Firefox
  7. Re-launch Firefox
  8. Re-launch Fiddler
  9. Confirm Firefox browser actions show up in Fiddler
  10. Launch your app
  11. Paste app address into Firefox

What Works If I Change Requirements

You can deploy the server to IIS and point the Silverlight Client at it. That’s fine … but I didn’t want to bother with that. Just want Cassini to do it all.

You can use Nikhil’s excellent IE plug-in, Web Development Helper. This works too. You enable it from the IE menu: View / Explorer Bars / Web Development Helper.  But I was looking for a Fiddler solution.

What Doesn’t Work

Don’t have VPN running. The minute I’m running my VPN, Fiddler stops listening to all IE traffic, even regular browser traffic (no problem for Firefox though). I’m sure there is a way around this. I don’t know what it is … and at the moment it is just easier to shut down my VPN.

Many kind people offered suggestions … all of which failed me one way or another:

  • localHost.:port/myapp (that’s with a period between “localhost” and colon) - app crashes, unspecified service security exception
  • 127.0.0.1:port/myapp instead of localhost - app crashes, unspecified service security exception
  • ipv4.fiddler:port/myapp instead of localhost - app crashes, unspecified service security exception (because translates to 127.0.0.1)
  • myMachineName:port/myapp instead of localhost – “[Fiddler] Connection to ward-xps failed.
    Exception Text: No connection could be made because the target machine actively refused it”
  • Fiddler proxy setting: Tools / Fiddler Options / General / uncheck “Enable IPv6” – no help
  • Fiddler Tools / WinINET Options / LAN Settings. “Use a proxy server” checked. Click Advanced button … proxy addresses are set to 127.0.0.1 – no help

Many of these suggestions are variations on the cryptic paragraphs in the configuration FAQ on the Fiddler site: http://www.fiddlertool.com/Fiddler/help/hookup.asp#Q-LocalTraffic

Saturday, August 22, 2009

Discover Silverlight with “Silverlight 3 Jumpstart”

Maybe you’ve heard of this Silverlight thing. There’ve been 2,300 new Microsoft technologies introduced this year but Silverlight seems like it might be important. You are a busy, experienced business application developer with limited time. You demand substance but there’s no way you’re going to wade through 800 pages of how to build custom flashy controls. You won’t tolerate marketecture; you don’t want the “Silverlight Programmers Bible” either.

You should snag a copy of “Silverlight 3 Jumpstart” by Microsoft MVP and Regional Director David Yack. At a slim 209 pages you can blaze through it on a roundtrip flight to Redmond just like I did. You won’t learn Silverlight in depth. But you will get a soup-to-nuts view of what building a business application in Silverlight is really like. All of the essential mechanics are there.

  • How Silverlight compares to alternative client technologies (ASP, WinForms, WPF, Flash/Flex/Air)
  • Development tools – got Visual Studio? not much more is absolutely required
  • “Hello, Silverlight” – of course
  • Hosting a Silverlight app – pretty easy stuff
  • Basic screen layout with XAML and visual controls
  • Data binding controls to data – because that’s our bread and butter
  • Debugging – about time someone talked about that
  • Making it look decent with styling - think CSS
  • SketchFlow intro – executable sketches to win your client’s confidence
  • Plumbing (aka, Application Architecture) – so you don’t reinvent every wheel

David is steadfast and clear about his purpose: to give you a firm grasp of what is involved in Silverlight development.

He shoves to the side everything that would get in the way. You already know it gets more complicated than “that” … whatever “that” happens to be. “Rough road ahead” signs are sufficient for the nonce; no need for the bumpy ride right now.

When it comes to getting data into and out of your application, I’m personally thrilled to report that David devotes pages [181 - 184] to our DevForce Silverlight product; full disclosure: I helped him with those four pages. Nearby you can read about the other “Business Application Frameworks”: CSLA, RIA Services, and roll-your-own.

Do yourself a huge favor: don’t roll-your-own; I hope you see why when you look at all the challenging ground these “frameworks” cover.

The very nature of this book requires that it be early to market. It’s not going to be a “timeless classic” nor is that its intention. This is one of the first … if not the first … Silverlight 3 books out there.

The haste shows. There are grammar and spelling mistakes. Chapter 10 was clearly intended to be Chapter 1 as it makes forward references to chapters that you’ve already read. That’s mildly disappointing. Don’t let this undermine your faith in the material; the book is accurate in all important respects relevant to your purpose: to learn what Silverlight development is like and whether it might be for you.

The book is affordably priced at $30 for print, $20 as an e-book; if you use the discount code for my blog (“WardBlog”), the e-book is only $15. The e-book arrives as a PDF and (unlike many technical books) is perfectly readable on your laptop or Kindle or other e-book reader.

Silverlight is a tremendous business application delivery vehicle and a heck of a lot more productive than any other web technology out there. Silverlight Jumpstart will show you why.

Thursday, August 20, 2009

Presentation Patterns Podcast

Jeremy Miller, Glenn Block, Rob Eisenberg & I held a lively, civilized two hour conversation about presentation patterns, courtesy of the good fellas at Herding Code. Part One of the podcast was just released; the hosting page provides a synopsis so you can read quickly to discover if it’s for you.

You always feel a tinge of regret after doing these things … like you’re not sure what you did last night after a serious bender … but having replayed it, I’m thinking there are some useful hints tucked in there. Check it out.

Saturday, August 15, 2009

Do Not Make Every Method Virtual

I’m reacting to Roy Osherove’s recommendation, “make methods virtual by default”, in his excellent The Art of Unit Testing which I reviewed a few days ago.

I’ve heard my friend Jeremy Miller wish that .NET methods were virtual by default as they are in Java. I respect these guys immensely but I strenuously disagree. You should make each member virtual reluctantly and in the full knowledge of the risk you are taking. I hope you’ll at least appreciate my concerns even if you are not convinced.

Why Virtualize All Members By Default

It’s Roy’s first suggestion in his “Design for Testability” appendix [258]. He calls it “handy” and it is handy indeed. The easiest way to stub or hand-mock a class is the “extract and override” approach in which you override a production class to gain access to its innards during testing.

In one such example [71], a TestableProductionClass derives from a ProductionClass so that the implementation of the latter’s GetConcreteDependency() can be replaced with an alternative that returns a stub class instead of the concrete class.

// ProductionClass
protected virtual ILogger GetLogger() { return new ProductionLogger(); }


// TestableProductionClass : ProductionClass
// The override must repeat the base signature, return type included.
protected override ILogger GetLogger() { return new TestLogger(); }

What an easy way to swap out the dependency on the production logger … and I don’t need any of that messy IoC machinery.

If every method were virtual, any member could be replaced in this fashion within the test environment.

Making every method virtual makes life easier for the mocking frameworks too. Most do well at fabricating mocks either for interfaces or for unsealed classes with protected virtual methods. I believe it is fair to say that most have a tougher time mocking closed concrete classes.
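
To make the point concrete, here’s a tiny illustration of my own (PriceCalculator is hypothetical) using Rhino Mocks, which builds a dynamic proxy derived from your class and therefore can intercept only the members you’ve declared virtual:

using Rhino.Mocks;

public class PriceCalculator
{
    public virtual decimal Calculate(decimal amount) { return amount * 1.2m; }
    public decimal Discount(decimal amount) { return amount * 0.9m; }  // non-virtual
}

public static class MockingDemo
{
    public static void Run()
    {
        var calc = MockRepository.GenerateStub<PriceCalculator>();

        // Works: Calculate is virtual, so the generated proxy can intercept it.
        calc.Stub(c => c.Calculate(100m)).Return(42m);

        // Fails at runtime: Discount is non-virtual, the proxy cannot
        // intercept it, and Rhino Mocks complains about the non-virtual call.
        // calc.Stub(c => c.Discount(100m)).Return(0m);
    }
}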

A class with all virtual methods is easier to proxy too, lending itself to injection techniques such as Ayende demonstrates in his post about injecting INotifyPropertyChanged into POCO objects.

What could possibly be wrong with this?

Invites Violation of Liskov Substitution Principle

The “Liskov Substitution Principle (LSP)” is the “L” in Robert “Uncle Bob” Martin’s SOLID design principles. Formally, this principle states that a derived class can’t strengthen a method’s pre-conditions nor weaken its post-conditions. In colloquial English, the derived method can’t change the fundamental guarantees of the base method. It may do things differently but it shouldn’t violate our expectations of what goes in, what comes out, or what the method does.

Uncle Bob, in his Agile Principles, Patterns, and Practices in C#, describes LSP as a “prime enabler of the Open Closed Principle (OCP)” [151]. It follows that a violation of LSP is effectively a violation of the more familiar OCP.

Now I have no problem with deliberately making some methods virtual. That is one of the favored techniques for facilitating OCP; it is a mechanism for identifying an opening for extension.

My argument is with opening up the class blindly and totally by making everything virtual. Suddenly nothing in the class is truly “closed for modification.” The “virtual” keyword announces to the world “here is my extension point.” When every method is virtual, the world is invited to change every method.

The lesson of Martin’s LSP chapter is that extending a class by overriding a virtual method is tricky business. The innocent seeming Rectangle is his famous example. What could go wrong in deriving Square from Rectangle and overriding its Length and Width properties? It turns out that plenty can go wrong [140] and that it’s almost impossible for the author of the class to anticipate and guard against the abuse of his class.
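
If you haven’t read that chapter, here’s a bare-bones rendition of the trouble – my code, not Uncle Bob’s:

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return Width * Height; } }
}

public class Square : Rectangle
{
    // A square's sides must stay equal, so each setter writes both.
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}

// A caller reasoning about Rectangle is betrayed:
//   Rectangle r = new Square();
//   r.Width = 5;
//   r.Height = 4;               // silently overwrites Width too
//   Console.WriteLine(r.Area);  // 16, not the 20 the caller expected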

Martin is pragmatic. “Accepting compromise instead of pursuing perfection is an engineering trade-off. … However, conformance to LSP should not be surrendered lightly.” [his emphasis, 149]. He continues:

The guarantee that a subclass will always work where its base classes are used is a powerful way to manage complexity. Once it is forsaken, we must consider each subclass individually.

I’m going to come back to this point. Because we give up the guarantee … and open wide the door to big trouble … when we make every member virtual.

Let me back up a second and elaborate on the danger.

The Wayward Elevator

I am the maker of Elevator software that runs elevators around the globe. My Elevator class has an Up method. Suppose I make it virtual. How might value-added elevator developers implement an override of a virtual Up in their derived BetterElevator class? They could

  • replace it completely; base.Up() not called
  • call base.Up(), then invoke custom logic
  • invoke custom logic, then call base.Up()
  • wrap base.Up() in pre- and post-logic
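
In code, those four options look something like this; Elevator and BetterElevator are my hypothetical stand-ins, sketched only to show the shapes an override can take:

public class Elevator
{
    // Closes the doors, then sends the car ascending.
    public virtual void Up() { /* production logic */ }
}

public class BetterElevator : Elevator
{
    public override void Up()
    {
        // Option 1: replace completely – never call base.Up().
        // Option 2: call base.Up(), then invoke custom logic.
        // Option 3: invoke custom logic, then call base.Up().
        // Option 4 (shown): wrap base.Up() in pre- and post-logic.
        PreFlightCheck();
        base.Up();
        LogAscent();
    }

    private void PreFlightCheck() { }  // hypothetical custom logic
    private void LogAscent() { }       // hypothetical custom logic
}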

It’s not an issue when I am doing the deriving. I almost never call base when I derive a test class (TestElevator). If I support construction of a dynamic proxy to address cross-cutting concerns, I expect the proxy to wrap the base method. These scenarios are not worrisome to me. Why?

When writing tests, I have intimate knowledge of Elevator. Elevator’s details are open to me. I wrote it. I know what I’m doing.

If I make Up() accessible to a proxy, I again know what I’m doing … and more importantly, I know how the proxy will manipulate my method. The proxy may behave blindly but it operates automatically and predictably. Whether I decorate the method with attributes or map it or rely on some conventions, I, the author of the class, am choosing precisely how it will be extended.

Unfortunately, I can’t declare Up() to be “virtual for testing” or “virtual for proxying”. It is simply “virtual”, an invitation to extension by unknown developers in unknown ways. I have lost control of my class.

I knew that Up() sent the elevator ascending. But I can’t stop someone from re-implementing Up() so that it descends instead. Maybe base.Up() triggers the doors to close. The developer might call base.Up() too late, sending the elevator in motion before the doors have closed. The developer could replace my base.Up with something that juggled the sequence of door closing and elevator motion methods, interleaving custom behaviors, yielding upward motion that failed to satisfy some other elevator guarantees.

Any of these or a hundred other implementations of Up() could alter Up-ness in ways that are subtly incorrect or catastrophically wrong. Every one of those implementations requires that the developer understand intimate details of the Elevator class, details that he would not have to know if the method were sealed.

“Up” is definitely not closed to modification. It is dangerously open. As an Elevator man, I should work hard … perhaps writing a lot of Elevator.Up certification tests … to ensure that essential pre-conditions, post-conditions, and possible side-effects are all correct even for derived classes.

The burden on my development has gone up as well, not through conscious design but by default. This is unthinking design, design by fiat. My code fails “open” instead of failing “closed.” I had better be very good and very conscientious about manually sealing what the language would otherwise make virtual on its own.

I’m not that good and I’m definitely not conscientious.

Is This Paternalism?

Am I being absurdly protective? Sure, there are risks. Programmers are adults and we should treat them as such.

I hear this a lot. I hear about how Microsoft tends to infantilize the developer, tries to shield them from the bumps and bruises that are essential to mastering the profession. There comes a time when you let the kid have sharp scissors and run around with them if he must. A few blind kids is a price worth paying.

I agree … as long as I’m not paying the medical bills.

I Write Frameworks, You Write Applications

I can’t tell you how to build your applications. When you own the class … and you do as an application developer … your team is answerable to you. You don’t come to me with your medical bills.

But I write frameworks for you to use. You’ve licensed my product and you’re paying me for support. When the elevator goes down instead of up; when the doors close suddenly and injure a rider; when the elevator simply stops … you don’t say “I wonder what I did?” You say “that elevator we bought is a piece of crap.”

You paid for support. You call me. And I am pleased to give it.

It’s not free to you. It’s not free to me either. I have to find what’s wrong as quickly as possible and get your elevator moving safely again. Unfortunately, if you’ve written BetterElevator (as you are supposed to do … it’s a framework, remember) and you can change everything about my Elevator, I face an extremely challenging support call.

I have no idea what you’ve done to Elevator and I can’t guess. You can tell me … you will tell me … that you haven’t touched “Up”. Perhaps you didn’t. Instead you’ve overridden another method on another of my classes that Up requires.

Maybe you don’t write applications. Maybe you write an open source framework. Fantastic. You don’t have true customers. If someone has a problem, you diagnose it and maybe you fix it … at your leisure. That someone has no recourse … and knows that going in.

That’s often why businesses won’t use open source. As more than one manager has told me, “I want a neck to wring.”

I think I’m entitled to fear for my throat. But that’s not my real motivation. I’m in the business of providing good service for a product that makes certain behavioral guarantees. I can’t deliver good service if I can’t make those guarantees and I can’t make those guarantees if every method of every class is up for grabs.

I’m not sure you can either.

.NET Framework Design Guidelines

I’m not the only guy with a healthy suspicion of virtual methods. I admit my degree of suspicion is higher than that of the application architect. The application architect probably knows and controls the developer who derives from his class. The unknown developer who derives from my class controls me.

The designers of .NET understand this too well. That’s why in the .NET Framework Design Guidelines, Krzysztof Cwalina and Brad Abrams write

Virtual members … are costly to design, test, and maintain because any call to a virtual member can be overridden in unpredictable ways and can execute arbitrary code. … [M]uch more effort is usually required to clearly define the contract of virtual members, so the cost of designing and documenting them is higher. [201]

If all members are virtual by default,

  • I waste effort manually sealing most of them,
  • I test, document, and support more methods than I have the resources for,
  • I add unwanted complexity,

and what do I gain for my pains?

What Should We Do?

I want to blame someone, so I’m going to blame the language authors. Perhaps methods could be automatically virtual only in test environments. We should be able to mark methods for proxying and compile-time injection (as in static Aspect-Oriented Programming). Otherwise, members would stay closed unless explicitly made virtual through the conscious effort of a conscientious programmer.

Meanwhile, I’m prepared to be pragmatic. I like Roy’s recommendation [260] that dependencies should be defined in virtual properties (or methods) without logic.  Auto-properties and similar logic-less methods are safer to make virtual. The Template Design Pattern is a controlled approach to extensibility that may assist testability as well. Interface-based designs help too.
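
Here’s a sketch of what I mean, with names of my own invention. The virtual property exposes a dependency with no logic to break; the template method keeps the processing sequence sealed while offering one deliberate hook:

public interface ILogger { void Log(string message); }
public class ProductionLogger : ILogger { public void Log(string message) { } }
public class Order { }

public class OrderProcessor
{
    // Safe to virtualize: a logic-less dependency accessor.
    protected virtual ILogger Logger { get { return new ProductionLogger(); } }

    // The template method itself is not virtual; the sequence is guaranteed.
    public void Process(Order order)
    {
        Validate(order);
        OnProcessing(order);             // the one deliberate extension point
        Logger.Log("order processed");
    }

    private void Validate(Order order) { /* invariant checks */ }

    // Empty virtual hook; derived classes never need to call base.
    protected virtual void OnProcessing(Order order) { }
}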

That’s as far as I dare go as a framework developer. Application developers may have more rope to hang … er … more latitude.

Friday, August 14, 2009

The Art of Unit Testing

After I read Tim Barcz’s review of Roy Osherove’s “The Art of Unit Testing”, I knew I had to get a copy right away. It just arrived and I read it in one sitting. I am so pleased that I did. I’ll quarrel with it … but do not let that deter you from rushing to buy your own copy.

Let me say that again. I highly recommend this book – five stars – especially to folks like me who are not deep into unit testing. This review is full of my grumpy disagreements. That’s how I engage with a good book. Don’t be dissuaded.

Warning: Long post ahead. The short of it: buy the book. Everything else is commentary.

There is no point in recapping the book’s main points since Tim Barcz did that for us. I’m coming at it from a different angle. I’m coming at it from the angle of a guy who wishes he wrote more tests, wishes he were good at testing, even wishes he practiced (or at least gave a serious, sustained effort at trying) TDD. A guy who doesn’t.

A guy much like the vast multitude of developers out there … who is embarrassed by being “old school”, is looking for an opportunity to catch up, but isn’t going to take crap from an obnoxious TDD fan-boy.

I’ve had plenty of success over the years, thank you very much. I’ve written good programs (and bad) that still work. And I can mop the floor with legions of developers who think TDD/BDD/WTF experience yields greatness. They remind me of newly minted MBAs who believe with unshakeable certainty that they’re entitled to a management position. Think again.

Do I sound defensive? Yup. Enough already. My point is this ...

One of Roy’s goals is to reach people like me. We’re experienced developers who may have mucked around with unit testing but aren’t doing it regularly and may have had some rough experiences. We believe … but we don’t practice. Can he do something for us that makes us want to try again or try harder? Can he keep it simple and approachable and be respectful and non-dogmatic?

Yes he can.

He extends olive branches aplenty throughout. Out of the gate he writes: “One of the biggest failed projects I worked on had unit tests. … The project was a miserable failure because we let the tests we wrote do more harm than good.”

Thank you. I don’t believe it for a second. Oh, I believe the tests were every bit as unmaintainable as he says. I’m just not buying that the project failed because of the tests. They contributed, perhaps, but in my experience projects fail for other, deeper reasons. That, however, is another post.

What I applaud is that he opens empathetically. He goes straight to the dark heart of our limited test-mania experience: when brittle, inscrutable tests became so onerous that they had to be abandoned. Been there. Seen it several times.

I appreciate that a similarly open and self-critical sensibility shines throughout. I’m particularly fond of the section on alternatives to “Design for Testability” in appendix A. There he notes that the uncomfortable coding-style changes required to support testing are an artifact of the statically typed languages we use today. “The main problems with non-testable designs is their inability to replace dependencies at runtime. That’s why we need to create interfaces, make methods virtual, and do many other related things” [266].

Dynamic languages, for example, don’t require such gymnastics. Perhaps with better tools and language extensions (Aspect Oriented Programming comes to mind) we can make testing easier for the statically typed languages.

Here he acknowledges that testing is just too darned hard, harder than it should be, and this difficulty – not resistance to new-ish ideas by crusty old farts like me – is a genuine obstacle.

Until then, we have to accept that incorporating unit testing in our practice requires more than an act of will. You will need hard-won skills and experience, and you will have to contort your code to get the benefits of unit testing. This is not your fault. You will pay a bigger price than you should have to pay. It may be rational to say “I can’t pay that price today, on this project.”

It may be rational. It may also be wrong. In any case, Roy’s goal is to reduce that price as best he can (a) with a progressive curriculum yielding skills you can use at each step and (b) by introducing you to tools that cover for language deficiencies.

Roy succeeds for me on both fronts. Each step was small enough to grasp and big enough to be useful. The tools survey was thin … but at least he has one – with opinions – that gives you places to look and an appreciation of their place in a complete testing regime.

Part 1 - Basics

This part is so important for readers like me. Overall, I thought it was grand. I’m about to freak out about a few of Roy’s choices but before I do I want to say “(mostly) well done!”

My biggest disappointment is Roy’s scant mention of IoC. There is brief treatment of Dependency Injection [62-64] and a listing of IoC offerings in the appendix. That’s it. There is not a single example of IoC usage.

Testing is one of the primary justifications for using IoC. Such short shrift could leave the reader wondering what all the fuss is about. Wrongly, in my opinion. I was really looking forward to guidance on proper use of IoC in unit testing.

The omission felt consequential in Roy’s discussion of test super classes [152ff] where he takes a couple of classes that do logging and refactors their test classes to derive from a BaseTestClass [155] whose only contribution to derived classes is its StubLogger. What a waste of inheritance. Injecting a logger is the IoC equivalent of “Hello, World”. What am I missing?

I realize (from painful experience) that it’s easy to create an IoC configuration rat’s nest in your test environment. That’s why I was hoping Roy would propose some best practices. Instead, I believe we are served an anti-pattern.
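
Here’s the sort of thing I was hoping for – a sketch of my own, not from the book. Plain constructor injection makes the stub swap trivial; any container, or none at all, can supply the logger, and no test base class is required:

public interface ILogger { void Log(string message); }

public class LogAnalyzer
{
    private readonly ILogger _logger;

    // The dependency arrives through the constructor.
    public LogAnalyzer(ILogger logger) { _logger = logger; }

    public bool Analyze(string fileName)
    {
        _logger.Log("analyzing " + fileName);
        return fileName.EndsWith(".log");
    }
}

public class StubLogger : ILogger
{
    public string LastMessage;
    public void Log(string message) { LastMessage = message; }
}

// In a test, no inheritance required:
//   var analyzer = new LogAnalyzer(new StubLogger());
//   Assert.IsTrue(analyzer.Analyze("app.log"));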

I must also say I was shocked to see favorable mention of using compiler directives [79 – 80]. He urges caution; I would ban the technique outright.

I was not fond of Roy’s preference for the AAA (Arrange-Act-Assert) style of test coding. This style facilitates brittle tests because it brings the “arrange” moment into the test class, and this has been a source of trouble for me.

“Arrange” code is distracting and bloats the test, making it too hard to see what is going on and leading to test methods that do too many things at once. When I was using this style, I couldn’t stop putting multiple asserts in each method [a “no-no” discussed 199-205]; it was too painful to make separate methods.

His associated test naming convention tends to say more about how the test works than what it is trying to achieve … and I think it is easier to find and understand tests when the names express intent.

Since I adopted more of the Context/Specification style espoused by BDD fans (see, for example, Dan North’s 2006 essay and a more recent manifesto by Scott Bellware), I’ve written smaller tests that are easier to read and easier to maintain. Roy can’t be faulted too much for this; Context/Specification is starting to take hold only this year (2009) and we don’t have the years of experience that go with AAA.

Two caveats: As I made clear at the beginning, I don’t do enough unit testing to be taken seriously as a guide. Second, test regimes falter in year two as the long-term maintenance of actually-existing unit test implementations overwhelms the development effort; that’s why Roy’s book is important. But the Context/Specification style hasn’t been around long enough to prove its worth in the field. It will take a couple of years to find out.

Part 2 – Core Techniques

The discussion of the difference between stubs and mocks was brilliant: “If it’s used to check an interaction (asserted against), it’s a mock object. Otherwise, it’s a stub” [90].
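
Roy’s rule translates directly into code. In this little sketch of mine (IMailer and Alerter are hypothetical), the same hand-rolled fake is a stub in the first test and a mock in the second, purely by virtue of what gets asserted:

public interface IMailer { void Send(string message); }

public class FakeMailer : IMailer
{
    public string LastSent;  // recorded so a test can assert against the fake
    public void Send(string message) { LastSent = message; }
}

public class Alerter
{
    private readonly IMailer _mailer;
    public Alerter(IMailer mailer) { _mailer = mailer; }

    public bool Raise(string message)
    {
        if (string.IsNullOrEmpty(message)) return false;
        _mailer.Send(message);
        return true;
    }
}

// Used as a STUB: the fake merely satisfies the dependency;
// the assert targets the class under test.
//   Assert.IsTrue(new Alerter(new FakeMailer()).Raise("fire!"));

// Used as a MOCK: the assert targets the fake, checking the interaction.
//   var mailer = new FakeMailer();
//   new Alerter(mailer).Raise("fire!");
//   Assert.AreEqual("fire!", mailer.LastSent);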

Loved that he handwrote mocks before introducing mocking frameworks (he prefers to call them “Isolation Frameworks”). This is a crucial pedagogical move. Many of us are stunned by the mocking framework syntax (e.g., Rhino Mocks) and our instinct is to run away and only use state-based testing.

Those of you who know better will smile knowingly as I confess to the awful mess I made for myself by hand rolling my own mocks for fear of frameworks. There is a reason and it is the sheer ugliness of mocking framework APIs.

Roy gets it. That’s why he sneaks up on Rhino Mocks.

“One Mock Per Test” [94]. I like the sound of it. I like Roy’s reasoning. It’s the kind of clear, unambiguous advice that novices like me need. I’m sure there are times when it is smart to set it aside but it has the whiff of hard-earned wisdom.

I much appreciated the “traps to avoid” section at the end of the mocking chapter (chapter 5). It’s easy to say “if it looks complicated, stop”. We should say it again anyway. Roy goes one better and identifies the tell-tale signs of too much mock-framework fascination.

Part 3 – Test Code

I tend to agree with Tim Barcz: Chapter 7, “The Pillars of good tests”, is essential and some of it feels like it belongs early in the book … not here, 100 pages in. On the other hand, the reader isn’t ready for an in-depth review of test smells and maintainability until they know the basics. On balance, the timing of this chapter feels right.

The passages on “trustworthy tests” overflow with good sense. How to fix a broken test … which includes breaking the production code to ensure the test still catches the failure … that’s a step you overlook at your peril.

It’s proof again, if proof is needed, that unit testing is not automatic and cannot be done by rote. Junk testing is hardly better than no testing … and Roy has an iron grip on this fact.

Chapter 6 concerns build automation, code organization, and conventions … crucial blocking and tackling.

This is the place I mentioned earlier, where Roy speaks favorably of test class inheritance and where I feel IoC techniques are more appropriate. I don’t think much of overriding a virtual setup method either; I think the Template Pattern is far preferable. With the Template Pattern – in which derived classes override an empty virtual method that is called by the base class – you ensure that base behavior is always invoked and you don’t trouble the developer with knowing when the base method should be called.

Roy describes something he calls the “Test Template Pattern” [158] which sounds like Template Pattern but isn’t. His Test Template Pattern consists of abstract test methods which, perforce, must be implemented by derived test classes. The intention is to ensure that all derived test classes implement specific tests – not, as in Template Pattern, to provide a well-managed base class extension point.

The Context/Specification approach employs the Template Pattern (in the form of a virtual Context() method) as the preferred means by which a derived Specification class makes arrangements (adds “context”) that are particular to its needs.
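
For the curious, here is roughly what that skeleton looks like – my sketch of the style, wired up with MSTest attributes and reusing the LogAnalyzer sketch from earlier, not anyone’s canonical framework:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class Specification
{
    [TestInitialize]
    public void Setup()
    {
        Context();     // the derived class arranges its particular context
        BecauseOf();   // then the single "act" of the specification runs
    }

    // Empty virtual hook: a derived class overrides it without calling base.
    protected virtual void Context() { }

    protected abstract void BecauseOf();
}

// [TestClass]
// public class when_analyzing_an_empty_file_name : Specification
// {
//     private LogAnalyzer _analyzer;
//     private bool _result;
//
//     protected override void Context() { _analyzer = new LogAnalyzer(new StubLogger()); }
//     protected override void BecauseOf() { _result = _analyzer.Analyze(""); }
//
//     [TestMethod]
//     public void should_reject_the_file() { Assert.IsFalse(_result); }
// }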

Speaking of Context/Specification, if you prefer that style, you’ll need to adjust Roy’s recommendation from “One Test Class Per Class Under Test (CUT)” [149] to “One Test File Per Class Under Test”. That’s because Context/Specification yields many test classes, each dedicated to a different “context” in which the CUT is revealed. It is typical of the examples I’ve seen that these many classes can be found in the same physical file, named after the CUT.

I have a feeling that BDD practitioners go farther and argue that you build tests around scenarios, not classes. They could say that it’s a category mistake to force a correlation between CUTs and test files. I just don’t know. Such a correlation seems convenient but it may distort the design process. I lack the experience necessary to weigh the tradeoffs. I wish Roy had explored this avenue.

Part 4 – Design and Process

Chapter 8 is about the politics of implementing a testing regime where none exists, a hugely important topic. I enjoyed this chapter immensely. Unfortunately, Roy is utterly unpersuasive.

To summarize: a team that writes tests takes twice as long to deliver the first implementation as the team that doesn’t [232]; there are no studies proving that unit tests improve quality [234], even though we believe it anecdotally; there is strong evidence that programmers who write tests won’t do a good job of testing for bugs despite their best intentions [235]; and finally, it appears most defects stem not from poor code quality but rather from misunderstanding the application domain [237]. This litany is not the way to management’s heart.

I will expand on each of these observations.

Time to Market

In the “Tough questions and answers” section, Roy prepares an answer to the #1 question on your manager’s mind: “How much time will this add to the current process?”

Roy’s frank answer is “it doubles your initial implementation time …” [232]

That’s a conversation stopper. Management prizes an early delivery date and it is extremely difficult for management to distinguish the first implementation from “the” implementation.

Roy hastens to add “… the overall release date for the product may actually be reduced.”

That may re-open the conversation … because you’re talking about the delivery date again. You’re making the case that the project won’t be considered delivered until it passes some quality bar … that the savings in the mandatory testing phase may compensate for the slower start.

The equivocation – “may” – will be noticed. Management has heard too many stories about Total Cost of Ownership and Reduced Maintenance. It’s going to be tough.

Here’s the worst part. It is often true that being in the lead at the first turn means you win the race. It means you get resource commitments that won’t be available without a (ridiculous) early delivery. This is so even if we finish much later than the conscientious, test-driven developers. Too bad, because they never get the shot. And by the time the technical debt comes due, there are sunk costs (real and political) that management will be loath to abandon.

This is just how it is. So, while I applaud Roy’s honesty, this is a tough sell. He needs another plan. He needs a way to shift the definition of “delivered” to an implementation that passes a measurable quality bar. He needs to talk about short cycles so that the evidence is experienced on this project and registers in management’s short term memory.

Roy shows some grasp of this dynamic. In his example – a tale of two projects – the debugged release time is 26 days in the worst (no-testing) case. You can win a month to prove your point … but not much longer.

Does unit testing improve code quality?

Roy is his typical honest self here. Unfortunately, what he reports is not likely to advance his cause.

He draws proper attention to code coverage. There are lovely charts. There is just one flaw: you have to convince the skeptics that you’re measuring something that matters.

You think that’s a good metric because it measures unit testing activity. The skeptic doesn’t care about your activity. Activity – expenditure of effort – is irrelevant. The skeptic cares about delivering the system that “works acceptably” as quickly as possible. The skeptic suspects you’re polishing one apple while he wants many apples, perhaps less polished.

A devastating admission: “There aren’t any specific studies I can point to on whether unit testing helps achieve better code quality.” [234] Ouch! That has to be fixed.

Here’s another groaner: “A study by Glenford Myers showed that developers writing tests were not really looking for bugs, and so found only half to two-thirds of the bugs in an application.” [235]

Here’s another citation that Roy interprets as strengthening the case for unit tests although I think it does the opposite: “A study held by … showed that most defects don’t come from the code itself, but result from miscommunication between people, requirements that keep changing, and a lack of application domain knowledge.”[237]

It is not self-evident how unit testing alleviates these sources of error. The best he can say is that, as you correct course, the unit tests provide some assurance that the other things you still think are true are still tested. That’s valuable … but weak beer at best.

This chapter made me wonder again whether I should be so ashamed of my test-less oeuvre.

Nah. We may lack the proof but absence of proof is not proof of absence. Where would we be if we only followed rigorously proven practices? Show me the study that proves GoTos are bad.

There was a prolonged and super-heated argument in the ’70s and ’80s about the (de)merits of GoTo. Steve McConnell covers it in an article from his Code Complete where he makes reference to a Ben Shneiderman “literature survey”. I suspect a literature survey would yield comparable support for unit testing. Literature surveys perhaps reflect the “wisdom of the field”; they are not evidence.

The fact is, we have very little social science on any development practices. The objection that unit testing and TDD are unproven could be raised about almost any practice. The anecdotal support for unit testing remains strong.

We shouldn’t leave it there. We need real studies. I’d like to see some of my former colleagues in economic sociology jump in. There’s at least a master’s thesis here.

It’s also possible that the limited studies to which Roy refers (he does not cite them) produce inconclusive results because they don’t account for test quality. Noise from botched test regimes may be hiding the good news. Roy established early that (a) bad unit testing can be worse than no unit testing and (b) it’s easy to make a mess.

If this interpretation is correct, we are challenged to improve testing as actually practiced in the wild. We lose the argument – and we should lose it – if proper unit testing remains a rare skill, difficult to acquire. If Roy’s book becomes widely read and more developers learn to write better tests, we can hope for a positive swing in the statistics.

Finally, I’ve heard Steve McConnell claim in a DotNet Rocks show (0:28) that the average project spends “40% to 80% of its effort on unplanned defect correction work … in other words, low quality is the single largest cost driver for the average project.” I don’t know how Steve came by these statistics (and “40% to 80%” is a huge swag), but it is Steve’s business to measure and track this stuff. And if you’re doing something that attacks the “single largest cost driver” … and you’re not disproportionately increasing costs with your remedy [!] … then you’re making business sense.

Chapter 9 on testing legacy code is a welcome introduction with good advice … but no substitute for Michael Feathers’ Working Effectively with Legacy Code. Feathers’ book is expensive ($47 on Amazon); perhaps Roy’s chapter and his enthusiasm for it will encourage sales.

Appendices

Appendix “A”, ostensibly about design and testability, is mostly about design for testability. That’s no small leap. Testing code heavily is one thing. It is another to distort your design to satisfy inadequacies in the language that make testing difficult.

I’ve deliberately expressed this point in the most contentious way possible to dramatize the implications of exchanging “for” for “and”.

I hasten to express my enthusiasm for the contribution of “unit testing” to design. Expressing your expectations in code clarifies the design and casts a strong light on otherwise dark edge cases. Many of the test disciplines, loosening dependencies in particular, promote SOLID design principles (especially Single Responsibility) that are beneficial in their own right. Roy is excellent on these points.

The problem is that at least one of Roy’s recommendations, “Make methods virtual by default” [258], reduces design quality in order to make testing easier. Testability and Good Design are at cross purposes.

“Make methods virtual by default” is a terrible idea in my opinion. I explore that opinion in a separate post. My argument in brief is that a virtual method is an invitation to extension everywhere. Extensibility is not a frill. You have doors in your house for a reason; that’s where people are expected to enter. They aren’t expected to come through the windows. You don’t punch orifices into every wall. A plethora of virtual methods invites violation of the Open / Closed Principle (“Liskov Substitution Principle” to be more precise) and makes delivery, maintenance, and support of a system pointlessly more difficult.

This aside, the chapter, although brief, is clear and persuasive.

Appendix “B” enumerates helpful tools and test frameworks. Each merits only a brief blurb but I was pleased to have an annotated list of Roy-approved choices.

Conclusion

This is a wonderful book for the experienced developer who is open to unit testing while having limited experience of it. I suspect it will help technical managers of a certain age … managers whose programming days are behind them, who’ve heard the fuss, been through a few fads, and want a serious, honest, warts-and-all look at unit testing.

I’m told also that it has earned the respect and admiration of many with deep unit testing experience. That’s a confidence builder for me.

Get it.