Saturday, August 29, 2009

Fiddler + WCF SOAP + Cassini = Ooof!

What a PITA trying to observe my DevForce Silverlight application running in Cassini using Fiddler2. Thanks to John Papa I’ve got that working now.

Here’s the situation:

  • DevForce communicates using a WCF SOAP service
  • I’m running both the client and the service in Cassini, not IIS
  • Tried all the recommended configurations (see below)

The problem is the WCF SOAP service. There is no issue with REST.

What Finally Worked

Can’t make it work with IE at all. Haven’t found anyone who knows how.

But I can make it work with Firefox … and I am fine with running in Firefox. Here ya go:

  1. Install Firefox (duh)
  2. In fiddler: Tools / Fiddler Options / General / disable IPv6 [may not be necessary]
  3. In fiddler: Tools / Fiddler Options / Connections. Click “Copy Browser Proxy Configuration URL” … which puts the right phrase in your clipboard
  4. In Firefox: Tools / Options / Network / Settings. Click “Automatic proxy” radio button which enables the text box. Paste script address from your clipboard.
    On Vista/Win 7, the address looks like
  5. Remember to “OK” your way out of the dialogs (yes, I forgot this and hit cancel)
  6. Close Firefox
  7. Re-launch Firefox
  8. Re-launch Fiddler
  9. Confirm Firefox browser actions show up in Fiddler
  10. Launch your app
  11. Paste app address into Firefox

What Works If I Change Requirements

You can deploy the server to IIS and point the Silverlight Client at it. That’s fine … but I didn’t want to bother with that. Just want Cassini to do it all.

You can use Nikhil’s excellent IE plug-in, Web Development Helper. This works too. You enable it from the IE menu: View / Explorer Bars / Web Development Helper.  But I was looking for a Fiddler solution.

What Doesn’t Work

Don’t have VPN running. The minute I’m running my VPN, Fiddler stops listening to all IE traffic, even regular browser traffic (no problem for Firefox though). I’m sure there is a way around this. I don’t know what it is … and at the moment it is just easier to shut down my VPN.

Many kind people offered suggestions … all of which failed me one way or another:

  • localHost.:port/myapp (that’s with a period between “localhost” and colon) - app crashes, unspecified service security exception
  • instead of localhost - app crashes, unspecified service security exception
  • ipv4.fiddler:port/myapp instead of localhost - app crashes, unspecified service security exception (because translates to
  • myMachineName:port/myapp instead of localhost – “[Fiddler] Connection to ward-xps failed.
    Exception Text: No connection could be made because the target machine actively refused it”
  • Fiddler proxy setting: Tools / Fiddler Options / General / uncheck “Enable IPv6” – no help
  • Fiddler Tools / WinINET Options / LAN Settings. “Use a proxy server” checked. Click Advanced button … proxy addresses are set to – no help

Many of these suggestions are variations on the cryptic paragraphs in the configuration FAQ on the Fiddler site.

Saturday, August 22, 2009

Discover Silverlight with “Silverlight 3 Jumpstart”

Maybe you’ve heard of this Silverlight thing. There’ve been 2,300 new Microsoft technologies introduced this year but Silverlight seems like it might be important. You are a busy, experienced business application developer with limited time. You demand substance but there’s no way you’re going to wade through 800 pages of how to build custom flashy controls. You won’t tolerate marketecture; you don’t want the “Silverlight Programmers Bible” either.

You should snag a copy of “Silverlight 3 Jumpstart” by Microsoft MVP and Regional Director David Yack. At a slim 209 pages you can blaze through it on a roundtrip flight to Redmond just like I did. You won’t learn Silverlight in depth. But you will get a soup-to-nuts view of what building a business application in Silverlight is really like. All of the essential mechanics are there.

  • How Silverlight compares to alternative client technologies (ASP, WinForms, WPF, Flash/Flex/Air)
  • Development tools – got Visual Studio? not much more is absolutely required
  • “Hello, Silverlight” – of course
  • Hosting a Silverlight app – pretty easy stuff
  • Basic screen layout with XAML and visual controls
  • Data binding controls to data – because that’s our bread and butter
  • Debugging – about time someone talked about that
  • Making it look decent with styling - think CSS
  • SketchFlow intro – executable sketches to win your client’s confidence
  • Plumbing (aka, Application Architecture) – so you don’t reinvent every wheel

David is steadfast and clear about his purpose: to give you a firm grasp of what is involved in Silverlight development.

He shoves to the side everything that would get in the way. You already know it gets more complicated than “that” … whatever “that” happens to be. “Rough road ahead” signs are sufficient for the nonce; no need for the bumpy ride right now.

When it comes to getting data into and out of your application, I’m personally thrilled to report that David devotes pages [181 - 184] to our DevForce Silverlight product; full disclosure: I helped him with those four pages. Nearby you can read about the other “Business Application Frameworks”: CSLA, RIA Services, and roll-your-own.

Do yourself a huge favor: don’t roll-your-own; I hope you see why when you look at all the challenging ground these “frameworks” cover.

The very nature of this book requires that it be early to market. It’s not going to be a “timeless classic” nor is that its intention. This is one of the first … if not the first … Silverlight 3 books out there.

The haste shows. There are grammar and spelling mistakes. Chapter 10 was clearly intended to be Chapter 1 as it makes forward references to chapters that you’ve already read. That’s mildly disappointing. Don’t let this undermine your faith in the material; the book is accurate in all important respects relevant to your purpose: to learn what Silverlight development is like and whether it might be for you.

The book is affordably priced at $30 for print, $20 as an e-book; if you use the discount code for my blog (“WardBlog”), the e-book is only $15. The e-book arrives as a PDF and (unlike many technical books) is perfectly readable on your laptop or Kindle or other e-book reader.

Silverlight is a tremendous business application delivery vehicle and a heck of a lot more productive than any other web technology out there. Silverlight Jumpstart will show you why.

Thursday, August 20, 2009

Presentation Patterns Podcast

Jeremy Miller, Glenn Block, Rob Eisenberg & I held a lively, civilized two hour conversation about presentation patterns, courtesy of the good fellas at Herding Code. Part One of the podcast was just released; the hosting page provides a synopsis so you can read quickly to discover if it’s for you.

You always feel a tinge of regret after doing these things … like you’re not sure what you did last night after a serious bender … but having replayed it, I’m thinking there are some useful hints tucked in there. Check it out.

Saturday, August 15, 2009

Do Not Make Every Method Virtual

I’m reacting to Roy Osherove’s recommendation, “make methods virtual by default”, in his excellent The Art of Unit Testing which I reviewed a few days ago.

I’ve heard my friend Jeremy Miller wish that .NET methods were virtual by default as they are in Java. I respect these guys immensely but I strenuously disagree. You should make each member virtual reluctantly and in the full knowledge of the risk you are taking. I hope you’ll at least appreciate my concerns even if you are not convinced.

Why Virtualize All Members By Default

It’s Roy’s first suggestion in his “Design for Testability” appendix [258]. He calls it “handy” and it is handy indeed. The easiest way to stub or hand-mock a class is the “extract and override” approach in which you override a production class to gain access to its innards during testing.

In one such example [71], a TestableProductionClass derives from a ProductionClass so that the implementation of the latter’s GetConcreteDependency() can be replaced with an alternative that returns a stub class instead of the concrete class.

// ProductionClass
protected virtual ILogger GetLogger() { return new ProductionLogger(); }

// TestableProductionClass : ProductionClass
protected override ILogger GetLogger() { return new TestLogger(); }

What an easy way to swap out the dependency on the production logger … and I don’t need any of that messy IoC machinery.

If every method were virtual, any member could be replaced in this fashion within the test environment.
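Fleshed out, that extract-and-override seam and its test-side usage might look like the sketch below. The shape of TestLogger and the assertion at the end are my own invention, not the book's code:

```csharp
using System.Collections.Generic;

public interface ILogger { void Log(string message); }

public class ProductionLogger : ILogger
{
    public void Log(string message) { /* write to file, event log, etc. */ }
}

public class ProductionClass
{
    public void DoWork() { GetLogger().Log("work done"); }

    // The seam: virtual so a test subclass can swap the dependency.
    protected virtual ILogger GetLogger() { return new ProductionLogger(); }
}

// Test-side types (hypothetical names):
public class TestLogger : ILogger
{
    public readonly List<string> Messages = new List<string>();
    public void Log(string message) { Messages.Add(message); }
}

public class TestableProductionClass : ProductionClass
{
    public readonly TestLogger Logger = new TestLogger();
    protected override ILogger GetLogger() { return Logger; }
}

// In a test:
//   var sut = new TestableProductionClass();
//   sut.DoWork();
//   Assert.AreEqual("work done", sut.Logger.Messages[0]);
```

No container, no configuration: the test subclass simply reroutes one call.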

Making every method virtual makes life easier for the mocking frameworks too. Most do well at fabricating mocks either for interfaces or for unsealed classes with protected virtual methods. I believe it is fair to say that most have a tougher time mocking closed concrete classes.

A class with all virtual methods is easier to proxy too, lending itself to injection techniques such as Ayende demonstrates in his post about injecting INotifyPropertyChanged into POCO objects.

What could possibly be wrong with this?

Invites Violation of Liskov Substitution Principle

The “Liskov Substitution Principle (LSP)” is the “L” in Robert “Uncle Bob” Martin’s SOLID design principles. Formally this principle states that the derived class can’t make a method’s pre-conditions stronger nor post-conditions weaker. In colloquial English, the derived method can’t change the fundamental guarantees of the base method. It may do things differently but it shouldn’t violate our expectations of what goes in, what comes out, or what the method does.

Uncle Bob, in his Agile Principles, Patterns, and Practices in C#, describes LSP as a “prime enabler of the Open Closed Principle (OCP)” [151]. It follows that a violation of LSP is effectively a violation of the more familiar OCP.

Now I have no problem with deliberately making some methods virtual. That is one of the favored techniques for facilitating OCP; it is a mechanism for identifying an opening for extension.

My argument is with opening up the class blindly and totally by making everything virtual. Suddenly nothing in the class is truly “closed for modification.” The “virtual” keyword announces to the world “here is my extension point.” When every method is virtual, the world is invited to change every method.

The lesson of Martin’s LSP chapter is that extending a class by overriding a virtual method is tricky business. The innocent-seeming Rectangle is his famous example. What could go wrong in deriving Square from Rectangle and overriding its Length and Width properties? It turns out that plenty can go wrong [140] and that it’s almost impossible for the author of the class to anticipate and guard against the abuse of his class.
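The gist of that example, reconstructed in C# (my sketch, using Width and Height for brevity; Martin’s version is more elaborate):

```csharp
public class Rectangle
{
    public virtual int Width  { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return Width * Height; } }
}

// Square "is-a" Rectangle, so the derivation looks innocent ...
public class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}

// ... until code written against Rectangle's contract breaks:
//   void Stretch(Rectangle r)
//   {
//       r.Width = 5;
//       r.Height = 4;
//       Debug.Assert(r.Area == 20);  // fails when r is a Square: Area is 16
//   }
```

Square honors its own geometry but silently breaks the base class guarantee that Width and Height vary independently.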

Martin is pragmatic. “Accepting compromise instead of pursuing perfection is an engineering trade-off. … However, conformance to LSP should not be surrendered lightly.” [his emphasis, 149]. He continues:

The guarantee that a subclass will always work where its base classes are used is a powerful way to manage complexity. Once it is forsaken, we must consider each subclass individually.

I’m going to come back to this point. Because we give up the guarantee … and open wide the door to big trouble … when we make every member virtual.

Let me back up a second and elaborate on the danger.

The Wayward Elevator

I am the maker of Elevator software that runs elevators around the globe. My Elevator class has an Up method. Suppose I make it virtual. How might value-added elevator developers implement an override of a virtual Up in their derived BetterElevator class? They could

  • replace it completely; base.Up() not called
  • call base.Up(), then invoke custom logic
  • invoke custom logic, then call base.Up()
  • wrap base.Up() in pre- and post-logic
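In code, those four choices look something like this (the Elevator internals here are hypothetical):

```csharp
public class Elevator
{
    public virtual void Up() { CloseDoors(); Ascend(); }

    protected void CloseDoors() { /* ... */ }
    protected void Ascend()     { /* ... */ }
}

// One derived class per bullet above (names are mine):
public class ReplacingElevator : Elevator
{
    public override void Up() { /* custom logic; base.Up() never runs */ }
}

public class AppendingElevator : Elevator
{
    public override void Up() { base.Up(); /* then custom logic */ }
}

public class PrependingElevator : Elevator
{
    public override void Up() { /* custom logic first */ base.Up(); }
}

public class WrappingElevator : Elevator
{
    public override void Up() { /* pre-logic */ base.Up(); /* post-logic */ }
}
```

Nothing in the language tells the deriver which of these the base class can tolerate.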

It’s not an issue when I am doing the deriving. I almost never call base when I derive a test class (TestElevator). If I support construction of a dynamic proxy to address cross-cutting concerns, I expect the proxy to wrap the base method. These scenarios are not worrisome to me. Why?

When writing tests, I have intimate knowledge of Elevator. Elevator’s details are open to me. I wrote it. I know what I’m doing.

If I make Up() accessible to a proxy, I again know what I’m doing … and more importantly, I know how the proxy will manipulate my method. The proxy may behave blindly but it operates automatically and predictably. Whether I decorate the method with attributes or map it or rely on some conventions, I, the author of the class, am choosing precisely how it will be extended.

Unfortunately, I can’t declare Up() to be “virtual for testing” or “virtual for proxying”. It is simply “virtual”, an invitation to extension by unknown developers in unknown ways. I have lost control of my class.

I knew that Up() sent the elevator ascending. But I can’t stop someone from re-implementing Up() so that it descends instead. Maybe base.Up() triggers the doors to close. The developer might call base.Up() too late, sending the elevator in motion before the doors have closed. The developer could replace my base.Up with something that juggled the sequence of door closing and elevator motion methods, interleaving custom behaviors, yielding upward motion that failed to satisfy some other elevator guarantees.

Any of these or a hundred other implementations of Up() could alter Up-ness in ways that are subtly incorrect or catastrophically wrong. Every one of those implementations requires that the developer understand intimate details of the Elevator class, details that he would not have to know if the method were sealed.

“Up” is definitely not closed to modification. It is dangerously open. As an Elevator man, I should work hard … perhaps writing a lot of Elevator.Up certification tests … to ensure that essential pre-conditions, post-conditions, and possible side-effects are all correct even for derived classes.

The burden on my development has gone up as well, not through conscious design but by default. This is unthinking design, fiat design. My code fails “open” instead of failing “closed.” I better be very good and very conscientious about manually sealing what the language would make virtual on its own.

I’m not that good and I’m definitely not conscientious.

Is This Paternalism?

Am I being absurdly protective? Sure, there are risks. Programmers are adults and we should treat them as such.

I hear this a lot. I hear about how Microsoft tends to infantilize the developer, tries to shield them from the bumps and bruises that are essential to mastering the profession. There comes a time when you let the kid have sharp scissors and run around with them if he must. A few blind kids is a price worth paying.

I agree … as long as I’m not paying the medical bills.

I Write Frameworks, You Write Applications

I can’t tell you how to build your applications. When you own the class … and you do as an application developer … your team is answerable to you. You don’t come to me with your medical bills.

But I write frameworks for you to use. You’ve licensed my product and you’re paying me for support. When the elevator goes down instead of up; when the doors close suddenly and injure a rider; when the elevator simply stops … you don’t say “I wonder what I did?” You say “that elevator we bought is a piece of crap.”

You paid for support. You call me. And I am pleased to give it.

It’s not free to you. It’s not free to me either. I have to find what’s wrong as quickly as possible and get your elevator moving safely again. Unfortunately, if you’ve written BetterElevator (as you are supposed to do … it’s a framework remember) and you can change everything about my Elevator, I face an extremely challenging support call.

I have no idea what you’ve done to my Elevator and I can’t guess. You can tell me … you will tell me … that you haven’t touched “Up”. Perhaps you didn’t. Instead you’ve overridden another method on another of my classes that Up requires.

Maybe you don’t write applications. Maybe you write an open source framework. Fantastic. You don’t have true customers. If someone has a problem, you diagnose it and maybe you fix it … at your leisure. That someone has no recourse … and knows that going in.

That’s often why businesses won’t use open source. As more than one manager has told me, “I want a neck to wring.”

I think I’m entitled to fear for my throat. But that’s not my real motivation. I’m in the business of providing good service for a product that makes certain behavioral guarantees. I can’t deliver good service if I can’t make those guarantees and I can’t make those guarantees if every method of every class is up for grabs.

I’m not sure you can either.

.NET Framework Design Guidelines

I’m not the only guy with a healthy suspicion of virtual methods. I admit my degree of suspicion is higher than that of the application architect. The application architect probably knows and controls the developer who derives from his class. The unknown developer who derives from my class controls me.

The designers of .NET understand this too well. That’s why in the .NET Framework Design Guidelines, Krzysztof Cwalina and Brad Abrams write

Virtual members … are costly to design, test, and maintain because any call to a virtual member can be overridden in unpredictable ways and can execute arbitrary code. … [M]uch more effort is usually required to clearly define the contract of virtual members, so the cost of designing and documenting them is higher. [201]

If all members are virtual by default,

  • I waste effort manually sealing most of them,
  • I test, document, and support more methods than I have resources for,
  • I add unwanted complexity,

and what do I gain for my pains?

What Should We Do?

I want to blame someone. I’m going to blame the language authors. Maybe methods could be automatically virtual only in test environments. We should be able to mark methods for proxying and compile-time injection (as in static Aspect Oriented Programming). Otherwise, members should be closed unless explicitly made virtual through the conscious effort of a conscientious programmer.

Meanwhile, I’m prepared to be pragmatic. I like Roy’s recommendation [260] that dependencies should be defined in virtual properties (or methods) without logic. Auto-properties and similar logic-less methods are safer to make virtual. The Template Design Pattern is a controlled approach to extensibility that may assist testability as well. Interface-based designs help too.
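A sketch of that pragmatic middle ground (OrderProcessor and all its members are hypothetical, not from Roy’s book):

```csharp
public interface ILogger { void Log(string message); }
public class Order { /* ... */ }

public class OrderProcessor
{
    // Logic-less virtual property: a safe, narrow seam for a test override.
    protected virtual ILogger Logger { get; set; }

    // Template Method: the algorithm itself stays non-virtual and
    // exposes one deliberate, well-named extension point.
    public void Process(Order order)
    {
        Validate(order);
        OnProcessing(order);   // the single sanctioned opening
        Save(order);
    }

    // Empty by default; derived classes may add behavior here
    // without being able to derail the Process sequence.
    protected virtual void OnProcessing(Order order) { }

    private void Validate(Order order) { /* ... */ }
    private void Save(Order order)     { /* ... */ }
}
```

The virtual surface area is tiny and intentional: a logic-less property and one empty hook, instead of every method.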

That’s as far as I dare go as a framework developer. Application developers may have more rope to hang … er … more latitude.

Friday, August 14, 2009

The Art of Unit Testing

I read Tim Barcz’s review of Roy Osherove’s “The Art of Unit Testing” and knew I had to get a copy right away. It just arrived and I read it in one sitting. I am so pleased that I did. I’ll quarrel with it … but do not let that deter you from rushing to buy your own copy.

Let me say that again. I highly recommend this book – five stars – especially to folks like me who are not deep into unit testing. This review is full of my grumpy disagreements. That’s how I engage with a good book. Don’t be dissuaded.

Warning: Long post ahead. The short of it: buy the book. Everything else is commentary.

There is no point in recapping the book’s main points as Tim Barcz did that for us. I’m coming at it from a different angle. I’m coming at it from the angle of a guy who wishes he wrote more tests, wishes he was good at testing, even wishes he practiced (or at least gave a serious, sustained effort at trying) TDD. A guy who doesn’t.

A guy much like the vast multitude of developers out there … who is embarrassed by being “old school”, is looking for an opportunity to catch up, but isn’t going to take crap from an obnoxious TDD fan-boy.

I’ve had plenty of success over the years, thank you very much. I’ve written good programs (and bad) that still work. And I can mop the floor with legions of developers who think TDD/BDD/WTF experience yields greatness. They remind me of newly minted MBAs who believe with unshakeable certainty that they’re entitled to a management position. Think again.

Do I sound defensive? Yup. Enough already. My point is this ...

One of Roy’s goals is to reach people like me. We’re experienced developers who may have mucked around with unit testing but aren’t doing it regularly and may have had some rough experiences. We believe … but we don’t practice. Can he do something for us that makes us want to try again or try harder? Can he keep it simple and approachable and be respectful and non-dogmatic?

Yes he can.

He extends olive branches aplenty throughout. Out the gate he writes: “One of the biggest failed projects I worked on had unit tests. … The project was a miserable failure because we let the tests we wrote do more harm than good.” 

Thank you. I don’t believe it for a second. Oh, I believe the tests were every bit as unmaintainable as he says. I’m just not buying that the project failed because of the tests. They contributed, perhaps, but in my experience projects fail for other, deeper reasons. That, however, is another post.

What I applaud is that he opens empathetically. He goes straight to the dark heart of our limited test-mania experience: when brittle, inscrutable tests became so onerous that they had to be abandoned. Been there. Seen it several times.

I appreciate that a similarly open and self-critical sensibility shines throughout. I’m particularly fond of the section on alternatives to “Design for Testability” in appendix A. There he notes that the uncomfortable coding-style changes required to support testing are an artifact of the statically typed languages we use today. “The main problem with non-testable designs is their inability to replace dependencies at runtime. That’s why we need to create interfaces, make methods virtual, and do many other related things.” [266]

Dynamic languages, for example, don’t require such gymnastics. Perhaps with better tools and language extensions (Aspect Oriented Programming comes to mind) we can make testing easier for the statically typed languages.

Here he acknowledges that testing is just too darned hard, harder than it should be, and this difficulty – not resistance to new-ish ideas by crusty old farts like me – is a genuine obstacle.

Until then, we have to accept that incorporating unit testing in our practice requires more than an act of will. You will need hard won skills and experience and you will have to contort your code to get the benefits of unit testing. This is not your fault. You will pay a bigger price than you should have to pay. It may be rational to say “I can’t pay that price today, on this project.”

It may be rational. It may also be wrong. In any case, Roy’s goal is to reduce that price as best he can (a) with a progressive curriculum yielding skills you can use at each step and (b) by introducing you to tools that cover for language deficiencies.

Roy succeeds for me on both fronts. Each step was small enough to grasp and big enough to be useful. The tools survey was thin … but at least he has one – with opinions – that gives you places to look and an appreciation of their place in a complete testing regime.

Part 1 - Basics

This part is so important for readers like me. Overall, I thought it was grand. I’m about to freak out about a few of Roy’s choices but before I do I want to say “(mostly) well done!”

My biggest disappointment is Roy’s scant mention of IoC. There is brief treatment of Dependency Injection [62-64] and a listing of IoC offerings in the appendix. That’s it. There is not a single example of IoC usage.

Testing is one of the primary justifications for using IoC. Such short shrift could leave the reader wondering what all the fuss is about. Wrongly, in my opinion. I was really looking forward to guidance on proper use of IoC in unit testing.

The omission felt consequential in Roy’s discussion of test super classes [152ff] where he takes a couple of classes that do logging and refactors their test classes to derive from a BaseTestClass [155] whose only contribution to derived classes is its StubLogger. What a waste of inheritance. Injecting a logger is the IoC equivalent of “Hello, World”. What am I missing?

I realize (from painful experience) that it’s easy to create an IoC configuration rat’s nest in your test environment. That’s why I was hoping Roy would propose some best practices. Instead, I believe we are served an anti-pattern.

I must also say I was shocked to see favorable mention of using compiler directives [79 – 80]. He urges caution; I would ban the technique outright.

I was not fond of Roy’s preference for the AAA (Arrange-Act-Assert) style of test coding. This style facilitates brittle tests because it brings the “arrange” moment into the test class, and this has been a source of trouble for me.

“Arrange” code is distracting and bloats the test, making it too hard to see what is going on and leading to test methods that do too many things at once. When I was using this style, I couldn’t stop putting multiple asserts in each method [a “no-no” discussed 199-205]; it was too painful to make separate methods.

His associated test naming convention tends to say more about how the test works than what it is trying to achieve … and I think it is easier to find and understand tests when the names express intent.

Since I adopted more of the Context/Specification style espoused by BDD fans (see, for example, Dan North’s 2006 essay and a more recent manifesto by Scott Bellware), I’ve written smaller tests that are easier to read and easier to maintain. Roy can’t be faulted too much for this; Context/Specification is starting to take hold only this year (2009) and we don’t have the years of experience that go with AAA.

Two caveats: As I made clear at the beginning, I don’t do enough unit testing to be taken seriously as a guide. Second, test regimes falter in year #2 as the long term maintenance of actually-existing-unit-test-implementations overwhelm the development effort; that’s why Roy’s book is important. But the Context/Specification style hasn’t been around long enough to prove its worth in the field. It will take a couple of years to find out.

Part 2 – Core Techniques

The discussion of the difference between Stubs and Mocks was brilliant: “If it’s used to check an interaction (asserted against), it’s a mock object. Otherwise, it’s a stub.” [90]
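Roy’s rule of thumb translates into code something like this. The sketch is mine; the file-name analyzer is in the spirit of the book’s running example, not copied from it:

```csharp
public interface ILogger { void LogError(string message); }

// One hand-rolled fake can play either role.
public class FakeLogger : ILogger
{
    public string LastError;
    public void LogError(string message) { LastError = message; }
}

public class Analyzer
{
    private readonly ILogger _log;
    public Analyzer(ILogger log) { _log = log; }

    public bool IsValid(string fileName)
    {
        if (fileName.Length < 8)
        {
            _log.LogError("file name too short: " + fileName);
            return false;
        }
        return true;
    }
}

// Used as a STUB: the fake merely satisfies the dependency;
// the assert is against the class under test.
//   var analyzer = new Analyzer(new FakeLogger());
//   Assert.IsFalse(analyzer.IsValid("a.txt"));
//
// Used as a MOCK: the assert is against the fake itself,
// verifying the interaction.
//   var fake = new FakeLogger();
//   new Analyzer(fake).IsValid("a.txt");
//   Assert.IsTrue(fake.LastError.Contains("too short"));
```

Same fake object, two different tests; what makes it a mock is where the assert points.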

Loved that he handwrote mocks before introducing mocking frameworks (he prefers to call them “Isolation Frameworks”). This is a crucial pedagogical move. Many of us are stunned by the mocking framework syntax (e.g., Rhino Mocks) and our instinct is to run away and only use state-based testing.

Those of you who know better will smile knowingly as I confess to the awful mess I made for myself by hand rolling my own mocks for fear of frameworks. There is a reason and it is the sheer ugliness of mocking framework APIs.

Roy gets it. That’s why he sneaks up on Rhino Mocks.

“One Mock Per Test” [94]. I like the sound of it. I like Roy’s reasoning. It’s the kind of clear, unambiguous advice that novices like me need. I’m sure there are times when it is smart to set it aside but it has the whiff of hard-earned wisdom.

I much appreciated the “traps to avoid” section at the end of the Mocking chapter 5. It’s easy to say “if it looks complicated, stop”. We should say it again anyway. Roy goes one better and identifies the tell-tale signs of too much mock framework fascination.

Part 3 – Test Code

I tend to agree with Tim Barcz: Chapter 7, “The Pillars of good tests” is essential and some of it feels like it belongs early in the book … not here, 100 pages in. On the other hand, the reader isn’t ready for a review in depth of test smells and maintainability until they know the basics. On balance, the timing of this chapter feels right.

The passages on “trustworthy tests” overflow with good sense. How to fix a broken test … which includes breaking the production code to ensure the test still catches the failure … that’s a step you overlook at your peril.

It’s proof again, if proof is needed, that you can’t write unit tests on autopilot or by rote. Junk testing is hardly better than no testing … and Roy has an iron grip on this fact.

Chapter 6 concerns build automation, code organization, and conventions … crucial blocking and tackling.

This is the place I mentioned earlier at which Roy speaks favorably of test class inheritance where I feel IoC techniques are more appropriate. I don’t think much of overriding a virtual setup method either; I think the Template Pattern is much preferred. With the Template Pattern – in which derived classes override an empty virtual method that is called by the base class – you ensure that base behavior is always invoked and you don’t trouble the developer with knowing when the base method should be called.

Roy describes something he calls the “Test Template Pattern” [158] which sounds like Template Pattern but isn’t. His Test Template Pattern consists of abstract test methods which, perforce, must be implemented by derived test classes. The intention is to ensure that all derived test classes implement specific tests – not, as in Template Pattern, to provide a well-managed base class extension point.

The Context/Specification approach employs the Template Pattern (in the form of a virtual Context() method) as the preferred means by which a derived Specification class makes arrangements (adds “context”) that are particular to its needs.
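With NUnit-style attributes, the shape is roughly this (my sketch; the class and method names are hypothetical):

```csharp
using NUnit.Framework;

public abstract class SpecificationBase
{
    [SetUp]
    public void SetUp()
    {
        CoreArrange();   // base behavior always runs first ...
        Context();       // ... then the derived class layers on its context
    }

    private void CoreArrange() { /* shared fixtures */ }

    // Template Pattern: an empty virtual extension point, so derived
    // classes never need to remember to call base.
    protected virtual void Context() { }
}

public class when_the_file_name_is_too_short : SpecificationBase
{
    protected override void Context() { /* arrange this scenario only */ }

    [Test]
    public void it_should_be_rejected() { /* act + assert */ }
}
```

The base class owns the sequence; the derived specification only fills in the blank.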

Speaking of Context/Specification, if you prefer that style, you’ll need to adjust Roy’s recommendation from “One Test Class Per Class Under Test (CUT)” [149] to “One Test File Per Class Under Test”. That’s because Context/Specification yields many test classes, each dedicated to a different “context” in which the CUT is revealed. It is typical of the examples I’ve seen that these many classes can be found in the same physical file, named after the CUT.

I have a feeling that BDD practitioners go farther and argue that you build tests around scenarios, not classes. They could say that it’s a category mistake to force a correlation between CUTs and test files. I just don’t know. Such a correlation seems convenient but it may distort the design process. I lack the experience necessary to weigh the tradeoffs. I wish Roy had explored this avenue.

Part 4 – Design and Process

Chapter 8 is about the politics of implementing a testing regime where none exists, a hugely important topic. I enjoyed this chapter immensely. Unfortunately, Roy is utterly unpersuasive.

To summarize: a team that writes tests takes twice as long to deliver the first implementation as the team that doesn’t [232]; there are no studies proving that unit tests improve quality [234] even though we believe it anecdotally; there is strong evidence that programmers who write tests won’t do a good job of testing for bugs despite their best intentions [235]; and finally, it appears most defects stem not from poor code quality but rather from misunderstanding the application domain [237]. This litany is not the way to management’s heart.

I will expand on each of these observations.

Time to Market

In the “Tough questions and answers” section Roy prepares an answer to the #1 question on your manager’s mind: “How much time will this add to the current process?”

Roy’s frank answer is “it doubles your initial implementation time …” [232]

That’s a conversation stopper. Management prizes an early delivery date and it is extremely difficult for management to distinguish the first implementation from “the” implementation.

Roy hastens to add “… the overall release date for the product may actually be reduced.”

That may re-open the conversation … because you’re talking about the delivery date again. You’re making the case that the project won’t be considered delivered until it passes some quality bar … that the savings in the mandatory testing phase may compensate for the slower start.

The equivocation – “may” – will be noticed. Management has heard too many stories about Total Cost of Ownership and Reduced Maintenance. It’s going to be tough.

Here’s the worst part. It is often true that being in the lead at the first turn means you win the race. It means you get resource commitments that won’t be available without a (ridiculous) early delivery. This is so even if we finish much later than the conscientious, test driven developers. Too bad, because they never get the shot. And by the time the technology debt comes due, there are sunk costs (real and political) that management will be loath to abandon.

This is just how it is. So, while I applaud Roy’s honesty, this is a tough sell. He needs another plan. He needs a way to shift the definition of “delivered” to an implementation that passes a measurable quality bar. He needs to talk about short cycles so that the evidence is experienced on this project and registers in management’s short term memory.

Roy shows some grasp of this dynamic. In his example – a tale of two projects – the debugged release time is 26 days in the worst (no-testing) case. You can win a month to prove your point … but not much longer.

Does unit testing improve code quality?

Roy is his typical honest self here. Unfortunately, what he reports is not likely to advance his cause.

He draws proper attention to code coverage. There are lovely charts. There is just one flaw: you have to convince the skeptics that you’re measuring something that matters.

You think that’s a good metric because it measures unit testing activity. The skeptic doesn’t care about your activity. Activity – expenditure of effort – is irrelevant. The skeptic cares about delivering the system that “works acceptably” as quickly as possible. The skeptic suspects you’re polishing one apple, while he wants many apples, perhaps less polished.

A devastating admission: “There aren’t any specific studies I can point to on whether unit testing helps achieve better code quality.” [234] Ouch! That has to be fixed.

Here’s another groaner: “A study by Glenford Myers showed that developers writing tests were not really looking for bugs, and so found only half to two-thirds of the bugs in an application.” [235]

Here’s another citation that Roy interprets as strengthening the case for unit tests although I think it does the opposite: “A study held by … showed that most defects don’t come from the code itself, but result from miscommunication between people, requirements that keep changing, and a lack of application domain knowledge.”[237]

It is not self-evident how unit testing alleviates these sources of error. The best he can say is that, as you correct course, the unit tests provide some assurance that the parts you haven’t changed still behave as you believe they do. That’s valuable … but weak beer at best.

This chapter made me wonder again whether I should be so ashamed of my test-less oeuvre.

Nah. We may lack the proof but absence of proof is not proof of absence. Where would we be if we only followed rigorously proven practices? Show me the study that proves “GoTo”s are bad.

There was a prolonged and super-heated argument in the ’70s and ’80s about the (de)merits of GoTo. Steve McConnell covers it in an article drawn from his Code Complete, where he makes reference to a Ben Shneiderman “literature survey”. I suspect a literature survey would yield comparable support for unit testing. Literature surveys perhaps reflect the “wisdom of the field”; they are not evidence.

The fact is, we have very little social science on any development practices. The objection that unit testing and TDD are unproven could be raised about almost any practice. The anecdotal support for unit testing remains strong.

We shouldn’t leave it there. We need real studies. I’d like to see some of my former colleagues in economic sociology jump in. There’s at least a masters thesis here.

It’s also possible that the limited studies to which Roy refers (he does not cite them) produce inconclusive results because they don’t account for test quality. Noise from botched test regimes may be hiding the good news. Roy established early that (a) poor unit testing can be worse than no unit testing and (b) it’s easy to make a mess.

If this interpretation is correct, we are challenged to improve testing as actually practiced in the wild. We lose the argument – and we should lose it – if proper unit testing remains a rare skill, difficult to acquire. If Roy’s book becomes widely read and more developers learn to write better tests, we can hope for a positive swing in the statistics.

Finally, I’ve heard Steve McConnell claim on a DotNet Rocks show (0:28) that the average project spends “40% to 80% of its effort on unplanned defect correction work … in other words, low quality is the single largest cost driver for the average project.” I don’t know how Steve came by these statistics (and “40% to 80%” is a huge swag), but it is Steve’s business to measure and track this stuff. And if you’re doing something that attacks the “single largest cost driver” … and you’re not disproportionately increasing costs with your remedy [!] … then you’re making business sense.

Chapter 9 on testing legacy code is a welcome introduction with good advice … but no substitute for Michael Feathers’ Working Effectively with Legacy Code. Feathers’ book is expensive ($47 on Amazon); perhaps Roy’s chapter and his enthusiasm for it will encourage sales.


Appendix “A”, ostensibly about design and testability, is mostly about design for testability. That’s no small leap. Testing code heavily is one thing. It is another to distort your design to satisfy inadequacies in the language that make testing difficult.

I’ve deliberately expressed this point in the most contentious way possible to dramatize the implications of exchanging “for” for “and”.

I hasten to express my enthusiasm for the contribution of “unit testing” to design. Expressing your expectations in code clarifies the design and casts a strong light on otherwise dark edge cases. Many of the test disciplines, loosening dependencies in particular, promote SOLID design principles (especially Single Responsibility) that are beneficial in their own right. Roy is excellent on these points.
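To make that concrete, here is a toy sketch of my own (in Python for brevity; the function and its empty-list rule are invented for illustration and appear nowhere in the book): writing the expectation down first forces a decision about an edge case that the implementation might otherwise leave to chance.

```python
def average(prices):
    """Average of a list of prices.

    The asserts below pin down what 'average of nothing' means;
    without them, an empty list would raise ZeroDivisionError
    by accident rather than by decision.
    """
    if not prices:
        return 0.0  # behavior chosen deliberately, documented by a test
    return sum(prices) / len(prices)

# The tests double as a specification of the edge cases.
assert average([2.0, 4.0]) == 3.0
assert average([]) == 0.0  # the otherwise-dark edge case, made explicit
```

The point isn’t the arithmetic; it’s that the test forced the empty-list question to be asked and answered in writing.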

The problem is that at least one of Roy’s recommendations, “Make methods virtual by default” [258], reduces design quality in order to make testing easier. Testability and Good Design are at cross purposes.

“Make methods virtual by default” is a terrible idea in my opinion. I explore that opinion in a separate post. My argument in brief is that a virtual method is an invitation to extension everywhere. Extensibility is not a frill. You have doors in your house for a reason; that’s where people are expected to enter. They aren’t expected to come through the windows. You don’t punch orifices into every wall. A plethora of virtual methods invites violation of the Open / Closed Principle (“Liskov Substitution Principle” to be more precise) and makes delivery, maintenance, and support of a system pointlessly more difficult.
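The hazard is easy to sketch in Python, where every method is effectively virtual by default (the example and its names are mine, not Roy’s): an override offered everywhere is an invitation to discard an invariant the base class depends on.

```python
class Account:
    def __init__(self):
        self.balance = 0.0

    def deposit(self, amount):
        # Invariant this class depends on: deposits are positive.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


class SloppyAccount(Account):
    # Because deposit is overridable ("virtual"), nothing stops a
    # subclass from dropping the guard. Callers written against
    # Account no longer get Account's guarantees: a Liskov
    # Substitution violation entering through an open window.
    def deposit(self, amount):
        self.balance += amount


def pay_in(account, amount):
    account.deposit(amount)  # trusts the Account contract


acct = SloppyAccount()
pay_in(acct, -100.0)           # silently accepted
assert acct.balance == -100.0  # the invariant is gone
```

In C#, a method is non-virtual unless you say otherwise, so the override above would not compile; marking every method virtual opens this window in every wall.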

This aside, the chapter, although brief, is clear and persuasive.

Appendix “B” enumerates helpful tools and test frameworks. Each merits only a brief blurb but I was pleased to have an annotated list of Roy-approved choices.


This is a wonderful book for the experienced developer who is open to unit testing while having limited experience of it. I suspect it will help technical managers of a certain age … managers whose programming days are behind them, who’ve heard the fuss, been through a few fads, and want a serious, honest, warts-and-all look at unit testing.

I’m told also that it has earned the respect and admiration of many with deep unit testing experience. That’s a confidence builder for me.

Get it.

Tuesday, August 11, 2009

Drag & Drop Debate on Herding Code

I just listened to a Herding Code podcast in which “Drag and Drop” development was attacked and defended (thanks to Tim Heuer for the pointer). That wasn’t the official topic but there was a golden 10 minutes on this subject about 35 minutes into the program.

I don’t believe they do transcripts of Herding Code podcasts. With apologies to those guys, I transcribed those 10 minutes for your delectation. Go hear the whole thing.


The guests (and bio extracts from their web sites) are

G. Andrew Duthie (GAD): “G. Andrew Duthie, aka .net DEvHammer, is the Developer Evangelist for Microsoft’s Mid-Atlantic States district, where he provides support and education for developers working with the .net development platform.”

Alan Stevens (AS): “Alan Stevens is a ... software artisan living in Knoxville, TN. Alan is an Open Space Technology facilitator. Alan is a Microsoft Most Valuable Professional (MVP) in C#. Alan is a member of ASP Insiders.”

The Exchange

GAD: Alan likes to beat me up on Drag and Drop (d&d) development … and I’ll be the first to say that I’ve done my fair share of d&d demos and some of the code I write I use d&d and I’m not ashamed to admit it.

AS: Will you please promise here now never to do that again!

GAD: No, I won’t actually. Here’s my take on the whole drag and drop thing. … [I enjoy] the concept of “Technical Debt” and d&d is an example of something that can lead to Technical Debt and I’m perfectly willing to accept some of that Technical Debt and sometimes the reality is that I pay for it later …

AS: No!, No. _I_ pay for it later. You go on and do a demo of something else that will be released next year. And I have to clean up behind these poor slobs that all they know about .NET development is what you showed them in the PowerPoint deck and the d&d demo … and they don’t have a _clue_ what’s going on. And it’s just _garbage_. I walk into these steaming piles of poo that are called “business applications” and have to clean this mess up. You just aren’t doing anybody any good by making them ignorant.

Unknown_1: So what you’re saying, Alan, is that you’re upset that MS is providing you with a job.

AS: I would say that there are other skills that my clients could exploit

GAD: So let me give you my take on this. This is a place where I think you and I disagree pretty vehemently.

AS: Yeah, you’re wrong!

GAD: My take is that if we are out there doing demos of any rapid application development feature that has the potential for accruing technical debt that we probably ought to be saying that. … So … If all that we’re doing is teaching people how to … spin the knobs … we’re not teaching them enough, but at the same time I think we have an obligation … to show off those new features. I’d love to see us do a better job of giving the caveats and letting people know that “this demo that I’m building here is not based on any patterns so … you don’t want to … emulate this.” … Pete Brown says “as I’m building this keep in mind that this is demo code. Ideally, when you’re moving into an application, I show you a pattern you can use but right now, this code is really to demonstrate this feature and I don’t want to clutter it up with the conversation around the pattern _yet_. ” Eventually he gets to MVVM …

AS: I think caveats of “here there be dragons” are fantastic. I just haven’t seen them yet at a launch event. It seems like there is a fear of criticizing the product, that it’s not super easy to use … if there’s a trend toward doing that [caveats], I’m 100% for that because that’s what people need to get in their heads, this is not a model of how you should do your real world development. I’m trying to show you features.

The other aspect of d&d demos that gets under my skin is “I didn’t write a single line of code”. What the hell is wrong with writing code? … Why should dragging something from the toolbox be a better experience than actually writing it in the editor?

GAD: I’ll give you an example. I don’t ever want to have to write a login dialog again in my life. I don’t want to have to write that code. So when we brought out ASP.NET v. 2 and VS 2005 and you could just drag a login control and if you had the membership service provider set up, you were good to go. That’s a big win to me and I don’t see any downside to that particularly given that if you have to change out your membership provider … you don’t have to change anything.

AS: … I agree that I don’t want to write plumbing code every time. But I don’t mind writing code to re-use a component

GAD: So, ultimately, if you have that code somewhere and you can drag it into the code, are you guilty of d&d development too?

AS (incredulous): Why would I ever do that?

GAD (frustrated): So do you object to controls?

AS: I’m leaving visual designers out of this. I’m talking about _non-visual_ components where you create a visual designer for no reason. Why would I drag that code out of somewhere? Why wouldn’t I configure my IoC container to inject that component and then just use it in my code? Intellisense is a wonderful thing. I don’t need some graphic on the screen; I don’t need to reach for the mouse to add these things into my application.

GAD: I think you’re getting into a philosophical disagreement …

AS: No, no, no! You are still wrong.

GAD: … Trying to make a “one size fits all” statement about how development should be done, I don’t see that working in our industry. There are many different ways that you can successfully build software …

AS: Let me leave it at this. I’ve never seen it work. I’ve never seen it work in a long term, maintainable sense. Only in the initial release. And, honestly, the initial release is not where the cost of any application lies. It is always down the line, in extending and maintaining the application. …

Kevin Dente: … Not only do I never see it working, I never use it in my work. And whenever I hear Microsoft say “We have resource constraints so we have to make judgment calls” well when tons of effort is poured into those things for which I think have no value and aren’t put into places which have a lot of value, I get very frustrated.

“And the Winner Is …”

My heart is with Andrew (GAD). I do a lot of demos myself both of Microsoft technology and of our product. I appreciate the challenge of demonstrating features such that they can be seen without scaffolding … scaffolding that proper patterns may introduce.  I love IoC and wouldn’t develop without it. I confess I can’t do a demo with it … today … and still reach my audience.

But my head says Alan (AS) is dead-on right.

This is not a “philosophical disagreement”. The drag-and-drop approach in which you populate your views with non-visual controls is always wrong outside of a demo. There is no “choice” here. The d&d proponents are “flat earthers”, pretending that this isn’t a settled fact.

How can I be so sure? Let me offer the same reason that others brought up. If these components had a legitimate place in my application, I would use them. If I would always yank them out, they have no good reason to be there in the first place. And if it’s not good enough for my code, why would it be good enough for my customer’s code?

Think about this. When you never see these controls in good code and you often see them in bad code, what must you conclude?

Note that AS and GAD do not disagree about “good” and “bad” code in this regard. It isn’t as if reasonable minds differ on this. GAD never says that coding with these components is ever good. He admits straight-up that their use incurs “technical debt”.

And by the way, GAD, these components don’t “have the potential to incur technical debt”. There is no “potential”. They are always technical debt. The only question is when you pay for it.

All of us understand that we incur “technical debt”. It’s an essential fact of development. But we don’t take on debt frivolously. And these d&d components are frivolous choices.

We know from our own experience that it is neither easier nor faster to use these components than to do it right the first time. Once you know the proper way, d&d saves you zero effort. I say this assuming that you employ some amount of decent structure the first time … that your app is not a total hack job from day one to first release. Would any of us do that?
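For comparison, the “proper way” Alan describes, injecting a non-visual component through the constructor, costs about one parameter. A minimal sketch (in Python; all the class and method names here are invented for illustration):

```python
class SmtpMailer:
    """A non-visual component; it needs no designer surface."""
    def send(self, to, body):
        print(f"sending to {to}: {body}")


class OrderService:
    # Constructor injection: the dependency is declared in code,
    # visible to the IDE, and swappable in a test. Nothing is
    # dragged onto a design surface.
    def __init__(self, mailer):
        self._mailer = mailer

    def confirm(self, order_id, customer_email):
        self._mailer.send(customer_email, f"Order {order_id} confirmed")


# The composition root: an IoC container would do this wiring
# automatically; by hand it is one line.
service = OrderService(SmtpMailer())
service.confirm(42, "jane@example.com")
```

A test can hand `OrderService` a fake mailer and inspect what was sent, which is exactly the flexibility a designer-dropped component hides.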

No, the only place for d&d non-visual components is in demos. MS is properly faulted for failing to warn people. They should be faulted for wasting resources on these components and their visual designers … resources that could otherwise be deployed to improve the platform and the experience.

Today, my poster child for waste happens to be the RIA Services DomainDataSource, which mimics the comparably atrocious ASP.NET ObjectDataSource.

And guess what? Because Microsoft is doing it, we at IdeaBlade feel that we have to do it too. That’s right … we’re forced to write our own Silverlight ObjectDataSource just so we won’t lose customers to a Microsoft demo. What a frigging waste all around.

Our customers suffer doubly. They are encouraged to develop poorly and they are deprived of means and guidance to develop well. What a shame.

Sunday, August 2, 2009

Screen 54, Where Are You?

The justly renowned Jeremy Miller just issued a call for real-world “screen activation” scenarios.

Jeremy is writing a book on UI design and development patterns. A number of us who think and pontificate about this a lot (including Glenn Block, Rob Eisenberg, John Papa, Shawn Wildermuth, and many, many others) are helping Jeremy by chipping in with our favored approaches and examples drawn from our own experiences and our customers’ experiences.

I’m pretty active in this area as you, dear reader, know well. I hope you’ll take a moment to (a) look at Jeremy’s agenda and (b) contribute your thoughts – either directly to his post or to mine.

I’ll see that he gets them. And I’ll highlight with my own commentary some that pique my interest. All of it, from all sources, will help us at IdeaBlade do a better job of providing you with supportive infrastructure and guidance.


Footnote: You are all too young to remember the early ’60s TV hit that inspired this title: “Car 54, Where Are You?” with the late Fred Gwynne, also of “The Munsters” and “My Cousin Vinny” fame.