Monday, December 8, 2008

Is Entity Framework For Real?

Five months after the official release of Microsoft’s ADO.NET Entity Framework (EF), do we know where it stands? In particular:

· What is Microsoft’s commitment to Entity Framework?

· Is it good technology?

· Should you move to Entity Framework?

· Should you move to IdeaBlade’s “DevForce EF”?

Short answers: (a) total, (b) yes, (c) it depends, (d) it depends.

This is a piece I wrote recently to help our customers and prospects decide if they should use our Entity Framework-based product or stick with the more traditional approach of our "Classic" product. Clearly it is a promotional piece. I'm not shy about that; I'm darned proud of our product. But my blog isn't the right place for that kind of thing.

I'm making an exception because several people who read it felt that my opinions about the state of EF deserved airing here. How can I deny them?

There are no fully adequate brief answers. I will give you a summary opinion now and then I’ll expand my observations and answer your questions over the coming weeks.

Microsoft’s Commitment

Entity Framework is the future of database access technology from Microsoft.

That may be hard to see. Microsoft announces a new data programmability option every six months. In the last year I count LINQ to SQL, Entity Framework, ADO.NET Data Services (aka, Astoria), SQL Data Services (not the same thing), Dynamic Data, and Azure Storage Services. Yikes.

These are actually a mixed bag of often interrelated capabilities. Who can keep them straight? In fact, of this set, only LINQ to SQL (L2S) and EF directly touch the data sources. The other services offer alternative modalities of client data access and themselves depend internally upon L2S or EF (or something else) for the “last mile.”

Microsoft recently announced that L2S is a dead end, smothered in the cradle. You can read about it here, where Tim Mallalieu, the program manager for both LINQ to SQL and Entity Framework, writes,

as of .NET 4.0, LINQ to Entities [aka, Entity Framework] will be the recommended data access solution for LINQ to relational scenarios.

You can learn more from Julie Lerman and L2S developer Damien Guard but Tim’s statement settles it for me.

There remain plenty of non-“LINQ to relational scenarios” that call for raw ADO.NET. But the contest between pro- and anti-ORM camps has been decisively won by the pros. There is no discernible Microsoft action on “traditional” data access technologies.

Entity Framework was the data access technology on display at Tech Ed and PDC 2008, turning up repeatedly in talks about a wide range of subjects including cloud computing, the “M” modeling language, and Silverlight LOB application development.

With this kind of promotion and visibility it is hard to imagine Microsoft backing off or changing course. Perhaps they might if EF were an irredeemable catastrophe.

Is Entity Framework Good Technology?

In fact, Entity Framework is pretty good. And it’s getting better.

There is a small crowd of nay-sayers. The most alarmist among them made their case in the widely noticed web petition called the "Vote of No Confidence". You’ll find my response on my blog, where I say, in essence, that the petitioners grossly exaggerate both the deficiencies of EF and the benefits of their alternatives.

Is EF great? Not yet, in my opinion. EF is still “version one” software, so there are problems aplenty … and I’m not shy with my list of “turn offs”. Our “DevForce EF” product compensates for many of these problems but some of them will have to be worked through over time with a combination of upgrades, commercial products (e.g., ours), and community contributions.

Yet the EF core is rock solid and unequivocally suitable for most line-of-business applications. We’ve worked with it intensively for almost two years, starting long before its official release. If there were a rat to smell we would have smelled it and walked away long ago. Instead, we’ve bet our company on Entity Framework and so have many of our customers. One of them is building a software service on it that will reach more than 20,000 customers next year.

They are not alone but just how many have moved to EF remains a mystery. Microsoft hasn’t announced any numbers and they aren’t particularly good at this kind of counting anyway. My only figure comes from an October 2008 survey on Scott Hanselman’s blog in which 643 of 4900 respondents (13%) said they used EF. There is nothing remotely scientific about this survey but we’ll take what we can get; at least we know there are 643 adoptees!

Compare this with NHibernate adoptions. Ayende Rahien reports that, in September 2008, there were over 20,000 downloads of NHibernate and 1300 members of his “NHibernate Users” group. I rather doubt EF has 20,000 users yet; we can’t measure EF interest from downloads because EF was embedded in Visual Studio 2008 SP1. On the other hand, I do think we can make some reasonable inferences about user community size by comparing his user group size to EF forum traffic.

A casual tour of the Entity Framework forum on MSDN reveals that post view counts routinely run high; as I write there are three posts today with more than 1000 views each – that’s 1000 views in less than 24 hours. Is that a lot? Seems like traction to me and strongly suggests EF is already overtaking the field.

Should You Move to Entity Framework?

The consultant in me knows to say “it depends”. I think the major dimensions for consideration are:

  • The nature of your application
  • Your freedom to control the database schema
  • Your willingness to lead
  • Your reasons for choosing EF

In my view, EF could be right for you if … :

  • You’re building a significant line-of-business application – one with plenty of user interaction and complex, interrelated data. By “significant” I mean an application you’ll spend more than a year developing.

    If you are thinking seriously about building a Silverlight client for your application, you should strongly consider Entity Framework; your best Silverlight data programmability options are going to rely on Entity Framework.
  • You control the database schema. EF excels at mapping objects to existing databases but there are some peculiarities that are best resolved if you can change the schema. You also must beware of very large entity models. I’d hold off for a while if you think your application domain is over 500 entities. I hasten to add that your design needs some serious rethinking if you’re entertaining a domain model with that many entities.
  • You are prepared to be at the front of the adoption curve. EF quality is great – it is not buggy. But you are going to take some lumps and it lacks the wealth of knowledge and experience that surrounds more mature technologies. We are all going to be doing “stupid things” for a while.
  • You have a sound business reason to use Entity Framework. If your application is already built upon an existing object relational framework and you are reasonably content with that framework, I don't think I'd move to EF right now. Yes, it makes sense to migrate to what will surely become the industry standard platform. But today may not be that day.

    On the other hand, if you are starting fresh, I believe it is foolhardy to consider any other object relational platform. I am not making a technical case. For the sake of argument, I will stipulate that there are several other choices that are technically “better” than EF … and I don’t have to know what you are doing to say that. Because it doesn’t matter.

    You are making a business decision and you want to weigh heavily the benefits to your organization of going with the industry leader, with a product whose mind share and market share will most certainly be preeminent one year from now. Think about that. Think about future development on your application and what you will have to do to find and develop the expertise necessary to maintain and grow your application.

    This is almost a good enough reason to migrate today. For some folks, especially those who are about to substantially enhance an existing application - I mean a serious, long term investment in extending that application - the business reason is more than good enough.

    Maybe I’d feel differently if some alternative were dramatically better than EF or if EF were going to let you down. But no alternative is that much better and EF won’t betray you. EF is good enough today and with Microsoft’s unmatched resources and commitment, EF will only get better.

Should you move to IdeaBlade’s “DevForce EF”?

Heads up! This is the shameless commerce section.

I said “it depends” and it does; it depends upon your answer to the previous question.

If, like most of our customers, you are a happy user of an existing ORM solution, this is probably not the day to switch to Entity Framework or DevForce EF.

If you are starting a new project but Microsoft’s EF immaturity gives you pause … listen to your gut. I’d be happy to suggest techniques that position you for an EF future while writing with our “Classic” product today.

On the other hand, if you think Entity Framework is right for your next project, you owe it to yourself to consider “DevForce EF”.

This essay isn’t really about our product; I recommend you go to our website to learn more. So let me give you the story in a nutshell:

DevForce EF is thoroughly grounded in Microsoft’s Entity Framework. We share exactly the same entity data model file and we rely on EF to perform all persistence operations. You don’t surrender an iota of EF capability when you adopt DevForce.

But DevForce EF will save you months of application development time and help you build a far better application than if you proceeded with EF alone because DevForce EF

· extends the Entity Framework with capabilities you need,

· compensates for many of EF’s deficiencies today,

· makes EF easier to learn and use correctly.

And if you are considering a future with Silverlight, know this: DevForce EF is the only way to take full advantage of Entity Framework on the Silverlight client. Yes, that’s an extraordinary claim; but it ain’t bragging if it’s true. Make me prove it to you.

Monday, July 28, 2008

Vista Update Whacked My Video Driver (Not)

Lost the weekend thinking I'd blown up my XPS video chip.

Update: It was the nVidia chip after all! Everything I said below ... all the wildly varying advice on the web and from our in-house tech ... was for naught. In fact (as some reported) the nVidia chip slowly fried into digital mush.

Lucky for me, Dell replaced the motherboard (at last that 3-year service agreement pays off!) and I haven't seen any problems since. A BIOS update that turns the fan on at lower temperatures was also part of the deal. My XPS isn't as quiet any more but she's running!

I leave the completely false information below as a sign of the abounding misinformation. Do not be fooled and waste hours thinking you can fix this yourself or blame Microsoft.

--- The following is false ---

Turns out Windows Update automatically replaced my Vista nVidia driver with something Microsoft recommended. Their replacement driver behaved in ways that looked like hardware failure to me. Rolling back the driver didn't help which made me "sure of it".

I'm way out of my league with this stuff. Thank goodness we have someone on staff who could figure it out. The clue that it wasn't hardware is that everything worked fine (if ugly) running with just VGA drivers. Like I know how to do that? No way. But my guru did. Found the correct drivers on Dell's site and reinstalled them.

I don't know how the "normal" world copes with this stuff. One minute everything is fine. Next minute (after a Windows Update that you didn't even remember seeing), your laptop is failing with bizarre video artifacts, driver reboots, freezes ... and you're weeping like a baby.

Lessons Learned:

  • Don't let Windows update your machine without your explicit agreement
  • Don't download the optional stuff
  • Examine the proposed updates carefully and choose the ones you "know" to be safe. My guru advises that I stick to the security updates only.

Good luck!

Monday, July 14, 2008

DDD is for CRUD Apps

I wrote this piece for the Entity Framework Wiki, where a number of folks with contrasting degrees of affection for and animosity toward Entity Framework do battle. Maybe some of you will see it here first. :-)

When confronted with a brown-field application development scenario I often find myself in the camp that finds value in leveraging an existing database schema during the development of my domain model. By "leveraging" I mean that I construct some part of my domain model with the aid of an ORM tool that takes the database schema as one of its inputs and produces a conceptual model as one of its outputs. I am pleased to use that conceptual model to generate a portion of my domain model.

For some this approach is anathema. One expression of revulsion is to dismiss the tool and approach as suitable only for CRUD applications. Apparently a CRUD app is pretty low on the sophistication scale. I am often told that if this is "all" I'm going to do (read: all that I am capable of), I should stick to one of the less intellectually challenging design patterns, maybe ACTIVE RECORD, and leave OBJECT MAPPER and DOMAIN MODEL to the big thinkers.

It follows also, they would suggest, that this so-called "Data First" approach betrays an almost constitutional ignorance of modern design patterns and practices and is utterly incompatible with Domain Driven Design (DDD). I think this leap to judgment, while understandable, is unwarranted and prematurely terminates what could be productive discussions.

In this page I will hold that

  • DDD is almost invariably demonstrated with a CRUD application
  • Data-firsters build behavior rich Domain Models too
  • Ease of code generation undermines thought
  • The real question is "do you engage your design faculties or not?"
  • Data-first can assist DDD
  • DDD and "Data First" are compatible when the "Data Firster" uses his or her head

DDD for CRUD

Jimmy Nilsson shows us DDD in action in his highly regarded book, Applying Domain-Driven Design and Patterns [ADDP], by stepping through the design and development process of an application with the following requirements:

  1. List customers by applying a flexible and complex filter
  2. List the orders when looking at a specific customer
  3. An order can have many different lines
  4. Concurrency conflict detection is important
  5. A customer may not owe us more than a certain amount of money
  6. An order may not have a total value greater than a predetermined system-wide order limit
  7. Each order and customer should have a unique and user-friendly number
  8. A new customer is acceptable only after passing a credit check by an independent institution
  9. An order must have a customer; an order line must have an order
  10. Saving an order and its lines should be atomic
  11. Orders have an acceptance status that is changed by the user

This is a classic CRUD application. It is, in words typically uttered with total derision, "just a CRUD app."

It is also the richest investigation of DDD development that I have found. Jimmy devotes nearly half of the 500 pages of his book to this example. I have yet to see an example of DDD in practice that is as thorough as this one.

This is the only example in Jimmy's book. He never even hints that this CRUD app is inadequate to the task of demonstrating DDD. It is all he requires to show DDD's superiority relative to the way he used to build applications. Yes, the app is a toy and half-baked - as it must be for purposes of exposition. But it does what he wants it to do pedagogically.

I'm not trying to make Jimmy a saint or sinner. This post is not about Jimmy.

I do intend to make it hard for someone to say, "well, that's just a CRUD example and DDD is for more sophisticated applications." I figure if Jimmy wrote the app with DDD, it's a DDD app. If Jimmy thought we should use ACTIVE RECORD instead of Domain Model, he would say so.

I am also trying to discover if there is something distinctly different about applications that people are building with DDD. On the strength of this example, we all seem to be building the same kind of apps.

Are so-called "object first" applications somehow beyond the reach of "data first" techniques? Is there something Jimmy is doing here that we aren't doing routinely ourselves? Not that I can tell.

We Build Behavior Rich Domain Models Too

A subset of Jimmy's application's requirements is manifestly "behavioral":

  • Concurrency conflict detection is important
  • A customer may not owe us more than a certain amount of money
  • An order may not have a total value greater than a predetermined system-wide order limit
  • Each order and customer should have a unique and user-friendly number
  • A new customer is acceptable only after passing a credit check by an independent institution
  • Saving an order and its lines should be atomic
  • Orders have an acceptance status that is changed by the user

Guess what? You will find implementations for such requirements in my "data first" applications. These kinds of requirements are routine for us as well.
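
A rule such as "an order may not have a total value greater than a predetermined system-wide order limit" lands in the domain model the same way no matter which camp built it. A minimal sketch of how a data-firster might implement it (hypothetical names and limit; this is not code from Jimmy's book, and I'm writing it Java-style rather than in C#):

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Sketch of requirement #6: the order enforces its own total-value limit.
class Order {
    private static final BigDecimal ORDER_LIMIT = new BigDecimal("10000.00"); // invented system-wide limit

    private final List<BigDecimal> lineTotals = new ArrayList<>();

    BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (BigDecimal t : lineTotals) sum = sum.add(t);
        return sum;
    }

    // The invariant lives with the entity: a line that would push the
    // total over the limit is rejected before it ever reaches the database.
    void addLine(BigDecimal lineTotal) {
        if (total().add(lineTotal).compareTo(ORDER_LIMIT) > 0) {
            throw new IllegalStateException("order total would exceed the system-wide limit");
        }
        lineTotals.add(lineTotal);
    }
}
```

Nothing about how the Order class was scaffolded changes where this rule belongs.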

Look closer at my own applications and you will see Aggregates, Value Objects, Services, Repositories, Unit-of-Work, inheritance, etc. You'll see Dependency Injection and MVC/MVP too. I can't say that I have always been as clear and decisive with the purely DDD structures; I'm new to the DDD formalisms although one of the reasons it resonates so strongly with me is that (as with the GoF patterns) there is a shock of recognition when you first see them - the realization that DDD captures what you should have been doing - and were sort of doing - all along.

It certainly seems to me that I approach these needs with the same attitude and catalog of "solutions" as any "object-first" architect. I just happened to get there with my "data-first" tools. And I didn't have to stand on my head or otherwise fight my own tools or predilections to do so.

Let's take "behavior" for example. Someone is always trying to tell me that "data firsters" don't understand the difference between data objects and objects with behavior.

There is some strange misconception, widely repeated, that "data firster" business objects lack behavior; that they are just stupid property bags straight from the ORM code generator. Where does this notion come from? Every generated domain object class file is paired with a custom class file. That's where we put our behaviors. We are going to enrich our domain model in order to satisfy the expectations coming from the business and we're going to do it in that custom class. Go ahead and decry the noise and alleged confusion that I must be experiencing because I have two class files to do the work of your single file (psst - I hardly notice). But why insist that I'm not writing behavior at all?
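
The generated-plus-custom split described above is, in C#, a pair of partial class files. A rough Java-flavored analogue (hypothetical names; Java has no partial classes, so a generated base class stands in for the generated half):

```java
// "GeneratedPersonBase" plays the role of the ORM-emitted file:
// regenerated whenever the model changes, never edited by hand.
class GeneratedPersonBase {
    protected String firstName;
    protected String lastName;

    public String getFirstName() { return firstName; }
    public String getLastName()  { return lastName; }
    public void setFirstName(String value) { firstName = value; }
    public void setLastName(String value)  { lastName = value; }
}

// "Person" plays the role of the hand-written custom class file:
// this is where the behavior goes, and it survives regeneration untouched.
class Person extends GeneratedPersonBase {
    public String fullName() {
        return firstName + " " + lastName;
    }

    public boolean hasSameSurnameAs(Person other) {
        return lastName != null && lastName.equalsIgnoreCase(other.getLastName());
    }
}
```

The two-file noise is real, but so is the behavior in the second file.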

When I look at Jimmy's classes - the ones he actually wrote - I don't see any important differences between what his code does and what my code does. You will find no less behavior in one of my business object classes than in one of Jimmy's classes.

Yes there are differences - we won't write the same code. But once you set aside the code gen and Persistence Awareness artifacts, what is left of genuine substance to fight about?

I must be quick to acknowledge that there are characteristic misbehaviors with the "data-first", code-generation approach that always show up in the code. Every technique is prone to its "signature" mistakes - the kinds of mistakes that are so easy to make that they always leave evidence behind. We'll talk about some of these shortly. But they are minor sins, easily expiated.

My point, in this section, is that, from the perspective of a consumer of the Domain Model, there is no fundamental reason for Data-firsters to produce a domain model that is appreciably different from the one produced by an Object-firster.

There are some differences in how we got to a given place. There are some differences in how we pursue development in subsequent iterations. But if we both consistently produce a domain model that delivers the same capabilities, with similar APIs, in a similar amount of time, with comparable quality, ... iteration after iteration ... then I cannot see why one camp must lord it over the other.

The burden of proof falls on anyone who would claim that we cannot achieve comparable outcomes.

What About Those Getters and Setters?

There is a faction within the DDD family that is at war with getters and setters in Domain Model classes. I think they make important points. I also think they overstate the benefits and understate the challenges of doing without getters and setters. Challenges begin in earnest when you have to present domain objects in the UI. If we are to move state between a domain object and widgets on the screen, with today's client technologies one is driven to writing intermediaries (e.g., DTOs) that actually do have properties. For a glimpse of the tedious care and feeding such intermediaries require, see Mats Helander's article in Jimmy's book [p. 431].

Aside: this has nothing to do with separated presentation per se. If domain model objects are property-less, the "Model" - in MVC or MVP or embedded in a Presentation Model - cannot consist of domain model objects; there must be intermediaries. Perhaps this is a virtue but it is won at hard cost.

If you believe getters and setters are bad, you will really hate the "data-first" ORM approach which emits properties in great abundance. Property generation is the strength of the "data-first" style. It is its strength ... and its weakness.

Before I pursue that thought, I must observe that the "no properties" faction appears to be a minority within DDD. Maybe Jimmy's example application is "old-school" DDD - it's so "2006" - but his domain model classes have plenty of properties and his associates, who build the UI on his domain models, are not shy about working directly with those domain objects and their properties. So I don't think DDD'ers are united in hating properties.

But the property resisters have a great point. DDD stresses the importance of designing and implementing in the language - the ubiquitous language (UL) - of the business domain. If "Get LastName" and "Change LastName" are sensible operations in the UL, the properties belong. But if "ShoeSize" is not a meaningful fact about a person in the UL, we should not have a ShoeSize property.

Data-firsters have the bad habit of acting as if every column in every table is directly expressible in the UL as a "get" and "set" operation. In other words, we have a tendency to expose every column as a property. It's just so easy to do.

The same is true for bi-directional relationships. Order.Customer? Customer.Orders? The ORM can generate them both; let 'er rip.

Again, it's so easy ... that we let the ORM generate these properties ... and now our domain model has unwanted behavior that clouds our vision, adds a point of failure, and consumes testing resources.

There is something insidious in this too. Our ubiquitous language may support setting the Last Name. But not necessarily at will. Not necessarily in isolation from other domain model state or rules. By blithely spitting out a LastName property, we gloss over the careful analysis that should have gone into the decision to expose a mutator of this value.

Do You Design Or Not

Let me stipulate: blind "data-first" thinking combined with rapid code generation is a formula for poor design.

I know the perverse delight in spewing a "model" of 100 table-backed-classes in fifteen minutes. I suppose this is like firing off a few thousand rounds from an assault rifle. Kind of cool on the range; not very cool if I do the same thing at ... I don't know, let's pick on the poor post office again.

Is it the tool's fault? Or is it my fault?

If I use the ORM this way, shame on me. I didn't have to.

I may have to cope with the fact that the legacy ShoeSize column is in my Person table ... and I am not allowed to get rid of it. But I don't have to expose ShoeSize publicly as a property. I don't have to expose both sides of a relation. And, if there is special business logic governing how to change LastName, I can bury the property and write a "message" method to mutate it properly.
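
Burying the generated property behind a "message" method might look like this - a hypothetical sketch (invented rule and names, Java-style), not generated output:

```java
// Sketch: the Person table has LastName and the legacy ShoeSize column.
// Both are mapped, but neither is exposed as a naked setter.
class Person {
    private String lastName;   // persisted; no public setter
    private int shoeSize;      // persisted for legacy reasons; never exposed as a property
    private boolean married;

    public Person(String lastName) { this.lastName = lastName; }

    public String getLastName() { return lastName; }
    public boolean isMarried()  { return married; }

    // The "message" method: renaming happens only through an operation
    // named in the ubiquitous language, which enforces the (invented) rule.
    public void takeMarriedName(String newLastName) {
        if (newLastName == null || newLastName.trim().isEmpty()) {
            throw new IllegalArgumentException("a name is required");
        }
        married = true;
        lastName = newLastName;
    }
}
```

The ORM mapped the column; my judgment decided what the class exposes.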

In short, if I just fire up the ORM and pull the trigger, does my tool make me an idiot? Or am I an idiot to begin with?

When Data-First Improves Design

A significant portion of Evans's seminal DDD book concerns how you determine what the domain model should be; how you align it with the business. The message, repeated in many forms, is "this is very hard."

If you've been a consultant, you know that learning your customer's requirements is wickedly difficult both because she isn't sure what she wants and because you don't understand her business well enough to understand her even if she explained it well.

You have to play anthropologist. An anthropologist listens to stories, yes, but he also looks at actual behavior and, in particular, at the artifacts of the culture he studies.

I suggest that an existing database is one of your most important sources of insight into the culture. That database didn't happen by accident. ShoeSize is in there because someone went to the trouble of putting it there. Just because your client didn't mention ShoeSize once during your interviews doesn't mean you can ignore it. Even if she says "we never use that", the experienced consultant retains the nagging suspicion that something important is missing ... and won't rest until the mystery of ShoeSize is resolved.

We used to say, "show me your data and I'll tell you what your application does." Flippant perhaps, but not altogether wrong.

I almost forgot this maxim because it seemed so obvious. Yet I can't remember it being mentioned in a single DDD book or article. I think you're missing an important design opportunity when you neglect to start from the existing data schema.

Conclusion: DDD is for Data-Firsters too

DDD is first and foremost about studiously matching the domain model to its business purpose. That's hard work. Data-Firsters cannot escape that work - even if running the ORM on auto-pilot seems at first to deliver good results. The schema is only one of the inputs to the ORM; our judgement - what tables and columns to model, how to expose columns or relationships as properties, what data should appear as Value Objects, etc. - is the more important input.

Of course domain model objects have behavior. Data-firsters are not satisfied with property-bag classes. They add behavior as they go ... just as Object-firsters do.

DDD describes structures and design patterns that favor an evolutionary domain model that serves the business. We sometime data-firsters build those same structures and follow those same patterns.

Unit testing is critical to the iterative process promoted by DDD. Our persistence infrastructures must facilitate unit testing. In particular, we must be able to test the model without connecting to a database. Some persistence infrastructures just missed this boat. Big mistake. Unfortunately, many of these infrastructures - I'm thinking of Entity Framework in particular - are associated with tools favored by data-firsters.

I'm going to claim that this is a spurious correlation. There is nothing about the data-first approach that requires an infrastructure that makes testing hard. A persistence-aware infrastructure does not have to make unit testing hard. That many do is a correctable error.
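
The testable seam is not hard to picture. A hypothetical sketch (invented interface, Java-style): the domain code depends on a repository abstraction, and an in-memory stand-in lets the model be exercised with no database in sight.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch: domain and test code depend on this interface, not on the ORM.
interface CustomerRepository {
    void add(int id, String name);
    Optional<String> findName(int id);
}

// Test double: no database, no connection string; runs anywhere, instantly.
// A production implementation backed by the ORM would sit behind the
// same interface.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<Integer, String> store = new HashMap<>();

    public void add(int id, String name) { store.put(id, name); }

    public Optional<String> findName(int id) {
        return Optional.ofNullable(store.get(id));
    }
}
```
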

DDD emphasizes the importance of reducing friction in facilitating continual redesign and re-implementation. Friction discourages us from seeing and making the changes that improve the model. Our code-generating ORM tools undoubtedly introduce some friction into the process. The need to regenerate the domain model simply to change a persisted property's name or accessibility is among the more glaring examples of friction.

But I think it's also time for the object-firsters to come clean about the friction they introduce. The friction is not always in the domain model; it pops up elsewhere in the system because of what is not in the domain model classes. Jimmy's book is pretty fair in its recital of the ugliness in the "infrastructure ignorant" discipline. The "no properties" school introduces another, huge source of friction to the process - whatever the compensating benefits.

But I digress. The point I want to make is that, if data-firsters tame their unbridled exuberance for their tools and use them wisely, they too can practice DDD.

Then we can all build CRUD apps ... of any sophistication.

Friday, July 11, 2008

I talk with Jeremy Miller about ORM

Mike Moore of Alt.NET Podcasts arranged and captured a conversation between Jeremy Miller and me about Object Relational Mapping (ORM) and persistence frameworks. You'll find it here.

Jeremy is one smart guy and he has the industry experience that gives depth to his intelligence. He cares passionately about the craft. He has a fine sense for the balance between the principle and the practical. He's a gifted speaker and writer and, fortunately for us, he does a lot of both.

He also signed the Entity Framework "Vote of No Confidence", a document I began to criticize in an earlier post. We have contrasting views about what makes a good ORM and we explore some of the particulars in this conversation.

I think we did a number of things well. It was not a "smack down" or any of that kind of nonsense. We find much common ground. We jointly articulate the benefits of ORM as we see them.

And we retain that camaraderie even as we explore our differences. We go beyond the doctrinaire recital of our respective catechisms to discover what we each think matters most. As I review the tape, it seems to me that we share the same fundamental concerns; we just react differently to the pain and trouble you'll encounter when you follow one route or the other.

I don't believe either of us was persuaded to change sides. I don't believe there was a "winner" nor was that the intent.

I do believe we each found merit in the other's position and remained open to challenging our own most fondly held views.

I hope you'll give it a listen.

Kudos to Mike for producing a lively and balanced show.

Wednesday, July 2, 2008

Composite Application Guidance (CAG) for WPF is here!

Patterns and Practices just released the Composite Application Guidance (CAG) for WPF 2008.

You might know it by its codename, "Prism" ... a name I loved and will miss. What matters is that it exists. There is now a set of libraries and guidance for building WPF applications in a compositional manner.

Why does this matter? Because most of us have learned the hard way that we should "compose" our applications from parts and pieces. The question is not should we do so, but how to do so.

How do we build those parts independently and make them come together in what the user perceives as a single application? It's easy if the parts have nothing to do with each other. It's much harder if the parts must interact. We're going to need some architecture, some patterns, and some glue code to achieve that blissful state in which truly independent components, with no references to each other, somehow manage to collaborate seamlessly.

Patterns and Practices has been down this road before with the Composite UI Application Block (CAB) and its add-ons (Smart Client Software Factory, Web Client Software Factory, etc.). CAB was PnP's pioneering effort in this space. It achieved considerable success; there were many substantial applications built with CAB and my company wrote its own application integration layer on top of it called Cabana.

It also acquired substantial notoriety. Everyone agrees it had too many moving pieces that were hopelessly entangled and that it was devilishly difficult to learn.

So when the team took up the challenge of providing compositional guidance for WPF applications, they seized the opportunity to re-examine the CAB experience, harvest what worked, and improve upon its defects.

They reached out to the developer community, drawing upon those with CAB experience and upon those who had experience with rival approaches. And they listened. And they spiked what they heard and listened some more. They turned the spikes into production code and listened again. Iteration after iteration.

The Results

They have delivered an elegant product. On time. Far ahead of the December '08 date that I dreamed of in my unbridled optimism.

I think it hits the right notes. CAG is small enough to understand and rich enough to support serious application scenarios. The parts work well together. But you don't have to use them all, and you can easily substitute your own service for any of the shipped services.

The team likes to talk about how you can replace the Dependency Injection mechanism with your favorite: Unity, Castle, StructureMap, etc. I think it is as important that you can replace the EventAggregator or RegionManager or any other ingredient that didn't suit your needs. Not that I'm in a hurry to do so. I just want the comfort of knowing that I could.

This time PnP devoted serious effort and resources to guidance. You see it most prominently in the "Stock Trader Reference Implementation" (RI) that demonstrates (a) what we mean by a "compositional UI" and (b) how all of CAG collectively supports such an application. It is simple without being simple-minded. It's a toy for sure but it serves its pedagogical purpose well.

  Note: Please don't build on it! It is not an application framework. Its practices are not always the "best" or even necessarily good. It is a resource for you; a place to discover how you might solve a problem with two or more CAG components in combination.

I don't know for sure but it sure strikes me that CAB's RI, the "Bank Teller", was an afterthought. Not so with CAG's "Stock Trader". It was designed first, before CAG work began in earnest. It co-evolved with CAG. It is and was intended to be CAG's "acceptance test".

There are eight "Quick Starts" that explore CAG components individually and (mostly) in isolation. As with the RI, you wouldn't slavishly adopt Quick Start code. Indeed, you should probably fight with this code, hurl it against the wall, and shake your fist ... and after your satisfying tantrum subsides, you will realize that you understand how it works. Somewhere, someone on the PnP team will be smiling.

There are over 300 pages of CAG documentation. This is not of the "insert tab 'A' into slot 'B'" variety. The team tried to explain "why" as well as "how" and it shows.

Is CAG right for you?

How would I know? I haven't built anything with it yet. And I don't know anything about you. That won't stop me from telling you what to do.

I would use CAG if I was starting a new WPF project. I'm convinced it will pay dividends immediately. WPF applications, more than Windows Forms applications, resist compositional strategies. I'd want to work on manageability early. WPF itself is still a struggle to learn and when things go wrong, they go really wrong. I want confidence that if this little corner of the application catches fire, the rest of my house is well insulated and will keep standing.

There is no CAG for Windows Forms or Web Client. If I had an existing CAB investment, I'd stay with it. If you are starting a new Windows Forms project and have some time to kill, you might glom on to one of the efforts to port CAG to WinForms ... there is sure to be someone attempting it soon. I say this as a CAB guy too. I would be sorely tempted to develop in CAG rather than CAB if I were starting a new WinForms app and had enough leeway to make some beginner mistakes. It's simply that much better than CAB, even in the medium term.

If you have a CAB app today, do not despair. Don't be distracted. Don't drop everything and re-implement in CAG. CAB is working for you. Sit this one out until there is more real world CAG experience.

If you're building your application in Silverlight ... you're getting ahead of yourself, aren't you? Expect to see a CAG for Silverlight. Someone is bound to port it. I just don't know who.

The Best WPF Business App You'll See This Year

I just watched a DotNetRocksTV video of the application that Billy Hollis demo’d at Tech Ed 2008.

Far and away the most effective demonstration of WPF for a Line-of-Business (LOB) app that I’ve seen. You will never mistake it for a Windows Forms knock-off. I look at this app and I say:

“THIS is how WPF can be a force for greater user productivity.”

Chock full of “aha” moments. Check out the tasteful use of color, typography, and animation. You'll be amazed by the humble ListControl. The only thing I feel I'm missing is sound.

He isn’t using anything complex or incomprehensible. Every technique rests on foundations you will find in any of the books. The difference is that Billy has put them to good use. He and his team are laying down patterns that should encourage and inspire us.

It’s a short, feature-packed video and worth the 20 minutes of your time it will take to watch (assuming you skip the commercials).

This is the future of user experience. Ok ... I'm gushing ... but it's still great.

Tuesday, July 1, 2008

Big Silverlight Troubles! No synchronous server calls

We have been tearing our hair out around here because we can’t find a way in Silverlight 2.0 Beta to write a method that blocks the UI thread while it fetches data.

In case you are not aware of this problem, permit me to illustrate.

Suppose in the bowels of your business logic you have a validation rule that says: “total of order not to exceed the customer’s maximum cost for a single order”.

You’re using a domain model with objects that have dotted navigation. You write your rule along the lines of:

… TotalCost(myOrder.Details) < myOrder.Customer.SingleOrderLimit …

Unfortunately, both myOrder.Details and myOrder.Customer.SingleOrderLimit could involve lazy loads of the Details and the Customer.

The UI that enabled the user to create a new detail item for the order is not aware of this rule and didn’t pre-fetch Details and/or SingleOrderLimit.

No problem in regular .NET. You just wait while you lazy load the SingleOrderLimit from the database.

Big problem in Silverlight.

All server calls have to be asynchronous. Your fetch of the SingleOrderLimit is going to have to return immediately; the value will be accessible later during the asynch callback.

Ok, you try to rewrite SingleOrderLimit to stall the UI thread until you get the callback. But there’s no way to do it. If you sleep the thread, you never hear the callback. You make the server call on a worker thread and sleep the UI thread; you never hear the callback. Sit and spin on the UI thread? You never hear the callback. What’s with that?

I understand that I can write synchronous code on the background thread. The problem is that I can't make the UI thread wait for the result.

So you try to live within this crazy world. You decide that you will somehow detect this particular problem, stash the validation object somewhere, and postpone its execution until all the dependent server trips have completed. In my example, the validator can't run until Details and Customer have been retrieved; how I knew to stack these dependencies is anyone's guess. Good thing they're mutually independent requests, because if one query depended on the outcome of another, the logic would get really twisted.

Meanwhile your original validation call - the one that got us into this mess - must return with some indication like “I don’t know”.

It can't report “everything is ok” or “it’s invalid”. So you have to be sure that the ancestor caller somewhere way up the stack postpones whatever it was doing (like trying to save the order) until you have a definitive answer. And, of course, when it becomes possible to render an answer, you have to remember to renew your originating operation (e.g., the save). You weren't just going to tell the user "Sorry, I can't save yet, not enough information ... try again soon" were you?
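The bookkeeping this forces on you can be sketched in code. This is a minimal, hypothetical illustration; every name in it is mine, invented for the example, not from Silverlight or from any shipped product. Fire both fetches, count the completions, and only when both callbacks have arrived can the validator render its verdict and the postponed save resume:

```csharp
using System;

// Hypothetical sketch: coordinate two independent async fetches,
// then run the validation rule and resume (or reject) the save.
public class DeferredOrderValidation
{
    private int _pendingFetches = 2;        // Details + Customer
    private decimal _totalCost;
    private decimal _singleOrderLimit;
    private readonly Action _resumeSave;
    private readonly Action<string> _reportInvalid;

    public DeferredOrderValidation(Action resumeSave, Action<string> reportInvalid)
    {
        _resumeSave = resumeSave;
        _reportInvalid = reportInvalid;
    }

    // Each fetch delegate must return immediately and deliver its value
    // later through the callback we hand it - the async-only contract.
    public void Begin(Action<Action<decimal>> fetchTotalCostAsync,
                      Action<Action<decimal>> fetchLimitAsync)
    {
        fetchTotalCostAsync(cost => { _totalCost = cost; OnFetchCompleted(); });
        fetchLimitAsync(limit => { _singleOrderLimit = limit; OnFetchCompleted(); });
    }

    private void OnFetchCompleted()
    {
        if (--_pendingFetches > 0) return;  // still waiting on the other trip

        // Only now can the rule report something other than "I don't know".
        if (_totalCost < _singleOrderLimit)
            _resumeSave();
        else
            _reportInvalid("Order total exceeds the customer's single-order limit.");
    }
}
```

And remember, this handles only two mutually independent fetches. If one query depended on the outcome of the other, you'd be nesting another layer of callbacks inside the first.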

Let's say you got that all figured out. But watch out. Maybe your WPF-ish UI is binding to objects in your unit-of-work container. Over on your worker thread you're doing some synchronous magic that fetches Customer and Detail objects. You'd better not put them into that container from the worker thread ... because it's not thread-safe, and if WPF is looking for 'em there's going to be trouble! You'd better be ready to fetch Customer and Detail into a separate container and then marshal them back across the thread boundary in the callback. Oh joy.
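The thread gymnastics look something like this. Dispatcher.BeginInvoke is the real WPF/Silverlight way to get back onto the UI thread; everything else here (Order, Customer, EntityContainer, FetchCustomerFor, Merge) is an invented name standing in for your domain and unit-of-work types, so this won't compile outside such a context:

```csharp
using System;
using System.Threading;
using System.Windows.Threading;

public partial class OrderScreen
{
    private EntityContainer _uiContainer;   // bound to the UI; UI thread only

    private void FetchCustomerInBackground(Order order, Dispatcher uiDispatcher)
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            // Safe: this scratch container is touched only by the worker
            // thread, so a blocking fetch is fine here.
            var scratch = new EntityContainer();
            Customer customer = scratch.FetchCustomerFor(order);

            // NOT safe to push 'customer' into _uiContainer from this thread;
            // marshal the merge back onto the UI thread instead.
            uiDispatcher.BeginInvoke(new Action(delegate
            {
                _uiContainer.Merge(customer);
            }));
        });
    }
}
```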

You see where I’m going with this. Application programming just became immensely difficult. You can’t write a real program if every data fetch is async. Your UI controller shouldn't have to anticipate every bit of data you might possibly need upfront either. This is a disaster.

Do you know what to do about this? Do you know anyone who does? We’re beating every bush and so far, nada.

Tuesday, June 24, 2008

Rejoinder #1 to "Vote of No Confidence in Entity Framework"

There's a "vote of no confidence" in Entity Framework petition that is circulating and, as I write this, it has drawn the signatures of some 156 people. Many of them are well known, deservedly respected, and some of them I consider friends.

But this petition is wildly misleading. The dire warnings of "potential risks ... to Microsoft customer projects" are blown way out of proportion. It is one thing to be critical of EF v.1 - to draw attention to its shortcomings as we all have done - and quite another to consign it to damnation.

Tim Mallalieu, the new PM for Entity Framework, has written a careful response in the measured tones that befit a Microsoft representative. I, on the other hand, am not bound by such restraints and am free to indulge such rhetorical flourishes as suit my mood and temperament.

At the moment, I'm hopping mad. Every two-bit architect with visions of grandeur is going to send this petition to his boss as proof that Entity Framework will doom the project.

Hey boss, all these MVPs are against Entity Framework. Let me write our application with Domain Driven Design. I don't know a thing about it but how hard can it be? Of course I'll have to learn nHibernate first. Not sure how I'm going to do that. I can't seem to find a book on it. The documentation looks ok though. Well ... yeah .. I haven't found any examples or guidance on how to build a real application with it. But I'm an architect ... I'll figure it out.

Six months later our budding genius proudly shows off his ugly baby: some undocumented, impenetrable morass that only he understands and that works "most of the time", just not while you're watching. The application itself?  "Ah ... it's coming ... honest."

We've seen this over and over again and it is no accident. Some of my company's best customers found us after they cratered their first attempts with nHibernate.

I am not saying this is nHibernate's fault. It's great stuff in skilled and experienced hands. I am saying that there are risks with any choice of a profound technology and, before I lecture you on the "potential future risk to your projects," I would do well to find out about your business - and your developers - first.

Now I think there is a ton of merit to many of the criticisms of EF in that petition and much more to learn independently from the petition signatories.

But these folks are not coming clean about your prospects for building with some unnamed "something else." And, I submit, the chances of your success with EF are vastly better than with many of the alternatives that you are likely to hear about or pursue (including continuing with a traditional ADO.NET approach if that's what you're doing today).

Yes, I'm making a generalization without knowing anything at all about you or your business. And if you give me another minute, I'll tell you why you'll want my company's value-add product for Entity Framework. But before I get ahead of myself, let me try these thoughts on you:

  • EF is going to mop the floor with all of the niche players. It will become the "standard" in this space. You want to throw yourself and your employer against that buzz saw? You better be able to show that something horrible is going to happen if you use EF instead. It is not enough to argue that technology 'X' is better. You have to show that the long term ROI of building with your pet technology is vastly superior to building with EF. You'll have to show that the EF defects identified in that petition are going to spell disaster for your project. Frankly, I don't think you can make either case. I'm not challenging your intelligence or persuasiveness. I'm saying the case is not there.
  • There are thousands of successful applications built with frameworks like Entity Framework. I'm defining "success" here in soft business terms, as in "the business likes it and thinks you are doing a great job". Don't stack the deck against me with FitNesse tests (which are great, btw, but not implemented in most shops and irrelevant to the argument). Ultimately, business satisfaction is what we're striving for.
  • We have zero empirical evidence that actually existing applications built with pure Persistence Ignorance frameworks are intrinsically more successful than applications built with Persistence Aware frameworks.
  • On the other hand, there is good evidence that Persistence Aware frameworks are (a) easier to use and (b) result in earlier delivery of usable applications. You can start coding against an EF entity model almost out of the box. I'm not saying you should ... but you can. The fair proponents of Persistence Ignorance always acknowledge this when they talk about the "trade-offs" of PI.
  • EF isn't even released and there is already more of an ecosystem surrounding EF than around all of its competitors combined. Count the forum entries, magazine articles, and blog posts. Count the books on Amazon (six so far; ok, most not released yet ... but try to find a single published book on nHibernate).
  • Expertise in EF will emerge quickly and spread widely, as it always does with MS technologies. You will struggle today to find affordable developers and consultants who know the rival niche technology of your choice; that situation is unlikely to improve as EF gains traction.
  • Some people will be good at developing with EF; some will be awful. But this is a numbers game and you can be an atrocious nHibernate developer too. The difference is that you will be able to find someone who can tell you that you have an EF fraud in your organization; nHibernate charlatans can hide like roaches.

... and from the "damning with faint praise" department ...

  • I eagerly accept that the evidence is overwhelmingly in favor of tested applications. EF is not unit-test friendly (to put it mildly) and that's bad. They could have ... and should have ... addressed this in version one. But (a) you don't have to be PI to be unit testable and (b) you can build EF Domain Models that are unit testable (no visits to a database), as I will explain in a later post.
  • You can build your app with a DDD perspective ... you'll just have to work harder than you ought to.

Sure, I'm a proponent with an investment in EF (our new product is built on it). But the signatories to this petition are stained by compromises of their own. Many of them have deep emotional and even financial interests in rival "platforms" such as nHibernate (you have a financial interest if you've been paid to build an application with nHibernate).

Our little corner of the world would be better served if the petitioners rallied to (a) help customers make the best of EF version one and (b) get busy helping Microsoft improve it in version two.

---

Watch for a future blog when I examine the petitioners' "unresolved issues", point by point. They have merit. But they are not decisive objections and you can mitigate the problems, often with little pain.

Somehow I find I cannot end this post without dipping my toe into these waters. Let me start with the big one, DDD.

The petition avers that Entity Framework inhibits sound application architecture through its data-centric approach to mapping and code generation. We should be using Domain Driven Design and you can't do that with EF, so they say.

Eric Evans wrote the book on Domain Driven Design. It's a fantastic book; get it, read it, adopt it. Whether you're dealing with an existing database or starting afresh, you can benefit from Evans's acute observations and analysis. But DDD is not a panacea and it is not trivial to apply.

Unlike his acolytes, Evans is the first to say that Domain Driven Design is hard. His is a book on perspective not on technique. Over and over he cautions that there is no machine for cranking out DDD applications. The practice of DDD is a relentless reexamination of the business problem and your implementation of it, punctuated by flashes of insight. What this book does is (a) prepare you for insights and (b) provide intellectual tools for unearthing and recognizing those insights.

Of course, what does the industry do? It turns DDD into a cookbook! You'll have little trouble finding someone with an MVP tacked to the end of his name, bloviating on DDD and what he knows that you don't.

Evans also makes the point ... several times ... that you don't have to adopt a particular data access infrastructure to build a Domain Model. Sometimes it makes sense, he says, to go with the infrastructure you have and bend it to your needs. Ideal? No. But serviceable unless the infrastructure is just crap.

For sure you do not enter DDD heaven simply by writing your domain objects before you write your database schema. Nor can any priest excommunicate you from the church of DDD because you map and generate Entities from your schema.

More soon ...

Update 6/25/2008 Added below:

I am looking forward to responding to the comments I've received so far, all of them remarkably well mannered. I'm going to get back to you all when I have a moment to breathe. Meanwhile, I just want to direct your attention to these fine folks:

Tim Mallalieu, PM for Entity Framework, wrote the official Microsoft response and is showing a lot of love to the Persistence Ignorance fans these days on his blog and on the EF Design blog.

Microsoft's Elisa Flasko offers her views on her blog.

Roger Jennings has been following EF for a long time. He's been bulldogging links to "the vote" - adding his own appraisals of each - on his blog. He can save you a ton of time if you're trying to stay on top of what's happening in EF world.

Here's a shout out to Julie Lerman, who's been in tireless pursuit of matters EF and who first clued me in to the vote in this post. She's a wonderful, level-headed resource on EF and a pleasure to read. Awful nice in person too.

Update 12/17/2008

I finally got around to addressing the five points of the "No Confidence Vote". I did so in a reply to Scott Bellware's comments on my post, Is Entity Framework For Real?

Tuesday, March 18, 2008

Design Guidelines for Extension Methods and LINQ'ers

I'm a huge fan of the .NET 3.5 language extensions. I want more: give me mix-ins, please!

Ingrate that I am, I think perhaps we should celebrate what we have before resuming our whining ways. And we should take stock too ... because new technology invites abuse.

"With great power comes great responsibility". My spidey sense is tingling already as villainous extension methods threaten the city. Wanna fight back?

Check out these new design guidelines, specifically for extension methods and LINQ implementations, from the good folks at Microsoft who brought us the invaluable Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries.

Don't have this book? Go get it and discover not just what is recommended but why. The guidelines are not perfect ... as evidenced by recent debates on the relative merits of interfaces and abstract classes ... but I love 'em nonetheless. 'Nuf said.
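To make the stakes concrete, here is a small illustration of my own - not lifted from the guidelines, though it reflects advice of the sort they give (for instance, avoiding extensions on System.Object). Where an extension method is aimed matters as much as what it does:

```csharp
using System;

public static class ObjectExtensions
{
    // Tempting but too broad: extending System.Object hangs this method
    // off every expression in the program and clutters IntelliSense.
    public static bool IsNull(this object candidate)
    {
        return candidate == null;
    }
}

public static class StringExtensions
{
    // Narrower and friendlier: extend the specific type you actually mean.
    public static bool IsBlank(this string text)
    {
        return string.IsNullOrEmpty(text) || text.Trim().Length == 0;
    }
}
```

With the second class in scope, `"   ".IsBlank()` reads naturally and returns true; the first pollutes every type in sight to say the same thing.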

Monday, March 17, 2008

From PowerBuilder to .NET

They say all good things come to an end. It looks like PowerBuilder is one of those good things.

Rapid application development has always been important to business, and PowerBuilder has to be among the most successful RAD products of all time. But it's time to move on. "PowerBuilder for .NET" doesn't seem to be gaining traction. What's a career PowerBuilder jockey supposed to do? Do you have to take a big productivity hit and climb the whole .NET learning curve all at once?

Maybe it's not so bad. My company, IdeaBlade, is hosting an MSDN Webinar on Migrating from PowerBuilder to .NET. It's a panel discussion among some experts who've been down this path before. I can vouch for two of them personally, Sean Flynn and Chuck Miller, having spent long hours with both. I haven't met the Microsoft guy, Terry Clancy, but I've heard good things about him. The fourth fellow, Jay Traband, is our CTO, so you know I've got to be kind despite the fact that he and I have been jawing at each other since 1986 :-)

If PowerBuilder is your cup of tea, you definitely should register and attend on Wednesday, April 2, 2008, 2:00 PM ET / 11:00 AM PT.

Thursday, March 13, 2008

First drops of Prism

I wrote before about how Patterns and Practices is taking the lessons of CAB and applying them to a new project - the development of a compositional application architecture for WPF. The project is called Prism.

Prism is catching the first rays of public sunshine. Glenn Block announced a Prism drop in his blog post today. It comes with a reference implementation (RI) to motivate the design decisions and demonstrate Prism at work.

I'm looking forward to digging in. I've seen some earlier Prism facets and been pleased with the direction.

I'll repeat Glenn's caveats: these are not real Prism bits - not beta, not CTP. They are more like spikes. Assume almost everything will change. Don't even dream of using it in production.

On the other hand, I think it's well worth examining to see where Prism is headed and to confirm for yourself that "by gosh, they might actually deliver something useful and intelligible in reasonable time." They've made no delivery promises - not about content, not about timing. But the auguries are very good.

Friday, February 22, 2008

Silverlight 2.0 is almost here!

You simply must check out Scott Guthrie's post from today (22 Feb '08) describing the imminent release of Silverlight 2.0 Beta 1.

Walk through the eight tutorials; it will take you all of 30 minutes.

I say Silverlight absolutely blows Web development out of the water. You’d have to be crazy to program in ASP.NET or JSP after Silverlight is released.

Game over. Lights out. Go home.

This is transformative technology.

Monday, January 28, 2008

On the effectiveness of TDD

There's a fascinating exchange between Phil Haack and Jacob Proffitt on the implications of a National Research Council of Canada paper titled "The Effectiveness of Test-first Approach to Programming".

The experimenters divided 24 third-year CS students into two groups, one practicing Test-First development and one practicing Test-Last development. Each implemented the same functionality. The Test-First group always wrote unit tests before writing each feature. The Test-Last group wrote unit tests after writing all of the features. It wasn't a long-running experiment, so I would be tempted to describe the "Test-Last" group as the "Test-Soon" group; but that's quibbling. The researchers arrive at conclusions favorable to Test-First ... which you should gather from reading the report (and the commentary) on your own.

Phil discusses this study in his post "Research Supports the Effectiveness of TDD". While he doesn't say that the study actually proves that TDD is effective, he clearly thinks highly of it.

Jacob responds in a post on his own blog, "TDD Proven Effective. Or is it?", with a devastating (IMHO) critique of the study and obliquely criticizes Phil for succumbing to "Confirmation Bias".

The essence of Jacob's argument (if I may) is that (a) the study data do not confirm the thesis that Test-First is more "effective" than Test-Soon, (b) there are disturbing data suggesting that the Test-Soon control group produced better code, (c) there is substantial cost to changing one's programming practice to Test-First and (d) he is reluctant to make such a switch until there is decent evidence for it.

I think the study, Phil's post, and Jacob's critique make superb reading so you shouldn't rely on my commentary for anything other than inspiration to check it out for yourself.

I think Jacob has the best of it here (and it appears that Phil eventually accepts Jacob's critique while keeping faith with TDD). I especially appreciate the manner in which the two of them move the debate along. Their exchange is full of passion and intelligence but never strays from civility.

In the end they agree (without evidence) that, whether you prefer "Test-First" or "Test-Soon", the outcome is substantially better than "Test-Never".

Sadly, the NRC experimenters didn't include a "Test-Never" group, but I think there may be good evidence to support the notion that some testing (whether "first" or "soon") beats no testing. I intend to read one of the referenced papers on this subject: "A Longitudinal Study of the Use of a Test-Driven Development Practice in Industry"

Don't look to me for conclusions. I am generally persuaded that we should have more social research before we start telling everyone what they have to do. That said, I am persuaded of the merits of testing and have a good feeling about TDD ... when I take time to practice it.

Tuesday, January 22, 2008

Prism Camp: Reflections on the "Composite WPF Review"

I'm just back from Redmond where a group of us devoted a well-spent week to opinionating on the future of a framework for composite applications.

That would be the "Composite WPF" framework (code named "Prism") that Patterns and Practices will build to fill the gap between the Composite UI Application Block (CAB) of 2005 and whatever someday blooms among the bleached bones of Acropolis.

What is Prism?

"Prism" is the code name for forthcoming guidance and code aimed at builders of composite applications with WPF client UIs.

It's tempting to call it a framework because Prism will ship with many collaborating components that collectively constitute a foundation upon which to build a WPF application.

Like any framework, it will promote a certain way of doing things by making some paths easy and other paths difficult. A framework makes choices so you don't have to. That is its virtue and its curse.

Prism aspires to be the better CAB. CAB is both a reference point and the point of departure. If you know what CAB is, even approximately, then you understand the perspective and proclivities of the Prism gang.

Acropolis was supposed to be the better CAB. Now that that project is gone - its team dispersed - Prism holds the "better CAB" baton. The aspirations and budget are far less grand; for many the reduced expectations offer a better promise of success.

Prism would address the same concerns as CAB and hope to do no worse. Yet, at least within this group, we believe we can do better than CAB and that, in order to do better, Prism must start over, unencumbered by CAB code or APIs.

What is "better"? The PnP people had a few months to kick that around. Then they called all of us in to kick it around some more.

The Campers

We were a good mix of experiences and prejudices.

The Patterns and Practices side was well represented of course. The indomitable Glenn Block, product planner for Prism, presided as camp director. Blaine Wastell, the "Client UX" program manager, maintained a sharp focus on gathering our priorities. Frances Cheung, the dev lead, helped keep the proceedings in line and Ezequiel Jadib of Southworks captured the artifacts. Later in the week I was reintroduced to Bob Brumfield in his newly appointed role as chief architect.

Other Microsofties made cameo appearances throughout the week. Some came from related wings of Patterns and Practices (e.g., Peter Provost, Shaun Hayes, Ade Miller, Chris Tavares); others dropped in from outside teams (e.g., David Hill, Brad Abrams, Rob Relyea, Jaime Rodriguez). It was comforting to have their interest and support.

The main body of us were a mix of consultants, corporate developers, and third party product managers who have been watching, teaching, or building with CAB for a considerable time.

[Please accept my apologies for omissions, misspellings, and missed assignments. Out of respect for the companies and individuals involved, I'm comfortable identifying only the few who I am certain would agree to be named. I'm striving to convey my sense of the room rather than take accurate inventory. ]

A Great Event

I thought this was a wonderful mini-conference, the kind of event Microsoft should be proud of. Let me try to explain why.

But before I do, a few words to those who did not attend. You were missed … and you didn't really miss out. Nothing was finally decided. It was simply a gathering of reasonable size trying to advance the cause. Think of it as a focus group. No one says "man, how could they not invite me to that focus group." Your input will be needed over the coming year and you should contribute your ideas and enthusiasm. PnP needs you. Trust me.

The Right Timing

It's early days for Prism. PnP has had enough time to set goals and give it some shape. But PnP is not so far along that they are set in their ways. They came to listen while there is still time to listen … while it is still possible to say "that's a terrible idea … a complete waste of time" without wounding the poor sod who gave months of his soul to it.

The Right People

Most of us are experienced CAB developers. Many have placed big bets on CAB. We have a stake. We know what we like and what we dislike. And we all feel passionately about having a good foundation for building applications by composition.

That doesn't make us right. It just means that we have common ground … the appropriate experience and mindset for this phase of Prism evolution. It was fantastic comparing notes with people who knew what they were talking about and could speak with conviction about what is right for their employers, their customers, and their colleagues.

I hasten to add that these were not CAB bigots. Of course most of us are unconscious prisoners of the CAB-way to some degree. But everything is on the table. We lobbied for capabilities, not implementations.

We were fortunate to be joined by Jeremy Miller. Jeremy is a long time critic of CAB and, while his ignorance of CAB is profound, his ideas for alternative approaches to application development are equally deep. Jeremy is not some kid waving a little red book of Agile practices; he's grounded in real world experience and he listens. He earned the respect of everyone in the room and I am hopeful that he'll keep listening, criticizing, and contributing as Prism evolves.

We kind of blew it by not probing David Hill for more on Acropolis. As I understand it, he drove the original CAB effort and its Acropolis successor. Going through this composite framework development process not once but twice - and with WPF no less - makes him an invaluable source for do's and don'ts. David offered several insightful comments throughout the proceedings. But in his one shot at the podium he had barely enough time to present his 30,000 foot perspective before he was whisked off stage in favor of the next topic. No one's fault really. Still, he has a wealth of experience and we should find a way to tap it over the coming months.

The Right Focus

I think we covered what could reasonably be covered at this point. I'll probably be drilling deeper into this in some of my next posts but in brief we covered

What PnP thinks are the essential patterns of a composite UI.

We got to agree and disagree. Too much to say in this post; will follow up soon.

Why we (or our customers) are moving to WPF and what we will do with it

PnP has been flogging the notion that we have some compulsion to build "Differentiated UIs" (read: introducing novel effects into business applications). I'm sure there's a post or two coming on this one. My sense is that differentiated UIs are low on the wish list for most of us outside PnP. There were more than a few sneers at pointless eye candy. So it surprised me when we were able to dream up a few WPF effects that could, in David Platt's memorable phrase, "help the user shovel dung faster." There's more to this differentiated UI angle than I cynically thought.

What we value and de-value in CAB

There was vigorous discussion as expected; there probably should have been more.

I have the strong impression that PnP wants to burn it all back while many of us are saying "not so fast". Some of the bigger customers with substantial investments in CAB are properly insistent that there be some road from CAB to Prism. It needn't be automated. But we can't just ignore the CAB goodness either.

In this regard, there was talk of a CAB emulator, written on top of Prism, that could help bridge the gap. It's doubtful that PnP would build it but Ohad Israeli and I are keen on the idea and, at minimum, it's an essential thought experiment for Prism developers: "we can get rid of CAB feature 'X' because, if you really need it, you can emulate it in Prism by doing 'Y'".

There are a few of us who worry about the too hasty demise of the WorkItem. It's evident that PnP would like to kill it and Jeremy has been beating the poor animal savagely. But at least a few of us think the WorkItem has great value and is simply misunderstood. Actually, I'll bet it's more than a few of us. Look for my defense of the WorkItem coming soon.

But I have to concede that it's a darn good thing to be pressed so hard to defend it. If it survives in some form, it will be because it deserves to live and has been adequately justified.

The Prism Quality Attributes

We talked at length about what qualities Prism must exhibit. These became known as the "ilities" as in "flexibility, extensibility, scalability, debugability, testability, subsetability (sic), learnability (sic) …" Of course we considered many important qualities that don't fit the form (simplicity, performance, etc.).

We stack-ranked them. I don't remember how it came out but I think extensibility or subsetability came first and learnability came second. I find these preferences fascinating. There was clearly a high premium on the notion that even core features should be removable and replaceable. Note also that learnability trumped simplicity by a wide margin. Evidently people will put up with something that is not simple as long as it can be justified, understood, and explained (and replaced if it doesn't seem useful).

Just because a quality didn't score well didn't mean we don't care about it. There were vigorous discussions about the ability to test and debug Prism applications. Debugging WPF is still a dark art and the room was hungry for tools and guidance. Those are not PnP's responsibility; but we look to them anyway.

I must observe that the audience love for testing was tepid at best. This had to drive Jeremy nuts - as it should. The honest truth is that almost none of us are close to following agile practices. I'd bet we'd all be staring at our shoes if he asked us point blank "do you unit test your MVP triads?" "what's your code coverage percentage?" or "are you using automated builds?" This is our reality folks. Jeremy can argue forever that sound practices both simplify the code and remove the need for most of the crutches we cling to. It just isn't happening.

[Note to Chris Holmes and Kent Boogaart: yes, I know you guys are the worthy exceptions.]

I think Prism can show some leadership here by making Prism apps easy to test and providing real guidance on testing. There's a good reason why virtually no one tests their CAB UIs: it is a total bitch to do it and there are few (if any) examples.

Delivering Prism

Schedule and deliverables were two elephants in the room. I'm hoping for a complete package at the end of 2008. Anything later implies overreaching. In other words, if Prism can't be delivered by the end of 2008, then it's too big and too late.

We all heard something about four months to an alpha. Now that's encouraging.

What's in that alpha? Don't know. It's too early for that. We should expect some clues soon.

We do have a feel for what's out of scope.

Prism targets WPF and there will be no special effort to support alternative client platforms. The Acropolis team already broke their pick on the dream of a single framework for Web, Windows Forms, Mobile and WPF.

On the other hand, I expect PnP to be diligent in relegating WPF references to separate assemblies and to avoid patently unnecessary reliance on WPF constructs. I begged for sensitivity to Silverlight whose XAML-based presentation layer adopts the WPF paradigm, if not its code base.

Prism will be friendly to the graphically gifted designers who can wield Expression and XAML to gorgeous effect. But Prism will not offer a drag-and-drop developer experience in the manner of Acropolis.

Prism will be a coder's framework and it will rely on patterns and practices which, while proven, are not as widely known as they should be. Dependency Injection, for example, is a given. Remember, we stressed the importance of "learnability" … and that implies a willingness to learn.

We settled on a variant of the Woodgrove sample application (http://windowsclient.net/downloads/folders/wpfsamples/entry3756.aspx) as the reference implementation (aka, "RI") that demonstrates the core capabilities. Yeah, the eye candy in the center is a bit much, but supplement it with an editable grid and some pop-up data entry and there's proof enough.

PnP stressed that the primary objective of the first RI is proof of Prism capability. There is concern (borne of sad experience) that some folks will treat it as a framework in its own right and try to shoehorn their own applications into it. We may expect the RI to demonstrate good practices and offer early recommendations. But to ask for more is to ask for too much too soon. Caveat Scriptor!

What's Next?

A few of us were able to hang out for a few days to help identify the core capabilities and commitments. We got off to a pretty good start, having identified a manageable list of roughly twelve key concepts. The PnP team should be fleshing these out and prioritizing them in the next week or so. Then they'll probably spike on them and blog about them.

PnP showed a couple of their spikes at the meeting and promised to publish them soon. I expect we'll see an active cycle of spikes and blog posts at first. Then I'm sure they'll come up with a better way to get this kind of material out there.

What You Can Do

There is something very important that you can do … and that I intend to do: send the team the use cases that you believe really matter. We shouldn't leave them to develop Prism in a vacuum. We have to tell them what we would like to see.

Here's an example use case.

I can add a view to a region without holding a direct reference to that region.

Example: I want to add my view to the "WorkingPane" region in the Shell. I don't hold a reference to the Shell and I don't know exactly what kind of thing the "WorkingPane" is. I write

aRegionManager.InRegion("WorkingPane").ShowView(myView).

You CAB fans would say "a SmartPart can be added to a Workspace by knowing the name of the Workspace".

The syntax is wholly fictional and besides the point. I don't know if there is a RegionManager class and I don't care. I'm just trying to be understood.
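To make the use case a little more concrete, here is a minimal C# sketch of what such an API might look like. Everything in it - the IRegion and IRegionManager interfaces, the InRegion method, the dictionary-backed implementation - is invented for illustration; Prism may well expose nothing like it. The only point it demonstrates is the use case itself: a module can put a view in a named region without ever holding a reference to the Shell.

```csharp
using System.Collections.Generic;

// Hypothetical sketch only. These types are invented for illustration
// and do not correspond to any real Prism (or CAB) API.
public interface IRegion
{
    void ShowView(object view);   // add and activate a view in this region
}

public interface IRegionManager
{
    IRegion InRegion(string regionName);   // look a region up by name
}

// A trivial dictionary-backed implementation.
public class RegionManager : IRegionManager
{
    private readonly Dictionary<string, IRegion> _regions =
        new Dictionary<string, IRegion>();

    // The Shell registers its regions at startup under well-known names.
    public void RegisterRegion(string name, IRegion region)
    {
        _regions[name] = region;
    }

    // A module calls this knowing only the region's name; it never
    // touches the Shell or the concrete region control directly.
    public IRegion InRegion(string regionName)
    {
        return _regions[regionName];
    }
}

// Module code:
//   aRegionManager.InRegion("WorkingPane").ShowView(myView);
```

The design choice that matters here is the level of indirection: the string name is the only coupling between the module and the layout, which is exactly what the CAB Workspace gave us.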

There is no fancy form to fill out. It's just a matter of expressing your thoughts as simply and clearly as you can. If you can't say it in a sentence or two, break it up into smaller cases.

It's probably smart to supply some brief justification too.

Building by composition implies the ability to stuff a view into some container control in another layout view that I know nothing about. The layout could have been written by someone else and delivered to the application in a module unknown to me. All that matters is that I believe my view should be put in that container control, whatever it happens to be.


That's the kind of statement that supports many cases so economize accordingly.

Wrap Up

I think we should commend Microsoft - PnP in particular - for reaching out. Like all of us, they have a budget and a deadline. They spent a chunk of money and time to do what we've been begging them to do: talk to us. They could have thrown a PowerPoint and a one hour LiveMeeting our way. Instead they hosted a three day event. Let's show them they did the right thing by rewarding their openness with our constructive feedback.