Wednesday, October 17, 2012

Add the Visual Studio Command Prompt to VS2012

Several times this week, I wanted to launch a Windows command prompt (not the VS Command Window, which is different) while in Visual Studio 2012. More specifically, I wanted to open the command prompt in the directory of the item I had selected in Solution Explorer. I could swear I was able to do that in VS2010, but I can’t find the command in VS2012.

I gave up and did a two-step dance:

  1. Right-click selected folder | “Open Folder in File Explorer” [alternatively: “Open Containing Folder”]
  2. Ctrl-Shift-right-click | “Open command window here”

That works for most purposes although I don’t benefit from the VS-specific environment variables.

Then I stumbled across an old blog post by V K Sumesh (2008) that describes how to add the Visual Studio Command Prompt (VSCP) to the Tools menu. That’s worth a read for background. I’ve updated the steps here for VS 2012 and to suit my preferences.

Add VSCP to the Tools menu

  1. Tools | External Tools …
  2. Click [Add]
  3. Title: Command Prompt
  4. Command: C:\Windows\System32\cmd.exe
  5. Arguments: /k "%programfiles%\Microsoft Visual Studio 11.0\Common7\Tools\vsvars32.bat" (on 64-bit Windows, use %programfiles(x86)%)
  6. Initial directory: $(ItemDir)
  7. Click [Move Up] to position the command (I put mine at the top)

In step #5, the /k switch tells cmd.exe to run the specified command and keep the console open; the command itself is vsvars32.bat, a batch file that supplements the Windows environment variables with environment variables for the .NET Framework tools.

In step #6 I picked the “Item directory” because that’s my preference, but the dialog offers other choices which may suit you better.

Here’s what it looks like before I click [OK]:

[Screenshot: the completed External Tools dialog]

Use it

  1. In Solution Explorer select the folder or item where you want the command window to open
  2. Tools | Command Prompt

Hope that helps. Let me know if there’s a better way.

Update

The “Open command prompt” feature that I remembered from VS2010 came by way of the Microsoft “PowerCommands for VS 2010” extension.

Apparently the 2010 extension works for VS2012 as well. Take note: there are a ton of features in that extension, many of them already in VS2012. I was worried about redundancy and bloating my context menu with ever more rarely used options. But it seems well-behaved and you can disable features you don’t want via Tools | Options | PowerCommands. It’s a worthy alternative to the technique I described above.

Tuesday, October 16, 2012

Update a NuGet package with MSBuild

In this post I’ll show you how to add a prebuild MSBuild target to your project that updates a NuGet package in the project when you rebuild.

Background

We ship a zip file of BreezeJs samples. Some (soon to be all) of them rely on a NuGet package to supply the Breeze JavaScript files and other dependencies.

Breeze changes regularly (for the better we think) and so must the NuGet package version.

Our sample solutions are set to restore all NuGet packages so that we don’t have to ship them as part of the zip. Unfortunately, the “packages.config” file that identifies the BreezeJs NuGet package is stuck with the old version. Package restore simply grabs the old version of that package.

We could not find a way to mark up the items in packages.config so that NuGet restores the latest version of the package. It always restores the exact version identified in packages.config.

We don’t want to modify those samples every time we update the NuGet package. We want the samples to update to the latest BreezeJs automatically.

We don’t want to update every package in the solution; just our BreezeJs package and its dependencies.

Solution

We added a prebuild target to the bottom of the project file. The target invokes nuget.exe from the command line, telling it to update only our package (“Breeze.MVC4WebApi”) and its dependencies.

We know that nuget.exe is in the “.nuget” directory under the solution (a sibling of the project directory) because that’s where the package restore facility puts it.

Here’s our target:

<Target Name="BeforeBuild">
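  <!-- Update only the Breeze package (and its dependencies); other packages in the solution are left alone -->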
  <Exec Command="&quot;$(SolutionDir).nuget\NuGet&quot; update &quot;$(ProjectDir)packages.config&quot; -Id Breeze.MVC4WebApi" />
</Target>

Sunday, October 7, 2012

Tribute to Code Camp

Let us sing praise to code camps everywhere and in particular to the Silicon Valley Code Camp here in the San Francisco bay area. I’m just home from SVCC which, at 2500 attendees, is one of the largest (if not the largest) code camps in the country. At any hour I could choose from twenty-five sessions touching a wide range of technology interests.

Yet SVCC retains an intimacy and immediacy unmatched by formal conferences of equal size. Like all code camps, SVCC is free to everyone, supported by an army of volunteers and industry sponsors (thank you, sponsors!). People flock to camp to share their enthusiasms and discover something unexpected. The mood is jolly and infectious.

It’s a wonderful place to speak. It’s a terrific opportunity to learn to speak. Never spoken publicly before? Do you have the urge? Feeling a little shy? Don’t hold back … bring your talk to a Code Camp. Code Camp welcomes all speakers and every speaker, novice or veteran, finds a respectful audience. Code Camp is the place to lose your stage fright and speak your mind.

You will connect! At many conferences, the room is dark, the faces are lit by laptops, and it is painfully evident that many in your audience are twittering, emailing, or doing something other than listening. At Code Camp, the lights are up and they’re paying attention. They interrupt constantly with questions and observations. I know exactly how my talk is going, what points are resonating, which are falling flat. I go where my people want me to go. My talk becomes conversational. My nerves calm, my fear of failure dissipates … I’m having a conversation. You really must try it!

Are all the talks good? No, of course not. You’re bound to think, “geez, I could do better than that!” Maybe you can. You won’t know until you put yourself on the line … and you owe that experience to yourself.

Even the inept talk has much to offer. When the speaker cares … and at camp they really care … something of interest always bubbles to the surface. I imagine myself trying to tell the same story, wondering how a different image, a different phrase, a dramatic gesture might make it more compelling. I always come away with some fresh tidbits on the speaker’s subject and a page full of ideas for improving my next presentation.

Finally, a big thank you to the organizers and volunteers at SVCC. An effective conference is no accident. It’s a lot of details and asses-and-elbows. I don’t know about you but I’m always either lost or anxious about getting lost. As I drive onto the sprawling Foothill College campus, a volunteer greets me at the gate and points the way to four parking lots, all free thanks to sponsors. Signs every 100 yards along the long winding road lead me confidently to these lots. I step out of the car and hundreds of signs, on the ground and on walls, always in sight, guide me to registration and from there to session rooms. There’s a map on the back of my badge.

Lunch for 2500? No problem … lines move swiftly through the hall; in minutes I’m out on a grassy knoll (not the grassy knoll), under sunny skies, deep in conversation.

For us, speakers and attendees, the day flows effortlessly; we are oblivious to the many things that are going wrong. Maybe the coffee is late. Or all the badges disappeared. The volunteer team scrambles and all is set right. The illusion of calm is sustained.

It’s a magic act made possible by hard work, years of organizing experience, and tons of passion. I urge you to be a part of it. Attend a code camp, speak at a code camp, volunteer at a code camp. You need code camp and code camp needs you.

Friday, September 28, 2012

The SPA as Horseless Carriage

Lately I’ve been talking a lot about rich client applications written in HTML and JavaScript. These are frequently referred to as “SPAs” (Single Page Applications). I call them SPAs myself – it’s cute and flows easily off the tongue.

Unfortunately the phrase “single page application” badly misrepresents the true nature of this architectural style. It reminds Jeremy Ashkenas of the “horseless carriage”. Both notions capture a small truth while overlooking the larger significance of the technologies involved. No one in the 21st century describes the automobile as a vehicle without a horse. Someday we won’t describe a JavaScript client application as an “app hosted in a single web page.”

What really matters is that the client application resides and executes on the client in the same way that desktop applications do. In every important respect these are desktop apps; they just happen to be written in HTML and JavaScript.

The single page host is a mere artifact, the app’s least interesting characteristic. What matters is the rich, responsive, productive user experience made possible by execution on the client, state on the client, and dynamic composition of the UI on the client. These apps go to the server only for resources and services that they cannot obtain locally. They communicate with the server mostly to get the latest data and to store user changes. They are otherwise self-reliant and (if designed for it) can function without a server connection for extended periods. This is what distinguishes them from the now-traditional thin client model that is the web form or MVC application – the carriage drawn by a horse.

The carriage has become something else, a new form of locomotion. The UI has become something else, a new form of web application. The transformation is so sudden and disorienting that we cling to the thing that is lost: the horse, the web page. Eventually we will regain our balance and take for granted what seems novel today. We will find better words to describe what this is.

Until then, I’ll call them SPAs, however quaint that will seem in a few years’ time.

Wednesday, August 29, 2012

Single Page App course just published on Pluralsight

This is a momentous event. John Papa has been working for months on a video course that covers building a JavaScript Application from end-to-end with today’s JS technologies. And he just published it.

Get it here: http://pluralsight.com/training/Courses/TableOfContents/spa

It’s free for the next 48 hours (free access ends on Friday, August 31, 2012 at 5pm MDT) so I’m rushing to announce it now. Honestly, even if you miss the 48 hour window, it’s worth subscribing to Pluralsight for at least a month just to watch it. Throw a little more change in the meter to get the “Plus” subscription so you can download the Code Camper source code.

I’ll have much more to say about the course and the code over the coming months. I wish I could do so right now … but I’ve got a product to release. After that … I’m on it!

[Screenshot: the Code Camper sample app]

Wednesday, July 4, 2012

HTML or XAML? A chat with Jesse Liberty

It depends … but you knew that. Depends on what? Jesse and I approach this question from many angles on his show, “Yet Another Podcast #69,” which aired on July 1st.

I’ve been spending a lot of time in the world of JavaScript Single Page Apps this year … an experience that has been entertaining, thrilling, confounding … and confirms (for me anyway) that HTML/JS clients have a real future in LOB apps. I emphasize future. In the present, you’d better think twice, especially if you’ve got a big application to deliver this year and you don’t absolutely have to run it on every kind of device. If you can target Windows devices exclusively (and many business apps can), you’ll be more productive and save money by building a XAML client that will last for years.

I cover this ground and more in our 30-minute podcast.

jsFiddle in 6 minutes

I produced a short video introduction to jsFiddle, one of my favorite free tools for JavaScript developers. I published it back in May and forgot to blog about it. It still holds up (despite the regrettably harsh sound quality; turn down your volume). Check it out.

Monday, June 18, 2012

In Answer to your Query

The Service Department explains Microsoft’s latest shift in strategy:

We are sorry to inform you
the item you ordered
is no longer being produced.
It has not gone out of style
nor have people lost interest in it.
In fact, it has become
one of our most desired products.
Its popularity is still growing.
Orders for it come in
at an ever increasing rate.

However, a top-level decision
has caused this product
to be discontinued forever.

Instead of the item you ordered
we are sending you something else.
It is not the same thing,
nor is it a reasonable facsimile.
It is what we have in stock,
the very best we can offer.

If you are not happy
with this substitution
let us know as soon as possible.

As you can imagine
we already have quite an accumulation
of letters such as the one
you may or may not write.
To be totally fair
We respond to these complaints
as they come in.
Yours will be filed accordingly,
answered in its turn.

One of Naomi Lazard’s poems in the voice of a faceless bureaucracy, from Ordinances, first published in The Ohio Review (Ardis 1984). Discovered by me in Garrison Keillor’s Good Poems for Hard Times.

Monday, April 9, 2012

DevForce and Second Level Caching

We are asked occasionally whether DevForce supports second level caching, that is, does DevForce have some means on the server of remembering previously queried entities between client requests.

This is rarely a real problem in a DevForce application because (a) DevForce applications are usually rich client applications and (b) the DevForce EntityManager cache typically delivers the performance benefits folks seek from a server side cache.

My response, although grounded in long experience, is not always persuasive. Second level caching should improve scalability in theory, and theory often trumps reality.

In this post I discuss how to tell if you would benefit from server-side caching and how you might be able to use an Entity Framework second level cache to achieve it.

I haven’t tried to install an Entity Framework second level cache myself because I find that, for almost all of our customers, this is a solution looking for a problem. But in this post I provide links to information about how to implement second level caching. The links come from reliable sources and it looks like this kind of caching should work. If it makes sense for your application and you give it a try, I’m counting on you to get back to me with your results … and maybe contribute some guidance and code to help others.

Perceived Problem

You have a great many users who repeatedly query for the same entities. Those entities hardly ever change but for some reason they keep asking for them and for some reason you can’t cache them on the client.

You’ve measured and these queries account for a significant percentage of database hits. Moreover, they’re really bogging the database down. You’ve determined conclusively, after careful study of production traffic, that these repetitive database queries are choking your database. You’re pretty sure that server-side caching would provide significant relief.

Are you sure?

Honestly, I don’t think this happens often … which is why you should have the measurements that prove poor performance is traceable to this cause. Don’t guess that this is the problem. Don’t forecast that it is going to be a problem. You need proof.

The interest in second level caches arises most often among people who are evaluating DevForce and haven’t yet built an application with it. Such inquiries are typically speculative. Trust me, you can waste a lot of time investigating something that isn’t going to make any real difference in your application. It might make matters worse.

But suppose you’ve demonstrated that this is a real problem in your working application. You’ve established that the client app can’t cache these entities locally for some reason (perhaps it’s a web client) … which may be why you’re looking into caching on the server.

Maybe it really is time to consider EF “Second Level Caching”

You could try caching query results in a Query Interceptor. But that requires code you must write, and if an EntityServer (aka the BOS) is involved, you’ll have to make it thread-safe. Consider EF “Second Level Caching” before rolling your own.

If there’s a 2nd, there must be a 1st

Time for some definitions. The “first level cache” is the local cache of entities retrieved by some persistence manager. The DevForce EntityManager is a first level cache on the client. EF’s ObjectContext is a first level cache on the server.

When writing with EF Code First, you create a DbContext, which is a wrapper around an ObjectContext. You can think of your DbContext as a first level cache if you wish.

On the EntityServer (aka, the BOS) DevForce creates a new ObjectContext for each client request.

This EntityServer is in-process in a 2-tier deployment.

These first level caches do a great job of holding frequently requested entities. But they (and their entities) disappear when the EntityManager or ObjectContext disappears. The EntityManager on the client can live a long time, perhaps the life of the user session. The ObjectContext, on the other hand, evaporates after each client request.

If you had a “second level cache,” it would sit outside of the EF ObjectContext. It would outlive any single ObjectContext and would be shared by multiple ObjectContext instances. When your query reaches a new, empty ObjectContext, EF makes a request to the database. A second level cache could intercept that database request and satisfy it with previously retrieved results.

That sounds like the perfect resolution to your problem. If you had a second level cache, it could hold query results for the entities that clients are clamoring for … and the database pressure would be reduced.

Unfortunately EF doesn’t have an out-of-the-box second level cache.

From time to time you’ll hear someone argue that NHibernate is better than EF because NHibernate does have a second level cache. Often the person making this argument (a) is unaware of the limitations of a second level cache, (b) doesn’t realize that DevForce client-side caching usually eliminates the need for a second level cache and (c) has no evidence that the application would benefit from a second level cache. There’s the whiff of FUD in the air.

Let’s deodorize. As it happens, there is an open source second level cache for EF!

In brief, it’s a plug-in that intercepts EF requests to the EF “Store Provider” (the component that turns EF store queries into SQL queries on a database, be it SQL Server, Oracle, or something else).

[Diagram: the caching (and tracing) provider wrappers sitting between the EF runtime and the Store Provider]

The second level cache checks whether it’s holding results for that query; if so, it returns them from its cache, short-circuiting the call to the database; if not, the query passes through to the Store Provider. Then the second level cache intercepts and caches the returned results for next time. I’m simplifying of course; you’ll want to dig into the resources mentioned below for full details.

Notice that the component includes a tracing interceptor as well.

You don’t have to design your application for second level caching up front. You can add the second level cache component later … when you know you need it. Its presence (or absence) is largely transparent to DevForce and EF. You’re just “wrapping” the Store Provider in this caching component; it looks like a normal Store Provider to EF.
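If you’re curious, here’s a toy C# sketch of the interception idea (illustrative only … the real component wraps ADO.NET providers and result readers, not raw SQL strings in a dictionary):

using System;
using System.Collections.Concurrent;

// Toy model of a second level cache: satisfy repeated queries from memory,
// falling through to the Store Provider only on a cache miss.
public class SecondLevelQueryCache
{
    private readonly ConcurrentDictionary<string, object> _results =
        new ConcurrentDictionary<string, object>();

    public object GetOrExecute(string commandText, Func<string, object> executeOnStore)
    {
        // Hit: return previously retrieved results, short-circuiting the database.
        // Miss: execute against the database and remember the results for next time.
        return _results.GetOrAdd(commandText, executeOnStore);
    }
}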

Learn about Second Level Caching in EF

If this approach sounds like it would help, you can learn more about it from these sources:

Second-Level Caching in the Entity Framework and Windows Azure (Julie Lerman, Sept 2011)

EF Caching with Jarek Kowalski's Provider (EF Team, Sept 2010)

Using tracing and caching provider wrappers with Code First

Thursday, March 22, 2012

Squash Entity Framework startup time with pre-compiled views

In brief

Your application can stall for several minutes while Entity Framework gathers the information it needs to perform queries and saves, a lengthy process it performs twice: once before the first query and once before the first save. Those minutes pile up, wasting developer time and angering your customers. You can drastically reduce these delays by pre-compiling the Entity Framework’s “views” of your model … as I explain in this post and demonstrate in the accompanying 14-minute video.



Costly EF startup

If you’ve used Entity Framework for a line-of-business application model, you’ve suffered a lengthy delay before the first query completes and a similar delay before the first save completes. Subsequent queries and saves are much quicker, completing in an amount of time commensurate with the request.

The delay is a non-linear function of the number of entities in the model. It often feels exponential. You probably won’t notice it in a toy model (every demo you’ll ever see) because the delay is lost in the wash of everything else that you’re thinking and learning about. But when the model grows to normal size – 100 or more entities – the delay mushrooms to a minute, two minutes, or more. And you suffer this delay every time you run the application … which you do all day, every day during development. Multiply that by the number of developers on the project and you’re wasting a lot of time … and money.

The cost is far worse than the time lost. Make a developer wait two or three minutes per iteration and she’s bound to forget why she ran the app in the first place. Two minutes is a long time. The mind wanders. The mind turns to email, Twitter, and Facebook. Productivity is shot.

Now I don’t think you should be going near a database during normal development iterations. I recommend that you toggle the app to run against an in-memory representation of your data layer such as the DevForce “Fake Backing Store”. But maybe you’ll disregard my suggestion. And everyone has to hit the database occasionally just to confirm that the app works end-to-end.

So the development cost is terrible no matter what you do … unless your developers’ time is free; perhaps you price them at zero dollars and your response to every productivity decline is to hire more developers. Your second instinct is to outsource.

What about your customers and internal end users? If the app runs 2-tier, they suffer the delay every time they launch the app. Does their time matter to you? I’ll bet someone will make sure it matters to you.

You won’t field customer complaints if your application runs n-tier (e.g., in a Silverlight application) because the Entity Framework runs on the server. The startup penalty is paid only by the first user to query and save. If you run n-tier and you don’t care about developer productivity, turn the page and move along.

Pre-compiled Views to the rescue

I’ve been wondering what to do about this for a long time. I’d heard that “Entity Framework Pre-compiled Views” might help. I also had heard that it was troublesome and might not work. It seemed like one more thing to get around to someday.

Then one of our professional services customers called and complained. His project had started fine but hit the wall at around 200 entity types. The first query and first save each took about 50 seconds on most machines. Team productivity had sunk, morale was sinking, and he was catching serious political flak internally. Our own staff confirmed that the problem was real. Since we (IdeaBlade) had recommended EF Code First, we had to do something.

My colleague, Steven Schmitt, did the leg work that proved EF pre-compiled views (a) work for Code First models, (b) were easy to create, and (c) improved performance dramatically: the 50 second first query dropped to seven seconds; the 50 second first save dropped to less than one second.

He deserves the credit … I’m taking the glory by blogging about it.

The accompanying 14 minute video shows EF’s slow launch times for a 200+ entity model, demonstrates how to create pre-compiled Views, and explains a bit about how they work.

I produced the video to spare you a parade of screen shots. I think it also conveys the seriousness of the problem and the practical benefit of pre-compiled Views more effectively than I can in spare prose.

The EF view generation tool does not work with EF 4.3 yet. Microsoft sources report that an update is in the works.

For those of you who want just the facts, here they are:
  1. Ensure that SQL Server Express is installed. You can get around it with a DefaultConnectionFactory but it’s such a pain. Save your energy for better things and just install the thing.

  2. In Visual Studio 2010, open the Extension Manager (Tools | Extension Manager).

  3. Search for “Entity Framework Power Tools”. The version as I write is “Entity Framework Power Tools CTP1 0.5.0.0”.

  4. [optional] Review the online information about it. These tools do more than pre-compile EF views.

  5. Locate your custom DbContext class in Solution Explorer [note: we’re describing how to pre-compile views for an EF Code First model. You follow a similar approach for an EDMX-based model although I haven’t tried it personally.]

  6. Make sure that your DbContext class has a public parameterless constructor … or the tool will fail in a mysterious way.

  7. Select your DbContext class, right-click, and select “Entity Framework”

  8. Select the “Optimize Entity Data Model” sub-item

  9. Wait … the tool takes a while to compile the “views”.

  10. When it’s done, your DbContext has a companion DbContext.Views class file.

  11. Build and run.

You should notice an immediate improvement in start time. There is still a delay before the first query completes. But it should be a fraction of the former delay … around 1/7th of the time. The delay for the first save should be gone; it takes no longer than the second save.

DevForce Developer Notes

Your DevForce application benefits from EF Pre-compiled views when you follow these steps. A DevForce Code First model doesn’t have to have a custom DbContext class … but you will have to create one to use this tool.

DevForce developers typically don’t define a parameterless constructor because DevForce wants a constructor that takes a connection string. Add the parameterless constructor anyway. Don’t worry, we will pick up the appropriate constructor at runtime.
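Here’s a minimal sketch of such a context (the class name and connection string are mine, purely for illustration):

using System.Data.Entity;

public class MyModelContext : DbContext
{
    // The constructor DevForce wants: it takes a connection string
    public MyModelContext(string connectionString) : base(connectionString) { }

    // Parameterless constructor added solely so the EF Power Tools
    // can instantiate the context when generating pre-compiled views
    public MyModelContext() : this("name=MyModelContext") { }
}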

When the model changes

Entity Framework detects if your entity model classes have changed since you compiled the EF views class. When you attempt your first query, you’ll get a clear runtime exception telling you to re-compile the views class.

Only database-related changes to persisted data and navigation properties matter. You can add UI hint attributes (e.g., [Display…]) and non-persisted custom properties (e.g., FullName) without triggering an exception. Any change that would affect the mapping between your entity classes and the database will trigger the exception.
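For example (an illustrative entity, not from a real model):

using System.ComponentModel.DataAnnotations;

public class Person
{
    public int Id { get; set; }            // persisted: changing persisted or
    public string FirstName { get; set; }  // navigation properties invalidates
    public string LastName { get; set; }   // the pre-compiled views

    [Display(Name = "Full Name")]          // UI hint attribute: safe to add
    public string FullName                 // non-persisted, read-only: safe to add
    {
        get { return FirstName + " " + LastName; }
    }
}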

How does EF know that the model has changed? I’m not certain but I have a pretty good guess. Ignore the views class filename and look at the name of the views class itself. It will be something like “ViewsForBaseEntitySets72E6108A34B7DB042DBA3C465F35B967B4E3C76051DFBAB958B69CB0D23EA8B7”.

The hex suffix at the end looks like a hash. I’m guessing it is a hash of your entity model classes and that Entity Framework spends the initial seconds before the first query reflecting over and hashing your entity model classes before comparing that hash to this views class suffix. Inside the class itself are a couple more hash values. Maybe it’s using those too, or instead. Someday I’ll find out. It’s evident that it’s doing some kind of comparison between the entity model classes and this views class to ascertain whether there is a disconnect.

Anyway, at runtime, if EF detects a difference, it throws an exception which should terminate your app. You’ll encounter the exception quickly and unmistakably when your app first requests data. I presume that will be before you push to production :). Just re-run the tool and you should be back in business.

At IdeaBlade we’re looking into a way to detect the views/model incompatibility at build time and regenerate the pre-compiled views automatically.

Meanwhile, it’s good to know that EF fails fast when the pre-compiled views and your model are out of sync … and the remedy is as simple as re-running the tool.

Hope this helps real-world EF developers everywhere.

Update - March 23

My buddy Steve Schmitt reminds me of a few more points.
  • Rowan Miller and the EF team deserve credit for developing the EF Power Tools; we just downloaded them.
  • There’s a bit more info about the tool online.
  • If you don’t want to regenerate the views for whatever reason, you can just delete the views file and you’re back to “normal”.

Update - April 6

This month the EF team published an important white paper on performance in EF 4 and 5 that bears on pre-compiled views and other tactics that could make a significant difference for your project.

Thursday, March 8, 2012

Synchronous tasks with Task<T>

I extracted this thought from an email by Microsoft’s Brad Wilson and circulated within my company. Why not share it with you?

Brad starts with an important piece of advice: don’t make a synchronous activity async!

Ok, but how do you construct a Task<T> that you’ll consume within the context of a bundle of tasks? Brad shows how. Hey … thanks Brad!

-----
The following is an anti-pattern with tasks on a server:
return Task.Factory.StartNew(
    () => model.Deserialize(stream, null, type));
This will run your code on a new thread, forcing a context switch, which is unnecessary because your code is fundamentally synchronous. If you’re going to run synchronously, you should just run synchronously, and return a TaskCompletionSource that’s populated with your result. For example:
object result = model.Deserialize(stream, null, type);
var tcs = new TaskCompletionSource<object>();
tcs.SetResult(result);
return tcs.Task;
If Deserialize might throw, then a version with try/catch would be a better implementation of the Task contract:
var tcs = new TaskCompletionSource<object>();

try
{
    object result = model.Deserialize(stream, null, type);
    tcs.SetResult(result);
}
catch (Exception ex)
{
    tcs.SetException(ex);
}

return tcs.Task;

What do you mean by "fundamentally synchronous"?


I can assure you that the Deserialize method in question is synchronous.
model.Deserialize(stream, null, type)
That expression blocks until it returns the deserialized object. You see the stream parameter and think "this should be an asynchronous method". Maybe it should be, smarty pants; come back when you have written a deserializer that can reliably produce an object graph without reading the entire stream first.

While we're waiting for your DeserializeAsync implementation, let's push on the proposition that we should not move the execution of Deserialize to another thread.

Clearly, if this method is reading a stream, it could take "a long time" to complete. If we are running on the client, we'll freeze the UI until the deserialization completes. That can't be good. And it isn't. If we're running on the client, you should consider moving the execution of this method to another thread.

In this case, we're running on the server. There is no user waiting for the method to return so we don't care about speed on any particular thread. I'm sure the client cares about a fast response but the response isn't coming until the work is done ... on one thread or another.

We do care about total server throughput. We gain nothing by moving execution to another thread; in fact, we lose because of the thread context switching cost. This is why Brad says that spawning a new task with "Task.Factory.StartNew" is "an anti-pattern ... on a server". It's cool on the client; not cool on the server.

I'm ready with DeserializeAsync; now what?

Should we invoke the async method within a delegate passed to "Task.Factory.StartNew"? No, we should not!

This surprised me too ... until someone walked me through it ... until someone asked me "what do you think will happen on the thread you spawn?" I realized that all I would do on that new thread is dream up some way to wait for DeserializeAsync to finish. Of course DeserializeAsync spawns its own thread so I've got an original thread waiting for my task thread which is waiting for the DeserializeAsync thread. That's a complete waste of time ... and a pointless, resource-wasting context switch.

What's the point of TaskCompletionSource?

We're in this situation because for some (good) reason we want to expose a method - synchronous or asynchronous - as a Task. We don't want or need to spawn a new thread to run that method. We just want to consume it as a Task. The TaskCompletionSource is the wrapper we need for this purpose. It lets us return a Task object with the Task API that we, like puppeteers, can manipulate while staying on the current thread.
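To generalize the pattern, here’s a minimal sketch of a helper (my name and shape, not Brad’s) that consumes any synchronous computation as a Task<T> without leaving the current thread. (.NET 4.5 also offers Task.FromResult for the no-exception case.)

using System;
using System.Threading.Tasks;

public static class SyncTaskHelper
{
    // Runs the work immediately on the current thread and packages the
    // outcome (result or exception) in an already-completed Task<T>.
    public static Task<T> FromSync<T>(Func<T> work)
    {
        var tcs = new TaskCompletionSource<T>();
        try
        {
            tcs.SetResult(work());
        }
        catch (Exception ex)
        {
            tcs.SetException(ex);
        }
        return tcs.Task;
    }
}

// Usage, with the deserialization example from above:
// Task<object> task = SyncTaskHelper.FromSync(() => model.Deserialize(stream, null, type));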

For another, perhaps better, and certainly excellent explanation of this, I recommend the following Phil Pennington video on TaskCompletionSource. Happy coding!