DLinq (Linq to SQL) Performance (Part 3)

 

I’d like to start with a little housekeeping. Some readers asked me how I made the nifty table in part 2 that showed the costs broken down by major area.

It was actually pretty easy to create that table using our profiler. I did 500 iterations of the test case in sampled mode, which gave me plenty of samples. I could see which callstacks ended in mscorjit.dll (even without symbols, which I never bothered using), and that gave me a good idea how much jit time there was. I could see the data-fetching functions from the underlying provider being called (the same ones that appear in the no-linq version of the code: GetInt32, GetString, and so forth), so I knew what the costs of actually getting the data were. I could see the path that creates the expression tree for the query, and I could see the stub dispatch functions. So I added up the related costs, broke them into 5 categories, and then showed one more line for the bits that didn't fit into any of those categories. Then I scaled the numbers up so that the part of the benchmark I cared about was 100% (there was other junk in my harness not relevant to the benchmark). That's it :)

One more gotcha though: when I talked to Matt about this we realized that I had reported the breakdown from an internal build that included one change he had made for me after the May 2006 CTP. The breakdown you would see if you did this experiment again on a May 2006 CTP build would have reflection costs instead of jitting costs. I'll discuss that more when I go into the specific improvements we made, which are coming soon...

But, meanwhile, let's move on to the real topic of Part 3…

Per Row Costs

In this area there were two things that I worried about.

Entity/Identity Management

The data binding problem used to be at the top of my list, but entity creation costs started to worry me more. This query is a good example of a troublesome one:

var q = from o in nw.Orders
        select o;

Doing a foreach over the query above is much less efficient than over the one below, and the astonishment factor for that might be pretty high.

var q = from o in nw.Orders
        select new { o.everything … };

In general, all these "read a bunch of data" queries are likely to suffer significant overhead because of temporary allocations and the mid-to-long lifetime of the entities being enumerated. In the first formulation, the objects have to be stored because they might be modified as you foreach over them (or later). In the second formulation, no object management is required because you're newing up a synthetic object that isn't associated with a table.
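To make that concrete, here is a minimal sketch of the per-row bookkeeping the first formulation implies. All the names and the Order shape are invented for illustration; this is not the actual DLinq code:

using System;
using System.Collections.Generic;
using System.Data.Common;

// Illustrative entity shape, not the generated Northwind class.
class Order
{
    public int OrderID;
    public string CustomerID;
    public DateTime? OrderDate;
}

class IdentityManagerSketch
{
    // Entities are retained so the same primary key always maps to the
    // same object, and so later modifications can be tracked and submitted.
    readonly Dictionary<int, Order> identityCache = new Dictionary<int, Order>();

    public Order GetOrder(DbDataReader reader)
    {
        int key = reader.GetInt32(0);      // primary key column
        Order cached;
        if (identityCache.TryGetValue(key, out cached))
            return cached;                 // same row, same object instance

        Order o = new Order
        {
            OrderID = key,
            CustomerID = reader.GetString(1),
            OrderDate = reader.IsDBNull(2) ? (DateTime?)null : reader.GetDateTime(2)
        };
        identityCache.Add(key, o);         // the allocation outlives the loop
        return o;
    }
}

The anonymous-type formulation skips all of this: each row becomes a fresh throwaway object that nothing needs to remember.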

But there is a strong expectation that code like this is just a forward-only read operation with at most one temporary business object created per iteration, especially if you don't modify the objects and the foreach loop is effectively read-only. That expectation also suggests a nifty solution.

Since the Connection is a property of the context, you could reasonably have multiple contexts associated with one connection. In particular, a read-only context could be just the thing to avoid all this entity creation. Importantly, the read-only context can be connected to the same Connection, which means isolation concerns go away: it looks like one logical view of the database. This is important because otherwise a thread owning two DataContext objects could self-deadlock.

So I recommended creating some kind of read-only context to avoid the overhead of object management where it isn't necessary.
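Here's a sketch of how that idea can look, assuming the Northwind context from the snippets above. For what it's worth, the shipped product later exposed this switch as DataContext.ObjectTrackingEnabled; it did not exist in the May 2006 CTP, so treat this as illustrative rather than as the CTP's API:

using System;
using System.Data.SqlClient;

class ReadOnlyContextSketch
{
    static void Run(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Two contexts over one shared connection: one logical view
            // of the database, so no cross-context isolation or
            // self-deadlock concerns.
            var nw = new Northwind(connection);        // full entity tracking
            var nwReadOnly = new Northwind(connection);
            nwReadOnly.ObjectTrackingEnabled = false;  // must be set before
                                                       // the first query runs

            // Enumerating here materializes one temporary Order per row;
            // nothing is retained in an identity cache.
            foreach (var o in nwReadOnly.Orders)
                Console.WriteLine(o.OrderID);
        }
    }
}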

Data Binding

The second item is the overhead associated with extracting the data. In the then-current implementation there were several virtual calls between MoveNext/Current on the enumerator and the actual data field fetches.

However, if we do query compilation as described in the last installment, we have the opportunity to significantly limit the dispatches, down to as little as one delegate call per row.

To do this we need to pre-build the necessary helper for getting and storing the fields, so that there is effectively straight-line code calling the underlying provider: basically the same code you would have to write if you were doing the data access manually.

Pre-compiling it also means we don't have to jit the helper every time the query runs; we have a place to keep the compiled helper function that fetches and stores the data.
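Here's a hand-written sketch of the kind of helper I mean, reusing the illustrative Order shape from the earlier sketch (the column ordinals are invented too). One of these would be built and cached per compiled query, so the per-row cost collapses to a read loop plus a single delegate call:

using System;
using System.Collections.Generic;
using System.Data.Common;

static class MaterializerSketch
{
    // The stored, pre-built helper: straight-line provider calls,
    // exactly the code you'd write doing the data access by hand.
    static readonly Func<DbDataReader, Order> materializeOrder = reader =>
        new Order
        {
            OrderID = reader.GetInt32(0),
            CustomerID = reader.GetString(1),
            OrderDate = reader.IsDBNull(2) ? (DateTime?)null : reader.GetDateTime(2)
        };

    // The enumerator's job then reduces to this:
    public static IEnumerable<Order> ReadOrders(DbDataReader reader)
    {
        while (reader.Read())
            yield return materializeOrder(reader);  // one delegate call per row
    }
}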

At this point we had a pretty good idea what kinds of actions to take to make things a whole lot better.