July 2010

Volume 25 Number 07

Smart Client - Building Distributed Apps with NHibernate and Rhino Service Bus

By Oren Eini | July 2010

For a long time, I dealt almost exclusively in Web applications. When I moved over to build a smart client application, at first I was at quite a loss as to how to approach building such an application. How do I handle data access? How do I communicate between the smart client application and the server?

Furthermore, I already had a deep investment in an existing toolset that drastically reduced the time and cost for development, and I really wanted to be able to continue using those tools. It took me a while to figure out the details to my satisfaction, and during that time, I kept thinking how much simpler a Web app would be—if only because I knew how to handle such apps already.

There are advantages and disadvantages to smart client applications. On the plus side, smart clients are responsive and promote interactivity with the user. You also reduce server load by moving processing to a client machine, and enable users to work even while disconnected from back-end systems.

On the other hand, there are the challenges inherent in such smart clients, including contending with the speed, security, and bandwidth limitations of data access over the intranet or Internet. You’re also responsible for synchronizing data between front-end and back-end systems, distributed change-tracking, and handling the issues of working in an occasionally connected environment.

A smart client application, as discussed in this article, can be built with either Windows Presentation Foundation (WPF) or Silverlight. Because Silverlight exposes a subset of WPF features, the techniques and approaches I outline here are applicable to both.

In this article, I start the process of planning and building a smart client application using NHibernate for data access and Rhino Service Bus for reliable communication with the server. The application will function as the front end for an online lending library, which I call Alexandria. The application itself is split into two major pieces. First, there's an application server running a set of services (where most of the business logic will reside), accessing the database using NHibernate. Second, there's the smart client UI, whose job is to make those services easily accessible to the user.

NHibernate is an object-relational mapping (O/RM) framework designed to make it as easy to work with relational databases as it is to work with in-memory data. Rhino Service Bus is an open source service bus implementation built on the Microsoft .NET Framework, focusing primarily on ease of development, deployment and use.

Distribution of Responsibilities

The first task in building the lending library is to decide on the proper distribution of responsibility between the front-end and back-end systems. One path is to focus the application primarily on the UI so that most of the processing is done on the client machine. In this case the back end serves mostly as a data repository.

In essence, this is just a repetition of the traditional client/server application, with the back end serving as a mere proxy for the data store. This is a valid design choice if the back-end system is just a data repository. A personal book catalog, for example, might benefit from such architecture, because the behavior of the application is limited to managing data for the users, with no manipulation of the data on the server side.

For such applications, I recommend making use of WCF RIA Services or WCF Data Services. If you want the back-end server to expose a CRUD interface to the outside world, then leveraging WCF RIA Services or WCF Data Services allows you to drastically cut down the time required to build the application. But while both technologies let you add your own business logic to the CRUD interface, any attempt to implement significant application behavior using this approach would likely result in an unmaintainable, brittle mess.

I won’t cover building such an application in this article, but Brad Abrams has shown a step-by-step approach for building just such an application using NHibernate and WCF RIA Services on his blog at blogs.msdn.com/brada/archive/2009/08/06/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-nhibernate.aspx.

Going all the way to the other extreme, you can choose to implement most of the application behavior on the back end, leaving the front end with purely presentation concerns. While this seems reasonable at first, because this is how you typically write Web-based applications, it means that you can’t take advantage of running a real application on the client side. State management would be harder. Essentially you’re back writing a Web application, with all the complexities this entails. You won’t be able to shift processing to the client machine and you won’t be able to handle interruptions in connectivity.

Worse, from the user perspective, this approach means that you present a more sluggish UI since all actions require a roundtrip to the server.

I’m sure it won’t surprise you that the approach I’m taking in this example is somewhere in the middle. I’m going to take advantage of the possibilities offered by running on the client machine, but at the same time significant parts of the application run as services on the back end, as shown in Figure 1.

Figure 1 The Application’s Architecture

The sample solution is composed of three projects, which you can download from github.com/ayende/alexandria. Alexandria.Backend is a console application that hosts the back-end code. Alexandria.Client contains the front-end code, and Alexandria.Messages contains the message definitions shared between them. To run the sample, both Alexandria.Backend and Alexandria.Client need to be running.

One advantage of hosting the back end in a console application is that it allows you to easily simulate disconnected scenarios by simply shutting down the back-end console application and starting it up at a later time.

Fallacies of Distributed Computing

With the architectural basics in hand, let’s take a look at the implications of writing a smart client application. Communication with the back end is going to be through an intranet or the Internet. Considering the fact that the main source for remote calls in most Web applications is a database or another application server located in the same datacenter (and often in the same rack), this is a drastic change with several implications.

Intranet and Internet connections suffer from issues of speed, bandwidth limitations and security. The vast difference in the costs of communication dictates a different communication structure than the one you’d adopt if all the major pieces of the application resided in the same datacenter.

Among the biggest hurdles you have to deal with in distributed applications are the fallacies of distributed computing. These are a set of assumptions that developers tend to make when building distributed applications, which ultimately prove false. Relying on these false assumptions usually results in reduced capabilities or a very high cost to redesign and rebuild the system. There are eight fallacies:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn’t change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.

Any distributed application that doesn’t take these fallacies into account is going to run into severe problems. A smart client application needs to deal with those issues head on. The use of caching is a topic of great importance in such circumstances. Even if you aren’t interested in working in a disconnected fashion, a cache is almost always useful for increasing application responsiveness.

Another aspect you need to consider is the communication model for the application. It may seem that the simplest model is a standard service proxy that allows you to perform remote procedure calls (RPCs), but this tends to cause problems down the road. It leads to more-complex code to handle a disconnected state and requires you to explicitly handle asynchronous calls if you want to avoid blocking in the UI thread.

Back-End Basics

Next, there’s the problem of how to structure the back end of the application in a way that provides both good performance and a degree of separation from the way the UI is structured.

The ideal scenario from a performance and responsiveness perspective is to make a single call to the back end to get all the data you need for the presented screen. The problem with going this route is that you end up with a service interface that mimics the smart client UI exactly. This is bad for a whole host of reasons. Mainly, the UI is the most changeable part in an application. Tying the service interface to the UI in this fashion results in frequent changes to the service, driven by purely UI changes.

That, in turn, means deployment of the application just got a lot harder. You have to deploy both the front end and the back end at the same time, and trying to support multiple versions at the same time is likely to result in greater complexity. In addition, the service interface can’t be used to build additional UIs or as an integration point for third-party or additional services.

If you try going the other route—building a standard, fine-grained interface—you’ll run head on into the fallacies (a fine-grained interface leads to a high number of remote calls, resulting in issues with latency, reliability and bandwidth).

The answer to this challenge is to break away from the common RPC model. Instead of exposing methods to be called remotely, let’s use a local cache and a message-oriented communication model.

Figure 2 shows how you pack several requests from the front end to the back end. This allows you to make a single remote call, but keep a programming model on the server side that isn’t tightly coupled to the needs of the UI.

Figure 2 A Single Request to the Server Contains Several Messages

To increase responsiveness, you can include a local cache that can answer some queries immediately, leading to a more-responsive application.

One of the things you have to consider in these scenarios is what types of data you have and the freshness requirements for any data you display. In the Alexandria application, I lean heavily on the local cache because it is acceptable to show the user cached data while the application requests fresh data from the back-end system. Other applications—stock trading, for example—should probably show nothing at all rather than stale data.

Disconnected Operations

The next problem you have to face is handling disconnected scenarios. In many applications, you can specify that a connection is mandatory, which means you can simply show the user an error if the back-end servers are unavailable. But one benefit of a smart client application is that it can work in a disconnected manner, and the Alexandria application takes full advantage of that.

However, this means the cache becomes even more important because it’s used both to speed communication and to serve data from the cache if the back-end system is unreachable.

By now, I believe you have a good understanding of the challenges involved in building such an application, so let’s move on to see how to solve those challenges.

Queues Are One of My Favorite Things

In Alexandria, there’s no RPC communication between the front end and the back end. Instead, as shown in Figure 3, all communication is handled via one-way messages going through queues.

Figure 3 The Alexandria Communication Model

Queues provide a rather elegant way of solving the communication issues identified earlier. Instead of communicating directly between the front end and the back end (which means supporting disconnected scenarios is hard), you can let the queuing subsystem handle all of that.

Using queues is quite simple. You ask your local queuing subsystem to send a message to some queue. The queuing subsystem takes ownership of the message and ensures that it reaches its destination at some point. Your application, however, doesn’t wait for the message to reach its destination and can carry on doing its work.

If the destination queue is not currently available, the queuing subsystem will wait until the destination queue becomes available again, then deliver the message. The queuing subsystem usually persists the message to disk until it’s delivered, so pending messages will still arrive at their destination even if the source machine has been restarted.
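
To make that concrete, here's roughly what such a fire-and-forget send looks like using MSMQ's System.Messaging API. This is an illustration only; the Alexandria sample uses Rhino Queues instead, and the machine and queue names below are made up:

// Illustration only: requires a reference to System.Messaging.
var queue = new MessageQueue(
  @"FormatName:DIRECT=OS:backendserver\private$\alexandria_backend");

// Recoverable asks MSMQ to persist the message to disk, so it survives
// a restart of the sending machine while waiting to be delivered.
queue.Send(new Message("MyBooksQuery for user 42") {
  Recoverable = true
});

// Send returns immediately; MSMQ takes ownership of the message and
// delivers it whenever the destination queue becomes reachable.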

When using queues, it’s easy to think in terms of messages and destinations. A message arriving at a back-end system will trigger some action, which may then result in a reply sent to the original sender. Note that there’s no blocking on either end, because each system is completely independent.

Queuing subsystems include MSMQ, ActiveMQ, RabbitMQ, and others. The Alexandria application uses Rhino Queues (github.com/ayende/rhino-queues), an open source, xcopy-deployed queuing subsystem. I chose Rhino Queues for the simple reason that it requires no installation or administration, making it ideal for use in samples and in applications that you need to deploy to many machines. It’s also worth noting that I wrote Rhino Queues. I hope you like it.

Putting Queues to Work

Let’s see how you can handle getting the data for the main screen using queues. Here’s the ApplicationModel initialization routine:

protected override void OnInitialize() {
  bus.Send(
    new MyBooksQuery { UserId = userId },
    new MyQueueQuery { UserId = userId },
    new MyRecommendationsQuery { UserId = userId },
    new SubscriptionDetailsQuery { UserId = userId });
}

I’m sending a batch of messages to the server, requesting several pieces of information. There are a number of things to notice here. Each message is narrowly scoped: rather than sending a single, general message such as MainWindowQuery, I send many messages (MyBooksQuery, MyQueueQuery and so on), each requesting a very specific piece of information. As discussed previously, this allows you to benefit both from sending several messages in a single batch (reducing network roundtrips) and from reduced coupling between the front end and the back end.

Note that all of the messages end with the term Query. This is a simple convention I use to denote pure query messages that change no state and expect some sort of response.
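
To make this concrete, here's a minimal sketch of what such a query/response pair might look like in Alexandria.Messages. The response shape mirrors the reply you'll see sent in Figure 4, but the [Serializable] attribute, the Guid user ID and the BookDTO type are my assumptions, not necessarily the sample project's code:

// A sketch of a query/response pair: plain, serializable DTOs with
// no behavior. Guid for UserId and the BookDTO type are assumptions.
[Serializable]
public class MyBooksQuery {
  public Guid UserId { get; set; }
}

[Serializable]
public class MyBooksResponse {
  public Guid UserId { get; set; }
  public DateTime Timestamp { get; set; }
  public BookDTO[] Books { get; set; }
}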

Finally, notice that I don’t seem to be getting any sort of reply from the server. Because I’m using queues, the mode of communication is fire and forget. I fire off a message (or a batch of messages) now, and I deal with the replies at a later stage.

RPC Is Thy Enemy

One of the most common mistakes in building a distributed application is to ignore the distribution aspect of the application. WCF, for example, makes it easy to ignore the fact that you’re making a method call over the network. While that’s a very simple programming model, it means you need to be extremely careful not to violate one of the fallacies of distributed computing.

Indeed, it’s the very fact that the programming model offered by frameworks such as WCF is so similar to the one you use for calling methods on the local machine that leads you to make those false assumptions.

A standard RPC API means blocking when making a call over the network, higher cost for each remote method call and the potential for failure if the back-end server is not available. It’s certainly possible to build a good distributed application on top of this foundation, but it takes greater care.

Taking a different approach leads you to a programming model based on explicit message exchanges (as opposed to the implicit message exchanges common in most SOAP-based RPC stacks). That model may look strange at first, and it does require you to shift your thinking a bit, but it turns out that by making this shift, you significantly reduce the amount of complexity to worry about overall.

My example Alexandria application is built on top of a one-way messaging platform, and it makes full use of this platform, so the application is aware of the fact it’s distributed and actually takes advantage of that.

Before looking at how the front end deals with the responses, let’s see how the back end processes the messages I just sent. Figure 4 shows how the back-end server consumes a query for books. And here, for the first time, you can see how I use both NHibernate and Rhino Service Bus.

Figure 4 Consuming a Query on the Back-End System

public class MyBooksQueryConsumer : 
  ConsumerOf<MyBooksQuery> {

  private readonly ISession session;
  private readonly IServiceBus bus;

  public MyBooksQueryConsumer(
    ISession session, IServiceBus bus) {

    this.session = session;
    this.bus = bus;
  }

  public void Consume(MyBooksQuery message) {
    var user = session.Get<User>(message.UserId);
    
    Console.WriteLine("{0} has {1} books at home", 
      user.Name, user.CurrentlyReading.Count);

    bus.Reply(new MyBooksResponse {
      UserId = message.UserId,
      Timestamp = DateTime.Now,
      Books = user.CurrentlyReading.ToBookDtoArray()
    });
  }
}

But before diving into the actual code that handles this message, let’s discuss the structure in which this code is running.

It’s All About Messages

Rhino Service Bus (hibernatingrhinos.com/open-source/rhino-service-bus) is, unsurprisingly, a service bus implementation. It’s a communication framework based on a one-way queued message exchange, heavily inspired by NServiceBus (nservicebus.com).

A message sent on the bus will arrive at its destination queue, where a message consumer will be invoked. That message consumer in Figure 4 is MyBooksQueryConsumer. A message consumer is a class that implements ConsumerOf<TMsg>, and the Consume method is invoked with the appropriate message instance to handle the message.

You can probably surmise from the MyBooksQueryConsumer constructor that I’m using an Inversion of Control (IoC) container to supply dependencies for the message consumer. In the case of MyBooksQueryConsumer, those dependencies are the bus itself and the NHibernate session.

The actual code for consuming the message is straightforward. You get the appropriate user from the NHibernate session and send a reply back to the message originator with the requested data.

The front end also has a message consumer. This consumer is for MyBooksResponse:

public class MyBooksResponseConsumer : 
  ConsumerOf<MyBooksResponse> {

  private readonly ApplicationModel applicationModel;

  public MyBooksResponseConsumer(
    ApplicationModel applicationModel) {
    this.applicationModel = applicationModel;
  }

  public void Consume(MyBooksResponse message) {
    applicationModel.MyBooks.UpdateFrom(message.Books);
  }
}

This simply updates the application model with the data from the message. One thing should be noted, however: the Consume method is not called on the UI thread. Instead, it’s called on a background thread. The application model is bound to the UI, so updating it must happen on the UI thread. The UpdateFrom method is aware of that and will switch to the UI thread before updating the application model.
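
I don't show UpdateFrom here, but a minimal sketch of the idea, assuming WPF's Dispatcher and an ObservableCollection-based model, might look like this (an assumption for illustration, not the sample's actual code):

// A sketch of the thread switch inside UpdateFrom.
public class BookCollectionModel : ObservableCollection<BookDTO> {
  // Captured when the model is created on the UI thread.
  private readonly Dispatcher dispatcher = Dispatcher.CurrentDispatcher;

  public void UpdateFrom(BookDTO[] books) {
    // Consume runs on a background thread; marshal the actual update
    // onto the UI thread the collection is bound to.
    dispatcher.Invoke((Action)delegate {
      Clear();
      foreach (var book in books)
        Add(book);
    });
  }
}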

The code for handling the other messages on both the back end and the front end is similar. This communication is purely asynchronous. At no point are you waiting for a reply from the back end, and you aren’t using the .NET Framework’s asynchronous API. Instead, you have an explicit message exchange, which usually happens almost instantaneously, but can also stretch over a longer time period if you’re working in a disconnected mode.

Earlier, when I sent the queries to the back end, I just told the bus to send the messages, but I didn’t say where to send them. In Figure 4, I just called Reply, again not specifying where the message should be sent. How does the bus know where to send those messages?

In the case of sending messages to the back end, the answer is: configuration. In the App.config, you’ll find the following configuration:

<messages>
  <add name="Alexandria.Messages"
    endpoint="rhino.queues://localhost:51231/alexandria_backend"/>
</messages>

This tells the bus that all messages whose namespace starts with Alexandria.Messages should be sent to the alexandria_backend endpoint.

In the handling of the messages in the back-end system, calling Reply simply means sending the message back to its originator.

This configuration specifies the ownership of a message, that is, to whom to send this message when it’s placed on the bus and where to send a subscription request so you’ll be included in the distribution list when messages of this type are published. I’m not using message publication in the Alexandria application, so I won’t cover that.

Session Management

You’ve seen how the communication mechanism works now, but there are infrastructure concerns to address before moving forward. As in any NHibernate application, you need some way of managing the session lifecycle and handling transactions properly.

The standard approach for Web applications is to create a session per request, so each request has its own session. For a messaging application, the behavior is almost identical. Instead of having a session per request, you have a session per message batch.

It turns out that this is handled almost completely by the infrastructure. Figure 5 shows the initialization code for the back-end system.

Figure 5 Initializing Messaging Sessions

public class AlexandriaBootStrapper : 
  AbstractBootStrapper {

  public AlexandriaBootStrapper() {
    NHibernateProfiler.Initialize();
  }

  protected override void ConfigureContainer() {
    var cfg = new Configuration()
      .Configure("nhibernate.config");
    var sessionFactory = cfg.BuildSessionFactory();

    container.Kernel.AddFacility(
      "factory", new FactorySupportFacility());

    container.Register(
      Component.For<ISessionFactory>()
        .Instance(sessionFactory),
      Component.For<IMessageModule>()
        .ImplementedBy<NHibernateMessageModule>(),
      Component.For<ISession>()
        .UsingFactoryMethod(() => 
          NHibernateMessageModule.CurrentSession)
        .LifeStyle.Is(LifestyleType.Transient));

    base.ConfigureContainer();
  }
}

Bootstrapping is an explicit concept in Rhino Service Bus, implemented by classes deriving from AbstractBootStrapper. The bootstrapper has the same job as the Global.asax in a typical Web application. In Figure 5, I first build the NHibernate session factory, then set up the container (Castle Windsor) to provide the NHibernate session from the NHibernateMessageModule.

A message module has the same purpose as an HTTP module in a Web application: to handle cross-cutting concerns across all requests. I use the NHibernateMessageModule to manage the session lifetime, as shown in Figure 6.

Figure 6 Managing Session Lifetime

public class NHibernateMessageModule : IMessageModule {
  private readonly ISessionFactory sessionFactory;
  [ThreadStatic]
  private static ISession currentSession;

  public static ISession CurrentSession {
    get { return currentSession; }
  }

  public NHibernateMessageModule(
    ISessionFactory sessionFactory) {

    this.sessionFactory = sessionFactory;
  }

  public void Init(ITransport transport, 
    IServiceBus serviceBus) {

    transport.MessageArrived += TransportOnMessageArrived;
    transport.MessageProcessingCompleted 
      += TransportOnMessageProcessingCompleted;
  }

  private static void 
    TransportOnMessageProcessingCompleted(
    CurrentMessageInformation currentMessageInformation, 
    Exception exception) {

    if (currentSession != null)
        currentSession.Dispose();
    currentSession = null;
  }

  private bool TransportOnMessageArrived(
    CurrentMessageInformation currentMessageInformation) {

    if (currentSession == null)
        currentSession = sessionFactory.OpenSession();
    return false;
  }
}

The code is pretty simple: register for the appropriate events, create and dispose of the session in the appropriate places and you’re done.

One interesting implication of this approach is that all messages in a batch will share the same session, which means that in many cases you can take advantage of NHibernate’s first-level cache.
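
For example, because MyBooksQuery and MyQueueQuery arrive in the same batch, their consumers receive the same ISession, and only the first Get for a given user touches the database. A sketch of the effect:

// Within one batch, both consumers share a single ISession:
var user1 = session.Get<User>(message.UserId); // issues a SELECT
var user2 = session.Get<User>(message.UserId); // same session, same id:
                                               // first-level cache hit,
                                               // no SQL issued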

Transaction Management

That’s it for session management, but what about transactions?

A best practice for NHibernate is that all interactions with the database should be handled via transactions. But I’m not using NHibernate’s transactions here. Why?

The answer is that transactions are handled by Rhino Service Bus. Instead of making each consumer manage its own transactions, Rhino Service Bus takes a different approach. It makes use of System.Transactions.TransactionScope to create a single transaction that encompasses all the consumers for messages in the batch.

That means all the actions taken in a response to a message batch (as opposed to a single message) are part of the same transaction. NHibernate will automatically enlist a session in the ambient transaction, so when you’re using Rhino Service Bus you have no need to explicitly deal with transactions.
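
Conceptually, the dispatch logic looks something like the following simplified sketch. This is not Rhino Service Bus's actual source, and DispatchToConsumers is a hypothetical helper:

// One ambient transaction per message batch (System.Transactions).
using (var tx = new TransactionScope()) {
  foreach (var message in batch)
    DispatchToConsumers(message); // hypothetical dispatch helper

  // Commit only if every consumer completed without throwing; the
  // NHibernate session enlists itself in this ambient transaction.
  tx.Complete();
}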

The combination of a single session and a single transaction makes it easy to combine multiple operations into a single transactional unit. It also means you can directly benefit from NHibernate’s first-level cache. For example, here’s the relevant code to handle MyQueueQuery:

public void Consume(MyQueueQuery message) {
  var user = session.Get<User>(message.UserId);

  Console.WriteLine("{0} has {1} books queued for reading",
    user.Name, user.Queue.Count);

  bus.Reply(new MyQueueResponse {
    UserId = message.UserId,
    Timestamp = DateTime.Now,
    Queue = user.Queue.ToBookDtoArray()
  });
}

The actual code for handling MyQueueQuery and MyBooksQuery is nearly identical. So, what’s the performance implication of this single session and transaction per batch for the following code?

bus.Send(
  new MyBooksQuery {
    UserId = userId
  },
  new MyQueueQuery {
    UserId = userId
  });

At first glance, it looks like it would take four queries to gather all the required information: for MyBooksQuery, one query to get the appropriate user and another to load the user’s books; likewise for MyQueueQuery, one query to get the user and another to load the user’s queue.

The use of a single session for the entire batch, however, shows that you’re actually using the first-level cache to avoid unnecessary queries, as you can see in the NHibernate Profiler (nhprof.com) output in Figure 7.

Figure 7 The NHibernate Profiler View of Processing Requests

Supporting Occasionally Connected Scenarios

As it stands, the application won’t throw an error if the back-end server can’t be reached, but it won’t be very useful, either.

The next step in the evolution of this application is to turn it into a real occasionally connected client by introducing a cache that allows the application to continue operating even if the back-end server is not responding. However, I won’t use the traditional caching architecture in which the application code makes explicit calls to the cache. Instead, I’ll apply the cache at the infrastructure level.

Figure 8 shows the sequence of operations when the cache is implemented as part of the messaging infrastructure and a single message is sent requesting data about a user’s books.

Figure 8 Using the Cache in Concurrent Messaging Operations

The client sends a MyBooksQuery message. The message is sent on the bus while, at the same time, the cache is queried to see whether it holds a response for this request. If it does (from a previous identical request), the cache immediately causes the cached response to be consumed, as if it had just arrived on the bus.

The response from the back-end system arrives. The message is consumed normally and is also placed in the cache. On the surface, this approach seems to be complicated, but it results in effective caching behavior and allows you to almost completely ignore caching concerns in the application code. With a persistent cache (one that survives application restarts), you can operate the application completely independently without requiring any data from the back-end server.

Now let’s implement this functionality. I assume a persistent cache (the sample code provides a simple implementation that uses binary serialization to save the values to disk) and define the following conventions:

  • A message can be cached if it’s part of a request/response message exchange.
  • Both the request and response messages carry the cache key for the message exchange.

The message exchange is defined by an ICacheableQuery interface with a single Key property and an ICacheableResponse interface with Key and Timestamp properties.
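
In code, those two interfaces look roughly like this (string keys are my assumption), together with the cache contract implied by the Get and Put calls you'll see in Figures 9 and 10:

public interface ICacheableQuery {
  string Key { get; }
}

public interface ICacheableResponse {
  string Key { get; }
  DateTime Timestamp { get; }
}

// A cache contract inferred from how Figures 9 and 10 use it; the
// sample's actual types may differ.
public interface ICache {
  CachedValue Get(string key);
  void Put(string key, DateTime timestamp, ICacheableResponse value);
}

public class CachedValue {
  public DateTime Timestamp { get; set; }
  public object Value { get; set; }
}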

To implement this convention, I write a CachingMessageModule that will run on the front end, intercepting incoming and outgoing messages. Figure 9 shows how incoming messages are handled.

Figure 9 Caching Incoming Messages

private bool TransportOnMessageArrived(
  CurrentMessageInformation currentMessageInformation) {

  var cacheableResponse =
    currentMessageInformation.Message as ICacheableResponse;
  if (cacheableResponse == null)
    return false;

  var alreadyInCache = cache.Get(cacheableResponse.Key);
  if (alreadyInCache == null ||
      alreadyInCache.Timestamp < cacheableResponse.Timestamp) {

    cache.Put(cacheableResponse.Key,
      cacheableResponse.Timestamp, cacheableResponse);
  }
  return false;
}

There isn’t much going on here—if the message is a cacheable response, I put it in the cache. But there is one thing to note: I handle the case of out-of-order messages—messages that have an earlier timestamp arriving after messages with later timestamps. This ensures that only the latest information is stored in the cache.

Handling outgoing messages and dispatching the messages from the cache is more interesting, as you can see in Figure 10.

Figure 10 Dispatching Messages

private void TransportOnMessageSent(
  CurrentMessageInformation currentMessageInformation) {

  var cacheableQueries =
    currentMessageInformation.AllMessages.OfType<ICacheableQuery>();

  var responses =
    from msg in cacheableQueries
    let response = cache.Get(msg.Key)
    where response != null
    select response.Value;

  var array = responses.ToArray();
  if (array.Length == 0)
    return;

  bus.ConsumeMessages(array);
}

I gather the cached responses from the cache and call ConsumeMessages on them. That causes the bus to invoke the usual message invocation logic, so it looks like the message has arrived again.

Note, however, that even though there’s a cached response, you still send the message. The reasoning is that you can provide a quick (cached) response to the user immediately, then update the information shown when the fresh reply arrives from the back end.

Next Steps

I have covered the basic building blocks of a smart client application: how to structure the back end and the communication model between the smart client application and the back end. The latter is important because choosing the wrong communication model can lead you straight into the fallacies of distributed computing. I also touched on batching and caching, two very important approaches to improving the performance of a smart client application.

On the back end, you’ve seen how to manage transactions and the NHibernate session, how to consume and reply to messages from the client and how everything comes together in the bootstrapper.

In this article, I focused primarily on infrastructure concerns; in the next installment I’ll cover best practices for sending data between the back end and the smart client application, and patterns for distributed change management.


Oren Eini (who works under the pseudonym Ayende Rahien) is an active member of several open source projects (NHibernate and Castle among them) and is the founder of many others (Rhino Mocks, NHibernate Query Analyzer and Rhino Commons among them). Eini is also responsible for the NHibernate Profiler (nhprof.com), a visual debugger for NHibernate. You can follow Eini’s work at ayende.com/blog.