Logging in Enterprise Library for .NET 2.0

Today I’d like to talk about a few of the key changes we are planning in the Logging & Instrumentation Application Block which will be included in Enterprise Library for .NET 2.0. A lot of this will probably make more sense once we release the preview drop later in the month – but I’m going to go through it anyway so you know what to expect in the preview and can start providing feedback early.

I’m sure the first question on everyone’s minds is how different the new version will be from the current one. A somewhat accurate but cryptic answer is that it will be very different while staying very similar. By this I mean that we are making some major changes to the internals of the block (for reasons described below), but the public API will remain largely unchanged. Probably the next question will be about the performance of the new block. However you’ll need to wait for an answer to this one – the preview is still early code, and so far we haven’t done enough testing or optimization to get a good picture. Rest assured that we’ll be spending time on this later to ensure the performance is an improvement over the current block for the core scenarios.

So here are some of the main changes we are planning for the next release. Nothing is set in stone yet, so if you have any comments about any of this, let us know!

Alignment with System.Diagnostics

The System.Diagnostics namespace isn’t new, but before .NET 2.0 it only targeted some pretty narrow scenarios. However in .NET 2.0, there are a bunch of enhancements, most notably the TraceSource (which aggregates a bunch of TraceListeners into a named instance) and the CorrelationManager (which helps keep track of nested activities).
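If you haven’t explored these yet, here’s a minimal sketch using only the raw .NET 2.0 plumbing (no Enterprise Library involved) – a TraceSource feeding a console listener, with the CorrelationManager tracking a logical activity:

using System.Diagnostics;

class DiagnosticsDemo
{
    static void Main()
    {
        // A TraceSource aggregates a named set of TraceListeners
        TraceSource source = new TraceSource("Demo", SourceLevels.All);
        source.Listeners.Add(new ConsoleTraceListener());

        // The CorrelationManager maintains a stack of nested logical activities
        Trace.CorrelationManager.StartLogicalOperation("CheckoutCart");
        source.TraceEvent(TraceEventType.Information, 1,
            "Raised inside the CheckoutCart activity");
        Trace.CorrelationManager.StopLogicalOperation();

        source.Flush();
        source.Close();
    }
}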

With all of this new stuff, you may be wondering whether we still need a Logging Block – well at least we wondered this. We determined that the block still provides a bunch of value – particularly in its ability to support strongly-typed events and formatted messages, simple configuration and good support for non-tracing scenarios such as operational events, audits and business events. Still, the new System.Diagnostics functionality provides a lot of useful plumbing pieces that we’ve been able to leverage in the block. Most significantly:

  • Our Log Sinks will be changed to derive from TraceListener, but they will still support formatting and work in much the same way as the current sinks. This is cool, as you will be able to use our Trace Listeners outside of Enterprise Library, and you can use system or third-party Trace Listeners with the block.
  • We’ll use TraceSources to aggregate the set of listeners that belong to a single category, and rely on the platform functionality to distribute messages to sinks.
  • Our Tracer class will make use of the CorrelationManager to start and stop activities. This will help align this behavior with other parts of the platform (such as support for the end-to-end tracing schema via the XmlWriterTraceListener).

These changes are pretty significant, but almost entirely under the covers. If you just use the block via the Logger and Tracer classes, you will not notice any significant changes. If you have built your own log sinks, you will need to modify them to derive from TraceListener (but most of the interesting code will remain the same).
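For example, a migrated sink might look something like the following minimal sketch. This is illustrative only – it assumes your existing output logic moves into the Write/WriteLine overrides, and the exact members the block will rely on may differ in the preview:

using System;
using System.Diagnostics;

public class MyCustomTraceListener : TraceListener
{
    // TraceListener's two abstract members; route the output to
    // whatever target your old sink wrote to (file, service, etc.)
    public override void Write(string message)
    {
        // your existing sink output logic goes here
    }

    public override void WriteLine(string message)
    {
        Write(message + Environment.NewLine);
    }
}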

Support for Multiple Categories

In the current version of the Logging block, each LogEntry can only be in one category. While this is OK for simple scenarios, many customers want to use categories across more than one dimension. For example, you may want to use categories to define the event type (audit, operational, diagnostic) as well as to describe which part of the application the event was raised in (UI, Business, Data). There is no clean way of doing this (Audit_UI, Audit_Business, …? Yuk!). And the problem is compounded by the Tracer class’s use of categories to indicate the current activities.

To provide some additional flexibility, we will allow a LogEntry to be in as many categories as you wish (yes, you can still choose only one if that’s all you need!). In the configuration file, you still specify where you want events to go on a per-category basis, and you can enable and disable categories the same way you can today.

So you could configure the block to say:

  • Audit events get logged using the Database listener
  • UI events get logged to a text file

If a single log entry is in both categories, it will go to both places. Simple!
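In code, that scenario might look something like this sketch – it assumes the preview exposes a Categories collection on LogEntry, which is our current thinking but could still change:

LogEntry entry = new LogEntry();
entry.Message = "Order 42 approved";
entry.Categories.Add("Audit");   // routed to the Database listener
entry.Categories.Add("UI");      // also routed to the text file
Logger.Write(entry);             // one entry, two destinations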

Things get a little complicated when you consider how the Category Filters will work. Consider the following potential category filters:

  1. Allow all categories, except deny any events in the Audit category.
  2. Deny all categories, except allow any events in the UI category.

Our code currently takes these statements as literally as possible. So if the first filter was set, and an event is in the UI and Audit categories, it would be denied (since it is in the Audit category). If the second filter was set, that event would be allowed, since it is in the UI category. We’d love to hear your thoughts on whether we’ve got this right.
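To make that concrete, here is some illustrative pseudologic (not the block’s actual filter code) showing how each filter would evaluate an event’s category list:

using System.Collections.Generic;

static class CategoryFilterSketch
{
    // Filter 1: allow all categories, except deny the Audit category.
    // An event in both UI and Audit is denied, because Audit vetoes it.
    public static bool AllowAllExceptAudit(ICollection<string> categories)
    {
        foreach (string category in categories)
            if (category == "Audit")
                return false;
        return true;
    }

    // Filter 2: deny all categories, except allow the UI category.
    // The same event is allowed, because UI admits it.
    public static bool DenyAllExceptUI(ICollection<string> categories)
    {
        foreach (string category in categories)
            if (category == "UI")
                return true;
        return false;
    }
}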

Finally, we want to update the Tracer class so that any messages being logged in the context of an active trace session will automatically inherit the category specified for the trace. For example:

using (new Tracer("CheckoutCart"))
{
    Logger.Write("My Message", "UI");
}

In this example, “CheckoutCart” is the category for the tracer (which the start and end messages would be sent to), and the nested Logger call sends a message to the “UI” category. However, since that call is in the context of the CheckoutCart tracer, the CheckoutCart category would be implicitly added to that same message. If you had two nested Tracer blocks (most likely in different methods, of course), both categories would be added to any enclosed Logger calls, as shown in the sketch below. This will make it very easy to send all messages related to a particular category to a single place, which should be great for debugging.
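For example, assuming the behavior described above, an entry logged inside nested tracers would pick up every active tracer’s category:

using (new Tracer("ProcessOrder"))
{
    // typically this inner scope would live in a method called from here
    using (new Tracer("CheckoutCart"))
    {
        // this entry would implicitly be in the "UI", "CheckoutCart"
        // and "ProcessOrder" categories
        Logger.Write("My Message", "UI");
    }
}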

Goodbye Distribution Strategies?

The last major change we are considering is the elimination of Distribution Strategies – at least in their current form. We still think the idea behind Distribution Strategies is pretty sound: we wanted a way of configuring the block so that log events from multiple processes can be distributed from a single, centralized place. Out of the box we supply an MSMQ distribution strategy and associated Windows service, but we’ve heard of people using other techniques such as web services or MQSeries.

The basic architecture in the current Logging block looks like this:

Client code -> Logger Client -> Distribution Strategy -> Distributor -> Sinks

When the MSMQ Distribution Strategy is chosen, the block looks like this:

Client code -> Logger Client -> MSMQ Distribution Strategy -> (MSMQ Queue) -> MSMQ Distributor Service Listener -> Distributor -> Sinks

While the idea is sound, we’ve heard people say that the complexity of the design can outweigh its benefits, and that it has a few limitations. For example, many people are confused as to why we have an MSMQ distribution strategy as well as an MSMQ sink. Also, some people have wanted to process some events in-process and others in a remote process, which isn’t possible since all events must go through the same distribution strategy.

So our idea is to simplify things by eliminating distribution strategies, and to update the existing MSMQ sink (soon to become a trace listener) so that it works with our out-of-proc distributor service. That would make the architecture look like this:

Client code -> Log Writer -> MSMQ Trace Listener -> (MSMQ Queue) -> MSMQ Distributor Service Listener -> Log Writer -> Trace Listeners

In a nutshell, we would support the same scenarios by chaining instances of the block together through trace listeners, rather than having a separate distribution strategy concept. There are just a couple of other pieces we need to make this work. As mentioned above, the distribution strategies currently process all events, regardless of which category they are in. If you want to process all (or most) messages remotely, it shouldn’t be necessary to configure every single category on the client. To support this, we are thinking of supporting two special trace sources in configuration:

  • If you configure a trace source called *, all events will be sent to it
  • If you configure a trace source called %, all events that have not been processed by other trace sources will be sent to it.

For example, you could configure the block like this on the client:

  • Category A -> Event Log
  • Category B -> Text File
  • % -> MSMQ Sink

This would process categories A and B locally, and send everything else to the MSMQ sink for remote distribution.

Or you could just do this:

  • * -> MSMQ Sink

This would be equivalent to using an MSMQ Distribution Strategy in the current versions of Enterprise Library.
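To make the semantics of the two special sources concrete, here is an illustrative routing sketch (not the block’s actual code) – the traceSources dictionary, the allEventsSource and notProcessedSource fields, and the Send helper are all hypothetical names:

using System.Collections.Generic;
using System.Diagnostics;

class RoutingSketch
{
    // hypothetical fields: per-category sources plus the two special ones
    Dictionary<string, TraceSource> traceSources = new Dictionary<string, TraceSource>();
    TraceSource allEventsSource;      // configured as "*"
    TraceSource notProcessedSource;   // configured as "%"

    // LogEntry (and its Categories collection) is the block's, as proposed above
    void Route(LogEntry entry)
    {
        bool matched = false;
        foreach (string category in entry.Categories)
        {
            TraceSource source;
            if (traceSources.TryGetValue(category, out source))
            {
                Send(source, entry);  // normal per-category routing
                matched = true;
            }
        }

        if (allEventsSource != null)                 // "*" sees every event
            Send(allEventsSource, entry);

        if (!matched && notProcessedSource != null)  // "%" sees only unmatched events
            Send(notProcessedSource, entry);
    }

    void Send(TraceSource source, LogEntry entry)
    {
        source.TraceData(TraceEventType.Information, 0, entry);
    }
}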

That’s all for now – sorry for the long post. Please let us know what you think about the proposed changes, and whether you have any other great ideas.

This posting is provided "AS IS" with no warranties, and confers no rights.