June 2014

Volume 29 Number 6

ASP.NET: Topshelf and Katana: A Unified Web and Service Architecture

Wes McClure

Using IIS to host ASP.NET Web applications has been the de facto standard for more than a decade. Building such applications is a relatively simple process, but deploying them is not. Deploying requires curated knowledge of application configuration hierarchies and nuances of the history of IIS, and the tedious provisioning of sites, applications and virtual directories. Many of the critical pieces of infrastructure often end up living outside the application in manually configured IIS components.

When applications outgrow simple Web requests and need to support long-running requests, recurring jobs and other processing work, they become very difficult to support within IIS. Often, the solution is to create a separate Windows Service to host these components. But this requires an entirely separate deployment process, doubling the effort involved. The last straw is getting the Web and service processes to communicate. What could be a very simple application quickly becomes extremely complex.

Figure 1 shows what this architecture typically looks like. The Web layer is responsible for handling fast requests and providing a UI to the system. Long-running requests are delegated to the service, which also handles recurring jobs and processing. In addition, the service provides status about current and future work to the Web layer to be included in the UI.

Figure 1 Traditional Separated Web and Service Architecture

A New Approach

Fortunately, new technologies are emerging that can make developing and deploying Web and service applications much simpler. Thanks to the Katana project (katanaproject.codeplex.com) and the specifications provided by OWIN (owin.org), it’s now possible to self-host Web applications, taking IIS out of the equation, and still support many of the ubiquitous ASP.NET components such as WebApi and SignalR. The Web self-host can be embedded in a rudimentary console application along with Topshelf (topshelf-project.com) to create a Windows service with ease. As a result, Web and service components can live side-by-side in the same process, as shown in Figure 2. This eliminates the overhead of developing extraneous communication layers, separate projects and separate deployment procedures.

Figure 2 Unified Web and Service Architecture with Katana and Topshelf

This capability isn’t entirely new. Topshelf has been around for years, helping to simplify Windows Service development, and there are many open source self-host Web frameworks, such as Nancy. However, until OWIN blossomed into the Katana project, nothing had shown as much promise of becoming the de facto alternative to hosting Web applications in IIS. Moreover, Nancy and many other open source components work with the Katana project, allowing you to curate an eclectic, supple framework.

Topshelf may seem optional, but that’s not the case. Without the ability to simplify service development, self-hosted Web development can be extremely cumbersome. Topshelf simplifies developing the service by treating it as a console application and abstracting the fact that it will be hosted as a service. When it’s time to deploy, Topshelf automatically handles installing and starting the application as a Windows Service—all without the overhead of dealing with InstallUtil; the nuances of a service project and service components; and attaching debuggers to services when something goes wrong. Topshelf also allows many parameters, such as the service name, to be specified in code or configured during installation via the command line.

To illustrate how to unify Web and service components with Katana and Topshelf, I’ll build a simple SMS messaging application. I’ll start out with an API to receive messages and queue them for sending. This will demonstrate how easy it is to handle long-running requests. Then, I’ll add an API query method to return the count of pending messages, showing it’s also easy to query service status from Web components.

Next, I’ll add an administrative interface to demonstrate self-hosted Web components still afford building rich Web interfaces. To round out the message processing, I’ll add a component to send messages as they’re queued, to showcase including service components.

And to highlight one of the best parts of this architecture, I’ll create a psake script to expose the simplicity of deployments.

To focus on the combined benefits of Katana and Topshelf, I won’t go into detail about either project. Check out “Getting Started with the Katana Project” (bit.ly/1h9XaBL) and “Create Windows Services Easily with Topshelf” (bit.ly/1h9XReh) to learn more.

A Console Application Is All You Need to Get Started

Topshelf exists to easily develop and deploy a Windows service from the starting point of a simple console application. To get started with the SMS messaging application, I create a C# console application and then install the Topshelf NuGet package from the Package Manager Console.

When the console application starts, I need to configure the Topshelf HostFactory to abstract hosting the application as a console in development and as a service in production:

private static int Main()
{
  var exitCode = HostFactory.Run(host =>
  {
  });
  return (int) exitCode;
}

The HostFactory will return an exit code, which is helpful during service installation to detect and stop on failure. The host configurator provides a service method to specify a custom type that represents the entry point to your application code. Topshelf refers to this as the service it’s hosting, because Topshelf is a framework for simplifying Windows Service creation:

host.Service<SmsApplication>(service =>
{
});

Next, I create an SmsApplication type to contain logic to spin up the self-hosted Web server and the traditional Windows service components. At a minimum, this custom type will contain behavior to execute when the application starts or stops:

public class SmsApplication
{
  public void Start()
  {
  }
  public void Stop()
  {
  }
}

Because I chose to use a plain old CLR object (POCO) for the service type, I provide a lambda expression to Topshelf to construct an instance of the SmsApplication type, and I specify the start and stop methods:

service.ConstructUsing(() => new SmsApplication());
service.WhenStarted(a => a.Start());
service.WhenStopped(a => a.Stop());

Topshelf allows many service param­eters to be configured in code, so I use SetDescription, SetDisplayName and SetServiceName to describe and name the service that will be installed in production:

host.SetDescription(
  "An application to manage sending sms messages and provide message status.");
host.SetDisplayName("Sms Messaging");
host.SetServiceName("SmsMessaging");
host.RunAsNetworkService();

Finally, I use RunAsNetworkService to instruct Topshelf to configure the service to run as the network service account. You can change this to whatever account fits your environment. For more service options, see the Topshelf configuration documentation at bit.ly/1rAfMiQ.
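Assembled from the fragments above, the complete Main method looks roughly like this (a sketch, not a verbatim listing from the project; it assumes the Topshelf NuGet package is installed and SmsApplication is defined as shown later):

```csharp
using Topshelf;

internal class Program
{
  private static int Main()
  {
    // HostFactory.Run wires up console hosting in development and
    // Windows Service hosting in production.
    var exitCode = HostFactory.Run(host =>
    {
      host.Service<SmsApplication>(service =>
      {
        service.ConstructUsing(() => new SmsApplication());
        service.WhenStarted(a => a.Start());
        service.WhenStopped(a => a.Stop());
      });

      host.SetDescription(
        "An application to manage sending sms messages and provide message status.");
      host.SetDisplayName("Sms Messaging");
      host.SetServiceName("SmsMessaging");
      host.RunAsNetworkService();
    });
    // The exit code lets service installation detect and stop on failure.
    return (int) exitCode;
  }
}
```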

Running the service as a console application is as simple as launching the executable. Figure 3 shows the output of starting and stopping the SMS executable. Because this is a console application, when you launch your application in Visual Studio, you’ll experience the same behavior.

Figure 3 Running the Service as a Console Application

Incorporating an API

With the Topshelf plumbing in place I can begin work on the application’s API. The Katana project provides components to self-host an OWIN pipeline, so I install the Microsoft.Owin.SelfHost package to incorporate the self-host components. This package references several packages, two of which are important for self-hosting. First, Microsoft.Owin.Hosting provides a set of components to host and run an OWIN pipeline. Second, Microsoft.Owin.Host.HttpListener supplies an implementation of an HTTP server.

Inside the SmsApplication, I create a self-hosted Web application using the WebApp type provided by the Hosting package:

protected IDisposable WebApplication;
public void Start()
{
  WebApplication = WebApp.Start<WebPipeline>("http://localhost:5000");
}

The WebApp.Start method requires two parameters: a generic parameter to specify a type that will configure the OWIN pipeline, and a URL to listen for requests. The Web application is a disposable resource. When the SmsApplication instance is stopped, I dispose of the Web application:

public void Stop()
{
  WebApplication.Dispose();
}

One benefit of using OWIN is that I can leverage a variety of familiar components. First, I’ll use WebApi to create the API. I need to install the Microsoft.AspNet.WebApi.Owin package to incorporate WebApi in the OWIN pipeline. Then, I’ll create the WebPipeline type to configure the OWIN pipeline and inject the WebApi middleware. Additionally, I’ll configure WebApi to use attribute routing:

public class WebPipeline
{
  public void Configuration(IAppBuilder application)
  {
    var config = new HttpConfiguration();
    config.MapHttpAttributeRoutes();
    application.UseWebApi(config);
  }
}

And now I can create an API method to receive messages and queue them to be sent:

public class MessageController : ApiController
{
  [Route("api/messages/send")]
  public void Send([FromUri] SmsMessageDetails message)
  {
    MessageQueue.Messages.Add(message);
  }
}

SmsMessageDetails contains the payload of the message. The send action adds the message to a queue to be asynchronously processed at a later time. The MessageQueue is a global BlockingCollection. In a real application this might mean you need to factor in other concerns such as durability and scalability:

public static readonly BlockingCollection<SmsMessageDetails> Messages =
  new BlockingCollection<SmsMessageDetails>();

In a separated Web and service architecture, handing off asynchronous processing of long-running requests, such as sending a message, requires communication between the Web and service processes. And adding API methods to query the status of the service means even more communication overhead. A unified approach makes sharing status information between Web and service components simple. To demonstrate this, I add a PendingCount query to the API:

[Route("api/messages/pending")]
public int PendingCount()
{
  return MessageQueue.Messages.Count;
}

Building a Rich UI

APIs are convenient, but self-hosted Web applications still need to support a visual interface. In the future, I suspect, the ASP.NET MVC framework or a derivative will be available as OWIN middleware. For now, Nancy is compatible and has a package to support the core of the Razor view engine.

I’ll install the Nancy.Owin package to add support for Nancy, and the Nancy.Viewengines.Razor package to incorporate the Razor view engine. To plug Nancy into the OWIN pipeline, I need to register it after the registration for WebApi so it doesn’t capture the routes I mapped to the API. By default, Nancy returns an error if a resource isn’t found, whereas WebApi passes requests it can’t handle back to the pipeline:

application.UseNancy();

To learn more about using Nancy with an OWIN pipeline, refer to “Hosting Nancy with OWIN” at bit.ly/1gqjIye.

To build an administrative status interface, I add a Nancy module and map a status route to render a status view, passing the pending message count as the view model:

public class StatusModule : NancyModule
{
  public StatusModule()
  {
    Get["/status"] =
      _ => View["status", MessageQueue.Messages.Count];
  }
}

The view isn’t very glamorous at this point, just a simple count of pending messages:

<h2>Status</h2>
There are <strong>@Model</strong> messages pending.

I’m going to jazz up the view a bit with a simple Bootstrap navbar, as shown in Figure 4. Using Bootstrap requires hosting static content for the Bootstrap style sheet.

Figure 4 Administrative Status Page

I could use Nancy to host static content, but the advantage of OWIN is mixing and matching middleware, so instead I’m going to use the newly released Microsoft.Owin.StaticFiles package, which is part of the Katana project. The StaticFiles package provides file-serving middleware. I’ll add it to the start of the OWIN pipeline so the Nancy static file serving doesn’t kick in:

application.UseFileServer(new FileServerOptions
{
  FileSystem = new PhysicalFileSystem("static"),
  RequestPath = new PathString("/static")
});

The FileSystem parameter tells the file server where to look for files to serve. I’m using a folder named static. The RequestPath specifies the route prefix to listen for requests for this content. In this case, I chose to mirror the name static, but these don’t have to match. I use the following link in the layout to reference the Bootstrap style sheet (naturally, I place the Bootstrap style sheet in a CSS folder inside the static folder):

<link rel="stylesheet" href="/static/css/bootstrap.min.css">
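Putting the middleware registrations together, the full pipeline configuration ends up ordered like this (a sketch assembled from the fragments above; the registration order is what matters: static files first, WebApi next, Nancy last):

```csharp
using System.Web.Http;
using Microsoft.Owin;
using Microsoft.Owin.FileSystems;
using Microsoft.Owin.StaticFiles;
using Owin;

public class WebPipeline
{
  public void Configuration(IAppBuilder application)
  {
    // 1. Serve static content (such as the Bootstrap style sheet)
    // before any other middleware sees the request.
    application.UseFileServer(new FileServerOptions
    {
      FileSystem = new PhysicalFileSystem("static"),
      RequestPath = new PathString("/static")
    });

    // 2. WebApi handles its attribute routes and passes anything
    // it can't handle back to the pipeline.
    var config = new HttpConfiguration();
    config.MapHttpAttributeRoutes();
    application.UseWebApi(config);

    // 3. Nancy goes last because it returns an error for any
    // resource it doesn't recognize.
    application.UseNancy();
  }
}
```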

A Word About Static Content and Views

Before I move on, I want to share a tip I find helpful when developing a self-hosted Web application. Normally, you’d set the static content and MVC views to be copied to the output directory so the self-hosted Web components can find them relative to the currently executing assembly. Not only is this a burden that’s easy to forget, but changing the views and static content then requires recompiling the application, which absolutely kills productivity. Therefore, I recommend not copying the static content and views to the output directory; instead, configure middleware such as Nancy and the FileServer to map to the development folders.

By default, the debug output directory of a console application is bin/Debug, so in development I tell the FileServer to look two directories above the current directory to find the static folder that contains the Bootstrap style sheet:

FileSystem = new PhysicalFileSystem(IsDevelopment() ? 
  "../../static" : "static")

Then, to tell Nancy where to look for views, I’ll create a custom NancyPathProvider:

public class NancyPathProvider : IRootPathProvider
{
  public string GetRootPath()
  {
    return WebPipeline.IsDevelopment()
      ? Path.Combine(AppDomain.CurrentDomain.BaseDirectory, 
      @"..\..\")
      : AppDomain.CurrentDomain.BaseDirectory;
  }
}

Again, I use the same check to look two directories above the base directory if I’m running in development mode in Visual Studio. I’ve left the implementation of IsDevelopment up to you; it could be a simple configuration setting or you could write code to detect when the application was launched from Visual Studio.
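As one illustration (not the article’s code), IsDevelopment might combine a debugger check with a configuration flag; the "Environment" appSettings key here is a hypothetical name you’d choose yourself:

```csharp
using System;
using System.Configuration;
using System.Diagnostics;

public static partial class WebPipeline
{
  // A possible IsDevelopment implementation: an attached debugger is a
  // strong hint the app was launched from Visual Studio, and an
  // appSettings flag (hypothetical key name) covers everything else.
  public static bool IsDevelopment()
  {
    if (Debugger.IsAttached)
      return true;

    var setting = ConfigurationManager.AppSettings["Environment"];
    return string.Equals(
      setting, "Development", StringComparison.OrdinalIgnoreCase);
  }
}
```

In production, simply omitting the setting (or setting it to anything other than Development) makes the method return false.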

To register this custom root path provider, I create a custom NancyBootstrapper and override the default RootPathProvider property to create an instance of NancyPathProvider:

public class NancyBootstrapper : DefaultNancyBootstrapper
{
  protected override IRootPathProvider RootPathProvider
  {
    get { return new NancyPathProvider(); }
  }
}

And when I add Nancy to the OWIN pipeline, I pass an instance of NancyBootstrapper in the options:

application.UseNancy(options => 
  options.Bootstrapper = new NancyBootstrapper());

Sending Messages

Receiving messages is half the work, but the application still needs a process to send them. This is a process that traditionally would live in an isolated service. In this unified solution I can simply add an SmsSender that launches when the application starts. I’ll add this to the SmsApplication Start method (in a real application, you should add the capability to stop and dispose of this resource):

public void Start()
{
  WebApplication = WebApp.Start<WebPipeline>("http://localhost:5000");
  new SmsSender().Start();
}

Inside the Start method on SmsSender, I start a long-running task to send messages:

public class SmsSender
{
  public void Start()
  {
    Task.Factory.StartNew(SendMessages, 
      TaskCreationOptions.LongRunning);
  }
}

When the WebApi send action receives a message, it adds it to a message queue that’s a blocking collection. I create the SendMessages method to block until messages arrive. This is possible thanks to the abstractions behind GetConsumingEnumerable. When a set of messages arrives, it immediately starts sending them:

private static void SendMessages()
{
  foreach (var message in MessageQueue.Messages.GetConsumingEnumerable())
  {
    Console.WriteLine("Sending: " + message.Text);
  }
}

It would be trivial to spin up multiple instances of SmsSender to expand the capacity to send messages. In a real application, you’d want to pass a CancellationToken to GetConsumingEnumerable to safely stop the enumeration. If you want to learn more about blocking collections, you’ll find good information at bit.ly/QgiCM7 and bit.ly/1m6sqlI.
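A minimal sketch of that wiring, assuming a Stop method is added to SmsSender and called from SmsApplication.Stop (this is an illustration of the cancellation pattern, not code from the article):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class SmsSender
{
  // A CancellationTokenSource lets Stop end the blocking enumeration
  // cleanly instead of abandoning the long-running task.
  private readonly CancellationTokenSource cancellation =
    new CancellationTokenSource();

  public void Start()
  {
    Task.Factory.StartNew(
      () => SendMessages(cancellation.Token),
      TaskCreationOptions.LongRunning);
  }

  public void Stop()
  {
    cancellation.Cancel();
  }

  private static void SendMessages(CancellationToken token)
  {
    try
    {
      // GetConsumingEnumerable(token) blocks until messages arrive
      // and throws when the token is canceled.
      foreach (var message in
        MessageQueue.Messages.GetConsumingEnumerable(token))
      {
        Console.WriteLine("Sending: " + message.Text);
      }
    }
    catch (OperationCanceledException)
    {
      // Expected when Stop cancels the token; the loop simply ends.
    }
  }
}
```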

Easy, Breezy Deploys

Developing a combined service and Web application is pretty simple and straightforward, thanks to Katana and Topshelf. One of the awesome benefits of this powerful combination is a ridiculously simple deployment process. I’m going to show you a simple two-step deployment using psake (github.com/psake/psake). This isn’t meant to be a robust script for actual production use; I just want to demonstrate how truly simple the process is, regardless of what tool you use.

The first step is to build the application. I create a build task that will call msbuild with the path to the solution and create a release build (the output will end up in bin/Release):

properties {
  $solution_file = "Sms.sln"
}
task build {
  exec { msbuild $solution_file /t:Clean /t:Build /p:Configuration=Release /v:q }
}

The second step is to deploy the application as a service. I create a deploy task that depends on the build task and declare a delivery directory to hold a path to the installation location. For simplicity I just deploy to a local directory. Then, I create an executable variable to point to the console application executable in the delivery directory:

task deploy -depends build {
  $delivery_directory = "C:\delivery"
  $executable = join-path $delivery_directory 'Sms.exe'

First, the deploy task will check if the delivery directory exists. If it finds a delivery directory, it will assume the service is already deployed. In this case, the deploy task will uninstall the service and remove the delivery directory:

if (test-path $delivery_directory) {
  exec { & $executable uninstall }
  rd $delivery_directory -rec -force 
}

Next, the deploy task copies the build output to the delivery directory to deploy the new code, and then copies the views and static folders into the delivery directory:

copy-item 'Sms\bin\Release' $delivery_directory -force -recurse -verbose
copy-item 'Sms\views' $delivery_directory -force -recurse -verbose
copy-item 'Sms\static' $delivery_directory -force -recurse -verbose

Finally, the deploy task will install and start the service:

exec { & $executable install start }

When you deploy the service, make sure your IsDevelopment implementation returns false or you’ll get an Access Denied exception if the file server can’t find the static folder. Also, sometimes reinstalling the service on each deploy can be problematic. Another tactic is to stop, update and then start the service if it’s already installed.

As you can see, the deployment is ridiculously simple. IIS and InstallUtil are completely removed from the equation; there’s one deployment process instead of two; and there’s no need to worry about how the Web and service layer will communicate. This deploy task can be run repeatedly as you build your unified Web and service application!

Looking Forward

The best way to determine if this combined model will work for you is to find a low-risk project and try it out. It’s so simple to develop and deploy an application with this architecture. There’s going to be an occasional learning curve (if you use Nancy for MVC, for example). But the great thing about using OWIN, even if the combined approach doesn’t work out, is that you can still host the OWIN pipeline inside IIS using the ASP.NET host (Microsoft.Owin.Host.SystemWeb). Give it a try and see what you think.


Wes McClure leverages his expertise to help clients rapidly deliver high-quality software to exponentially increase the value they create for customers. He enjoys speaking about everything related to software development, is a Pluralsight author and writes about his experiences at devblog.wesmcclure.com. Reach him at wes.mcclure@gmail.com.

Thanks to the following technical experts for reviewing this article: Howard Dierking (Microsoft), Damian Hickey, Chris Patterson (RelayHealth), Chris Ross (Microsoft) and Travis Smith
Howard Dierking is a program manager on the Windows Azure Frameworks and Tools team where his focus is on ASP.NET, NuGet and Web APIs. Previously, Dierking served as the editor in chief of MSDN Magazine, and also ran the developer certification program for Microsoft Learning. He spent 10 years prior to Microsoft as a developer and application architect with a focus on distributed systems.

Chris Ross is a software design engineer at Microsoft where he focuses on all things networking and OWIN.

Chris Patterson is an architect for RelayHealth, the connectivity business of McKesson Corporation, and is responsible for the architecture and development of applications and services that accelerate care delivery by connecting patients, providers, pharmacies, and financial institutions. Chris is a primary contributor to Topshelf and MassTransit, and has been awarded the Most Valuable Professional award by Microsoft for his technical community contributions.

Damian Hickey is a software developer with a focus on DDD\CQRS\ES based applications. He is an advocate of .NET open source software and contributes to various projects such as Nancy, NEventStore and others. He occasionally speaks when people are bothered to listen, and occasionally blogs at http://dhickey.ie. Reach him at dhickey@gmail.com / @randompunter

Travis Smith is a developer advocate for Atlassian and the Atlassian Marketplace. Travis helps to promote an open web, polyglotism, and emerging web technologies. Travis is a contributor to a number of open source projects including Topshelf and MassTransit. Travis can be found at developer events talking passionately about making awesome software or on the Internet at http://travisthetechie.com.