Chapter 5 - A Cache Advance for your Applications

Introduction

How do you make your applications perform faster? You could simply throw hardware at the problem, but with the increasing move towards green data centers, soaking up more electricity and generating more heat that you have to get rid of is not exactly a great way to showcase your environmental awareness. Of course, you should always endeavor to write efficient code and take full advantage of the capabilities of the platform and operating system, but what does that entail?

One of the ways that you may be able to make your application more efficient is to ensure you employ an appropriate level of caching for data that you reuse, and which is expensive to create. However, caching every scrap of data that you use may be counterproductive. For example, I once installed a photo screensaver that used caching to store the transformed versions of the original images and reduce processing requirements as it repeatedly cycled through the collection of photos. It probably works fine if you only have a few dozen images, but with my vast collection of high-resolution photos it very quickly soaked up three gigabytes of memory, bringing my machine (with only one gig of memory installed) to its knees.

So, before you blindly implement caching across your whole application, think about what, how, where, and when you should implement caching. Table 1 contains some pointers.

Table 1

Defining a caching strategy



What?

Data that applies to all users of the application and does not change frequently, or data that you can use to optimize reference data lookups, avoid network round-trips, and avoid unnecessary and duplicate processing. Examples are data such as product lists, constant values, and values read from configuration or a database. Where possible, cache data in a ready-to-use format. Do not cache volatile data, and do not cache sensitive data unless you encrypt it.

When?

You can cache data when the application starts if you know it will be required and it is unlikely to change. However, you should cache data that may or may not be used, or data that is relatively volatile, only when your application first accesses it.

Where?

Ideally, you should cache data as near as possible to the code that will use it, especially in a layered application that is distributed across physical tiers. For example, cache data you use for controls in your user interface in the presentation layer, cache business data in the business layer, and cache parameters for stored procedures in your data layer. If your application runs on multiple servers and the data may change as the application runs, you will usually need to use a distributed cache accessible from all servers. If you are caching data for a user interface, you can usually cache the data on the client.

How?

Caching is a crosscutting concern—you are likely to implement caching in several places, and in many of your applications. Therefore, a reusable and configurable caching mechanism that you can install in the appropriate locations is the obvious choice. The Caching Application Block is an ideal solution for non-distributed caching. It supports both an in-memory cache and, optionally, a backing store that can be either a database or isolated storage. The block provides all the functionality needed to retrieve, add, and remove cached data, and supports configurable expiration and scavenging policies.

This chapter concentrates (obviously) on the patterns & practices Caching Application Block, which is designed for use as a non-distributed cache on a client machine. It is ideal for caching data in Windows® Forms, Windows Presentation Foundation (WPF), and console-based applications. You can use it in server-based roles such as ASP.NET applications, services, business layer code, or data layer code; but only where you have a single instance of the code running.

Note

Out of the box, the Caching Application Block does not provide the features required for distributed caching across multiple servers. Other solutions you may consider for caching are the ASP.NET cache mechanism, which can be used on a single server (in-process) and on multiple servers (using a state server or a SQL Server® database), or a third party solution that uses the Caching Application Block extension points.
Also keep in mind that version 4.0 of the .NET Framework includes the System.Runtime.Caching namespace, which provides features to support in-memory caching. The current version of the Caching block is likely to be deprecated after this release, and Enterprise Library will instead make use of the caching features of the .NET Framework.

What Does the Caching Block Do?

The Caching Application Block provides high-performance and scalable caching capabilities, and is both thread safe and exception safe. It caches data in memory and can, optionally, maintain a synchronized persistent backing store, which out of the box can be either isolated storage or a database. It also provides a wide range of expiration features, including the use of multiple expiration settings for each cached item (both time-based and notification-based policies).

Even better, if the cache locations are not suitable for your requirements, or the caching mechanism doesn't do quite what you want in terms of storing or retrieving cache items, you can modify or extend it. For example, you can create your own custom expiration policies and backing store providers, and plug them in using the built-in extension points. This means that you can implement caching operations throughout your applications that you access from code using a single simple API.

On top of all that, the caches you implement are configurable at design time and run time, so that administrators can change the caching behavior as required both before and after deployment. Administrators can change the backing store that the caching mechanism uses, configure encryption of the cached contents, and change the scavenging behavior—all through configuration settings.

Flushed or Expired?

One of the main factors that can affect application performance is memory availability. While caching data can improve performance, caching too much data can (as you saw earlier) reduce performance if the cache uses too much of the available memory. To counter this, the Caching block performs scavenging on a fixed cycle in order to remove items when memory is in short supply. Items may be removed from the cache in two ways:

  • When they expire. If you specify an expiration setting, the item is removed from the cache during the next scavenging cycle if it has expired. You can specify a combination of settings based on the absolute time, sliding time, extended time format (for example, every evening at midnight), file dependency, or never. You can also specify a priority, so that lower priority items are scavenged first. The scavenging interval and the maximum number of items to scavenge on each pass are configurable.
  • When they are flushed. You can explicitly expire (mark for removal) individual items in the cache, or explicitly expire all items, using methods exposed by the Caching block. This allows you to control which items are available from the cache. The scavenging mechanism removes items that it detects have expired and are no longer valid. However, until the scavenging cycle occurs, the items remain in the cache but are marked as expired, and you cannot retrieve them.

The difference is that flushing might remove valid cache items to make space for more frequently used items, whereas expiration removes invalid and expired items. Remember that items may have been removed from the cache by the scavenging mechanism even if they haven't expired, and you should always check that the cached item exists when you try to retrieve and use it. You may choose to recreate the item and re-cache it at this point.

Which Expiration Policy?

If you have data that is relatively volatile, is updated regularly, or is valid for only a specific time or interval, you can use a time-based expiration policy to ensure that items do not remain in the cache beyond their useful valid lifetime. You can specify how long an item should remain in the cache if not accessed (effectively the timer starts at zero again each time it is accessed), or specify the absolute time that it should be removed irrespective of whether it has been accessed in the meantime.

If the data you cache depends on changes to another resource, such as a disk file, you can improve caching efficiency by using a notification-based expiration policy. The Caching block contains an expiration provider that detects changes to disk files. You can create your own custom expiration policy providers that detect, for example, WMI events, database events, or business logic operations and invalidate the cached item when they occur.

How Do I Configure the Caching Block?

Like all of the Enterprise Library application blocks, you start by configuring your application to use the block. Chapter 1, "Introduction," demonstrates the basic principles for using the configuration tool. To configure the Caching block, you add the Caching Settings section to the tool, which adds a default cache manager. The cache manager exposes the caching API and is responsible for manipulating the cached items. You can add more than one cache manager to the configuration if you want to implement multiple caches, or change the default cache manager for a custom one that you create. For example, you may decide to replace it with a custom or third party cache manager that supports distributed caching for a Web farm or application farm containing multiple servers.

Figure 1 shows the configuration for the examples in this chapter of the guide. You can see the four cache managers we use, with the section for the EncryptedCacheManager expanded to show its property settings.

Figure 1

Configuring caching in Enterprise Library


For each cache manager, you can specify the expiration poll frequency (the interval in seconds at which the block will check for expired items and remove them), the maximum number of items in the cache before scavenging will occur irrespective of the polling frequency, and the number of items to remove when scavenging the cache.

You can also specify, in the configuration properties of the Caching Application Block root node, which of the cache managers you configure should be the default. The Caching block uses this default cache manager whenever you instantiate a cache manager without specifying its name.

Persistent Caching

The cache manager caches items in memory only. If you want to persist cached items across application and system restarts, you can add a persistent backing store to your configuration. You can specify only a single backing store for each cache manager (obviously, or it would get extremely confused), and the Caching block contains providers for caching in both a database and isolated storage. You can specify a partition name for each persistent backing store, which allows you to target multiple cache storage providers at isolated storage or at the same database.

If you add a data cache store to your configuration, the configuration tool automatically adds the Data Access Application Block to the configuration. You configure a database connection in the Data Access block configuration section, and then select this connection in the properties of the data cache store provider. For details of how you configure the Data Access Application Block, see Chapter 2, "Much ADO about Data Access."

Encrypting Cached Items

You can add a provider that implements symmetric storage encryption to each persistent backing store you configure if you want to encrypt the stored items. This is a really good plan if you must store sensitive information. When you add a symmetric storage encryption provider to your configuration, the configuration tool automatically adds the Cryptography Application Block to the configuration.

You configure a symmetric cryptography provider in the Cryptography block configuration section. You can use the Windows Data Protection API (DPAPI) symmetric provider, or select from other providers such as AES, Triple DES, and Rijndael. For details of how you configure the Cryptography Application Block, see Chapter 7, "Relieving Cryptography Complexity." Then in the properties of the symmetric storage encryption provider in the Caching block section, select the provider you just configured.

Note

Note that the Caching Application Block does not encrypt data in the in-memory cache, even if you configure encryption for the associated backing store. If it is possible that a malicious user could access the application process's memory, do not store sensitive information, such as credit card numbers or passwords, in the cache.

And now, at last, you are ready to write code that uses the Caching block. You'll see the ways that you can use it demonstrated in the examples in this chapter.

Initializing the Caching Block

When you create a project that uses the Caching block, you must edit the project and code to add references to the appropriate Enterprise Library assemblies and namespaces. The examples in this chapter demonstrate caching to a database and encrypting cached data, as well as writing to the isolated storage backing store.

The assemblies you must add to your project (in addition to the assemblies listed in Chapter 1, "Introduction," that are required for all Enterprise Library projects) are:

  • Microsoft.Practices.EnterpriseLibrary.Caching.dll
  • Microsoft.Practices.EnterpriseLibrary.Caching.Cryptography.dll
  • Microsoft.Practices.EnterpriseLibrary.Caching.Database.dll
  • Microsoft.Practices.EnterpriseLibrary.Data.dll
  • Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll

Note

If you do not wish to cache items in a database, you don't need to add the Database and Data assemblies. If you do not wish to encrypt cached items, you don't need to add the two Cryptography assemblies.

To make it easier to use the objects in the Caching block, you can add references to the relevant namespaces to your project. Then you are ready to write some code.

How Do I Use the Caching Block?

You manipulate your caches through the cache manager's interface, ICacheManager. It is a relatively simple interface: there are two overloads of the Add method for adding items to the cache, plus methods to retrieve a cached item, remove a single item, flush all items, and check whether the cache contains a specified item. The single property, Count, returns the number of items currently in the cache.
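
For reference, the members just described correspond approximately to the interface sketched below. This is a simplified outline rather than the definitive declaration; check the documentation installed with Enterprise Library for the exact signatures.

// A simplified sketch of the cache manager interface described above.
// Signatures are approximate; consult the block's documentation for
// the definitive declaration.
public interface ICacheManager
{
  // The number of items currently in the cache.
  int Count { get; }

  // Add an item with normal priority and no expiration.
  void Add(string key, object value);

  // Add an item with full control over the priority, the refresh
  // callback, and one or more expiration policies.
  void Add(string key, object value, CacheItemPriority scavengingPriority,
           ICacheItemRefreshAction refreshAction,
           params ICacheItemExpiration[] expirations);

  bool Contains(string key);   // Check if an item with this key is cached.
  object GetData(string key);  // Retrieve an item; null if not available.
  void Remove(string key);     // Remove a single item.
  void Flush();                // Remove all items.
}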

About the Example Application

The code you can download for this guide contains a sample application named Caching that demonstrates the techniques described in this chapter. The sample provides a number of different examples that you can run.

Note

Before you attempt to run the example, you must create a new encryption key for the Caching block to use to encrypt the data in one of the examples that uses a symmetric encryption provider. This is because the key is tied to either the user or the machine, and so the key included in the sample files will not work on your machine. In the configuration console, navigate to the Symmetric Cryptography Providers section of the Cryptography Application Block Settings and select the RijndaelManaged provider. Click the "..." button next to the Key property to start the Cryptographic Key Wizard. Use this wizard to generate a new key, save the key file, and automatically update the contents of App.config.

The first of the examples, Cache data in memory using the null backing store, demonstrates some of the options you have when adding items to the cache.

Adding Items to and Retrieving Items from the Cache

To add an item to the cache, you can use the simple approach of specifying just the key for the item and the value to cache as parameters to the Add method. The item is cached with a never expired lifetime, and normal priority. If you want more control over the way an item is cached, you can use the other overload of the Add method, which additionally accepts a value for the priority, a reference to a callback that will execute when the cached item expires, and an array of expirations that specify when the item should expire.

Possible values for the priority, as defined in the CacheItemPriority enumeration, are None, Low, Normal, High, and NotRemovable. In addition to the NeverExpired expiration type, you can use the AbsoluteTime, SlidingTime, FileDependency, and ExtendedFormatTime expiration types. If you create an array containing more than one expiration instance, the block will expire the item when any one of these indicates that it has expired.

The example starts by obtaining a reference to an instance of a CacheManager—in this case one that has no backing store defined in its configuration (or, to be more precise, it has the NullBackingStore class defined) and so uses only the in-memory cache. It stores this reference as the interface type ICacheManager.

Next, it calls a separate routine that adds items to the cache and then displays the contents of the cache. This routine is reused in many of the examples in this chapter.

// Resolve the default CacheManager object from the container.
// The actual concrete type is determined by the configuration settings.
// In this example, the default is the InMemoryCacheManager instance.
ICacheManager defaultCache 
    = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>();

// Store some items in the cache and show the contents using a separate routine.
CacheItemsAndShowCacheContents(defaultCache);

The CacheItemsAndShowCacheContents routine uses the cache manager passed to it; in this first example, this is the in-memory only cache manager. However, the code to add items to the cache and manipulate the cache is (as you would expect) identical for all configurations of cache managers. Notice that the code defines a set of string values that it uses as the cache keys. This makes it easier for the code later on to examine the contents of the cache. This is the declaration of the cache keys array and the first part of the code in the CacheItemsAndShowCacheContents routine.

// Declare an array of string values to use as the keys of the cached items.
string[] DemoCacheKeys 
         = {"ItemOne", "ItemTwo", "ItemThree", "ItemFour", "ItemFive"};

void CacheItemsAndShowCacheContents(ICacheManager theCache)
{
  // Add some items to the cache using the key names in the DemoCacheKeys array.
  theCache.Add(DemoCacheKeys[0], "Some Text");
  theCache.Add(DemoCacheKeys[1], 
               new StringBuilder("Some text in a StringBuilder"));
  theCache.Add(DemoCacheKeys[2], 42, CacheItemPriority.High, null, 
               new NeverExpired());
  theCache.Add(DemoCacheKeys[3], new DataSet(), CacheItemPriority.Normal, 
               null, new AbsoluteTime(new DateTime(2099, 12, 31)));

  // Note that the next item will expire after three seconds
  theCache.Add(DemoCacheKeys[4], 
               new Product(10, "Exciting Thing", "Useful for everything"),
               CacheItemPriority.Low, null, 
               new SlidingTime(new TimeSpan(0, 0, 3)));

  // Display the contents of the cache.
  ShowCacheContents(theCache);
  ...

In the code shown above, you can see that the CacheItemsAndShowCacheContents routine uses the simplest overload to cache the first two items; a String value and an instance of the StringBuilder class. For the third item, the code specifies the item to cache as the Integer value 42 and indicates that it should have high priority (it will remain in the cache after lower priority items when the cache has to be minimized due to memory or other constraints). There is no callback required, and the item will never expire.

The fourth item cached by the code is a new instance of the DataSet class, with normal priority and no callback. However, the expiry of the cached item is set to an absolute date and time (which should be well after the time that you run the example).

The final item added to the cache is a new instance of a custom class defined within the application. The Product class is a simple class with just three properties: ID, Name, and Description. The class has a constructor that accepts these three values and sets the properties in the usual way. It is cached with low priority, and a sliding time expiration set to three seconds.

The final line of code above calls another routine named ShowCacheContents that displays the contents of the cache. Not shown here is code that forces execution of the main application to halt for five seconds, redisplay the contents of the cache, and repeat this process again. This is the output you see when you run this example.

The cache contains the following 5 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet
Item key 'ItemFive' (CachingExample.Product) = CachingExample.Product

Waiting for last item to expire...
Waiting... Waiting... Waiting... Waiting... Waiting...

The cache contains the following 5 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet
Item with key 'ItemFive' has been invalidated.

Waiting for the cache to be scavenged...
Waiting... Waiting... Waiting... Waiting... Waiting...

The cache contains the following 4 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet

You can see in this output that the cache initially contains the five items we added to it. However, after a few seconds, the last one expires. When the code examines the contents of the cache again, the last item (with key ItemFive) has expired but is still in the cache. However, the code detects this and shows it as invalidated. After a further five seconds, the code checks the contents of the cache again, and you can see that the invalidated item has been removed.

Note

Depending on the performance of your machine, you may need to change the value configured for the expiration poll frequency of the cache manager in order to see the invalidated item in the cache and the contents after the scavenging cycle completes.

What's In My Cache?

The example you've just seen displays the contents of the cache, indicating which items are still available in the cache, and which (if any) are in the cache but not available because they are waiting to be scavenged. So how can you tell what is actually in the cache and available for use? In the time-honored way, you might like to answer "Yes" or "No" to the following questions:

  • Can I use the Contains method to check if an item with the key I specify is available in the cache?
  • Can I query the Count property and retrieve each item using its index?
  • Can I iterate over the collection of cached items, reading each one in turn?

If you answered "Yes" to any of these, the bad news is that you are wrong. All of these are false. Why? Because the cache is managed by more than one process. The cache manager you are using is responsible for adding items to the cache and retrieving them through the public methods available to your code. However, a background process also manages the cache, checking for any items that have expired and removing (scavenging) those that are no longer valid. Cached items may be removed when memory is scarce, or in response to dependencies on other items, as well as when the expiry date and time you specified when you added an item to the cache has passed.

So, even if the Contains method returns true for a specified cache key, that item might have been invalidated and is only in the cache until the next scavenging operation. You can see this in the output for the previous example, where the two waits force the code to halt until the item has been flagged as expired, and then halt again until it is scavenged. The actual delay before scavenging takes place is determined by the expiration poll frequency configuration setting of the cache manager. In the previous example, this is 10 seconds.

The correct approach to extracting cached items is to simply call the GetData method and check that it did not return null. However, you can use the Contains method to see if an item was previously cached and will (in most cases) still be available in the cache. This is efficient, but you must still (and always) check that the returned item is not null after you attempt to retrieve it from the cache.

The code used in the examples to read the cached items depends on the fact that we use an array of cache keys throughout the examples, and we can therefore check if any of these items are in the cache. The code we use is shown here.

void ShowCacheContents(ICacheManager theCache)
{
  if (theCache.Count > 0)
  {
    Console.WriteLine("Cache contains the following {0} item(s):",
                       theCache.Count);
    // Cannot iterate the cache, so use the five known keys 
    foreach (string key in DemoCacheKeys)
    {
      if (theCache.Contains(key))
      {
        // Try and get the item from the cache
        object theData = theCache.GetData(key);

        // If item has expired but not yet been scavenged, it will still show 
        // in the count of the number of cached items, but the GetData method
        // will return null.
        if (null != theData)
          Console.WriteLine("Item key '{0}' ({1}) = {2}", key,
                            theData.GetType().ToString(), theData.ToString());
        else
          Console.WriteLine("Item with key '{0}' has been invalidated.", key);
      }
    }
  }
  else
  {
    Console.WriteLine("The cache is empty.");
  }
}

Using the Isolated Storage Backing Store

The previous example showed how you can use the Caching Block as a powerful in-memory caching mechanism. However, often you will want to store the items in the cache in some type of persistent backing store. The Caching block contains a provider that uses Windows Isolated Storage on the local machine. This stores data in a separate area for each user, which means that different users will be able to see and retrieve only their own cached data.

One point to note is that objects to be cached in any of the physical backing stores must be serializable. The only case where this does not apply is when you use the in-memory only (null backing store) approach. The Product class used in these examples contains only simple serializable types (an integer and strings) as its properties, and carries the Serializable attribute. For more information about serialization, see "Object Serialization in the .NET Framework" at https://msdn.microsoft.com/en-us/library/ms973893.aspx.
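
For reference, a sketch of the Product class as it is described in this chapter is shown below. The downloadable sample contains the actual definition, which may differ in detail, but any class you cache in a physical backing store must carry the Serializable attribute.

// A sketch of the Product class used in the examples, based on the
// description in the text; the class in the downloadable sample may
// differ in detail. The Serializable attribute is required so that
// instances can be persisted to a physical backing store.
[Serializable]
public class Product
{
  public int ID { get; private set; }
  public string Name { get; private set; }
  public string Description { get; private set; }

  public Product(int id, string name, string description)
  {
    ID = id;
    Name = name;
    Description = description;
  }
}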

To use isolated storage as your backing store, you simply add the isolated storage backing store provider to your cache manager using the configuration tools, as shown in Figure 2.

Figure 2

Adding the isolated storage backing store


Notice that you can specify a partition name for your cache. This allows you to separate the cached data for different applications (or different cache managers) for the same user by effectively segregating each one in a different partition within that user’s isolated storage area.

Other than the configuration of the cache manager to use the isolated storage backing store, the code you use to cache and retrieve data is identical. The example, Cache data locally in the isolated storage backing store, uses a cache manager named IsoStorageCacheManager that is configured with an isolated storage backing store. It retrieves a reference to this cache manager by specifying the name when calling the GetInstance method of the current Enterprise Library container.

// Resolve a named CacheManager object from the container.
// In this example, this one uses the Isolated Storage Backing Store.
ICacheManager isoStorageCache 
    = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
                                                     "IsoStorageCacheManager");
...
CacheItemsAndShowCacheContents(isoStorageCache);

The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first example, and passes to it the reference to the isoStorageCache cache manager. The result you see when you run this example is the same as you saw in the first example in this chapter.

Note

If you find that you get an error when you re-run this example, it may be because the backing store provider cannot correctly access your local isolated storage store. In most cases, you can resolve this by deleting the previously cached contents. Open the folder Users\<your-user-name>\AppData\Local\IsolatedStorage, and expand each of the subfolders until you find the Files\CachingExample subfolder. Then delete this entire folder tree. You should avoid deleting all of the folders in your IsolatedStorage folder as these may contain data used by other applications.

Encrypting the Cached Data

By default, the Caching block does not encrypt the data that it stores in memory or in a persistent backing store. However, you can configure the block to use an encryption provider that will encrypt the data that the cache manager stores in the backing store—but be aware that data in the in-memory cache is never encrypted.

To use encryption, you simply add an encryption provider to the configuration of the backing store. When you first add an encryption provider, the configuration tool automatically adds the Cryptography block to your configuration. Therefore, you must ensure that the relevant assembly, Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll, is referenced in your project.

After you add the encryption provider to the configuration of the backing store, configure the Cryptography section by adding a new symmetric provider, and use the Key wizard to generate a new encryption key file or import an existing key. Then, back in the configuration for the Caching block, select the new symmetric provider you added for the symmetric encryption property of the backing store. For more information about configuring the Cryptography block, see Chapter 7, "Relieving Cryptography Complexity."

The examples provided for this chapter include one named Encrypt cached data in a backing store, which demonstrates how you can encrypt the persisted data. It instantiates the cache manager defined in the configuration of the application with the name EncryptedCacheManager:

// Resolve a CacheManager instance that encrypts the cached data.
ICacheManager encryptedCache 
    = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
                                                     "EncryptedCacheManager");
...
CacheItemsAndShowCacheContents(encryptedCache);

The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first example, and passes to it the reference to the encryptedCache cache manager. And, again, the result you see when you run this example is the same as you saw in the first example in this chapter.

Note

If you find that you get an error when you run this example, it is likely to be that you have not created a suitable encryption key that the Cryptography block can use, or the absolute path to the key file in the App.config file is not correct. To resolve this, open the configuration console, navigate to the Symmetric Providers section of the Cryptography Application Block Settings, and select the RijndaelManaged provider. Click the "..." button in the Key property to start the Cryptographic Key Wizard. Use this wizard to generate a new key, save the key file, and automatically update the contents of App.config.

Using the Database Backing Store

You can easily and quickly configure the Caching block to use a database as your persistent backing store for cached data if you wish. Enterprise Library contains a script and a command file that you can run to create the database (located in the \Blocks\Caching\Src\Database\Scripts folder of the Enterprise Library source code). We also include these scripts with the example for this chapter.

The scripts assume that you will use the locally installed SQL Server Express database, but you can edit the CreateCachingDb.cmd file to change the target to a different database server. The SQL script that the command file executes creates a database named Caching, and adds the required tables and stored procedures to it.

However, if you only want to run the example application we provide for this chapter, you do not need to create a database. The project contains a preconfigured database file (located in the bin\Debug folder) that is auto-attached to your local SQL Server Express instance. You can connect to this database using the Microsoft® Visual Studio® Server Explorer to see the contents, as shown in Figure 3.

Figure 3

Viewing the contents of the cache in the database table


To configure caching to a database, you simply add the database cache storage provider to the cache manager using the configuration console, and specify the connection string and ADO.NET data provider type (the default is System.Data.SqlClient, though you can change this if you are using a different database system).

You can also specify a partition name for your cache, in the same way as you can for the isolated storage backing store provider. This allows you to separate the cached data for different applications (or different cache managers) for the same user by effectively segregating each one in a different partition within the database table.

Other than the configuration of the cache manager to use the database backing store, the code you use to cache and retrieve data is identical. The example, Cache data in a database backing store, uses a cache manager named DatabaseCacheManager that is configured with a data cache storage backing store. As with the earlier example, the code retrieves a reference to this cache manager by specifying the name when calling the GetInstance method of the current Enterprise Library container.

// Resolve a CacheManager instance that uses a Database Backing Store.
ICacheManager databaseCache 
    = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
                                                     "DatabaseCacheManager");
...
CacheItemsAndShowCacheContents(databaseCache);

The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first example, and passes to it the reference to the databaseCache cache manager. As you will be expecting by now, the result you see when you run this example is the same as you saw in the first example in this chapter.

Note

The connection string for the database we provide with this example is:
Data Source=.\SQLEXPRESS; AttachDbFilename=|DataDirectory|\Caching.mdf; Integrated Security=True; User Instance=True
If you have configured a different database using the scripts provided with the example, you may find that you get an error when you run this example. It is likely to be that you have an invalid connection string in your App.config file for your database. In addition, use the Services applet in your Administrative Tools folder to check that the SQL Server (SQLEXPRESS) database service (the service is named MSSQL$SQLEXPRESS) is running.

Removing Items From and Flushing the Cache

Having seen how you can add items to your cache, and use a variety of backing store options and encryption, it's time now to see how you can manipulate the cache to remove items, or clear it completely by flushing it. Items are removed from the cache automatically based on their expiration or dependencies, but you can also remove individual items or remove all items.

The example, Remove and flush cached items, actually demonstrates more than just removing and flushing items—it shows how you can use a dependency to remove related items from your cache, how to create extended time expirations, and how to use an array of expirations. There is quite a lot of code in this example, so we'll step through it and explain each part in turn.

Using a File Dependency and Extended Time Expiration

The example starts by creating a NeverExpired expiration instance, followed by writing a text file to the current execution folder. It then creates a FileDependency on that file. This is a typical scenario where you read data from a file, such as a text file or an XML document, which you will access frequently in your code. However, if the original file is changed or deleted, you want the equivalent cached item to be removed from the cache.

// Create an expiration that never expires
NeverExpired never = new NeverExpired();

// Create a text file to use in a FileDependency
File.AppendAllText("ATextFile.txt", "Some contents for the file");

// Create an expiration dependency on the new text file
FileDependency fileDep = new FileDependency("ATextFile.txt");

Next, the code creates an instance of the ExtendedFormatTime class. This class allows you to specify expiration times for the cached item based on a repeating schedule. It provides additional opportunities compared to the more common SlidingTime and AbsoluteTime expiration types you have seen so far.

The constructor of the ExtendedFormatTime class accepts a string value that it parses into individual values for the minute, hour, day, month, and weekday (where zero is Sunday) that together specify the frequency with which the cached item will expire. Each value is delimited by a space. An asterisk indicates that there is no value for that part of the format string, and effectively means that expiration will occur for every occurrence of that item. It all sounds very complicated, so some examples will no doubt be useful (see Table 2).

Table 2

Expiration

Extended Format String    Meaning
* * * * *                 Expires every minute.
5 * * * *                 Expires at the 5th minute of every hour.
* 21 * * *                Expires every minute of the 21st hour of every day.
31 15 * * *               Expires at 3:31 PM every day.
7 4 * * 6                 Expires at 4:07 AM every Saturday.
15 21 4 7 *               Expires at 9:15 PM on every 4th of July.

The example generates an ExtendedFormatTime that expires at 30 minutes past every hour. Then it creates an array of type ICacheItemExpiration that contains the FileDependency created earlier and the new ExtendedFormatTime instance.

// Create an extended expiration for 30 minutes past every hour
ExtendedFormatTime extTime = new ExtendedFormatTime("30 * * * *");

// Create array of expirations containing the file dependency and extended format
ICacheItemExpiration[] expirations 
    = new ICacheItemExpiration[] { fileDep, extTime };

Adding the Items to the Cache

Now (at last) the code can add some items to the cache. It adds four items: the first uses the NeverExpired expiration, the second uses the array that contains the file dependency and extended format time expiration, and the other two just use the simple approach to caching items that you saw in the first example of this chapter. The code then displays the contents of the cache and waits for you to press a key.

// Add items to the cache using the key string names in the DemoCacheKeys array.
defaultCache.Add(DemoCacheKeys[0], "A cached item that never expires",
                 CacheItemPriority.NotRemovable, null, never);
defaultCache.Add(DemoCacheKeys[1], "A cached item that depends on both "
                 + "a disk file and an hourly extended time expiration.",
                 CacheItemPriority.Normal, null, expirations);
defaultCache.Add(DemoCacheKeys[2], "Another cached item");
defaultCache.Add(DemoCacheKeys[3], "And yet another cached item.");

ShowCacheContents(defaultCache);
Console.Write("Press any key to delete the text file...");
Console.ReadKey(true);

The following is the output you see at this point in the execution.

Created a 'never expired' dependency.
Created a text file named ATextFile.txt to use as a dependency.
Created an expiration for 30 minutes past every hour.

Cache contains the following 4 item(s):
Item key 'ItemOne' (System.String) = A cached item that never expires
Item key 'ItemTwo' (System.String) = A cached item that depends on both a disk file and an hourly extended time expiration.
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.

When you press a key, the code continues by deleting the text file, and then re-displaying the contents of the cache. Then, as in earlier examples, it waits for the items to be scavenged from the cache. The output you see is shown here.

Cache contains the following 4 item(s):
Item key 'ItemOne' (System.String) = A cached item that never expires
Item with key 'ItemTwo' has been invalidated.
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.

Waiting for the dependent item to be scavenged from the cache...
Waiting... Waiting... Waiting... Waiting...

Cache contains the following 3 item(s):
Item key 'ItemOne' (System.String) = A cached item that never expires
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.

You can see that deleting the text file caused the item with key ItemTwo that depended on it to be invalidated and removed during the next scavenging cycle.

At this point, the code is again waiting for you to press a key. When you do, it continues by calling the Remove method of the cache manager to remove the item having the key ItemOne, and displays the cache contents again. Then, after you press a key for the third time, it calls the Flush method of the cache manager to remove all the items from the cache, and again calls the method that displays the contents of the cache. This is the code for this part of the example.

Console.Write("Press any key to remove {0} from the cache...", DemoCacheKeys[0]);
Console.ReadKey(true);
defaultCache.Remove(DemoCacheKeys[0]);
ShowCacheContents(defaultCache);

Console.Write("Press any key to flush the cache...");
Console.ReadKey(true);
defaultCache.Flush();
ShowCacheContents(defaultCache);

The result you see as this code executes is shown here.

Press any key to remove ItemOne from the cache...
Cache contains the following 2 item(s):
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.

Press any key to flush the cache...
The cache is empty.

Refreshing the Cache

So far, when we used the Add method to add items to the cache, we passed a null value for the refreshAction parameter. You can use this parameter to detect when an item is removed from the cache, and discover the value of that item and the reason it was removed.

You must create a class that implements the ICacheItemRefreshAction interface, and contains a method named Refresh that accepts as parameters the key of the item being removed, the value as an Object type, and a value from the CacheItemRemovedReason enumeration. The values from this enumeration are Expired, Removed (typically by your code or a dependency), Scavenged (typically in response to shortage of available memory), and Unknown (a reserved value you should avoid using).

Therefore, inside your Refresh method, you can query the parameter values passed to it to obtain the key and the final cached value of the item, and see why it was removed from the cache. At this point, you can make a decision on what to do about it. In some cases, it may make sense to insert the item into the cache again (such as when a file on which the item depends has changed, or if the data is vital to your application). Of course, you should generally only do this if it expired or was removed. If items are being scavenged because your machine is short of memory, you should think carefully about what you want to put back into the cache!

The example, Detect and refresh expired or removed cache items, illustrates how you can capture items being removed from the cache, and re-cache them when appropriate. The example uses the following implementation of the ICacheItemRefreshAction interface to handle the case when the cache contains instances of the Product type. For a general situation where you cache different types, you would probably want to check the type before attempting to cast it to the required target type. Also notice that the class carries the Serializable attribute. All classes that implement the ICacheItemRefreshAction interface must be marked as serializable.

[Serializable]
public class MyCacheRefreshAction : ICacheItemRefreshAction
{
  public void Refresh(string key, object expiredValue, 
                      CacheItemRemovedReason removalReason)
  {
    // Item has been removed from cache. Perform desired actions here, based on
    // the removal reason (for example, refresh the cache with the item).
    Product expiredItem = (Product)expiredValue;
    Console.WriteLine("Cached item {0} was expired in the cache with "
                      + "the reason '{1}'", key, removalReason);
    Console.WriteLine("Item values were: ID = {0}, Name = '{1}', "
                      + "Description = {2}", expiredItem.ID, 
                      expiredItem.Name, expiredItem.Description);

    // Refresh the cache if it expired, but not if it was explicitly removed
    if (removalReason == CacheItemRemovedReason.Expired)
    {
      CacheManager defaultCache = EnterpriseLibraryContainer.Current.GetInstance
                                  <CacheManager>("InMemoryCacheManager");
      defaultCache.Add(key, new Product(10, "Exciting Thing", 
                       "Useful for everything"), CacheItemPriority.Low, 
                       new MyCacheRefreshAction(), 
                       new SlidingTime(new TimeSpan(0, 0, 10)));
      Console.WriteLine("Refreshed the item by adding it to the cache again.");
    }
  }
}

To use the implementation of the ICacheItemRefreshAction interface, you simply specify it as the refreshAction parameter of the Add method when you add an item to the cache. The example uses the following code to cache an instance of the Product class that will expire after three seconds.

defaultCache.Add(DemoCacheKeys[0], new Product(10, "Exciting Thing", 
                 "Useful for everything"), 
                 CacheItemPriority.Low, new MyCacheRefreshAction(), 
                 new SlidingTime(new TimeSpan(0, 0, 3)));

The code then does the same as the earlier examples: it displays the contents of the cache, waits five seconds for the item to expire, displays the contents again, waits five more seconds until the item is scavenged, and then displays the contents for the third time. However, this time the Caching block executes the Refresh method of our ICacheItemRefreshAction callback as soon as the item is removed from the cache. This callback displays a message indicating that the cached item was removed because it had expired, and that it has been added back into the cache. You can see it in the final listing of the cache contents shown here.

The cache contains the following 1 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product

Waiting... Waiting... Waiting... Waiting... Waiting...

The cache contains the following 1 item(s):
Item with key 'ItemOne' has been invalidated.

Cached item ItemOne was expired in the cache with the reason 'Expired'
Item values were: ID = 10, Name = 'Exciting Thing', Description = Useful for everything
Refreshed the item by adding it to the cache again.

Waiting... Waiting... Waiting...

The cache contains the following 1 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product

Loading the Cache

If you have configured a persistent backing store for a cache manager, the Caching block will automatically load the in-memory cache from the backing store when you instantiate that cache manager. Usually, this will occur when the application starts up. This is an example of proactive cache loading. Proactive cache loading is useful if you know that the data will be required, and it is unlikely to change much. Another approach is to create a class with a method that reads data you require from some data source, such as a database or an XML file, and loads this into the cache by calling the Add method for each item. If you execute this on a background or worker thread, you can load the cache without affecting the interactivity of the application or blocking the user interface.
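
As a minimal sketch of this approach, the following code queues the cache-loading work on a thread-pool thread so that application startup (or the UI thread) is not blocked. It assumes a load routine such as the LoadCacheWithProductList method used in the examples later in this chapter.

// Resolve the default cache manager from the container.
ICacheManager defaultCache
    = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>();

// Queue the cache-loading work on a thread-pool thread so that the
// startup path is not blocked while the data is fetched and cached.
System.Threading.ThreadPool.QueueUserWorkItem(state =>
{
  // This routine fetches the data and calls Add for each item; see
  // the proactive loading example that follows.
  LoadCacheWithProductList(defaultCache);
});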

Alternatively, you may prefer to use reactive cache loading. This approach is useful for data that may or may not be used, or data that is relatively volatile. In this case (if you are using a persistent backing store), you may choose to instantiate the cache manager only when you need to load the data. Alternatively, you can flush the cache (probably when your application ends) and then load specific items into it as required and when required. For example, you might find that you need to retrieve the details of a specific product from your corporate data store for display in your application. At this point, you could choose to cache it if it may be used again within a reasonable period and is unlikely to change during that period.

Proactive Cache Loading

The example, Load the cache proactively on application startup, provides a simple demonstration of proactive cache loading. In the startup code of your application you add code to load the cache with the items your application will require. The example creates a list of Product items, and then iterates through the list calling the Add method of the cache manager for each one. You would, of course, fetch the items to cache from the location (such as a database) appropriate for your own application. It may be that the items are available as a list, or—for example—by iterating through the rows in a DataSet or a DataReader.

// Create a list of products - may come from a database or other repository
List<Product> products = new List<Product>();
products.Add(new Product(42, "Exciting Thing", 
                         "Something that will change your view of life."));
products.Add(new Product(79, "Useful Thing", 
                         "Something that is useful for everything."));
products.Add(new Product(412, "Fun Thing", 
                         "Something that will keep the grandchildren quiet."));

// Iterate the list loading each one into the cache
for (int i = 0; i < products.Count; i++)
{
  theCache.Add(DemoCacheKeys[i], products[i]);
}

Reactive Cache Loading

Reactive cache loading simply means that you check if an item is in the cache when you actually need it, and—if not—fetch it and then cache it for future use. You may decide at this point to fetch several items if the one you want is not in the cache. For example, you may decide to load the complete product list the first time that a price lookup determines that the products are not in the cache.

The example, Load the cache reactively on demand, demonstrates the general pattern for reactive cache loading. After displaying the contents of the cache (to show that it is, in fact, empty) the code attempts to retrieve a cached instance of the Product class. Notice that this is a two-step process in that you must check that the returned value is not null. As we explained in the section "What's In My Cache?" earlier in this chapter, the Contains method may return true if the item has recently expired or been removed.

If the item is in the cache, the code displays the values of its properties. If it is not in the cache, the code executes a routine to load the cache with all of the products. This routine is the same as you saw in the previous example of loading the cache proactively.

Console.WriteLine("Getting an item from the cache...");
Product theItem = (Product)defaultCache.GetData(DemoCacheKeys[1]);

// You could test for the item in the cache using CacheManager.Contains(key) 
// method, but you still must check if the retrieved item is null even
// if the Contains method indicates that the item is in the cache:
if (null != theItem)
{
  Console.WriteLine("Cached item values are: ID = {0}, Name = '{1}', "
                    + "Description = {2}", theItem.ID, theItem.Name,
                    theItem.Description);
}
else
{
  Console.WriteLine("The item could not be obtained from the cache.");

  // Item not found, so reactively load the cache
  LoadCacheWithProductList(defaultCache);
  Console.WriteLine("Loaded the cache with the list of products.");
  ShowCacheContents(defaultCache);
}

After displaying the contents of the cache after loading the list of products, the example code then continues by attempting once again to retrieve the value and display its properties. You can see the entire output from this example here.

The cache is empty.

Getting an item from the cache...
The item could not be obtained from the cache.
Loaded the cache with the list of products.

The cache contains the following 3 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product
Item key 'ItemTwo' (CachingExample.Product) = CachingExample.Product
Item key 'ItemThree' (CachingExample.Product) = CachingExample.Product

Getting an item from the cache...
Cached item values are: ID = 79, Name = 'Useful Thing', Description = Something that is useful for everything.

In general, the pattern for a function that performs reactive cache loading is as follows (a code sketch appears after the list):

  1. Check if the item is in the cache and the value returned is not null.
  2. If it is found in the cache, return it to the calling code.
  3. If it is not found in the cache, create or obtain the object or value and cache it.
  4. Return this new value or object to the calling code.
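
A minimal sketch of this pattern is shown below. The GetProduct and CreateProductFromDataSource names are illustrative only; they are not part of the sample application.

// A minimal sketch of the reactive cache loading pattern. GetProduct
// and CreateProductFromDataSource are illustrative names rather than
// members of the sample application.
Product GetProduct(ICacheManager cache, string key)
{
  // Steps 1 and 2: check the cache; a non-null value can be returned.
  Product product = (Product)cache.GetData(key);
  if (null != product)
  {
    return product;
  }

  // Step 3: not found (or expired), so create or obtain the value
  // and cache it for next time.
  product = CreateProductFromDataSource(key);
  cache.Add(key, product);

  // Step 4: return the new value to the calling code.
  return product;
}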

Extending Your Cache Advance

The Caching block, like all the other blocks in Enterprise Library, contains extension points that allow you to create custom providers and integrate them with the block. You can also replace the default cache manager if you want to use a different caching mechanism, or modify the source code to otherwise change the behavior of the block.

The cache manager is responsible for loading items from a persistent backing store into memory when you instantiate the application block. It also exposes the methods that manipulate the cache. If you want to change the way that the Caching block loads or manages cached items, for example to implement a distributed or specialist caching mechanism, or perform asynchronous or delayed cache loading, you can use the ICacheManager interface and implement the methods and properties it defines.

Alternatively, if you just want to use a different backing store or add a new expiration policy, you can create custom backing store providers and expiration policies and use these instead of the built-in providers and policies. To create a custom backing store provider, you can implement the IBackingStore interface or inherit from the BaseBackingStore abstract class. To create a custom expiration policy, you can implement the ICacheItemExpiration interface and, optionally, the ICacheItemRefreshAction interface for a class that refreshes an expired cache item.
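
As a rough illustration, the sketch below shows a custom expiration policy that expires an item at the next occurrence of a fixed time of day. It assumes that the ICacheItemExpiration interface defines the HasExpired, Notify, and Initialize members; verify this against the interface definition in your version of the block before relying on the sketch.

// A rough sketch of a custom expiration policy. The member names
// assume the ICacheItemExpiration interface as described above;
// verify them against the block's source or documentation.
// The Serializable attribute is required if the policy will be
// persisted to a backing store along with the cached item.
[Serializable]
public class DailyCutoffExpiration : ICacheItemExpiration
{
  private readonly DateTime expiryTime;

  public DailyCutoffExpiration(TimeSpan cutoffTimeOfDay)
  {
    // Calculate the next occurrence of the cut-off time after now.
    DateTime candidate = DateTime.Now.Date + cutoffTimeOfDay;
    expiryTime = (candidate > DateTime.Now)
                     ? candidate : candidate.AddDays(1);
  }

  public bool HasExpired()
  {
    // The item expires once the calculated cut-off time has passed.
    return DateTime.Now >= expiryTime;
  }

  public void Notify()
  {
    // Called when the item is removed from the cache; nothing to do here.
  }

  public void Initialize(CacheItem owningCacheItem)
  {
    // Called when the item is added to the cache; nothing extra needed.
  }
}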

For more information about extending the Caching block, see the online documentation and the help files installed with Enterprise Library.

Summary

This chapter looked at the ways that you can implement caching across your application and your enterprise in a consistent and configurable way by using the Caching Application Block. The block provides a non-distributed cache that can cache items in memory, and optionally in a persistent backing store such as isolated storage or a database. You can also easily add new backing stores if required, and even replace the cache manager if you want to create a mechanism that does support other features, such as distributed caching.

The Caching block is flexible in order to meet most requirements for most types of applications. You can define multiple caches and partition each one, which is useful if you want to use a single database for multiple caches. And you can easily add encryption to the caching mechanism for items stored in a persistent backing store.

The block also provides a wide range of expiration mechanisms, including several time-based expirations as well as file-based expiration. Unlike some caching mechanisms, you can specify multiple expirations for each cached item, and even create your own custom expiration policies.

On top of all of this flexibility, the block makes it easy for administrators and operators to change the behavior through configuration using the configuration tools provided with Enterprise Library. They can change the settings for the cache, such as the polling frequency, change the backing stores that the block uses, and change the algorithms that it uses to encrypt cached data.

This chapter discussed all of these features, and contained detailed examples of how you can use the block in your own applications. For more information about the Caching block, see the online documentation and the help files installed with Enterprise Library.
