Wicked Code

Power Programming Tips for ASP.NET 2.0

Jeff Prosise

Code download available at: WickedCode0506.exe (118 KB)

Contents

Storing Site Maps in SQL Server
Forwarding and Capturing Trace Output
Fetching Images from Databases
Building Asynchronous Pages

In the February 2005 issue, I introduced five lesser-known features of ASP.NET 2.0 that have the potential to make a significant impact on the security, performance, and robustness of your code (see Wicked Code: Five Undiscovered Features on ASP.NET 2.0). This month I'd like to continue down that path by offering still more power programming tips for ASP.NET 2.0.

The code samples presented in this column were written and tested using the Beta 1 Refresh version of Visual Studio 2005 and the .NET Framework 2.0. They could require changes when the final versions of the products ship.

Storing Site Maps in SQL Server

One of the major new features in ASP.NET 2.0 is data-driven site navigation. You start by providing an XML site map detailing your site's structure. Then you add a SiteMapDataSource control to the page and bind it to a TreeView or Menu control to produce a navigation UI. The SiteMapDataSource control reads the XML site map and provides the data to the TreeView or Menu as well as to any SiteMapPath controls also present in the page. If the structure of your site changes, you update the site map and the navigation UI updates automatically.

One shortcoming of the site navigation bits is that they include only one built-in site map provider. That provider, currently named XmlSiteMapProvider, requires that site maps be stored in XML files conforming to a particular schema. Developers are already asking for a SQL site map provider so they can store site maps in databases alongside other content on their site.

Fortunately, site map providers are not incredibly difficult to write. Figure 1 contains the source code for a custom site map provider named SqlSiteMapProvider that retrieves site map information from a SQL Server™ database. It supports hierarchical site maps of unlimited depth. It also supports security trimming—a feature of XmlSiteMapProvider that enables site map nodes to be hidden from users who lack permission to view them. SqlSiteMapProvider looks and behaves like XmlSiteMapProvider in almost every way, but because it uses SQL Server as the site map data store, it adds a useful new capability to ASP.NET 2.0.

Figure 1 SqlSiteMapProvider.cs

public class SqlSiteMapProvider : StaticSiteMapProvider
{
    static readonly string _errmsg1 = "Missing connectionStringName attribute";
    static readonly string _errmsg2 = "Duplicate node ID";

    SiteMapNode _root = null;
    string _connect;

    public override void Initialize(string name, NameValueCollection attributes)
    {
        base.Initialize(name, attributes);

        if (attributes == null)
            throw new ConfigurationException(_errmsg1);

        _connect = attributes["connectionStringName"];
        if (String.IsNullOrEmpty(_connect))
            throw new ConfigurationException(_errmsg1);
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public override SiteMapNode BuildSiteMap()
    {
        // Return immediately if this method has been called before
        if (_root != null)
            return _root;

        // Create a dictionary for temporary node storage and lookup
        Dictionary<int, SiteMapNode> nodes =
            new Dictionary<int, SiteMapNode>(16);

        // Query the database for site map nodes
        SqlConnection connection = new SqlConnection(
            ConfigurationSettings.ConnectionStrings[_connect].ConnectionString);

        try
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "SELECT ID, Title, Description, Url, " +
                "Roles, Parent FROM SiteMap ORDER BY ID", connection);
            SqlDataReader reader = command.ExecuteReader();

            int id = reader.GetOrdinal("ID");
            int url = reader.GetOrdinal("Url");
            int title = reader.GetOrdinal("Title");
            int desc = reader.GetOrdinal("Description");
            int roles = reader.GetOrdinal("Roles");
            int parent = reader.GetOrdinal("Parent");

            if (reader.Read())
            {
                // Create the root SiteMapNode
                _root = new SiteMapNode(this,
                    reader.GetInt32(id).ToString(),
                    reader.IsDBNull(url) ? null : reader.GetString(url),
                    reader.GetString(title),
                    reader.IsDBNull(desc) ? null : reader.GetString(desc));

                if (!reader.IsDBNull(roles))
                {
                    string rolenames = reader.GetString(roles).Trim();
                    if (!String.IsNullOrEmpty(rolenames))
                    {
                        string[] rolelist = rolenames.Split(
                            new char[] { ',', ';' }, 512);
                        _root.Roles = rolelist;
                    }
                }

                // Add "*" to the roles list if no roles are specified
                if (_root.Roles == null)
                    _root.Roles = new string[] { "*" };

                // Record the root node in the dictionary
                if (nodes.ContainsKey(reader.GetInt32(id)))
                    throw new ConfigurationException(_errmsg2);
                nodes.Add(reader.GetInt32(id), _root);

                // Add the node to the site map
                AddNode(_root, null);

                // Build a tree of SiteMapNodes underneath the root
                while (reader.Read())
                {
                    SiteMapNode node = new SiteMapNode(this,
                        reader.GetInt32(id).ToString(),
                        reader.IsDBNull(url) ? null : reader.GetString(url),
                        reader.GetString(title),
                        reader.IsDBNull(desc) ? null : reader.GetString(desc));

                    if (!reader.IsDBNull(roles))
                    {
                        string rolenames = reader.GetString(roles).Trim();
                        if (!String.IsNullOrEmpty(rolenames))
                        {
                            string[] rolelist = rolenames.Split(
                                new char[] { ',', ';' }, 512);
                            node.Roles = rolelist;
                        }
                    }

                    // If the node lacks roles information,
                    // "inherit" that information from its parent
                    SiteMapNode parentnode = nodes[reader.GetInt32(parent)];
                    if (node.Roles == null)
                        node.Roles = parentnode.Roles;

                    // Record the node in the dictionary
                    if (nodes.ContainsKey(reader.GetInt32(id)))
                        throw new ConfigurationException(_errmsg2);
                    nodes.Add(reader.GetInt32(id), node);

                    // Add the node to the site map
                    AddNode(node, parentnode);
                }
            }
        }
        finally
        {
            connection.Close();
        }

        // Return the root SiteMapNode
        return _root;
    }

    protected override SiteMapNode GetRootNodeCore()
    {
        BuildSiteMap();
        return _root;
    }
}

The key method in SqlSiteMapProvider is BuildSiteMap. ASP.NET calls it shortly after the provider is loaded to build the site map, which is essentially a collection of SiteMapNodes linked together to form a tree. Each SiteMapNode represents one node in the site map; Figure 2 describes its key properties.

Figure 2 Site Map Node Properties

Property      Description
Title         Specifies the text that a navigation control displays for the node.
Url           Specifies the URL the user is sent to when the node is clicked.
Description   Specifies the descriptive text that's displayed if the cursor hovers over the node.
Roles         Specifies the role or roles that are permitted to view the node if security trimming is enabled ("*" if anyone can view it). Multiple roles can be specified using commas or semicolons as separators.

SqlSiteMapProvider's implementation of BuildSiteMap queries the site map database and then iterates over the records, transforming each one into a SiteMapNode. When it's done, it hands the site map over to ASP.NET by returning a reference to the root SiteMapNode. And because providers must be thread-safe, BuildSiteMap is decorated with a [MethodImpl(MethodImplOptions.Synchronized)] attribute that serializes concurrent calls.

This raises the question: how do you structure the site map database? The SQL script in Figure 3 creates a SiteMap table that's compatible with SqlSiteMapProvider. Each record added to the table represents one site map node, and each has fields that map to the SiteMapNode properties mentioned previously as well as fields used to denote the relationship between nodes. Each node must have a unique ID, which is stored in the ID field. To parent one node to another, you set the child node's Parent field equal to the ID of the parent. All nodes except the root node must have a parent. In addition, because of the way BuildSiteMap is implemented, a node can only be parented to another node with a lesser ID. (Note the ORDER BY clause in the database query.) For example, a node with an ID of 100 can be the child of a node with an ID of 99, but it can't be the child of a node with an ID of 101.

Figure 3 Site Map Database Schema

CREATE TABLE [dbo].[SiteMap] (
    [ID] [int] NOT NULL,
    [Title] [varchar] (32) NOT NULL,
    [Description] [varchar] (512),
    [Url] [varchar] (512),
    [Roles] [varchar] (512),
    [Parent] [int]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[SiteMap] ADD
    CONSTRAINT [PK_ID] PRIMARY KEY CLUSTERED ( [ID] ) ON [PRIMARY]
GO
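To make the parenting convention concrete, here is a hypothetical set of rows compatible with this schema (the values are purely illustrative). Node 1 is the root; the other nodes reference it through the Parent column, and each child's ID is greater than its parent's:

INSERT INTO [dbo].[SiteMap] (ID, Title, Description, Url, Roles, Parent)
    VALUES (1, 'Home', 'Home page', '~/Default.aspx', '*', NULL)
INSERT INTO [dbo].[SiteMap] (ID, Title, Description, Url, Roles, Parent)
    VALUES (2, 'Products', 'Product listings', '~/Products.aspx', NULL, 1)
INSERT INTO [dbo].[SiteMap] (ID, Title, Description, Url, Roles, Parent)
    VALUES (3, 'Admin', 'Administrative pages', '~/Admin.aspx', 'Admins', 1)

Because node 2's Roles field is NULL, it inherits the root's "*" role list; node 3 would be visible only to members of the Admins role when security trimming is enabled.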

Figure 4 shows one of the databases that I used to test SqlSiteMapProvider. Note that I didn't explicitly initialize the Roles field of every node. Nodes that lack Roles values inherit the Roles values of their parents, and if the root node's Roles value isn't specified, then "*" is assumed. Roles values are ignored if security trimming is disabled, and like XmlSiteMapProvider, SqlSiteMapProvider disables security trimming by default. You can enable it by including a securityTrimmingEnabled="true" attribute in the element that registers SqlSiteMapProvider.

Figure 4 Sample Site Map Database

Figure 5 shows how to register SqlSiteMapProvider in Web.config and activate it by making it the default provider. Note the connectionStringName attribute, which specifies the name of a connection string in the <connectionStrings> section of Web.config. This required attribute identifies the connection string that SqlSiteMapProvider uses to connect to the site map database.

Figure 5 Registering SqlSiteMapProvider

<configuration>
  <connectionStrings>
    <add name="SiteMapConnectionString"
         connectionString="..."
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <system.web>
    <siteMap defaultProvider="SqlSiteMapProvider" enabled="true">
      <providers>
        <add name="SqlSiteMapProvider"
             type="SqlSiteMapProvider"
             securityTrimmingEnabled="true"
             connectionStringName="SiteMapConnectionString" />
      </providers>
    </siteMap>
  </system.web>
</configuration>
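With the provider registered as the default, the site map it builds can be consumed declaratively by binding a SiteMapDataSource to a TreeView or Menu control, as described earlier, or programmatically through the static SiteMap class. Here's a minimal sketch of the latter (the output format is purely illustrative):

void Page_Load(object sender, EventArgs e)
{
    // Walk the site map supplied by the default provider
    // (SqlSiteMapProvider, per the configuration above)
    SiteMapNode root = SiteMap.RootNode;
    Response.Write(Server.HtmlEncode(root.Title) + "<br>");

    foreach (SiteMapNode child in root.ChildNodes)
        Response.Write("&nbsp;&nbsp;" + Server.HtmlEncode(child.Title) + "<br>");
}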

Forwarding and Capturing Trace Output

ASP.NET 1.x includes a handy tracing feature that lets you produce diagnostic output using the TraceContext class, which is normally accessed through Page.Trace. Trace output can be directed to the page or to the trace log front-ended by Trace.axd. It cannot be directed elsewhere without resorting to rather exotic means, such as custom HTTP modules incorporating response filters built from custom Stream-derived classes.
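As a refresher, a page emits trace output by calling the TraceContext's Write and Warn methods; the categories and messages shown here are arbitrary:

void Page_Load(object sender, EventArgs e)
{
    // Write an informational message and a warning
    // (warnings are highlighted in the trace output)
    Trace.Write("Timing", "Page_Load entered");
    Trace.Warn("Timing", "This operation is running long");
}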

ASP.NET 2.0 brings welcome flexibility to trace output. For starters, you can declaratively forward ASP.NET trace output to System.Diagnostics.Trace output and vice versa. The following Web.config file shows how. Apply this configuration to a project and run it in Visual Studio® 2005 in debug mode, and the trace output will magically appear in the Visual Studio output window:

<configuration>
  <system.web>
    <trace enabled="true" writeToDiagnosticsTrace="true" />
  </system.web>
</configuration>

Here's another example of the ASP.NET 2.0 kinder, gentler approach to trace output:

void Page_Load(object sender, EventArgs e)
{
    Trace.TraceFinished +=
        new TraceContextEventHandler(OnTraceFinished);
}

void OnTraceFinished(object sender, TraceContextEventArgs e)
{
    foreach (TraceContextRecord record in e.TraceRecords)
    {
        Response.Write(String.Format("{0}: {1}<br>",
            record.Category, record.Message));
    }
}

The TraceContext class now features a TraceFinished event that provides programmatic access to trace output. The sample just shown echoes trace output to the page using Response.Write. You could just as easily write the trace output to a database, an XML file, or another kind of storage medium, perhaps for viewing it with a custom diagnostic tool.
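For example, replacing OnTraceFinished above with the following handler appends each trace record to a text file instead of echoing it to the page (the log path is illustrative, and System.IO must be imported):

void OnTraceFinished(object sender, TraceContextEventArgs e)
{
    // Append each trace record to a log file
    string path = Server.MapPath("~/trace.log");  // illustrative location

    using (StreamWriter writer = File.AppendText(path))
    {
        foreach (TraceContextRecord record in e.TraceRecords)
        {
            writer.WriteLine("{0}\t{1}\t{2}", DateTime.Now,
                record.Category, record.Message);
        }
    }
}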

Fetching Images from Databases

One of my favorite features originally scheduled for ASP.NET 2.0 is the new DynamicImage control, which, among other things, makes it very easy to display images retrieved from databases. The related ImageField type makes it a snap to display database images in GridView and DetailsView controls, and the new ASIX file type simplifies dynamic image generation.

Unfortunately, these features have been cut from the product for Beta 2 (ImageField won't be removed, but it will lose its ability to bind to database images). So what's a poor developer to do? You can still display database images the old-fashioned way with custom HTTP handlers, but all too often I see developers use ASPX files as image generators instead, which hurts performance. And while writing HTTP handlers isn't terribly difficult, it's something most developers would prefer not to do if given a choice.

That's why I decided to build the SqlImage control (see Figure 6). While it's certainly not the last word in displaying database images in Web pages, it handles routine image retrieval and display chores well enough to serve as a useful replacement for DynamicImage. SqlImage derives from System.Web.UI.WebControls.Image and extends it with the properties listed in Figure 7.

Figure 7 SqlImage Properties

Property          Description
ConnectionString  Specifies the connection string SqlImage uses to query the database for images
SelectCommand     Specifies the SQL command SqlImage uses to query the database for images
MimeType          Specifies the image format (default="image/jpeg")
EnableCaching     Tells SqlImage whether to store the images it retrieves in the ASP.NET application cache (default=false)
CacheDuration     Specifies how long (in seconds) images should be cached if EnableCaching is true

Figure 6 SqlImage.cs

namespace MsdnMag
{
    [ToolboxData("<{0}:SqlImage Runat=\"server\">")]
    public class SqlImage : System.Web.UI.WebControls.Image
    {
        [Browsable(true), Category("Database"), DefaultValue(""),
         Description("Database connection string")]
        public string ConnectionString
        {
            get
            {
                object o = ViewState["ConnectionString"];
                return (o == null) ? String.Empty : (string)o;
            }
            set { ViewState["ConnectionString"] = value; }
        }

        ... // implementations of the other properties listed in Figure 7

        protected override void OnPreRender(EventArgs e)
        {
            if (!String.IsNullOrEmpty(ConnectionString) &&
                !String.IsNullOrEmpty(SelectCommand))
            {
                ImageUrl = String.Format(
                    "~/__ImageGrabber.axd?n={0}&s={1}&m={2}&c={3}&d={4}",
                    EncryptionHelper.Encrypt(ConnectionString),
                    EncryptionHelper.Encrypt(SelectCommand),
                    MimeType, EnableCaching ? "1" : "0", CacheDuration);
            }
        }
    }

    public class ImageGrabber : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Read the query string parameters
            string connectionString = context.Request["n"];
            if (String.IsNullOrEmpty(connectionString))
                return;

            string selectCommand = context.Request["s"];
            if (String.IsNullOrEmpty(selectCommand))
                return;

            string mimeType = context.Request["m"];
            if (String.IsNullOrEmpty(mimeType))
                mimeType = "image/jpeg";

            bool enableCaching = false;
            string caching = context.Request["c"];
            if (!String.IsNullOrEmpty(caching) && caching == "1")
                enableCaching = true;

            int cacheDuration = 0;
            string duration = context.Request["d"];
            if (!String.IsNullOrEmpty(duration))
                cacheDuration = Convert.ToInt32(duration);

            byte[] image = null;
            string cacheKey = String.Format("__ImageGrabber_{0}_{1}",
                connectionString, selectCommand);

            // Read the image from the cache if caching is enabled
            if (enableCaching)
                image = (byte[])context.Cache[cacheKey];

            // If image isn't cached, query the database
            if (image == null)
            {
                SqlConnection connection = new SqlConnection(
                    EncryptionHelper.Decrypt(connectionString));
                try
                {
                    connection.Open();
                    SqlCommand command = new SqlCommand(
                        EncryptionHelper.Decrypt(selectCommand), connection);
                    image = (byte[])command.ExecuteScalar();

                    if (enableCaching)
                        context.Cache.Insert(cacheKey, image, null,
                            DateTime.Now.AddSeconds(cacheDuration),
                            Cache.NoSlidingExpiration);
                }
                finally
                {
                    connection.Close();
                }
            }

            // Stream the image bits back to the browser
            context.Response.ContentType = mimeType;
            context.Response.OutputStream.Write(image, 0, image.Length);
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }

    class EncryptionHelper { ... }
}

Using the control is simplicity itself:

<%@ Register TagPrefix="msdn" Namespace="MsdnMag" Assembly="__code" %>
...
<msdn:SqlImage ID="SqlImage1" Runat="server"
    ConnectionString="<%$ ConnectionStrings:Contoso %>"
    SelectCommand="SELECT Photo FROM Employees WHERE EmployeeID=1313"
    MimeType="image/gif" EnableCaching="true" CacheDuration="60" />

Simply declare a control instance and initialize it with a SelectCommand that extracts an image from a SQL Server database. Be sure to provide a connection string, too. You can declare it explicitly or use a "$ ConnectionStrings" expression like the one in the example. If you'd like to cache the image to avoid redundant database accesses, initialize the EnableCaching and CacheDuration properties as well.

If you're curious about the reference to "__code" in the "@ Register" directive, that's the ASP.NET 2.0 way of referring to an autocompiled assembly. I took this example from a site in which I deployed SqlImage.cs in the Code directory, where it was autocompiled by ASP.NET. If you build the assembly yourself and deploy it in the bin directory, or in the Global Assembly Cache (GAC), substitute your assembly name for "__code".

Before you use the SqlImage control, you'll need to make one change to Web.config. Add an element to <httpHandlers> mapping GET requests for __ImageGrabber.axd to MsdnMag.ImageGrabber, as shown here:

<configuration>
  <system.web>
    <httpHandlers>
      <add verb="GET" path="__ImageGrabber.axd"
           type="MsdnMag.ImageGrabber" />
    </httpHandlers>
  </system.web>
</configuration>

The reason for the change is simple. SqlImage outputs <img> tags with src attributes that refer to __ImageGrabber.axd. This Web.config entry configures ASP.NET to activate the MsdnMag.ImageGrabber class whenever it receives a request for __ImageGrabber.axd. MsdnMag.ImageGrabber is implemented in the same source code file as the SqlImage control, and it's the class that fetches the image from the database and streams it back to the browser. The bulk of the work done in displaying a database image is performed by MsdnMag.ImageGrabber, not by SqlImage.

You might have noticed that SqlImage.cs contains a private helper class named EncryptionHelper. Here's why: Beta 1's DynamicImage control generates a GUID for each image it references. Then it emits <img> tags referencing an internal HTTP handler that uses the GUID, which is passed in a query string, to return the image bits. There are several problems with this approach; for one, the framework has to cache that GUID for an indeterminate period of time since it doesn't know when or how often the image will be requested. (That's why DynamicImage tends to work reliably for a while and then starts returning images containing text indicating that the image is no longer available. Argh!)

SqlImage takes a more robust approach to image identification: it passes connection strings and SQL commands (the ConnectionString and SelectCommand values you provide to it) to __ImageGrabber.axd in query strings. Of course, passing these values in plaintext would pose a potentially serious security problem. To prevent the connection string and SQL command from being exposed to users smart enough to do a View Source, SqlImage encrypts these strings and MsdnMag.ImageGrabber decrypts them. EncryptionHelper methods provide the encryption and decryption logic.

EncryptionHelper uses the same symmetric encryption/decryption key that ASP.NET uses—the one specified with <machineKey>'s decryptionKey attribute. Therefore, if you deploy an application that uses SqlImage on a Web farm, you'll need to set decryptionKey to the same value on every machine. That shouldn't require any additional work because ASP.NET applications deployed on Web farms already need identical decryptionKeys, even if they don't employ SqlImage controls.
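Figure 6 elides the EncryptionHelper class itself, so here is a minimal sketch of one way it could be written. It assumes an explicit (non-autogenerated) decryptionKey in <machineKey>, uses Rijndael (AES), and hex-encodes the output so the ciphertext travels safely in a query string; the downloadable code may take a different approach. System.Security.Cryptography, System.Text, and System.Web.Configuration must be imported:

class EncryptionHelper
{
    // Read the hex-encoded decryptionKey from <machineKey>
    // (assumes an explicit key rather than "AutoGenerate")
    static byte[] GetKey()
    {
        MachineKeySection section = (MachineKeySection)
            WebConfigurationManager.GetSection("system.web/machineKey");
        return HexToBytes(section.DecryptionKey);
    }

    public static string Encrypt(string text)
    {
        using (SymmetricAlgorithm alg = Rijndael.Create())
        {
            alg.Key = GetKey();
            alg.GenerateIV();

            byte[] plain = Encoding.UTF8.GetBytes(text);
            byte[] cipher;
            using (ICryptoTransform encryptor = alg.CreateEncryptor())
                cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);

            // Prepend the IV so Decrypt can recover it, then hex-encode
            // the result so it's safe to place in a query string
            byte[] output = new byte[alg.IV.Length + cipher.Length];
            Buffer.BlockCopy(alg.IV, 0, output, 0, alg.IV.Length);
            Buffer.BlockCopy(cipher, 0, output, alg.IV.Length, cipher.Length);
            return BytesToHex(output);
        }
    }

    public static string Decrypt(string text)
    {
        using (SymmetricAlgorithm alg = Rijndael.Create())
        {
            alg.Key = GetKey();
            byte[] input = HexToBytes(text);

            // Split the input back into IV and ciphertext
            byte[] iv = new byte[alg.BlockSize / 8];
            byte[] cipher = new byte[input.Length - iv.Length];
            Buffer.BlockCopy(input, 0, iv, 0, iv.Length);
            Buffer.BlockCopy(input, iv.Length, cipher, 0, cipher.Length);
            alg.IV = iv;

            using (ICryptoTransform decryptor = alg.CreateDecryptor())
                return Encoding.UTF8.GetString(
                    decryptor.TransformFinalBlock(cipher, 0, cipher.Length));
        }
    }

    static byte[] HexToBytes(string hex)
    {
        byte[] bytes = new byte[hex.Length / 2];
        for (int i = 0; i < bytes.Length; i++)
            bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
        return bytes;
    }

    static string BytesToHex(byte[] bytes)
    {
        StringBuilder builder = new StringBuilder(bytes.Length * 2);
        foreach (byte b in bytes)
            builder.Append(b.ToString("x2"));
        return builder.ToString();
    }
}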

Building Asynchronous Pages

The fact that ADO.NET 2.0 contains cool new support for asynchronous database operations has been well documented. Less well known is that ASP.NET 2.0 also features built-in support for asynchronous pages, a capability that rarely appears in bulleted lists of new and notable features.

By default, page processing in ASP.NET is synchronous. A request comes in, it's assigned to a thread, and that thread does nothing else until the request completes. The problem is that if the thread becomes I/O-bound (that is, if it calls out to a Web service, queries a remote database, or does something else that forces it to wait for a response), it's unavailable to ASP.NET for the duration. Because ASP.NET has a limited number of threads at its disposal to process requests, other requests pile up in the application's request queue. If the queue fills to capacity (by default, it's limited to 100 requests), additional requests are rejected with 503 "Server Unavailable" errors. Even in less extreme circumstances, too many I/O-bound threads can significantly reduce throughput.

ASP.NET has supported asynchronous HTTP handlers from the beginning, and even in version 1.x, one could build asynchronous pages (pages that free request threads during I/O-intensive operations) with a bit of ingenuity. But ASP.NET 2.0 makes asynchronous pages much simpler. You begin by attributing the page as asynchronous with the following @ Page directive:

<%@ Page Async="true" ... %>

Then, sometime before the page's OnPreRenderComplete event, you register a pair of methods: a begin method to start an asynchronous operation and an end method to end the operation. You wrap the former method in a BeginEventHandler delegate, and the latter in an EndEventHandler delegate. ASP.NET calls the begin method before the page begins its render cycle. You launch an async operation (for example, an async ADO.NET call or an async call to a Web service) and return an IAsyncResult interface that ASP.NET uses to detect completion of the asynchronous operation. When the operation completes, ASP.NET calls your end method, where you do whatever's necessary to finish up (for example, bind the results of an async database query to a control).

The code in Figure 8 uses this technique to asynchronously fetch the contents of another Web page using System.Net.WebRequest.BeginGetResponse and System.Net.WebRequest.EndGetResponse. When the call completes and ASP.NET calls EndAsyncOperation, the page parses the content and displays the targets of all the HREFs it finds. What differentiates this page from a synchronous page is that the thread assigned to process the request is returned to the ASP.NET thread pool the moment BeginAsyncOperation returns. This is powerful stuff that can dramatically increase the throughput of I/O-intensive pages.

Figure 8 Asynchronous Page

public partial class AsyncPage_aspx
{
    WebRequest _request;

    void Page_Load(object sender, EventArgs e)
    {
        AddOnPreRenderCompleteAsync(
            new BeginEventHandler(BeginAsyncOperation),
            new EndEventHandler(EndAsyncOperation)
        );
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
        AsyncCallback cb, object state)
    {
        _request = WebRequest.Create("https://msdn.microsoft.com");
        return _request.BeginGetResponse(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        WebResponse response = _request.EndGetResponse(ar);
        string text = new StreamReader(
            response.GetResponseStream()).ReadToEnd();
        response.Close();

        Regex regex = new Regex("href\\s*=\\s*\"([^\"]*)\"",
            RegexOptions.IgnoreCase);
        MatchCollection matches = regex.Matches(text);

        StringBuilder builder = new StringBuilder(1024);
        foreach (Match match in matches)
        {
            builder.Append(match.Groups[1]);
            builder.Append("<br/>");
        }
        Label1.Text = builder.ToString();
    }
}
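Figure 8 uses WebRequest, but the same begin/end pattern works with the asynchronous ADO.NET calls mentioned earlier. Here's a hedged sketch of an alternative pair of methods that issues an asynchronous database query; the Contoso connection string name, the query, and the GridView1 control are illustrative, and the connection string must include "Asynchronous Processing=true" for BeginExecuteReader to work:

SqlConnection _connection;
SqlCommand _command;

IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
    AsyncCallback cb, object state)
{
    // Connection string name is hypothetical; it must specify
    // "Asynchronous Processing=true" for async commands to work
    _connection = new SqlConnection(ConfigurationSettings.ConnectionStrings
        ["Contoso"].ConnectionString);
    _connection.Open();
    _command = new SqlCommand(
        "SELECT Title, Url FROM SiteMap ORDER BY ID", _connection);
    return _command.BeginExecuteReader(cb, state);
}

void EndAsyncOperation(IAsyncResult ar)
{
    try
    {
        // Bind the query results to a (hypothetical) GridView
        SqlDataReader reader = _command.EndExecuteReader(ar);
        GridView1.DataSource = reader;
        GridView1.DataBind();
    }
    finally
    {
        _connection.Close();
    }
}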

That's just one way to use the new asynchronous page infrastructure in ASP.NET 2.0. The whole story could easily fill a column or feature-length article. I'll have a lot more to say about asynchronous pages in a future installment of this column.

Send your questions and comments for Jeff to wicked@microsoft.com.

Jeff Prosise is a contributing editor to MSDN Magazine and the author of several books, including Programming Microsoft .NET (Microsoft Press, 2002). He's also a cofounder of Wintellect, a software consulting and education firm that specializes in Microsoft .NET.