XML Files: Writing XML Providers for Microsoft .NET

MSDN Magazine

Writing XML Providers for Microsoft .NET
Aaron Skonnard
Download the code for this article: XML0109.exe (123KB)
Browse the code for this article at Code Center: XML_Providers


Roughly five years ago Microsoft introduced the Universal Data Access (UDA) architecture, which promised to simplify working with enterprise data. The UDA strategy promoted exposing data through a single suite of COM interfaces known collectively as OLE DB, the idea being that a single suite of interfaces lowers the bar for consumers and makes it easier for them to access a variety of supported data stores whether they are relational or nonrelational. In an OLE DB-centric world, consumers no longer needed to worry about learning the details of native (data source-specific) APIs.
      The assumption behind this strategy, of course, was that someone would provide the layer that maps the "universal" OLE DB API to the data source's native APIs. This piece of code that sits between the consumer and the native API is known as an OLE DB provider, a fitting name since it's providing access to the native data store through OLE DB interfaces.
      Today there are many OLE DB providers in place for all kinds of data including SQL Server™, Oracle, Jet (Access), ISAM, AS/400, DB2, ODBC, OLAP, Microsoft index/search services, as well as many more. OLE DB consumers can access any of these supported data stores through the same suite of interfaces.
      When OLE DB was gaining momentum, some developers saw the entire world through OLE DB-colored glasses. Whenever these developers were exposing or consuming data, they did it through OLE DB. Although they seemed to be fanatics, these developers had a vision of a future where standardization could eventually bring immeasurable levels of supporting infrastructure, such as generic code, tools, and services that greatly simplify common tasks. In the end, OLE DB didn't fully realize its original goal, but those fanatics were on the right track.

Problems with OLE DB

      In retrospect, it seems obvious that OLE DB would never become a true universal standard for exposing and accessing information because it faced competition from other vendors. Since OLE DB is a COM technology and COM itself never became universally supported across platforms, it never really stood a chance. But to top it off, OLE DB was extremely difficult to work with directly, a fact that greatly limited its acceptance. Developers today still long for the benefits of OLE DB, but they just aren't willing to pay the high price.
      As OLE DB was struggling to gain momentum, another technology crept up from behind and, without anyone really noticing, displaced it as the true UDA for exposing information. This technology, as you might have guessed, is XML and its growing family of specifications and tool support.


      If you reread the previous paragraphs and replace every instance of OLE DB with XML, you'll see that XML accomplishes the same goals that OLE DB was designed to achieve without suffering from any of the same problems. In fact, XML has already become a ubiquitous industry-wide standard that's supported on all platforms, and it's much simpler than OLE DB.
      One way to achieve universal data access with XML is to simply map all data sources to XML 1.0 files, as illustrated in Figure 1. This scenario allows consumers to use any XML parser or API on the platform of their choice. In this case, the XML parser acts like the provider in that it maps the XML 1.0 back to a standard XML API that is easily consumable by a heterogeneous client base. Once the data is represented as XML, clients can simplify processing through layered XML technologies such as XPath and XSLT.

Figure 1 Mapping Data Sources to XML 1.0 Files

      File size and performance are the obvious concerns with this technique, but increased interoperability is sometimes worth the trade-off. Nevertheless, in many situations it's just not feasible to map large data sources to XML 1.0 files.

XML Infoset Providers

      The ultimate XML provider would not expose data as XML 1.0 but rather map the standard XML interfaces to the native interfaces directly. As with OLE DB, this approach allows consumers to use the standard XML APIs to access a wide variety of data sources without sacrificing performance.

Figure 2 Using Other Technologies with XML Providers

      This approach, like the previous one, facilitates layering other XML technologies, such as XPath or XSLT, on top of custom XML providers without sacrificing performance (see Figure 2). However, this approach has a much better chance of realizing the UDA-topia described earlier as long as everyone agrees on XML's abstract data model. If this model is not standardized, there is no hope for long-term interoperability at the provider level. The W3C has been working on codifying XML's abstract data model in a specification called the XML Information Set (Infoset). The Infoset describes the logical structure of an XML document in terms of nodes (also known as Information Items) that have properties.

Figure 3 Infoset of an XML 1.0 Document

      Figure 3 illustrates the Infoset of an XML 1.0 document. Each node in the tree has a well-defined set of properties that must be made available by a provider. For example, an element node has a namespace name, a local name, a prefix, an unordered set of attributes, and an ordered list of children (see the Infoset specification at http://www.w3.org/TR/xml-infoset for more details). This abstract description of an XML document standardizes the information that must be made available to consumers by XML providers.
      An XML API simply maps the Infoset onto programmatic types. There are many Infoset-compatible APIs available today, including the Simple API for XML (SAX), Document Object Model (DOM), and the newer Microsoft .NET interfaces—and there are surely more to come. Writing a custom XML Infoset provider is simply a matter of defining how information from a data source should be exposed through one of the Infoset-compatible APIs. This mapping effectively exposes the information as an Infoset representation and now consumers have all the benefits of working with a well-known XML API.
      For example, today DOM implementations sit directly on top of databases. SAX producers explicitly call into ContentHandler implementations to generate Infosets (no parser involved). And there are already some .NET-compliant XML providers popping up for a wide variety of data sources, including the Windows® registry and .NET assemblies.
      An obvious concern with this approach is that it ties potential clients to a specific XML API just like OLE DB tied clients to COM. For example, only SAX-based applications can take advantage of SAX-based providers, just as only .NET applications can use .NET-based providers. To truly make a given data source universally accessible to XML clients, you would have to implement providers for all mainstream XML APIs. Most developers, however, would find this an acceptable trade-off when better performance is required and an XML 1.0 solution is out of the question.
      It's also important to note that XML providers (both XML 1.0 and Infoset-based), unlike OLE DB providers, are typically read-only in nature. They simply expose information through a standard abstract data model, but say nothing about where it resides or how to update it. This is a much harder problem to solve than simply exposing information since every data store is different in this respect. This is one reason why OLE DB (and ODBC) will continue to thrive in database-centric situations that need this type of fine-tuned interaction.

.NET Infoset Representations

      The .NET Framework provides three base classes, XPathNavigator, XmlReader, and XmlWriter, each of which models the Infoset differently. XPathNavigator models the Infoset as a traversable tree of nodes; XmlReader models the Infoset as a forward-only stream of nodes (pull); and XmlWriter models the Infoset as a sequence of method calls (push). (Please note that all references to .NET in this article are based on pre-Beta 2 code and may change by the release date.)
      As I just mentioned, XPathNavigator models an Infoset as a traversable tree of nodes (see Figure 3). The tree representation is exactly the same as the one defined by the W3C XPath Recommendation (hence the name XPathNavigator), which is nearly a one-to-one mapping to the Infoset. This representation is the most intuitive for consumers because it just "feels" more like XML than the others.
      The .NET Framework provides a built-in implementation of XPathNavigator for traversing in-memory DOM trees. This class is currently called DocumentXPathNavigator, but since it's private you can't instantiate it directly. Instead, the framework provides a factory interface called IXPathNavigable that data sources should implement if they support XPathNavigator functionality. For example, in the case of the DOM, you can instantiate an XPathNavigator implementation for any node in the tree as shown here (assuming node is a reference to an XmlNode):

  XPathNavigator nav = ((IXPathNavigable)node).CreateNavigator();

      This design allows any data source to be exposed through XPathNavigator, as shown in Figure 4. Once a custom provider has been written for your data source, you can use it just like you would use the built-in implementations.

Figure 4 XPathNavigator

      XmlReader models reading an Infoset as a forward-only, linear stream of nodes, as shown in Figure 5. Unlike SAX, XmlReader allows the client to pull the nodes one at a time much like the firehose cursor model in data access technology. This representation offers consumers a streamlined programming model. However, attributes are not part of the stream and special end marker nodes must be dealt with, so in some cases it is not as easy as using a firehose cursor.

Figure 5 XmlReader

      .NET provides several built-in implementations of XmlReader, including XmlTextReader and XmlNodeReader. XmlTextReader reads XML 1.0 text, while XmlNodeReader reads a DOM tree. As with XPathNavigator, the XmlReader design allows for custom reader implementations that sit on top of any data store (such as a file system reader).
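To make the pull model concrete, here's a minimal sketch that walks an inline document with the built-in XmlTextReader (the document and element names are made up for the example). Note that attributes must be visited explicitly since they aren't part of the node stream, and that the EndElement marker nodes show up as first-class items:

```csharp
using System;
using System.IO;
using System.Xml;

class PullDemo
{
    // pull nodes one at a time and return how many were in the stream;
    // the document and element names are illustrative
    public static int CountNodes(string xml)
    {
        XmlTextReader r = new XmlTextReader(new StringReader(xml));
        int count = 0;
        while (r.Read())
        {
            count++;
            Console.WriteLine("{0}: {1}", r.NodeType, r.Name);
            // attributes aren't part of the stream; visit them explicitly
            while (r.MoveToNextAttribute())
                Console.WriteLine("  attribute {0}={1}", r.Name, r.Value);
        }
        return count;
    }

    static void Main()
    {
        // the two EndElement marker nodes are part of the count
        CountNodes("<order id='1'><item>pen</item></order>");
    }
}
```

The same loop works unchanged against a custom XmlReader implementation, which is the whole point of the model.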
      While XPathNavigator and XmlReader model reading Infosets, XmlWriter models writing Infosets through a sequence of method calls. XmlWriter is very much like SAX in that it's used to push an Infoset into a given implementation. A custom implementation of XmlWriter would typically map the sequence of method calls back to a custom data format/store such as XML 1.0, electronic data interchange (EDI), or even a database. The built-in XmlTextWriter implementation simply maps the sequence of method calls back to XML 1.0 format.
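Here's a minimal sketch of the push model using the built-in XmlTextWriter; the sequence of method calls is the Infoset, and the writer maps it to XML 1.0 text (the element and attribute names are made up for the example):

```csharp
using System;
using System.IO;
using System.Xml;

class PushDemo
{
    // push a small Infoset through XmlWriter calls and
    // capture the resulting XML 1.0 text
    public static string Build()
    {
        StringWriter sw = new StringWriter();
        XmlTextWriter w = new XmlTextWriter(sw);
        w.WriteStartElement("person");
        w.WriteAttributeString("id", "1");
        w.WriteElementString("name", "Aaron");
        w.WriteEndElement();          // </person>
        w.Flush();
        return sw.ToString();
    }

    static void Main()
    {
        Console.WriteLine(Build());
        // prints <person id="1"><name>Aaron</name></person>
    }
}
```

A custom XmlWriter would receive this same sequence of calls and map it to EDI, a database, or any other target format instead of XML 1.0 text.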
      To summarize, XPathNavigator and XmlReader make it possible to expose information as an XML Infoset, while XmlWriter does the reverse. For more details on how XmlReader and XmlWriter work, see The XML Files in the January 2001 issue. The rest of this column focuses on exposing data sources through custom XPathNavigator and XmlReader implementations.

XPathNavigator or XmlReader?

      The first step in writing a custom XML provider is to decide which of these models to use. Deciding between XPathNavigator and XmlReader depends on the structure of the underlying data source, the functionality of the native APIs, and the functionality desired by consumers.
      If the underlying data store supports streaming and the native API only moves forward through the stream, XmlReader is the most natural fit; trying to represent a streaming data store as an XPathNavigator would require the provider to perform sophisticated caching in order to facilitate moving in different directions through the underlying stream.
      If, however, the underlying data store is hierarchical in nature and the native API supports moving in different directions (like the Windows registry does), XPathNavigator is a more natural fit because the data is already in a tree structure and the provider doesn't have to deal with the artificial end marker nodes (see Figure 5).
      Furthermore, if consumers want your provider to support XPath/XSLT, implementing XPathNavigator is the only option. The .NET XPath/XSLT implementations are defined strictly in terms of XPathNavigator references. This makes sense because implementing XPath/XSLT on top of XmlReader would be impossible without retaining the document in memory, which destroys the main benefit of the streaming model. Once you lose that, you might as well implement the more natural Infoset representation through XPathNavigator.
      It's also possible to implement an XmlReader on top of any XPathNavigator implementation, but the reverse is not true without placing restrictions on the expression language. A generic XPathNavigator implementation that sits on top of any XmlReader is possible, but it would only be able to support XPath expressions that don't attempt to move backwards through the underlying stream (hence, no parent or ancestor axes). In general, you'll be better off implementing XPathNavigator unless the underlying data store only supports streaming.

Extending XPathNavigator

      Implementing a custom XPathNavigator entails deriving a new class from XPathNavigator. Since XPathNavigator is abstract, all of its abstract members must be overridden to make the new class concrete (so you can instantiate it). Figure 6 shows how to derive a new class from XPathNavigator, along with the minimal set of overrides that are required. Figure 6 just shows the stubs, which will still need to be filled in.
      The implementation of each of these members defines the mapping to the underlying data store. For example, if you were implementing a file system navigator, the implementation of MoveToFirstChild should move to the first child of the current directory (if any). Exactly how you implement these members depends entirely on the underlying data store and the corresponding native API.

XPathNavigator Semantics

      XPathNavigator supports the notion of a cursor, which is positioned on the current node. When any of the XPathNavigator properties are accessed (see Figure 6), they return the information corresponding to the current node. For example, the LocalName, NamespaceURI, Name, Prefix, and Value properties return the appropriate information for the current node.
      The HasAttributes and HasChildren properties identify whether the current node has any attributes or child nodes, respectively. If there are attributes, they can be accessed by name through the GetAttribute method. The MoveToAttribute method makes it possible to move the cursor to a specific attribute node (identified by name), while MoveToFirstAttribute and MoveToNextAttribute make it possible to iterate through the entire collection of attributes. Once the cursor is positioned on an attribute node, the XPathNavigator properties can then be used to access the current attribute's information.
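In practice, the attribute-iteration pattern looks like this; a short sketch using the built-in DOM navigator (the inline document is illustrative), which works unchanged against any custom XPathNavigator:

```csharp
using System;
using System.Xml;
using System.Xml.XPath;

class AttrDemo
{
    // walk every attribute of the document element using the
    // MoveTo* attribute methods, returning how many were seen
    public static int CountAttributes(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);
        XPathNavigator nav = doc.CreateNavigator();
        nav.MoveToFirstChild();                 // the document element
        int count = 0;
        if (nav.MoveToFirstAttribute())
        {
            do
            {
                // the cursor is now on an attribute node, so the
                // standard properties describe that attribute
                Console.WriteLine("{0} = {1}", nav.LocalName, nav.Value);
                count++;
            } while (nav.MoveToNextAttribute());
            nav.MoveToParent();                 // back to the owner element
        }
        return count;
    }

    static void Main()
    {
        CountAttributes("<item id='42' name='widget'/>");
    }
}
```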
      When the cursor is positioned on an attribute, the only way to get back to the element is through a call to MoveToParent. As a side note, if attributes aren't considered children, how can they have parents? The answer is that the XPath specification says so. XmlReader uses the MoveToElement method for this purpose (much like the DOM's ownerElement property), but in the end they're the same. This slightly different interpretation of XML's abstract data model is exactly the type of issue that motivated the need for the Infoset, which codifies all of these issues.
      If a given element node has namespace nodes, they can be accessed just like attributes through the GetNamespace, MoveToNamespace, MoveToFirstNamespace, and MoveToNextNamespace methods. According to the XPath specification, each element node has a set of namespace nodes, one for each of the in-scope namespace declarations. As with attributes, you must call MoveToParent to move from a namespace node back to the owner element.
      The set of MoveTo methods supports traversing the tree in any direction. MoveToFirstChild moves the cursor to the current node's first child node. MoveToNext moves the cursor to the current node's next sibling node. MoveToPrevious does the reverse by moving the cursor to the current node's previous sibling node, and MoveToFirst moves the cursor to the first sibling node in document order. MoveToParent moves the cursor up to the current node's parent node, while MoveToRoot moves the cursor back to the topmost node in the tree, known as the root or document node.
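These methods are all a consumer needs to walk an entire tree. Here's a minimal recursive sketch (shown against the built-in DOM navigator, but it runs against any XPathNavigator implementation):

```csharp
using System;
using System.Xml;
using System.Xml.XPath;

class TreeWalk
{
    // depth-first traversal using only MoveToFirstChild, MoveToNext,
    // and MoveToParent; the cursor is restored before returning
    static int Traverse(XPathNavigator nav, int depth)
    {
        int count = 1;
        Console.WriteLine("{0}{1}: {2}",
            new string(' ', depth * 2), nav.NodeType, nav.Name);
        if (nav.MoveToFirstChild())
        {
            do
            {
                count += Traverse(nav, depth + 1);
            } while (nav.MoveToNext());
            nav.MoveToParent();   // put the cursor back on this node
        }
        return count;
    }

    // count every node in the tree, starting from the root node
    public static int CountTree(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);
        return Traverse(doc.CreateNavigator(), 0);
    }

    static void Main()
    {
        CountTree("<a><b/><c>hi</c></a>");
    }
}
```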
      There are also a few other handy "move" methods. MoveToId moves the cursor to the element node that has an attribute of type ID with the specified value (this requires a DTD or schema). MoveTo moves the cursor to the same position as that of the supplied XPathNavigator.
      MoveTo is especially useful when used in combination with the Clone method, which returns an exact snapshot of the current XPathNavigator, as shown here:

  public void FindFooChild(XPathNavigator nav)
  {
      XPathNavigator clone = nav.Clone();
      // use clone and nav remains unaffected
      bool found = clone.MoveToFirstChild();
      while (found)
      {
          if (clone.LocalName.Equals("foo"))
          {
              // success, move nav to new position
              nav.MoveTo(clone);
              return;
          }
          found = clone.MoveToNext();
      }
  }


This allows consumers to work on temporary copies of the navigator before moving the cursor. The IsSamePosition method checks to see if the current navigator is at the same position as the supplied navigator.

Custom XPathNavigator Samples

      Appropriately implementing these members typically requires a state machine in the provider. The state machine simply keeps track of the current node in the underlying data store and how to move to the parent, sibling, and children nodes as well as attribute and namespace nodes if they exist. Once you have the state machine in place, implementing the XPathNavigator members is a straightforward operation.
      I've provided several sample XPathNavigator implementations that illustrate how this is done. The complete source code for each example is available for download from the link at the top of this article. The simplest example is a ZIP file navigator (ZipNavigator) that exposes a ZIP file as an XML document. The internal structure of a ZIP file is just a linear list of compressed files, each of which comes with detailed information. I decided to model this structure as an XML document with a top-level contents element. Inside the contents element, there is a child element for each compressed item in the ZIP file. Each of these elements is annotated with several attributes to describe the item in more detail (path information and compressed size, for example). Figure 7 shows a ZIP file opened in WinZip, and Figure 8 shows the corresponding XML format exposed by ZipNavigator.

Figure 7 A ZIP File Opened in WinZip

      Figure 9 shows part of the ZipState class, which keeps track of the current item in the actual ZIP file and how to move around to the parent and child items. This class works in tandem with the native ZIP file API that I decided to use (AaronsZipUtils.ZipReader) when performing these operations. This example is fairly simple since the ZIP file items cannot have children.
      Figure 10 shows part of the ZipNavigator implementation and its interactions with the ZipState class. Download the complete source code for more details on how this works.
      A more sophisticated example is the assembly navigator (AssemblyNavigator), which exposes a .NET assembly as an XML document. In this case, the XML structure is driven by the type information in the assembly. The top-level element is the name of the assembly, and it will contain a child element for each module. Module elements have a child element for each type construct. Type elements have a child element for each member, and so on. Each element is annotated with several descriptive attributes. All of them come with an isa attribute, which identifies what type of construct it represents. For example, Figure 11 shows a simple assembly opened in ILDASM, the intermediate language disassembler, and Figure 12 shows the corresponding XML format exposed by AssemblyNavigator.

Figure 11 Assembly Displayed in ILDASM

      Another example that I've provided is a file system navigator (FileSystemNavigator) that exposes the entire file system as one large XML document. The mapping from the file system to XML is straightforward since the data is already hierarchical in nature. The root element is named mycomputer and it has child elements for each logical drive. A logical drive element has a child element for each child directory or file. A directory element has a child element for each child directory or file, and so on. File elements do not have children. Directory and file elements are also annotated with attributes that describe the directory and file details. These are shown in Figure 13.
      In addition to these samples, I've also provided a Windows registry navigator that was originally written by Chris Lovett, the Product Unit Manager of B2B Web Services at Microsoft. Due to space limitations, it's not practical to display the complete code for any of these samples. Again, for more details on any of these samples, download the source code and have a look. This suite of examples should be more than enough to get you started.
      Using these custom navigators is as simple as using the built-in navigator for DOM documents. The code in Figure 14 illustrates how to traverse an XPathNavigator and serialize it back out to an XmlWriter implementation.
      Figure 15 illustrates that the SerializeNode method works the same regardless of the type of XPathNavigator implementation. This again is the main benefit of exposing data as XML. Consumers can work with it as if it were XML, without having to become familiar with the native structure or API that underlies it.

XPath Support

      The good news is, once you've implemented your custom XPathNavigator, you get XPath support for free. The XPathNavigator base class provides an implementation of the Select method, which compiles the supplied XPath expression and returns an XPathNodeIterator reference. Each time the client calls XPathNodeIterator::MoveNext, the implementation calls into the most derived class (the class you derived from XPathNavigator) to move throughout the tree checking for matches (see Figure 16). As long as you've implemented the members shown in Figure 6 appropriately, the built-in XPath engine should work with it out of the box (or Web release). Note that the namespace axis is not implemented in Beta 2, so those methods will currently not be called by the default implementation of Select.

Figure 16 MoveNext

      If for any reason you're not happy with the standard XPath engine, you can always override the Select method and develop your own implementation of XPathNodeIterator to provide a custom XPath evaluation engine, but that's not a trivial matter.
      The following code illustrates how to evaluate an XPath expression against the custom FileSystemNavigator. This example identifies all of the descendant elements of the c:\temp directory that have a name containing "xml".

  public void EvaluateXPathAndDisplayResults()
  {
      XmlTextWriter tw = new XmlTextWriter(Console.Out);
      FileSystemNavigator fsn = new FileSystemNavigator();
      // descendants of c:\temp whose name contains "xml"; the exact
      // element names depend on FileSystemNavigator's mapping
      XPathNodeIterator sel = fsn.Select(
          "//temp//*[contains(name(), 'xml')]");
      while (sel.MoveNext())
          SerializeNode(tw, sel.Current);
  }


      The provided FileSystemNavigator class also allows you to associate custom navigators with certain file extensions, as shown in Figure 17. The RegisterFileHandler method takes a file extension, the name of the XPathNavigator class, and the name of the assembly it lives in. Then, when the FileSystemNavigator is positioned on a file whose extension matches one that has been registered, it's able to dig into that particular file using an instance of the registered XPathNavigator implementation. This also makes it possible to write XPath expressions that walk through the list of the file types registered on the system.

Figure 18 Using fsnav.exe

      I've provided a sample client application named fsnav.exe that registers the .xml extension with the standard DOM navigator that comes with .NET, the .dll extension with AssemblyNavigator, and the .zip extension with ZipNavigator. Figure 18 illustrates how to use fsnav.exe with XPath expressions that dig into the different registered file types.

XSLT Support

      In addition to XPath support, writing a custom XPathNavigator also gives you free XSLT support. The .NET Framework XslTransform class takes an XPathNavigator as the source document to be transformed. Hence, you can write XSLT documents against your provider's exposed XML format. Here I illustrate how to use XslTransform to execute a transformation against an instance of the AssemblyNavigator class:

    public void TransformAssembly(string assemblyFile)
    {
        // pass in the assembly file name to transform
        AssemblyNavigator nav =
            new AssemblyNavigator(assemblyFile);
        // load the XSLT document (the stylesheet name is illustrative)
        XslTransform tx = new XslTransform();
        tx.Load("assembly.xsl");
        // pipe output to Stream
        Stream str = new FileStream(@"temp-assembly.htm",
            FileMode.Create);
        tx.Transform(nav, null, str);
        str.Close();
    }


Writing Custom Readers

      Implementing a custom XmlReader is very much like implementing a custom XPathNavigator except it only has to move in one direction—forward—directly through the tree in document order (Figure 3 labels the nodes in document order). Figure 19 shows how to derive a new class from XmlReader along with the minimal set of required overrides.
      Notice that many of the members are exactly the same as those defined by XPathNavigator. The additional members simplify working with the streaming model as well as attribute nodes. There are several additional overloads for accessing attributes by qualified name, unqualified name, and index. This places a greater burden on the provider implementor, but it makes MyReader easier for consumers to use. There are also additional properties for determining the current state of the stream (ReadState, EOF, and so on), something that you don't have to worry about with XPathNavigator implementations.
      As with XPathNavigator, XmlReader supports the notion of a cursor, but it's restricted to forward movement. When the cursor is positioned on a node in the stream, the various properties can be used to inspect the current node.
      ReadInnerXml returns the nodes within the current node as a string of XML 1.0. If the current node is an Element node, ReadInnerXml moves the cursor past the corresponding EndElement node. If the current node is an attribute, the cursor remains on the attribute after the call. ReadOuterXml is just like ReadInnerXml except it also consumes the current node and moves past the corresponding end node (if any). And finally, the ReadString method returns the text contents of the current element and advances the cursor to the next non-text node.
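The subtle part is where each call leaves the cursor. This small sketch against the built-in XmlTextReader shows the behavior (the inline document is illustrative):

```csharp
using System;
using System.IO;
using System.Xml;

class ReadDemo
{
    // exercise ReadInnerXml and ReadOuterXml on sibling elements,
    // returning both results for inspection
    public static string[] Run()
    {
        XmlTextReader r = new XmlTextReader(
            new StringReader("<a><b>hello</b><c>world</c></a>"));
        r.Read();                            // cursor on <a>
        r.Read();                            // cursor on <b>
        string inner = r.ReadInnerXml();     // "hello"; cursor is now on <c>
        string outer = r.ReadOuterXml();     // "<c>world</c>"; cursor moves past it
        return new string[] { inner, outer };
    }

    static void Main()
    {
        string[] results = Run();
        Console.WriteLine(results[0]);
        Console.WriteLine(results[1]);
    }
}
```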
      To show you how this works, I've provided a sample XmlReader implementation named NavigatorReader. This class provides a generic XmlReader implementation that sits on top of any XPathNavigator implementation. It's a fairly simple implementation since it just traverses the underlying navigator in document order, but it does have to insert the artificial EndElement nodes in the appropriate locations. Here's how it's used:

  public void TraverseReader()
  {
      // traverse a .NET assembly as XML
      AssemblyNavigator an =
          new AssemblyNavigator("person.dll");
      NavigatorReader reader = new NavigatorReader(an);
      while (reader.Read())
          Console.WriteLine("{0}: {1}",
              reader.NodeType, reader.Name);
  }


      NavigatorReader turns out to be useful when integration with the DOM is required. The .NET implementation of the DOM only supports loading through an XmlReader reference, not XPathNavigator. But if you decide to implement XPathNavigator, you can always use NavigatorReader to get this integration, as shown here:

  public void LoadDOM()
  {
      // traverse a .NET assembly as XML
      AssemblyNavigator an =
          new AssemblyNavigator("person.dll");
      NavigatorReader reader = new NavigatorReader(an);
      XmlDocument doc = new XmlDocument();
      doc.Load(reader);
      // do something with doc here
  }


      As I mentioned earlier, it's also possible to implement XPathNavigator on top of XmlReader as long as the XPath expressions don't use the parent or ancestor axes. This is an acceptable simplifying assumption since it now makes it possible to process streams of XML with the help of XPath, as shown here:

  ReaderNavigator reader = new ReaderNavigator("foo.xml");
  // any forward-only expression works here (no parent or
  // ancestor axes); the element name is illustrative
  XPathNodeIterator iterator = reader.Select("//customer");
  while (iterator.MoveNext()) {
      // process iterator.Current here
  }


Mark Fussell, Program Manager of the .NET XML Framework, has provided such a sample, and I've included it in the sample code (it's called XPathReader).

The World as an XML Document

      Like the fanatic OLE DB developers, many XML developers are starting to see the world through Infoset-colored glasses. Not only does it make it trivial for consumers to access proprietary data sources, but it also allows for integration with sophisticated XML-related infrastructure, which continues to evolve at an unprecedented rate. Performance concerns can be resolved by writing custom XML Infoset providers that offer a more direct mapping to the native interfaces.
      The .NET XML architecture makes it easy to write read-only Infoset providers that can be plugged into the rest of the framework. It also offers two Infoset representations through the XPathNavigator and XmlReader base classes, each of which is better suited for different types of underlying data sources. The sample providers discussed in this piece should help you get started on the path to writing your own custom XML provider in .NET.

Send questions and comments for Aaron to xmlfiles@microsoft.com.

Aaron Skonnard is an instructor/researcher at DevelopMentor, where he develops the XML and Web service-related curriculum. Aaron coauthored Essential XML Quick Reference (due out September 2001) and Essential XML (Addison Wesley). Get in touch with Aaron at http://staff.develop.com/aarons.

From the September 2001 issue of MSDN Magazine.