Death, Taxes, and Relational Databases, Part 2


Andrew Conrad
Software Design Engineer, Microsoft Corporation

January 12, 2004

Download the objectstorepart2.msi sample file.

Summary: The second in a series of articles by Andrew Conrad showcases the XML functionality of SQL Server 2000 by turning SQL Server into an object repository using XML as the bridge between relational and object-oriented data. (21 printed pages)

Requirements This download requires Microsoft .NET Framework 1.0, Microsoft SQL Server 2000, and SQLXML 3.0 Service Pack 1.

Editor's note   This month's installment of Extreme XML is written by guest columnist Andrew Conrad. Andy is a Software Design Engineer with the WebData XML team. He has been working on SQL Server 2000 XML support and the SQLXML Web releases for the last four years. He can be reached at


It is clear that XML has become the modern industry standard for data exchange across service boundaries. In addition, it is increasingly common practice to use XML as the underlying data model for data-driven applications. For the enterprise developer, this means that working with both XML and relational data stores is quickly becoming one of life's certainties (along with death and taxes, according to Ben Franklin; I would also modernize the quote by adding spam e-mail and running out of USB ports). For the developer, this means dealing with the problem of translating between the hierarchical, semi-structured XML data model and the normalized relational data model. The former is well suited for data exchange and modeling business data, the latter for persistent storage. However, there is a solution: SQLXML. SQLXML 3.0 is an add-on product for SQL Server 2000 that enhances SQL Server 2000 XML support. It enables developers to bridge the gap between XML and relational databases by providing a number of features that translate between the two data models.

Note This column is based on the SQL Server XML functionality that is available in SQLXML 3.0 SP2. Parts of the sample will not work properly without features from SQLXML 3.0 SP2. Since SQL Server 2000 was released in fall 2000, the WebData group at Microsoft has provided three fully supported upgrades to enhance and improve the XML features. These frequent releases are a result of customer demand and the fast-changing world of XML standards. For example, SQLXML 3.0 added support for W3C XML Schema, SOAP, and the .NET Framework. To download the latest Web release or to find more information on the new features offered in the XML for SQL Server Web Releases, see

In general, the SQLXML feature set can be divided into two major areas:

  1. Server-side functionality that can be accessed using Transact-SQL extensions.
  2. Client-side/middle-tier functionality accessible through ADO, HTTP, or the .NET data provider.

This article concentrates on the client-side/middle-tier SQLXML functionality, and shows how to leverage that functionality to turn SQL Server into an object repository for CLR objects. In part one of this article, the object repository was implemented using the SQLXML server-side functionality. For more information on those features, or for a general overview of the problem this solution is trying to solve, please refer to part one of this article.

Note This article assumes the reader has some knowledge of SQLXML and the XmlSerialization functionality available in the .NET Frameworks. For an introduction to SQLXML, please see A Survey of Microsoft SQL Server 2000 XML Features. For an overview of XmlSerialization in the .NET Frameworks, XML Serialization in the .NET Framework is an excellent introductory overview.

The Object Repository Revisited

In part one of this article, to illustrate how SQL Server could be used as an object repository, an employee tracking application was developed. This employee-tracking database utilized an object repository that abstracted the application developer from the underlying storage mechanism of those objects. Basically, the consumer of the object repository was given an abstract class that exposed the basic, common CRUD (create, retrieve, update, and delete) operations:

public abstract class ObjectRepository
{
   // Constructors cannot be abstract in C#; derived classes chain to this one.
   protected ObjectRepository(String connectionString) { }
   abstract public Employee[] GetEmployee(string lastName);
   abstract public void PersistEmployee(Employee employee);
   abstract public void PersistEmployees(Employee[] employees);
   abstract public void RemoveEmployee(Employee employee);
   abstract public void ResetObjectStore();
   abstract public ConcurrencyControl ConcurrencyControl
      { get; set; }
}

With the exception of the last two members, this API is the same as the one used in part one of this article to illustrate the server-side SQLXML functionality. In the process of implementing the API with client-side SQLXML functionality, it became evident that the ResetObjectStore method and the ConcurrencyControl property were required, but more on this later.

The original ObjectRepository API includes methods for retrieving Employee objects based on the employee's last name, persisting changes to an array of Employee objects, and removing Employee objects. Note that a helper method is provided for persisting a single Employee object. Under the covers, this method wraps the single Employee object in an array and uses the same code as the method for persisting an array of employees.

From the object repository API, the consumer is not coupled at all with the underlying data store. Consumers just query for objects, run their business logic, and then persist the changes. This allows the object repository to change the underlying storage mechanism without affecting the consumer's code, and provides a very object-oriented view of the persistence layer for the business logic developer to execute the CRUD operations against. In fact, the ObjectRepository class could implement the IList or IDictionary interfaces and behave like the collection classes in the Frameworks.

One of the strengths of the original object repository architecture was that it enabled the consumer to extend the Employee class by deriving an Employee2 class from it without changing the API or the implementation of object repository. The same design guideline will hold for the new implementation using the SQLXML client side technologies. However, a slightly different mechanism is leveraged to accomplish that goal.

A Quick Look at Annotated Schemas

It has become a common practice to describe a given set of XML document instances using an XSD schema. The XSD schema describes the structure and content of the XML and can often be used as a contract between two distinct entities wanting to exchange information encoded as XML. In the case of the object repository example, since the input/output of the XmlSerializer is XML, and through SQLXML the relational database can expose/consume XML, it seems quite plausible to use an XSD schema to describe the contract between the two APIs. In part one of this article, the example did not explicitly use an XSD. However, there was an implicit contract in that the XML being passed between the database and the application had to have a particular shape and content (the overflow feature of SQLXML did allow the XML to contain some open content). For the object repository utilizing server-side SQLXML technologies, this contract was contained in the FOR XML and OpenXML Transact-SQL statements.

As an alternative to using Transact-SQL commands, SQLXML provides the ability to specify the XML-to-relational mapping by annotating a given XSD schema with relational metadata. The XSD schema describes the desired shape and content of the XML, and the annotations describe how to generate the XML from the underlying relational structures. The basics of these annotations include mapping elements and attributes in the XSD schema to tables/views and columns in the database. These annotations can also be used to specify the hierarchical relationships in the XML (thus representing the relationships in the database).

The best way to learn how this mapping process works is to dive right into an example of an annotated XSD:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
   <xsd:element name="CustomerRecord" sql:relation="Customers" >
      <xsd:complexType>
         <xsd:sequence>
            <xsd:element name="Contact" sql:field="ContactName"
               type="xsd:string" />
            <xsd:element name="Title" sql:field="ContactTitle"
               type="xsd:string" />
            <xsd:element name="Company" sql:field="CompanyName"
               type="xsd:string" />
            <xsd:element ref="Order"
               sql:relationship="CustomerOrder" />
         </xsd:sequence>
         <xsd:attribute name="ID" sql:field="CustomerID"
            type="xsd:string" />
      </xsd:complexType>
   </xsd:element>
   <xsd:element name="Order" sql:relation="Orders" >
      <xsd:complexType>
         <xsd:sequence>
            <xsd:element ref="Items"
               sql:relationship="OrderOrderDetails" />
         </xsd:sequence>
         <xsd:attribute name="OrderID" type="xsd:int"/>
         <xsd:attribute name="Ordered" sql:field="OrderDate"
            type="xsd:dateTime" />
         <xsd:attribute name="Shipped" sql:field="ShippedDate"
            type="xsd:dateTime" />
      </xsd:complexType>
   </xsd:element>
   <xsd:element name="Items" sql:relation="[Order Details]" >
      <xsd:complexType>
         <xsd:sequence>
            <xsd:element ref="ItemPurchased"
               sql:relationship="OrderDetailsProducts" />
         </xsd:sequence>
         <xsd:attribute name="Price" sql:field="UnitPrice"
            type="xsd:decimal" />
         <xsd:attribute name="Quantity" type="xsd:int"/>
      </xsd:complexType>
   </xsd:element>
   <xsd:element name="ItemPurchased" type="Products"
      sql:relation="Products" />
   <xsd:complexType name="Products">
      <xsd:sequence>
         <xsd:element name="Name" sql:field="ProductName"
            type="xsd:string" />
         <xsd:element name="NumberInStock" sql:field="UnitsInStock"
            type="xsd:int" />
      </xsd:sequence>
   </xsd:complexType>
   <xsd:annotation>
      <xsd:appinfo>
         <sql:relationship name="CustomerOrder"
                           parent="Customers" parent-key="CustomerID"
                           child="Orders" child-key="CustomerID" />
         <sql:relationship name="OrderOrderDetails"
                           parent="Orders" parent-key="OrderID"
                           child="[Order Details]" child-key="OrderID" />
         <sql:relationship name="OrderDetailsProducts"
                           parent="[Order Details]" parent-key="ProductID"
                           child="Products" child-key="ProductID" />
      </xsd:appinfo>
   </xsd:annotation>
</xsd:schema>
Basically, all that has been done here is to provide an XML-to-relational table mapping. In other words, this is very similar to a traditional relational view except that the result is XML instead of a rowset. In fact, the mapping provides what can be thought of as a virtual XML view and a nice layer of abstraction over the underlying relational tables and columns.

Let's take a closer look at the additions to the XSD. First of all, the XML for SQL Server namespace was added with the sql namespace prefix. For the mapping annotations to be properly recognized, the developer must provide this namespace and use it for all the SQLXML mapping specific annotations. Next, attributes and elements were mapped to columns in the database using the sql:field annotation. Note that a few of the attributes and elements happen to have the same name as the column in the databases. In that case, no annotation was required because it is assumed by default that the column name matches the attribute/element name. Next, the remaining elements in the schema are mapped to tables in the database using the sql:relation annotation. The sql:relation annotation, because of scoping rules, also applies to all the column-mapped attributes and sub-elements. Finally, I needed to specify how the elements mapped to tables would be joined. In other words, describe how the hierarchy of the XML View should be created. If you look at the bottom of the schema, in the xsd:annotation section, I have defined three join relationships that specify how the tables should be joined. To use these relationships elsewhere in the document, I have given each an alias. I then use the sql:relationship alias to specify the relational join information when I "join" elements. For example, I want to nest Order information inside of Customer information, so I use the CustomerOrder sql:relationship that specifies the join between the underlying tables in the database.

For the example above, I only used a few of the mapping annotations available. There are several other annotations that provide different ways to expose the relational data as XML. For example, there are annotations to add CDATA sections, to provide constant elements, and even to expose open XML content that does not map to a specific column in the database. For more information, see the SQLXML Books Online.


To query the virtual XML View, one specifies an XPath query statement. XPath, as defined by the W3C, is a navigation language for XML instance documents. The following two XPath statements (the company name and order ID shown are illustrative Northwind values) could be executed against the XML View created with the annotated mapping schema from above:

/CustomerRecord[Company="Alfreds Futterkiste"]
/CustomerRecord[Company="Alfreds Futterkiste"]/Order[@OrderID=10643]

The first query returns all the CustomerRecord elements with the given company name. The second returns only the Order elements for the same company, with an additional filter based on the specified OrderID.

SQLXML, with some restrictions (see the SQLXML Books Online for more information), allows the user to specify any XPath query against an XML View. This XPath query is then translated, through the relational metadata provided in the annotated schema, into a FOR XML EXPLICIT mode Transact-SQL query, which is used to generate the resulting XML fragment or document. Further, SQLXML allows parameterization of the predicate values, so queries can be reused and the query plans cached as required.
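To make the translation step concrete, here is a minimal conceptual sketch of how a single-step XPath query with an equality predicate could map, through the annotation metadata, to a parameterized SQL statement. This is not SQLXML's actual engine; the MAPPING dictionary and the xpath_to_sql helper are invented for illustration, and the column/table names come from the schema above.

```python
# Conceptual sketch of XPath-to-SQL translation (not SQLXML's real algorithm).
# MAPPING mirrors the sql:relation / sql:field annotations in the schema above.
MAPPING = {
    "CustomerRecord": {
        "relation": "Customers",
        "fields": {"Company": "CompanyName",
                   "Contact": "ContactName",
                   "ID": "CustomerID"},
    }
}

def xpath_to_sql(element, predicate_field, param_name):
    """Translate /Element[Field=$param] into a parameterized SELECT."""
    m = MAPPING[element]
    # Fall back to the element name when no sql:field annotation exists
    column = m["fields"].get(predicate_field, predicate_field)
    return f"SELECT * FROM {m['relation']} WHERE {column} = @{param_name}"

print(xpath_to_sql("CustomerRecord", "Company", "CompanyName"))
# SELECT * FROM Customers WHERE CompanyName = @CompanyName
```

The key idea is the same as SQLXML's: the query author thinks purely in terms of the XML View, and the annotations supply the relational names needed to build the SQL.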


SQLXML provides two ways to modify a database:

  • OpenXML
  • Updategrams

As discussed in part one of this article, OpenXML modifies the underlying database by shredding the incoming XML document into a relational rowset, which can then be manipulated by the user through standard Transact-SQL statements. Although OpenXML can be a powerful and useful mechanism, there are some situations where its use is inappropriate. In general, this is when either the user prefers the XML-to-relational translation to occur outside of the SQL Server process, or when the developer prefers to perform DML operations through the virtual XML View abstraction instead of against the relational structures of the underlying SQL Server (that is, they don't want to write Transact-SQL statements or even know about the relational database schema). In those cases, Updategrams are a more appropriate solution.

Updategrams are executed through the XML template functionality provided by SQLXML. They are a special template command node type that specifies the before and after images of a given XML instance document. Then, based on the annotated mapping schema specified, the changes are translated into the appropriate Transact-SQL insert/update/delete statements. For example, if a Customer element appears in the before block, but not in the after block, then a SQL delete statement against the Customers table is generated. Here is a sample Updategram that updates a Customer and inserts an Order:

<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
   <updg:sync mapping-schema="Customers.XSD" >
      <updg:before>
         <Customer ID="GROSR">
            <Contact>Manuel Pereira</Contact>
         </Customer>
      </updg:before>
      <updg:after>
         <Customer ID="GROSR">
            <Contact>Manuel Pereira</Contact>
            <Order OrderID="10268" Ordered="2000-07-30T00:00:00" />
         </Customer>
      </updg:after>
   </updg:sync>
</ROOT>

There are a couple of things to note here. First of all, Updategrams must have the appropriate namespace (urn:schemas-microsoft-com:xml-updategram) included in the template. Next, the before and after blocks are included in a sync block, which translates into a single database transaction. In other words, if you have several changes to the XML View that you are trying to persist to the database, and one of the SQL statements fails, the entire transaction is rolled back. Further, Updategrams use optimistic concurrency control to ensure that the updated data hasn't been altered by another application since it was read from the database. This is done by including the original values in the before block of the Updategram. Before updating the database, the Updategram processing checks the values specified in the before block against the values currently in the database to ensure that the update is valid. The reason for using optimistic concurrency control is that most clients using the XML View are disconnected clients. In fact, once you have queried the XML View, you have basically taken a snapshot of the data and have no way of knowing whether it has changed. For most disconnected application scenarios, this is a design reality.
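The before/after semantics above can be sketched in a few lines. The following is a conceptual illustration only (SQLXML performs this translation internally, with proper parameterization); the updategram_to_sql helper and its naive value quoting are invented for the sketch.

```python
# Conceptual sketch: map before/after images of one row to the implied DML.
# Every before value lands in the WHERE clause -> optimistic concurrency.
def updategram_to_sql(table, before, after):
    if before is None:                      # appears only in <after>: INSERT
        cols = ", ".join(after)
        vals = ", ".join(repr(v) for v in after.values())
        return f"INSERT INTO {table} ({cols}) VALUES ({vals})"
    where = " AND ".join(f"{c} = {v!r}" for c, v in before.items())
    if after is None:                       # appears only in <before>: DELETE
        return f"DELETE FROM {table} WHERE {where}"
    # Only changed columns are written in the SET list
    sets = ", ".join(f"{c} = {v!r}" for c, v in after.items()
                     if before.get(c) != v)
    return f"UPDATE {table} SET {sets} WHERE {where}"

print(updategram_to_sql("Customers",
                        {"CustomerID": "GROSR",
                         "ContactName": "Manuel Pereira"},
                        None))
# DELETE FROM Customers WHERE CustomerID = 'GROSR' AND ContactName = 'Manuel Pereira'
```

If another application changed ContactName after the snapshot was taken, the WHERE clause matches no rows and the transaction is rolled back, which is exactly the optimistic check described above.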

Implementing the Client Side Object Repository

Except for adding a couple of members, the API for the object repository has stayed the same (with the same semantics) between the version using the server-side SQLXML features and the one using the client-side features. However, the implementation is very different, which will become quite evident by walking through the code.

The first method to examine is the object repository query method (from the ObjectRepositoryClientSide class):

override public Employee[] GetEmployee(string lastName)
{
   SqlXmlCommand command = new SqlXmlCommand(_SqlXmlconnectionString);
   command.CommandType = SqlXmlCommandType.Template;
   command.CommandText = "<EmployeeList " +
      "xmlns:ms='urn:schemas-microsoft-com:xml-sql' " +
      "xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' >" +
         EmployeeMappingSchema.Map +
         "      <ms:header>" +
         "        <ms:param name='LastName'/>" +
         "      </ms:header>" +
         "      <ms:xpath-query mapping-schema='" +
            EmployeeMappingSchema.MapName + "'>" +
         "        /Employee[LastName=$LastName] " +
         "      </ms:xpath-query>" +
      "</EmployeeList>";
   command.ClientSideXml = true;

   SqlXmlParameter param = command.CreateParameter();
   param.Value = lastName;
   param.Name = "@LastName";

   Employee[] fetchedEmployees;
   fetchedEmployees = DeserializeEmployee(command.ExecuteXmlReader());
   foreach (Employee emp in fetchedEmployees)
      CacheEmployee(emp);   // clone and cache for later change tracking
   return fetchedEmployees;
}

The method starts by creating a new SqlXmlCommand and setting the command type to SqlXmlCommandType.Template to specify that a SQLXML template query will be executed. This SQLXML template contains a single command node of the type xpath-query, which contains a parameterized XPath statement ( /Employee[LastName=$LastName] ). Note that it would have been possible to specify SqlXmlCommandType.XPath instead, but I wanted to provide the annotated mapping schema inline with the command instead of specifying an external file. That decision is based purely on making the sample code easier to set up, and it makes little difference in functionality. So, instead of using file-based mapping schemas, the mapping schema for the client-side object repository is contained as a const string in the EmployeeMappingSchema class. I suppose the right thing to do is create a resource file and make the annotated mapping schema a resource, but I'm going to worry about that in the next article.

Next, the method sets SqlXmlCommand.ClientSideXml = true. This tells SQLXML to perform the underlying rowset-to-XML conversion outside of the SQL Server process, in the client executable's process. It is possible to verify this by looking at the SQL queries (using SQL Profiler) generated by SQLXML in client-side mode: no FOR XML clause is specified.
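In client-side mode, then, the server returns a flat join result and the nesting happens on the client. The following sketch shows the essence of that rowset-to-hierarchy step; the nest function, the column names, and the sample rows are all illustrative, not SQLXML internals.

```python
# Sketch of the client-side rowset-to-XML step: the server runs a plain
# left-outer join (no FOR XML clause) and the client nests the flat rows.
rows = [
    {"EmployeeId": 1, "LastName": "Conrad", "City": "Redmond"},
    {"EmployeeId": 1, "LastName": "Conrad", "City": "Seattle"},
]

def nest(rows, parent_key):
    """Group flat Employee-join-Address rows into one record per employee."""
    employees = {}
    for r in rows:
        emp = employees.setdefault(
            r[parent_key], {"LastName": r["LastName"], "AddressList": []})
        if r.get("City") is not None:       # NULL city means no address row
            emp["AddressList"].append({"City": r["City"]})
    return employees

result = nest(rows, "EmployeeId")
print(len(result[1]["AddressList"]))   # two Address entries nested under one Employee
```

The sql:key-fields annotation discussed later is what tells the real engine which columns play the role of parent_key here.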

Finally, the LastName parameter is created and the query is executed, with the output returned as an XmlReader. This XmlReader is then passed to the same deserialization code that was used in the object repository implementation built on the SQLXML server-side features. In fact, since this code is common, I have moved it to the ObjectRepository abstract base class.

Before returning the array of new Employee objects, the employee objects are cloned and cached locally in the object repository:

private void CacheEmployee(Employee employee)
{
   if (_Employees[employee.ID] == null)
      _Employees.Add(employee.ID, employee.Clone());
   else
      throw new ApplicationException("Employee already exists.");
}

This is done for a couple of reasons. When changes to an Employee object are persisted, the object repository can tell the difference between an Employee object that is being updated (has a clone in the cache) and one that is being inserted (does not have a clone in the cache). In addition, for Employee objects being updated or deleted, the cloned version is used for concurrency control.
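The cache check that drives this decision is tiny; a sketch (the classify helper and the dictionary cache are illustrative stand-ins for the _Employees Hashtable):

```python
# Sketch of the cache lookup that classifies a persist call.
cache = {}

def classify(emp_id):
    """Return the DML operation implied by presence in the local cache."""
    return "update" if emp_id in cache else "insert"

cache[42] = {"LastName": "Conrad"}       # snapshot cloned at query time
print(classify(42), classify(7))         # update insert
```

An object seen at query time has a clone in the cache, so a later persist is an update; an object the repository has never seen must be an insert.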

Note that I have made a slight break from my original design here in that the consumer's objects must now implement ICloneable. My original design constraint for the object repository was that the client objects were not required to implement any interfaces or derive from any abstract classes. However, making the client objects implement ICloneable is not a huge burden, and is necessary for Updategram generation.

Also note that the object repository currently throws an error if an Employee object with the same ID is already in the cache. This is a somewhat naive approach. In fact, it would probably be better to also keep track of the original Employee objects that are returned to the object repository consumer and return references to those objects instead of materializing new ones, thereby avoiding creation of duplicate objects (as defined by the objects' key(s)). Further, the trip to the database could be avoided entirely in those cases where the required data is already in the local cache. However, that becomes significantly more difficult when the queries become more complicated (more complicated predicates, for example) or when the data in the cache becomes inconsistent with the underlying data store.

Since the real magic of this GetEmployee method is the SQLXML template, and more specifically the annotated mapping schema, it is worth taking a closer look. Here is the complete annotated mapping schema; the sql-prefixed annotations are the additions to the original XSD:

<xs:schema elementFormDefault='qualified'
   xmlns:xs='http://www.w3.org/2001/XMLSchema'
   xmlns:sql='urn:schemas-microsoft-com:mapping-schema'
   xmlns:ms='urn:schemas-microsoft-com:xml-sql'
   id='EmployeeMappingSchema' ms:is-mapping-schema='1'>
   <xs:element name='Employee' type='Employee' sql:relation='Employee'
      sql:overflow-field='Overflow' sql:key-fields='EmployeeID' />
   <xs:complexType name='Employee'>
      <xs:sequence>
         <xs:element name='AddressList' type='ArrayOfEmployeeAddress'
            sql:is-constant='1' />
         <xs:element name='FirstName' type='xs:string'
            sql:field='FirstName' />
         <xs:element name='LastName' type='xs:string'
            sql:field='LastName' />
         <xs:element name='EmployeeID' type='xs:int'
            sql:field='EmployeeId' />
         <xs:element name='Title' type='xs:string' sql:field='Title' />
      </xs:sequence>
   </xs:complexType>
   <xs:complexType name='ArrayOfEmployeeAddress'>
      <xs:sequence>
         <xs:element name='Address' type='EmployeeAddress'
            sql:relation='Address' sql:relationship='EmployeeAddress'
            sql:key-fields='AddressType EmployeeID' />
      </xs:sequence>
   </xs:complexType>
   <xs:complexType name='EmployeeAddress'>
      <xs:sequence>
         <xs:element name='Street' type='xs:string' sql:field='Street' />
         <xs:element name='City' type='xs:string' sql:field='City' />
         <xs:element name='State' type='xs:string' sql:field='State' />
         <xs:element name='Zip' type='xs:string' sql:field='Zip' />
         <xs:element name='Type' type='xs:string'
            sql:field='AddressType' />
      </xs:sequence>
   </xs:complexType>
   <xs:annotation>
      <xs:appinfo>
         <sql:relationship name='EmployeeAddress' parent='Employee'
            parent-key='EmployeeId' child='Address' child-key='EmployeeId' />
      </xs:appinfo>
   </xs:annotation>
</xs:schema>
As you can see, I have annotated the XSD with the sql:field and sql:relation annotations to map the elements in the XSD to the columns and tables in the database. This is a relatively straightforward process because the shape of the XML is fairly similar to the layout of the database. Note that in cases where the XML and the relational database schema don't match so nicely, it is possible to map sub-elements of a given element to multiple tables or to map a hierarchy of elements to a single table.

Since the XML hierarchy represented by the XSD maps to multiple tables, it is necessary to specify how to join those tables (the Employee and Address tables). This is done using the sql:relationship annotation in the appinfo section of the XSD. The sql:relationship annotation specifies the parent and child tables and the columns that are used to generate the join. Because it is generating a hierarchical relationship, the SQLXML query engine always interprets this as a left outer join with the parent table as the left member of the join.
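The join implied by a sql:relationship can be spelled out mechanically. A minimal sketch (the relationship_to_join helper is invented; the table and key names come from the mapping schema above, and real SQLXML of course emits a full FOR XML EXPLICIT query rather than SELECT *):

```python
# Sketch of the left outer join a sql:relationship annotation implies.
def relationship_to_join(rel):
    """Render a sql:relationship dict as the SQL join it describes."""
    return (f"SELECT * FROM {rel['parent']} LEFT OUTER JOIN {rel['child']} "
            f"ON {rel['parent']}.{rel['parent-key']} = "
            f"{rel['child']}.{rel['child-key']}")

employee_address = {"parent": "Employee", "parent-key": "EmployeeId",
                    "child": "Address", "child-key": "EmployeeId"}
print(relationship_to_join(employee_address))
```

The left outer join matters: an Employee row with no Address rows still produces an Employee element, just with an empty AddressList.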

SQLXML supports several other more specialized annotations, three of which are used in the sample's mapping: sql:key-fields, sql:is-constant, and sql:overflow-field. The sql:key-fields annotation serves two purposes here. First, it indicates which values should be used to identify matching pairs of elements when an Updategram with multiple elements of the same type in the before and/or after blocks is submitted (this means that, in effect, the key values in the client objects should be read-only, since changing them would lead to unpredictable results). Second, the sql:key-fields annotation identifies the primary key(s) for the table being mapped and is used by the SQLXML query engine to correctly nest the results of queries. Although not required in some cases where the mapping is only being used for query access, it is a highly recommended practice to always include the primary key information for a given table through the sql:key-fields annotation. Since this mapping will be used for both query and persistence of changes, I have included the key information for both tables to which we are mapping.

It is quite common for an XSD to contain elements that do not map directly to any table or column in a given database. One popular case of this is wrapper elements; that is, elements whose sole job is to contain other elements. In the case of this mapping, the AddressList element can be considered a wrapper element. In the generated code it is represented by the ArrayList of EmployeeAddress objects for a given Employee object. Clearly, there is no database construct to map this element to, so the sql:is-constant annotation is used to signify that the element should be treated as constant content. That is, when the query results are generated, an AddressList element is automatically included for each Employee element, and upon persistence of changes, it is simply ignored by the Updategram XML-to-Transact-SQL translation.

Finally, there is the sql:overflow-field annotation. For XPath queries against the virtual XML View, the sql:overflow-field annotation works exactly the same as the xmltext directive of FOR XML EXPLICIT mode queries. In fact, when the underlying Transact-SQL query is generated by SQLXML, that directive is specified. For persistence of changes, the sql:overflow-field annotation is similar to the OpenXML @mp:xmltext metaproperty, with one key difference: how unmapped content is specified. In the case of OpenXML, any content that is not mapped with the Transact-SQL WITH clause is put into the overflow. In the case of annotated mapping schemas, the overflow functionality needs to be more flexible for cases where a given element or attribute is defined in the XSD, but does not map to a given column or table in the relational data store, and the content for that element or attribute needs to be persisted and round-tripped to and from the data store. To specify that an element or attribute should be persisted along with the other overflow content (true open content from the XML instance document), the sql:mapped annotation is used to mark which elements or attributes from the XSD are not mapped. Interestingly, when no overflow field is specified, elements or attributes with a sql:mapped value of false are ignored, similar to constant content.

Persisting the changes to the Employee objects is a bit more complicated than querying and materializing. For the client-side object repository, all the persistence methods call a single method named PersistChanges:

private void PersistChanges(Employee afterEmployee, bool remove)
{
   SqlXmlCommand command = new SqlXmlCommand(_SqlXmlconnectionString);
   command.CommandType = SqlXmlCommandType.Template;
   Employee beforeEmployee = (Employee) _Employees[afterEmployee.ID];
   command.CommandText =
      UpdateGramGenerator.GenerateUpdateGram(beforeEmployee,
         remove ? null : afterEmployee,
         Type.GetType("ObjectStorePart2.Employee"),
         EmployeeMappingSchema.MapName, EmployeeMappingSchema.Map,
         this.ConcurrencyControl);
   XmlReader reader = command.ExecuteXmlReader();
   if (beforeEmployee != null)
      _Employees.Remove(afterEmployee.ID);   // drop the original clone
   if (!remove)
      CacheEmployee(afterEmployee);          // re-cache the latest version
}

This method takes two parameters: the current version of the Employee object as provided by the user, and a flag specifying whether the operation is deleting the Employee. Like the query method, a SqlXmlCommand object is created and the CommandType is set to be a SQLXML template. Then the original version of the Employee object (as represented by the cloned version) is fetched from the object repository's cache.

Note It is possible that the original Employee object was not in the cache. This represents an inserted Employee object.

Next, the original and current Employee objects, the type of the object, the mapping schema and its name, and an enum value representing the required concurrency control are passed to a method that generates the Updategram, which is set as the command text. The command is then executed and the changes are persisted. For the update and delete cases, the original version of the Employee object is then removed from the object repository cache. Finally, the latest version of the Employee object is cached locally as the new original version.

An Updategram Generator

The real heavy lifting of the PersistChanges method is done by the UpdateGramGenerator class. The UpdateGramGenerator class is a utility class that takes a before and after version of a given object (represented by different instances of the object) and generates a SQLXML Updategram that is used to persist the changes to the object to the relational data store through the SQLXML virtual XML View.

public static string GenerateUpdateGram(object before, object after,
   Type objectType, string mappingName, string mapping,
   ConcurrencyControl concurrencyChecks)
{
   MemoryStream stream = new MemoryStream();
   XmlTextWriter writer = new XmlTextWriter(stream, null);
   writer.WriteStartElement("ROOT");
   writer.WriteAttributeString("xmlns", UpdateGramPrefix, null,
      UpdateGramNamespaceUri);
   writer.WriteAttributeString("xmlns", "ms", null,
      "urn:schemas-microsoft-com:xml-sql");
   writer.WriteRaw(mapping);   // write the inline mapping schema
   writer.WriteStartElement("sync", UpdateGramNamespaceUri);
   writer.WriteAttributeString("mapping-schema", mappingName);
   SerializeBefore(before, objectType, writer, concurrencyChecks);
   SerializeAfter(after, objectType, writer);
   writer.WriteEndElement();   // sync
   writer.WriteEndElement();   // ROOT
   writer.Flush();
   stream.Position = 0;
   return new StreamReader(stream).ReadToEnd();
}

This method opens an XmlTextWriter and then builds the Updategram programmatically by writing out the Updategram header, followed by the sync block, which contains the before and after images of the objects as serialized XML. The SerializeAfter method is straightforward in that it creates an element for the after block of the Updategram and then serializes the "after" Employee object:

private static void SerializeAfter(object obj, Type objectType,
   XmlTextWriter writer)
{
   writer.WriteStartElement("after", UpdateGramNamespaceUri);
   if (obj != null)
   {
      XmlSerializer serializer = new XmlSerializer(objectType);
      serializer.Serialize(writer, obj);
   }
   writer.WriteEndElement();   // after
}

Note that in the case of removed Employee objects, the object reference being passed in is null, and therefore an empty after block is generated. If a certain element exists in the before block, but not in the after block, SQLXML interprets the operation as a delete.

The SerializeBefore method is a bit more complicated:

private static void SerializeBefore(object obj, Type objectType,
   XmlTextWriter writer, ConcurrencyControl concurrencyChecks)
{
   writer.WriteStartElement("before", UpdateGramNamespaceUri);
   if (obj != null)
   {
      XmlAttributeOverrides concurrencyOverrides = new XmlAttributeOverrides();
      if (concurrencyChecks == ConcurrencyControl.UseConcurrencyAttributes)
         ReflectTypeForConcurrencyValues(objectType, concurrencyOverrides);
      XmlSerializer serializer = new XmlSerializer(objectType,
         concurrencyOverrides);
      serializer.Serialize(writer, obj);
   }
   writer.WriteEndElement();   // before
}

This method is similar to the SerializeAfter method with one exception—the fields and properties that are serialized from the Employee object can be controlled at runtime through the concurrencyChecks parameter. If the parameter value is ConcurrencyControl.AllValues, then this method is identical to the SerializeAfter method in that the before element is emitted containing the original instance of the Employee object. However, persisting the entire original instance of the Employee object is not always desirable, so the user can specify the ConcurrencyControl.UseConcurrencyAttributes value instead.

The reason for this option is that SQLXML uses all the values in the before block for concurrency control when persisting the changes to the data store. In other words, every value in the before block appears in the WHERE clause of the Transact-SQL statement generated by SQLXML. Obviously, there are cases when only certain properties and/or fields are needed for concurrency control. For example, in the case of the Employee and Address tables in this sample application, only the primary keys on both tables are required for concurrency control.
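As a sketch of the difference (the actual statements SQLXML generates may differ in form; table and column names follow the sample), a before block containing all values produces an update along these lines:

```sql
UPDATE Employee
SET    FirstName = 'Nancy'
WHERE  EmployeeID = 1
  AND  FirstName = 'Nancie'
  AND  LastName = 'Davolio';
```

whereas a before block containing only the primary key produces:

```sql
UPDATE Employee
SET    FirstName = 'Nancy'
WHERE  EmployeeID = 1;
```

In the first case, if another user has changed any of the compared columns in the meantime, the WHERE clause matches no row, the update affects nothing, and SQLXML raises an error signaling the concurrency conflict.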

To accomplish this task, the UpdateGramGenerator requires that the class developer use the ConcurrencyAttribute to specify whether a field or property should be serialized in the before block at runtime, and therefore whether its value is used for concurrency control:

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class ConcurrencyAttribute : Attribute
{
   private bool _useValue;

   public ConcurrencyAttribute() { _useValue = true; }

   public ConcurrencyAttribute(bool useValue)
   {
      _useValue = useValue;
   }

   public bool UseValue
   {
      get { return _useValue; }
      set { _useValue = value; }
   }
}

The default for the attribute's UseValue property is true since using all values by default for concurrency is the safest approach.

Note that when specifying ConcurrencyControl.AllValues to the GenerateUpdateGram method, one must be using the RTM version of SQLXML 3.0 SP2. This is due to a feature added in SP2 of SQLXML 3.0 that allows overflow content to exist in the before block of the Updategram. Prior to SQLXML 3.0 SP2, an error was thrown if the before block contained any non-mapped overflow content, even in cases where an overflow field was specified.

Here is an example of using the custom attribute with the Employee class:

public class Employee : ICloneable
{
    [XmlElement(ElementName = "FirstName")]
    [Concurrency(UseValue = false)]
    public string FName
    {
        get { return _firstName; }
        set { _firstName = value; }
    }

    [XmlElement(ElementName = "LastName")]
    [Concurrency(UseValue = false)]
    public string LName
    {
        get { return _lastName; }
        set { _lastName = value; }
    }

    [XmlElement(ElementName = "EmployeeID")]
    [Concurrency(UseValue = true)]
    public int ID
    {
        get { return _employeeID; }
        set { _employeeID = value; }
    }

    // ...
}

Here the LName and FName properties are not used for concurrency, but the ID property is used.
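With these attributes in place and ConcurrencyControl.UseConcurrencyAttributes specified, the serialized before block contains only the key, roughly like this (values are illustrative):

```xml
<updg:before>
  <Employee>
    <EmployeeID>1</EmployeeID>
  </Employee>
</updg:before>
```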

At runtime, for fields or properties that specify false for the ConcurrencyAttribute.UseValue property, the XML serializer omits the corresponding attributes/elements. This is accomplished by building up an XmlAttributeOverrides collection based on the class definitions of the objects being persisted:

private static void ReflectTypeForConcurrencyValues(Type type,
   XmlAttributeOverrides concurrencyOverrides)
{
   MemberInfo[] memberInfo = type.GetMembers(BindingFlags.Public |
      BindingFlags.Instance);
   foreach (MemberInfo member in memberInfo)
   {
      if (member.MemberType == MemberTypes.Field ||
          member.MemberType == MemberTypes.Property)
      {
         Object[] atts = member.GetCustomAttributes(typeof(Attribute), false);
         XmlArrayItemAttribute ai = null;
         ConcurrencyAttribute ca = null;
         foreach (Attribute a in atts)
         {
            if (ai == null)
               ai = a as XmlArrayItemAttribute;
            if (ca == null)
               ca = a as ConcurrencyAttribute;
         }
         Type childType = null;
         if (ai != null)
            childType = ai.Type;
         else if (member.MemberType == MemberTypes.Field)
            childType = ((FieldInfo)member).FieldType;
         else if (member.MemberType == MemberTypes.Property)
            childType = ((PropertyInfo)member).PropertyType;
         if (ca != null)
         {
            if (!ca.UseValue)
            {
               XmlAttributes att = new XmlAttributes();
               att.XmlIgnore = true;
               concurrencyOverrides.Add(type, member.Name, att);
            }
         }
         // Recurse only into non-simple child types.
         if (childType != null && !childType.IsPrimitive &&
             childType != typeof(string))
            ReflectTypeForConcurrencyValues(childType, concurrencyOverrides);
      }
   }
}

This method walks the type passed in using reflection to find all the property and field members. For each of those members, all the custom attributes are retrieved and each attribute is examined. If the custom attribute is the ConcurrencyAttribute and its UseValue property is false, then an XmlAttributes instance is created with XmlIgnore set to true. This instance is then added to the XmlAttributeOverrides collection, which is passed in when the object is serialized. Passing in this collection allows the developer to override the runtime behavior of XML serialization. In this case, it provides the Updategram generator with a way to explicitly specify that certain fields and properties should be ignored when the object is serialized.

This method calls itself recursively for all child types that are not simple types. For collection types, the XmlArrayItemAttribute is examined to determine what types the collection can contain.

As I stated before, with the client-side implementation of the object repository, I added a couple of new methods to the ObjectRepository abstract base class:

public override void ResetObjectStore()

public override ConcurrencyControl ConcurrencyControl
{
   get { return _concurrencyControl; }
   set { _concurrencyControl = value; }
}

Since the object repository built on SQLXML client-side technologies is stateful, it is necessary to be able to reset the state of the object repository. For the implementation that uses server-side SQLXML features, this method does nothing, since that implementation is stateless. The ConcurrencyControl property is used by the client of the object repository to specify the level of concurrency control required. This maps directly to the concurrency control parameter of the Updategram generator.
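A minimal usage sketch follows. Apart from ConcurrencyControl, ResetObjectStore, and PersistChanges, the names here (the factory call and GetObject) are assumptions for illustration rather than the actual repository API from the sample:

```csharp
// Hypothetical client code for the client-side repository.
ObjectRepository repository = CreateClientSideRepository();
repository.ConcurrencyControl = ConcurrencyControl.UseConcurrencyAttributes;

// Materialize an object, change it, and persist the change; only the
// key values are used for concurrency checks under this setting.
Employee employee = (Employee)repository.GetObject(employeeID);
employee.FName = "Nancy";
repository.PersistChanges();

// Discard any cached before images, for example after a failure.
repository.ResetObjectStore();
```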

Possible Further Enhancements

During the implementation of the client-side object repository, I discovered a few enhancements that a future version could make:

  • Another interesting concurrency control mode exists—keys plus only updated values. That level offers more protection than specifying only key values, but would not be as restrictive as using all values. To achieve this, one would have to modify the ReflectTypeForConcurrencyValues method to check the runtime objects being passed in for values that have changed, in addition to looking for the ConcurrencyAttribute.
  • As an interesting alternative to using the ConcurrencyAttribute, one could annotate the mapping schema with a custom annotation specifying which attributes and/or elements should be used for concurrency. If the annotated mapping schema were stored externally to the application code, for example in a file, then the application would not need to be recompiled if the concurrency requirements changed.
  • Cloning and storing the objects in the object repository. Although functionally this is the easiest way to track the before image of a given object, it is probably overkill and could have adverse performance effects for some applications. An interesting alternative is to store a list of only the values used for concurrency control. Like the current solution, this list would be indexed by the object key(s).
  • Although the Updategram generator is written to accept any CLR object, the object repository is not. It would be helpful to refactor the object repository so that it could be used with any CLR object, with the mapping specified when the object repository is created. Generics would work well in this implementation.

Conclusion

As discussed in part one of this article, there is quite a bit of overhead in using XML as the transport-layer data format between the client application and the data store. By using the client-side features of SQLXML, that issue is somewhat alleviated, because no data is sent over the wire encoded as XML (in other words, no angle brackets). However, a serialization and deserialization process still occurs in the client code every time an object is materialized from, or persisted to, the underlying data store.

This serialization and deserialization overhead suggests that this sort of architecture is most appropriate for an application that either has interoperability requirements for its data, or can accept the overhead of using XML as the underlying data model in exchange for the functionality it provides.