Section 5: Extensions: Building Blocks for Extra Functionality


Brian Travis
Architag International Corporation

November 2003

Applies to:
    Microsoft® .NET Framework
    Microsoft Visual Studio® .NET 2003

Summary: Learn how to build distributed applications using Visual Studio .NET and extensions for extra functionality. (32 printed pages)

To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.


Interchangeable Parts
System Development by Artisans
Building Blocks
Advanced Web Services
Common Interoperability Problems and Their Solutions
Web Services Protocols
Implementing Security
Implementing Attachments

Two hundred years ago, give or take a half century, manufacturing was done a bit differently than it is now. Every item that was manufactured, from pots to guns to clothes, was made as a custom piece. Manufacturing a rifle, for example, required hundreds of hours of work by a skilled gunsmith. A suit was made by a tailor, specifically to fit a particular customer. If an item needed repair, a craftsman had to build a replacement part from scratch to fix it.

Interchangeable Parts

Flash forward a hundred years, and we still saw manufacturing done the same way. Everything from stoves to bicycles to cars was made one at a time, with no hope that a part made for one would be interchangeable with the same part made for another. In fact, the concept of interchangeable-parts manufacturing was as foreign as the concept of a self-ruling government had been a hundred years earlier. Things we take for granted now were not even considered then.

Then came Eli Whitney. Whitney's cotton gin revolutionized the picking of cotton, finally making the crop profitable for many parts of the country. In making his revolutionary machine, Whitney also perfected the concept of interchangeable parts that could be mass-produced, making replacement easier, faster, and cheaper.

Henry Ford took the idea of interchangeable parts and applied it to a larger market item, the automobile. Like Whitney, Ford saw that the effort required to forge and sculpt each part by hand required expertise that took a long time to develop. If he could create a method to manufacture hundreds or thousands of exact copies of a single part, then the engineering would only need to be done once, and all parts would be interchangeable from one car to another. This was the beginning of mass-production.

Mass-production enabled Ford to hire unskilled workers to manufacture identical parts and assemble them into cars for sale. These parts could also be distributed throughout his markets, and when the owner of one of Ford's cars needed a new part, it would be possible for the owner to get it locally without hiring an expensive artisan to create one for him.

There was an article in a recent issue of Popular Science magazine about a group of hobbyists who were modifying their cars to provide extra horsepower and other performance enhancements to otherwise stock cars. Popular modifications include replacing the carburetor with a larger one to improve airflow, or swapping the drive tires with wider ones to handle the extra power.

These hobbyists, called "tuners," were able to get some impressive performance out of their otherwise generic cars and small engines. And, while they certainly did plenty of custom manufacturing and crafting of individual parts, many of the enhancements were made with parts they bought off the shelf at the local parts store.

By the time their cars were ready, each was a Frankenstein collection of mass-produced parts and custom work that formed an impressive end product. This kind of affordable customization would not be possible without the advent of mass-produced interchangeable parts.

Mass production and interchangeable parts formed the cornerstone of the Industrial Revolution, and made possible great leaps in productivity and worldwide wealth.

System Development by Artisans

Flash forward another hundred years or so to the present. It seems as if we have forgotten the concept of leveraging the work of others. Most software development is done by programmers-as-artisans who re-create the same functionality again and again, each time fixing the problems with the previous implementation.

What we need is a concept of interchangeable parts for software development. This has been tried again and again with varying success. The C programming language came with a standard library (stdlib) of functions, such as printf and sscanf, which most C programmers gladly used rather than writing their own. Later, the Microsoft® Foundation Class (MFC) for C++ development was made available to programmers working in an object-oriented Microsoft Windows® environment. Who wants to write a dialog box function if there is one available that works and does mostly what is needed?

This approach has worked for basic functions like printing and displaying dialog boxes, but for more complicated business-oriented processes, the function-library approach lacks the flexibility required by most organizations, and so programmers are forced to write complex code over and over again.

Consider, then, the concept of adding security to an application. Security is, at one level, a basic function with well-known methodologies, while at the same time critical to the success of an enterprise, which wants to make sure that its individual policies are manifest in the code.

Security is an important aspect of any modern enterprise system. Too often, we see programmers resorting to hand-crafted techniques to add security to an existing system, or to build it into new systems. This hand-crafting often uses components and libraries that are available within the enterprise, or from third-party sources. However, putting these components together requires re-engineering for each system.

What if I told you that you could add security subsystems that instantiate any corporate policy by adding just a couple of lines of code to your existing application, just as the car hobbyists increased the horsepower of their cars by adding off-the-shelf superchargers? Well, you can, by using the standards and methods that we have been using for FoodMovers.

Building Blocks

Microsoft's .NET Framework provides an infrastructure for building systems as services that are interchangeable, much like the concept invented by Henry Ford. As we have seen in this project on the FoodMovers system, it is possible to build a custom solution using standard parts that fit together using widely supported standards. The reason we use these blocks and standards is to be compatible with other blocks so we can extend our system intelligently in the future.

Building on the basic Web services standards (XML, XML Schema, SOAP, and WSDL), Microsoft and industry partners have created advanced Web services specifications for adding security, reliable messaging, and transactions to your Web services applications. These specifications were designed to be modular, composable, and platform independent. The advanced Web services protocols are maturing quickly because numerous industry partners are collaborating to ensure that the specifications address the identified scenarios, and are providing implementations of the specifications for interoperability work. Microsoft provides early implementations of these specifications through a supported product called Web Services Enhancements for Microsoft .NET (WSE). (See The MSDN Web Services Development Center.) WSE provides support for security, message routing, attachment support for binary data types, a message-based object model for SOAP messaging, support for multiple hosting environments, and support for alternative transports.

Microsoft envisions Web services protocols as a "highway" for secure, reliable, transacted application communication. By creating a set of easily implementable coding objects, a freeway is built, not by a central transportation authority, but by tens of thousands of developers creating hundreds of thousands of services that are available to whoever needs them and has the authority to access them.

In this section, we will see how easy it is to use Microsoft's WSE to add important functionality to the FoodMovers application, just by adding these building blocks to our product.

Advanced Web Services

In a traditional system development project, requirements like security and reliability are considered in the design phase and included in the architecture and development specifications for the system.

A Web services environment is no different. We must consider all of the requirements for our system as we create the architecture and design. Let's consider security, for example. In order to design a security infrastructure, we need to understand the internal systems and the security they require. Then we would want to work with external partners to determine their requirements for security.

In a traditional system design environment, these requirements are written into a design specification and carried through to the development and deployment of the completed system.

Once developed, a security infrastructure must be maintained by all parties involved in its deployment. If any part of the system is updated later, the security subsystem would need to be evaluated and possibly modified, which would impact all of the partners that had a stake in the messages sent over the secure system.

But let's take a larger view. FoodMovers must develop its internal and external security system as part of its relationships with partner systems. FoodMovers' external partners must create a compatible security system so they can communicate with FoodMovers. But the partners also work with other companies, and they must create a security infrastructure for each of those relationships as well.

In other words, many different companies have the requirement to implement security in their systems. Would it be possible to mass-produce a security capability such that anyone who wants security can install the interchangeable part?

Common Interoperability Problems and Their Solutions

In past sections of this project, I discussed certain integration problems and how a service-oriented architecture can solve them. We applied a loosely coupled, message-based architecture to allow disparate systems to communicate over standard infrastructures. We created Web service interfaces that encapsulated business logic and made data and processes available to other applications.

Is that all there is to interoperability? Can all integration problems be solved using a common alphabet, XML; a common encapsulation and protocol, SOAP; and a common contract, WSDL? Like the security concerns I mentioned above, most implementations of service-oriented architectures need certain capabilities. In this section, I will discuss common problems and standards-based approaches to solving them.

Problem: Security

The dream of businesses sharing information by exposing their processes as Web services is usually shattered by the cold reality of security problems. In using Web services, we can deliver messages using the Internet, which is notorious for being insecure. The open nature of the Internet is such that packets can be picked up by anyone as they race around the globe from origin to destination.

In Web services, our messages are packaged using SOAP. Messages in XML syntax are easily read by human eyeballs. It is not a hard job to tap the wire, or fake that an unknown company is a trusted partner.

Consider one vulnerable connection we have created: the ability for our suppliers, specifically Hearty Soup Company, to query our inventory and place orders directly into our system. If our competitors were able to do this, they would be able to find out more about us than we would like. Also, if Hearty Soup Company's competitors (or anyone else for that matter) were able to enter unauthorized orders for Hearty Soup Company, their business would suffer.

It is critical, then, that we have two types of security in the interactions with our suppliers. First, we want to make sure that any request that comes from our trusted trading partners is, in fact, coming from them and not from someone pretending to be them. This is called authentication. Second, we want to encrypt any messages going back and forth so that anyone between us and them cannot decipher the message. This is called encryption.

We could implement these systems ourselves but, fortunately, there are standardized building blocks that we can use that will make our job easier.

Solution: Multi-faceted Security Specifications

Microsoft, IBM, SAP, and VeriSign formed various committees to develop several specifications that cover the various areas of Web service security. The hierarchy of specifications sits under an effort called WS-Security, which serves as the starting point for security building blocks. It is shown in Figure 1.


Figure 1. WS-Security is one of many specifications that will standardize and secure interactions among businesses.

  • WS-Security is a specification that defines how to attach signature and encryption headers to SOAP messages. It also defines how to attach security tokens such as Kerberos tickets or certificates and keys, and it defines the namespaces that will be used in implementing security in Web services. Through these namespaces, WS-Security defines a vocabulary for enhancing SOAP messages with security. However, WS-Security is not enough by itself; more applications and specifications need to be built on WS-Security to standardize all aspects of a security implementation.

  • Secure Socket Layer (SSL) and Transport Layer Security (TLS) provide transport-level security. SSL/TLS together enable point-to-point secure sessions and offer security features such as authentication, data integrity and confidentiality.

  • XML Signature specifies an XML syntax to represent and verify the signature of Web resources, services, or any URI representations.

  • XML Encryption defines a process for encryption and decryption. XML Encryption syntax describes encrypted content and the information that enables the intended recipient to decrypt the document.

  • WS-Policy specifies how to express the policies of Web service endpoints, such as descriptions of their capabilities and constraints, and policies on intermediaries and endpoints.

  • WS-Trust specifies a trust model. WS-Trust cannot be used by itself; it is a building block on top of WS-Security. Trust policies are rights associated with a license and, as such, can be composed using any rights-definition language to provide additional semantics such as usage policies and usage restrictions.

    WS-Trust is an extension to WS-Security to define a model for secure communication, methods for issuing credentials, a process for deriving session keys, methods for authenticating SOAP messages, and a process for evaluating and assessing trust. Using these extensions, applications can have authenticated secure communication over SOAP that can be "trusted" by the involved parties.

  • WS-Privacy defines how to implement a privacy model that includes Web services, requestors' statements of privacy preferences, and organizational privacy statements.

  • WS-SecureConversation specifies how to manage and authenticate message exchanges between parties, and how to secure conversations.

  • WS-Federation specifies how to manage and broker the trust relationships in a heterogeneous federated environment, including support for federated identities.

  • WS-Authorization specifies how to manage authorization data and authorization policies.

WS-Security uses different security mechanisms. For encryption and decryption using Public Key Infrastructure (PKI), keys are exchanged between partners. FoodMovers will send its public key to Hearty Soup Company. Hearty Soup Company will use that key to encrypt a message, such as an Item Maintenance document. When FoodMovers receives the message, it will use its associated private key to decrypt the message. The encrypted message between the two parties is undecipherable to anyone who sees it. In fact, Hearty Soup Company cannot even decipher it after it has encrypted it with the FoodMovers' public key. The process is the same when FoodMovers sends an encrypted message, such as a purchase order, meant only for the Hearty Soup Company's eyes.
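The public-key exchange described above can be illustrated with a deliberately tiny, textbook RSA key pair. The numbers 61, 53, 17, and 2753 are the classic classroom example, far too small for real use; production systems use a crypto library and keys of 2048 bits or more.

```python
# Toy RSA demonstration of the FoodMovers/Hearty exchange described above.
# These key sizes are illustrative only -- never use them in practice.

# FoodMovers' key pair (classic textbook values):
p, q = 61, 53
n = p * q                  # modulus, part of both keys: 3233
e = 17                     # public exponent (FoodMovers' public key: (e, n))
d = 2753                   # private exponent (FoodMovers' private key: (d, n))

# Hearty Soup Company encrypts a message with FoodMovers' PUBLIC key.
message = 65
ciphertext = pow(message, e, n)

# Hearty cannot reverse its own encryption; only the private key can.
decrypted = pow(ciphertext, d, n)

print(ciphertext)  # 2790
print(decrypted)   # 65
```

Note that anyone can run the encryption step, because the public key is public; only the holder of the private exponent can get the original value back.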

In addition to message encryption, authentication is a vital part of security. Authentication is managed by sending a signature from the sender to the receiver. Signatures are used to authenticate that the sender of the message is who they say they are, and to ensure that nothing happened to the message during transmission. That is, no parts are missing, and the data is the same as what the sender signed.

Signature authentication is similar to encryption. A hash algorithm and the sender's private key are used to compute a value from the message, and this value is sent to the receiver along with the message. The receiver processes the message parts using the same algorithm and the sender's public key, obtaining its own value. If this value matches the one sent with the message, the sender is authenticated and the message is intact. If not, either parts of the message were corrupted on the way, or another party sent the message in an attempt to fool the receiver into thinking it is the trusted party.
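The sign-and-verify round trip can be sketched with the same kind of toy key pair. Everything here is an illustration: the key values are textbook-sized, and the digest function is a trivial stand-in (a byte sum) so the example stays self-checking; real implementations use SHA-class hashes, padding schemes, and a crypto library.

```python
# Toy RSA key pair (illustrative textbook values only).
n, e, d = 3233, 17, 2753

def digest(message):
    # Stand-in "hash" so the sketch is self-contained; real code uses SHA-256.
    return sum(message) % n

def sign(message):
    # The sender signs the digest with the PRIVATE key.
    return pow(digest(message), d, n)

def verify(message, signature):
    # The receiver recomputes the digest and checks it with the PUBLIC key.
    return pow(signature, e, n) == digest(message)

msg = b"ORDER 144 CASES TOMATO SOUP"
sig = sign(msg)
print(verify(msg, sig))         # True: sender authentic, message intact
print(verify(msg + b"S", sig))  # False: message was changed in transit
```

The second check is the tampering case: because even a one-byte change alters the digest, the signature no longer matches and the receiver rejects the message.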

I will discuss, using code, how WS-Security is applied to the message and how this message looks later in this section.

Problem: Binary Data

SOAP works great for XML documents. In fact, the SOAP specification assumes that the document in the SOAP body is in XML syntax. This works fine for information that can be codified as an XML document.

However, not everything is XML. We might want to send digitized photos along with an insurance claim, for example, or we might want to send a scanned drawing for a manufacturing price quote.

Sometimes, even XML is not XML. When we encrypt an XML message, put a digital signature in it, encode it, or compress the message, it becomes a binary document.

There is a W3C specification called "SOAP Messages with Attachments." This is commonly called "SOAP with Attachments," or "SwA." The specification lays out the rules for attaching a non-XML object to a SOAP message. SwA does not create a new attachment mechanism. Rather, it uses the familiar Multipurpose Internet Mail Extensions (MIME) standard that email messages use.

MIME encodes a document so it can pass easily through mail gateways on its way to a final destination. For text attachments, MIME simply places the document after the message in its native format. Some characters must be escaped, but the MIME attachment looks pretty much like the text file that was input.

For non-text attachments, MIME encodes the message according to one of several different binary encoding specifications that turns unprintable characters into printable, transportable characters.

While MIME encoding works fine for most email purposes, there are a couple of problems with using MIME in a speed-sensitive, scalable application.

  • First, MIME documents that are encoded can get pretty big. A one-megabyte graphic grows by roughly a third by the time it is Base64-encoded to clear text.
  • Second, MIME concatenates each message serially at the end of the document. There are flags in the concatenated stream that indicate where each attachment ends, so the application can put the attachments back together. However, the application must read each attachment, character-by-character, until it reaches these flags. There is no ability for an application to go directly to the third attachment; it must first read all of the way through the first two.
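Both costs are easy to see in a few lines of Python using the standard base64 module. The "attachment" here is just random bytes, and the boundary string is invented for illustration.

```python
import base64
import os

# Cost 1: size inflation. Base64 turns every 3 payload bytes into 4
# printable characters, roughly a 33 percent increase.
payload = os.urandom(1024 * 1024)          # a 1 MB "graphic"
encoded = base64.b64encode(payload)
print(len(encoded) / len(payload))         # ~1.33

# Cost 2: serial access. To find the third attachment in a MIME-style
# stream, an application must scan past every boundary flag before it.
BOUNDARY = b"--example-boundary--"
stream = BOUNDARY.join([b"attachment-1", b"attachment-2", b"attachment-3"])
parts = stream.split(BOUNDARY)             # reads the entire stream
print(parts[2])                            # b'attachment-3'
```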

Email readers work by decoding this MIME stream and saving the attachments somewhere the user can get them. For email readers, MIME is a good, simple choice for attachments. However, for high-performance service-oriented architectures using SOAP, a more scalable approach is required.

Brother, Can You Spare a DIME?

Microsoft and IBM worked together to create a more efficient attachment methodology, one more appropriate for message-based architectures. They published their work as an open specification called Direct Internet Message Encapsulation (DIME), submitted to the IETF as an Internet-Draft.

DIME is a standard created especially for transporting binary documents. In a DIME package, each message is composed of records. Each record contains payload length, payload type, and a payload identifier. The payload length is an integer indicating the number of octets of the payload. Therefore, there is no need to parse; DIME allows random access throughout the file.

Each attachment, including the SOAP document itself, is encapsulated in a DIME object. Because each package has a length indicator at the beginning, an application can quickly read through the stream by offset to find the next object, rather than parsing the stream character-by-character. DIME also supports both URI and MIME media type constructs so it can handle the older world of MIME.
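The offset-based access that DIME enables can be sketched with Python's struct module. This is a simplified format with only a 4-byte length prefix, not the real DIME header, which also carries the payload type and identifier fields.

```python
import struct

def pack_records(payloads):
    # Each record: a 4-byte big-endian length, then the raw payload octets.
    return b"".join(struct.pack(">I", len(p)) + p for p in payloads)

def record_at(stream, index):
    # Jump record-to-record by offset; no character-by-character parsing.
    offset = 0
    for _ in range(index):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4 + length          # skip header plus payload
    (length,) = struct.unpack_from(">I", stream, offset)
    return stream[offset + 4 : offset + 4 + length]

package = pack_records([b"<soap:Envelope/>", b"binary-photo-bytes", b"third"])
print(record_at(package, 2))          # b'third'
```

Contrast this with the MIME case: reaching the third record costs two small header reads, not a scan of every byte in the first two attachments.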

You probably do not want to create this from scratch, and you shouldn't. The "plumbers" of the industry (Microsoft, IBM, BEA, and so on) will create solutions that you can call from a much higher level. This section was developed using WSE 1.0, which includes DIME, and shows how the current thinking about non-XML attachments has been implemented. Realize, however, that this will change in the future. It is always safer to go with an implementation that includes code written by the people who are closest to the quickly changing landscape than to create everything yourself.

Solution: Evolving Specifications

The SwA, MIME, and even DIME approaches fall short in the larger picture of Web services. All of these methods break the advanced Web services architecture because they deal with the Web services wire format. That is, they work outside of the information-centric architecture of Web services by requiring an external carrier for valuable information. Using attachments the way I have described here would require the development of a parallel infrastructure to deal with security, reliability, transactions, and other things.

Microsoft is currently working with several other companies to combine their understanding of binary data. The result of this effort will probably be an addendum to the SOAP Messages with Attachments standard that defines an attachment in accordance with the W3C XML Information Set (Infoset) standard. Infoset defines a standard data model for properties of an XML tree. The XML Infoset provides an architecturally sound method of adding non-XML binary data to standard SOAP messages without requiring a parallel infrastructure be created.

For an overview of the Infoset, see Understanding Infosets. For more information on the work Microsoft is doing on attachments, see XML, SOAP and Binary Data and Proposed Infoset Addendum to SOAP Messages with Attachments. XML, SOAP, and Binary Data discusses the architectural issues encountered when using opaque non-XML data in XML applications. The W3C is also working on a standard called SOAP Message Transmission Optimization Mechanism, or MTOM, which provides a method for optimizing transmission while still using the XML Infoset.

Problem: Visible Network Topology

Many systems use the concept of a virtual network topology. That is, a virtual address points to a particular physical endpoint address, where the destination really lives. For example, my company's site is registered with our domain registrar, which publishes it as part of the worldwide Domain Name System (DNS) database. When you type the site's address, it resolves to a physical IP address at our ISP. If we want to change our ISP, it is just a matter of changing the pointer from the old one to the new one. You don't know that our ISP has changed, and you have no idea where the machine is.

The flexibility resulting from the ability to move boxes from place to place is one of the things that have led to the success of the World Wide Web. It has also caused headaches for those who want to regulate things like gambling, because a computer box can be in a country that allows gambling, while available in a locality that does not.

But even though the concept of virtual network addresses allows us to change our physical address, the change is not instantaneous. If we are using DNS, the conventional wisdom dictates that it takes up to 72 hours for a change to propagate throughout the world's DNS servers. What if I want to change my physical address on a moment's notice to deal with changing loads?

There are load-balancing routers that provide that ability, but they are usually optimized for balancing loads for Web-page access, and are limited in their rules.

In Web services, we need a more flexible approach to our network topology. We need a router that is intelligent enough to route messages from one box to another based on rules that are either expressed by the internal system or dictated by the message itself.

We also need a way to take physical machines offline for maintenance without interrupting the flow of documents to the service. In other words, we need a methodology for specifying more than just a point-to-point, request-response architecture for Web services.
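The idea of a routing layer that can repoint a virtual endpoint instantly, without waiting for DNS propagation, can be sketched as a rule-driven dispatch table. The endpoint names and rules below are invented for illustration; in WSE this behavior is configured through the routing specifications rather than written as application code.

```python
# A virtual endpoint name maps to whichever physical machine the rules choose.
routes = {
    "inventory-service": ["http://box1.internal/inv", "http://box2.internal/inv"],
}
offline = set()

def resolve(virtual_name, message_size):
    # Rule 1: skip machines taken down for maintenance.
    candidates = [b for b in routes[virtual_name] if b not in offline]
    # Rule 2 (illustrative): send large messages to the second box.
    if message_size > 1_000_000 and len(candidates) > 1:
        return candidates[1]
    return candidates[0]

print(resolve("inventory-service", 500))    # http://box1.internal/inv
offline.add("http://box1.internal/inv")     # instant repoint; no DNS wait
print(resolve("inventory-service", 500))    # http://box2.internal/inv
```

Because the mapping lives in the routing layer rather than in DNS, taking a box offline or shifting load is a one-line change that clients never see.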

Problem: Undependable Environment

Suppose we wanted to be sure the messages we sent were received intact by the intended recipient. Rather than just sending a message and hoping that everything was delivered, we would like some kind of "proof of delivery" acknowledging that the package was received, and that it passed some sanity check of its contents.

In the physical world, we can do that at the post office by sending a certified letter. A green label is attached to the letter, and the postal carrier has the recipient sign it when the letter is delivered. The proof of delivery gets sent back to us so we know who received it and when. A certified letter does not indicate that the recipient understands or agrees with the message, or even that the receiver has opened it. All it indicates is that the message was delivered to a party at a particular time.

In the electronic world, we need a similar technique of verifying that a message has been sent and received successfully. While we're at it, it would also be nice to know when it was received by the recipient.

Another problem in the electronic realm concerns the problem of duplicate messages arriving. Suppose I send a message to you, and you don't reply within a given amount of time, so I send another message. If you indeed receive two messages, you need to know that the second one was a copy of the first, and not another separate message.

Solution: Reliable Messaging

Microsoft, IBM, TIBCO, and BEA are working on a standard called WS-ReliableMessaging, which is concerned with solving these problems in an open, standard way. WS-ReliableMessaging enables reliable delivery between distributed applications in the presence of software component, system, or network failures.


Figure 2. Reliable Messaging model

A sender wants to send a message to a receiver, and both adhere to the WS-ReliableMessaging specification. The sender will place information inside the SOAP message indicating that it wants an acknowledgement that the destination has received the message. In order to do this effectively, the message must be time-stamped and identified with a unique identifier so duplicate transmissions will be ignored.

Another standard, WS-Addressing, provides transport-neutral mechanisms to address Web services messages between parties to increase reliability.

The acknowledgment is not an indication that the ultimate receiver understands the message, or even that it has opened it. Acknowledgment only indicates that the message was physically delivered.

WS-ReliableMessaging also has the ability to specify policies for retries, timeouts, and intervals, as well as fault reporting and security considerations.
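The core receiver-side mechanics (acknowledging by unique identifier and discarding duplicates) can be sketched like this. The message IDs and the in-memory "seen" set are assumptions for illustration; WS-ReliableMessaging carries this information in SOAP headers, not in application code.

```python
import uuid

seen = set()

def receive(message_id, body):
    """Acknowledge every delivery; process a given message ID only once."""
    duplicate = message_id in seen
    if not duplicate:
        seen.add(message_id)
        # ...hand body to the application here...
    # The ack only confirms physical delivery, not understanding.
    return {"ack": message_id, "duplicate": duplicate}

msg_id = str(uuid.uuid4())
print(receive(msg_id, "PurchaseOrder"))   # first delivery, duplicate is False
print(receive(msg_id, "PurchaseOrder"))   # retry detected, duplicate is True
```

This is exactly the certified-letter scenario from earlier: the retry still gets an acknowledgment, but the receiver knows it is a copy of the first message, not a new order.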

As of the writing of this section, Microsoft has not implemented WS-ReliableMessaging in Web Services Enhancements or Microsoft Visual Studio® .NET.

Web Services Protocols

As I mentioned earlier, Microsoft's involvement in these Web services extensions has been implemented in a supported product called Web Services Enhancements for Microsoft .NET (WSE). WSE v1.0 provided an implementation of WS-Routing, WS-Referral, WS-Security and DIME. We will be using the .NET implementation of WSE to add encryption and authentication to messages received from our external partners.

We will also use WSE to encode our internal Web services messages using DIME.

Implementing the WSE Web service extensions requires adding the WSE library and namespace to our program and running the existing messages through the WSE filters.

WSE v2.0

At the time of this writing, WSE v2.0 is in technology preview status. The new version makes some significant improvements in several areas.

WSE v2.0 replaces WS-Routing and WS-Referral with WS-Addressing, enhances the security model, and adds a message-oriented programming model to allow for different types of transactions, such as batch processing, peer-to-peer programming, event-driven applications, or long-lived transactions.

It is clear that the direction that Microsoft is taking with WSE v2.0 is towards larger, more secure, and more flexible Web service architectures.

Implementing Security

First, we will add the two aspects of security, encryption and signature, to our Query Inventory process.

If you are new to security, you might be surprised to learn that the concepts are really quite simple. If you are a security pro, you can skip the next few paragraphs.

Remember in high school when you wanted to ask a girl out? (Replace "girl" with your own gender preference.) Since you didn't know the girl very well, you might have asked some people you trusted if they knew anything about her. Eventually, you found someone who knew the object of your admiration personally, and she told you about the girl.

At the same time, the girl caught wind of your interest, and asked your common friend about you. This is an example of trusted paths. You trust your friend, and the girl trusts your friend as well. When your friend "vouches" for you, you know the girl is worth your effort.

The same thing happens in the security world when a certificate authority (CA) gets involved. The CA is a trusted source of referrals. When a mutually trusted CA "vouches" for you to another party, you can trust that party and they can trust you.

Trust is critical to implementing security, and is the cornerstone of one of the two main aspects of security. Assuring that a message comes from the source it claims to come from is called authentication. In the real world, I put my signature on a document to prove that I read it and was there when it was signed. Of course, someone can falsify someone else's signature. I could have a disinterested third party, a Notary Public, witness that the person who claims to have signed a document is the actual signer. And, of course, we have the court system to take care of any problems after that.

There are two reasons to sign a document. First, we need to establish that a message was sent by a specific user; FoodMovers needs to know that Hearty sent the document and no one else. Second, recognize that a document sent over public networks could be intercepted by a third party, modified, and then sent the rest of the way, so we need to be able to detect whether this kind of tampering has occurred. Together, these two assurances are called "nonrepudiation."

The point, then, is that a signature gives us confidence that the document was sent, as intended, by the party that claims to have sent it. But that proof is only as good as the "trusted path" to that party. You sign a paper document with a pen; electronic documents are signed using information inside a "certificate."

In order to assure trust, we have the concept of a certification authority (CA). A CA issues certificates, and asserts that the party in possession of the certificate is trusted by that authority. A CA can exist pretty much anywhere, but the certificate is only as good as the trust engendered by that authority. For example, if you have a machine running Microsoft Windows 2000 Server, you can set it up to be a CA and issue certificates. However, anyone outside of your domain would probably not trust that certificate any more than they trust you.

For this reason, there are businesses whose purpose is to act as a widely trusted source. One such service is VeriSign. VeriSign is a CA that anyone can trust; they provide background checks sufficient to assure that a party is who they say they are. Once you have a certificate issued by VeriSign, you can give that to others, and they will probably trust you. Or at least, trust that you are who you say you are. The process of issuing certificates and the keys that come with them, is called Public Key Infrastructure (PKI), and is shown in Figure 3.


Figure 3. Public Key Infrastructure

Once you have a certificate, whether from your own internal system, an industry group, or a widely trusted source, you can use that to sign your document. A certificate is an object that can contain everything having to do with trust. An example of a certificate for signing and encrypting is shown in Figure 4.

Figure 4. Certificate

Before we get started, we need to exchange certificates with Hearty Soup Company. Our certificate contains a "public key." A public key is pretty much what it sounds like: a key that anyone can use. In PKI, keys come in matched pairs: when the certificate is requested, our system generates a public key and a private key together, and the CA binds the public key to our identity in the certificate. The public key is used to encrypt information and to authenticate a signature.

The private key is the other half of the pair. It is associated with the certificate, but stored elsewhere for security. The private key is used to sign and decrypt messages.
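The arithmetic behind a key pair can be sketched with textbook RSA and deliberately tiny numbers (these are the classic worked-example values). This is purely illustrative: real keys are hundreds of digits long and use padding, so never use numbers like these for actual security.

```python
# Textbook RSA with toy numbers, for illustration only.
p, q = 61, 53                      # two (tiny) primes, kept secret
n = p * q                          # modulus, part of both keys: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: 2753

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the PUBLIC key
recovered = pow(ciphertext, d, n)  # decrypt with the PRIVATE key

print(ciphertext)  # 2790
print(recovered)   # 65
```

Anyone holding the public pair (n, e) can encrypt, but only the holder of d can reverse the operation, which is exactly the asymmetry the exchange of public keys relies on.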

Before we do business with our external partners, we must exchange our public keys with them. This is shown in Figure 5.


Figure 5. Exchanging public keys

We export our public key and send it to them, where they can import it into their environment. They, in turn, export their public key and we import it.

We will be using two common techniques: authentication using a signature, and encryption of the entire document. First, let's cover signing the document and authenticating it.

As part of our nonrepudiation policy, we require that all requests coming from outside of our system be signed. A signature is attached to a document. That signature is an encrypted hash that has been processed with a well-known algorithm. To assure that the signature is tamper-evident, however, it will be run through this algorithm using a private key that is unknown to parties outside the creator. There is another input to the algorithm, and that is the message itself. Because the document is used as part of the algorithm, it is impossible for an intercepting party to make changes to the document and re-sign the document. Any changes to the document or the signature will be detected at the receiving end.

The signed document is sent to the receiver, which uses the public key of the sender to authenticate the document and detect tampering. The process is illustrated in Figure 6.


Figure 6. Signing and authentication
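In miniature, and ignoring every detail of the WS-Security wire format, the signing-and-authentication flow of Figure 6 can be sketched in Python using a hash plus textbook RSA. The key numbers below are toy values for illustration only; real keys are vastly larger.

```python
import hashlib

# Toy textbook-RSA key pair (n = 61 * 53; far too small for real use).
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Sender: hash the message, then transform the hash with the PRIVATE key.
    h = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Receiver: undo the signature with the PUBLIC key and compare the
    # result against a freshly computed hash of the received message.
    h = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"<QueryInventory><UPC>4119691055</UPC></QueryInventory>"
sig = sign(doc)
print(verify(doc, sig))  # True: the document is authentic and untampered
# Any change to the document changes its hash, so verification of a
# tampered copy fails (with real key sizes, a lucky collision is infeasible).
print(verify(doc + b"<!-- tampered -->", sig))
```

Because the message itself feeds the hash, an interceptor who alters the document cannot produce a matching signature without the private key.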

Let's see how to sign a document using Web Services Enhancement.

As you might recall from earlier sections, the Query Inventory process allows external partners to check the inventory levels of their items in our warehouse. The flow diagram is shown in Figure 7.


Figure 7. The QueryInventory function allows an external partner to check inventory levels for their products

Messages coming from Hearty Soup Company must first be signed to assure us that the source was, indeed, Hearty. The beauty of a service-oriented architecture is that Hearty can use any platform to create their Web service requests. For the sake of this section, we will show how they could do it using Visual Studio .NET. The following shows the code from their Web service client.

namespace QueryFoodMoversInventory
{
   using System;
   using System.Data;
   using System.Web.Services;

   class QueryInventory
   {
      static void Main(string[] args)
      {
         com.foodmovers.InventoryManagerService wsFMInventory =
            new com.foodmovers.InventoryManagerService();
         com.foodmovers.InventoryData datInventory =
            new com.foodmovers.InventoryData();
         string strOutput;

         datInventory = wsFMInventory.QueryInventory(420, "4119691055");
         if (datInventory.Inventory.Count == 0)
         {
            strOutput = "Item not found\n";
         }
         else
         {
            strOutput = "";
            strOutput += "--------------\n";
            strOutput += "         UPC: " + 
               datInventory.Inventory[0].UPC + "\n";
            strOutput += " Description: " + 
               datInventory.Inventory[0].Description + "\n";
            strOutput += "    Quantity: " + 
               datInventory.Inventory[0].Quantity + "\n";
            strOutput += "ShelfAddress: " + 
               datInventory.Inventory[0].ShelfAddress + "\n";
         }
         Console.WriteLine(strOutput);
      }
   }
}

The QueryInventory operation is only available to vendors who are in good standing and have prior authorization to query our inventory levels. In addition, a vendor may only query those items for which they are the supplier. Authentication is critical to this end, but there is another built-in, application-level security feature that we are going to use. Notice that the QueryInventory operation requires a SupplierID as well as a UPC number. The SupplierID is set by our system, and is not based on any public number space. If a request for inventory comes in with a proper UPC but without the proper SupplierID, it is rejected. We will tell each of our suppliers what their SupplierID is, but they will not know the other suppliers' IDs.

Of course, suppliers could keep hitting the site with guesses for another supplier's ID, but even if they guessed correctly, they would still be unable to get a response because their credentials would not match the SupplierID they were trying to impersonate.
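The server-side check described above boils down to a single comparison: the supplier identity established by authenticating the signature must match the SupplierID in the request. A hypothetical sketch (the function name and the "512" guess are invented for illustration; 420 is Hearty's SupplierID from the example above):

```python
# Hypothetical server-side authorization check: the SupplierID in the
# request must match the identity proven by the message signature.
def authorize_query(authenticated_supplier: int, requested_supplier: int) -> bool:
    return authenticated_supplier == requested_supplier

# Hearty (supplier 420) queries its own items: allowed.
print(authorize_query(420, 420))   # True
# Hearty guesses another supplier's ID: rejected even if the guess is right,
# because Hearty's credentials do not match the impersonated ID.
print(authorize_query(420, 512))   # False
```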

The namespaces for the WSE extensions are shown below.

using Microsoft.Web.Services;
using Microsoft.Web.Services.Security;
using Microsoft.Web.Services.Security.X509;
using System.Security.Cryptography; 
using Microsoft.Web.Services.Timestamp;

Signing the document using WSE requires that a certificate be available. As part of our set-up with Hearty, we exchanged certificates and imported them on each of our machines. Hearty then signs the message, using the private key associated with their certificate, before sending it to us.

 1 com.foodmovers.InventoryManagerService wsFMInventory =
 2     new com.foodmovers.InventoryManagerService();
 3 com.foodmovers.InventoryData datInventory =
 4     new com.foodmovers.InventoryData();

 6 X509CertificateStore store;
 7 X509SecurityToken signatureToken = null;
 8 store = X509CertificateStore.CurrentUserStore(
 9     X509CertificateStore.MyStore);
10 bool open = store.OpenRead(); 
11 byte[] certHash = 
12 {
13     0xef, 0xcc, 0x71, 0xa4, 0x1c, 0x88, 0xf5, 0x28, 0xcd, 0x52, 
14     0xc1, 0x9d, 0x41, 0x4f, 0x18, 0xf0, 0xfb, 0x48, 0x33, 0x9e};

16 X509CertificateCollection certs = 
17     store.FindCertificateByHash(certHash);
18 if (certs.Count < 1)
19 {
20     Console.WriteLine ("No certs found.");
21     return;
22 }

24 Microsoft.Web.Services.Security.X509.X509Certificate cert =
25    ((Microsoft.Web.Services.Security.X509.X509Certificate)certs[0]);

27 if (cert == null) 
28 {
29     Console.WriteLine("Cannot find an X.509 certificate.");
30     return;
31 }
32 else if (!cert.SupportsDigitalSignature || (cert.Key == null))
33 {
34     Console.WriteLine("The certificate does not support " + 
35         "digital signatures or does not have a private key.");
36     return;
37 }
38 else 
39 {
40     signatureToken = new X509SecurityToken(cert);
41 }
42 if (store != null) { store.Close(); }

44 SoapContext requestContext = wsFMInventory.RequestSoapContext;
45 requestContext.Security.Tokens.Add(signatureToken);
46 Signature sig = new Signature(signatureToken);
47 requestContext.Security.Elements.Add(sig);

49 requestContext.Timestamp.Ttl = 60000;
50 datInventory = wsFMInventory.QueryInventory(420, "4119691055");

This code first finds any certificates with a certain "thumbprint." This is the code in lines 8-16. If a certificate is found, the code in line 32 checks to see if the certificate has the ability to create a digital signature.

Once everything checks out, a token is created on line 40 that contains the necessary certificate information.

Line 44 creates a context for a request. You can think of a SoapContext as another wrapper for our existing SOAP envelope. SoapContext is where the WSE extensions take place. The goal of this approach is to make extensions easy to add on to existing applications.

Once we instantiate the SoapContext class, we add the security token and a signature in lines 45-47. Line 49 establishes a "time to live" of 60,000 milliseconds (one minute). If the document is received after that time has elapsed, it can be discarded.
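The receiver's side of the time-to-live rule is simple timestamp arithmetic. A sketch of the check in Python (a concept illustration, not the WSE implementation):

```python
from datetime import datetime, timedelta

def is_expired(created: datetime, ttl_ms: int, now: datetime) -> bool:
    # A message older than its time-to-live must be discarded; this
    # blunts replay attacks that use captured copies of old messages.
    return now > created + timedelta(milliseconds=ttl_ms)

created = datetime(2003, 11, 1, 12, 0, 0)
print(is_expired(created, 60000, datetime(2003, 11, 1, 12, 0, 30)))  # False: still fresh
print(is_expired(created, 60000, datetime(2003, 11, 1, 12, 1, 30)))  # True: discard
```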

The SOAP request message that is sent from Hearty Soup Company to FoodMovers is shown below.

<?xml version="1.0"?>
<soap:Envelope 
   xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Header>
      <wsu:Timestamp>
         {... creation and expiration times ...}
      </wsu:Timestamp>
      <wsse:Security>
         <wsse:BinarySecurityToken 
            wsu:Id="SecurityToken-d8ea3c6c-2d81-48c0-8a74-b4b8809d4677">
            {... base64-encoded certificate data removed ...}
         </wsse:BinarySecurityToken>
         <Signature>
            <SignedInfo>
               {... many lines removed ...}
               <Reference URI=
                  "#Id-8948f619-afe2-4e8f-8831-8ad5535e9beb">
                  <Transform Algorithm="..."/>
                  {... digest value removed ...}
               </Reference>
            </SignedInfo>
            <SignatureValue>{... removed ...}</SignatureValue>
            <KeyInfo>
               <wsse:SecurityTokenReference>
                  <wsse:Reference URI=
                     "#SecurityToken-d8ea3c6c-2d81-48c0-8a74-b4b8809d4677"/>
               </wsse:SecurityTokenReference>
            </KeyInfo>
         </Signature>
      </wsse:Security>
   </soap:Header>
   <soap:Body wsu:Id="Id-8948f619-afe2-4e8f-8831-8ad5535e9beb">
      {... QueryInventory request ...}
   </soap:Body>
</soap:Envelope>

Notice that the soap:Header element contains two WSE elements. SoapContext provided this ability. The wsu:Timestamp indicates the birth date and the expiration date of the message. The wsse:Security element contains the signature and other related information required to understand the context. Notice the wsse:BinarySecurityToken element near the top of the security header. It has an attribute, wsu:Id, which has a value of SecurityToken-d8ea3c6c-2d81-48c0-8a74-b4b8809d4677. This is the token that the client created by accessing the certificate.

Notice that the wsse:SecurityTokenReference element inside the Signature element near the bottom of the security header has the same identifier as the URI attribute. The Signature element contains the actual signature in the SignatureValue element, along with a reference, Id-8948f619-afe2-4e8f-8831-8ad5535e9beb, to an identifier that contains the content that was used to generate the signature. This identifier is found in the soap:Body element.

The soap:Body element has the actual request for an inventory item. This is the kind of payload that a SOAP document will always have.

On the server, the signature must be authenticated to assure it is from the right sender and has not been tampered with along the way. The WSE toolkit provides a pretty simple technique for doing this. The server code need not be modified in any way. The only change we need to make to our server is to add a couple of lines to the configuration file, Web.config, to indicate that we are using signatures.

      <add type="Microsoft.Web.Services.WebServicesExtension,
         Microsoft.Web.Services,Version=, Culture=neutral,
         PublicKeyToken=31bf3856ad364e35" priority="1" group="0"/>

The only tricky part here is that the type attribute of the add element must appear on a single line. It has been broken here for readability.

That's all there is to the server end. The server processes the request and sends back a SOAP response document:

<?xml version="1.0"?>
<soap:Envelope 
   xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Header>
      <wsu:Timestamp>
         {... creation and expiration times ...}
      </wsu:Timestamp>
   </soap:Header>
   <soap:Body>
      <QueryInventoryResponse>
         {... many lines removed ...}
         <Description>Progresso Chicken Vegetable {...}</Description>
         {... many lines removed ...}
      </QueryInventoryResponse>
   </soap:Body>
</soap:Envelope>

Notice the timestamp that comes back. This was generated on the response document just because it was specified in the request document. Now let's encrypt the document with a public key.

Now that Hearty Soup Company has FoodMovers' public key, they can take any document and run it through an encryption algorithm that uses FoodMovers' public key as the encryption value. This document is now encrypted such that no one except FoodMovers can decrypt it. The only thing that can decrypt it is the private key that FoodMovers keeps locked in its safe.

Now the encrypted document can be sent over any kind of transport without a fear of anyone seeing it except for FoodMovers. This is shown in Figure 8.


Figure 8. Encrypting and decrypting using public keys

The code for encrypting a document is similar to the code used for signing, except that we are specifically looking for the capability of the key to provide encryption. First, we need to check to see if the certificate supports encryption. After that, it's just a matter of creating another token from the same certificate and adding it to our SoapContext.

else if (!cert.SupportsDataEncryption)
{
   Console.WriteLine("The certificate does not support data " +
      "encryption.");
   return;
}

X509SecurityToken encryptionToken = new X509SecurityToken(cert);
EncryptedData enc = new EncryptedData(encryptionToken);
requestContext.Security.Elements.Add(enc);

Here is the encrypted SOAP request message that is sent from Hearty Soup Company to FoodMovers.

<?xml version="1.0"?>
<soap:Envelope 
   xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Header>
      <wsu:Timestamp>
         {... same as signature example above ...}
      </wsu:Timestamp>
      <wsse:Security>
         <wsse:BinarySecurityToken>
            {... same as signature example above ...}
         </wsse:BinarySecurityToken>
         <wsse:BinarySecurityToken>
            {... encryption token similar to signature 
            example above ...}
         </wsse:BinarySecurityToken>
         <xenc:EncryptedKey>
            {... many lines removed ...}
            <xenc:CipherValue>{... encrypted key removed ...}</xenc:CipherValue>
            <xenc:DataReference URI="#EncryptedContent-..."/>
         </xenc:EncryptedKey>
         <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
            {... same as signature example above ...}
         </Signature>
      </wsse:Security>
   </soap:Header>
   <soap:Body>
      <xenc:EncryptedData Id="EncryptedContent-...">
         <xenc:CipherValue>{... encrypted body removed ...}</xenc:CipherValue>
      </xenc:EncryptedData>
   </soap:Body>
</soap:Envelope>

In this example, the timestamp and wsse:BinarySecurityToken elements are similar to the signature example above. However, in this document there are two tokens: one for signatures and one for encryption.

The next element, xenc:EncryptedKey, contains the public key with which the document body was encrypted. The key is contained inside the xenc:CipherValue element.

The URI attribute of the xenc:DataReference element indicates the identifier of the object that was encrypted by that key. It corresponds with the ID element of the xenc:EncryptedData element inside the soap:Body element, which gets us to the data.

The xenc:CipherValue element inside the soap:Body element contains the super-secret information that we spent all of this time protecting. This is the encrypted document that will be decrypted at the server end. When it is decrypted, it will contain the XML payload of the SOAP body, ready for action.
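What these elements describe is hybrid encryption: the body is encrypted with a fresh symmetric session key, and only that small key is encrypted with the recipient's public key. Here is a deliberately toy Python sketch of the idea, using tiny textbook-RSA numbers (n=3233, e=17, d=2753) and a weak XOR keystream as a stand-in for a real symmetric cipher; this is an illustration of the principle, not XML Encryption.

```python
import hashlib

# Toy textbook-RSA pair (illustration only; far too small for real use).
n, e, d = 3233, 17, 2753

def xor_stream(data: bytes, key: int) -> bytes:
    # Stand-in for a real symmetric cipher (XML Encryption would use
    # something like Triple-DES): XOR with a hash-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha1(key.to_bytes(2, "big") + bytes([counter])).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

body = b"<QueryInventory><UPC>4119691055</UPC></QueryInventory>"
session_key = 1234                            # fresh key for this message
cipher_value = xor_stream(body, session_key)  # plays the role of xenc:CipherValue
encrypted_key = pow(session_key, e, n)        # plays the role of xenc:EncryptedKey

# Receiver: recover the session key with the private key, then the body.
recovered_key = pow(encrypted_key, d, n)
print(xor_stream(cipher_value, recovered_key) == body)  # True
```

Encrypting only the small session key with the slow public-key algorithm, and the bulky body with a fast symmetric cipher, is why the header carries both an EncryptedKey and an EncryptedData element.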

In order for decryption to work, the service must indicate the location of the certificates and how they are to be verified. This is done in the Web.config file on the server. The lines in the following code block must be placed inside the <configuration> element.

<configSections>
   <section name="microsoft.web.services"
      type="Microsoft.Web.Services.Configuration.WebServicesConfiguration,
         Microsoft.Web.Services, Version=, Culture=neutral,
         PublicKeyToken=31bf3856ad364e35" />
</configSections>

<microsoft.web.services>
   <security>
      <x509 storeLocation="CurrentUser"/>
   </security>
</microsoft.web.services>

These lines indicate to the Web service that we are using the WSE extensions for security, and that X509 certificates for deciphering the message can be found in the CurrentUser certificate store.

It is important to note that the type attribute of the section element must appear on a single line.

This example shows one-way encryption and decryption. If you want to encrypt the response document, you must use the same techniques before you send the message back.

Implementing Attachments

Microsoft has also implemented DIME attachments in WSE. Remember that DIME can be used as an effective, fast method for attaching binary documents to your SOAP message. In addition, DIME provides a way of bypassing most of the overhead of HTTP by encapsulating the entire message as a DIME document and sending it directly to the receiver in the fastest way possible.

The following shows how to apply DIME to the SOAP document.

SoapContext requestContext = wsFMInventory.RequestSoapContext;
DimeAttachment dimeAttach = new DimeAttachment(
   "image/gif", TypeFormatEnum.MediaType, @"c:\graphics\PSCLogo.gif");
requestContext.Attachments.Add(dimeAttach);

As with security, using DIME requires creating a SoapContext object.

POST /FoodMovers/FoodMoversWebServiceProjects_InventoryManager/
InventoryManager.asmx HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services
Client Protocol 1.1.4322.342)
Content-Type: application/dime
Content-Length: 3984
Expect: 100-continue
Connection: Keep-Alive

    ) )  _õuuid:a7dc5e48-b333-499f-8cf4-0f2fa165fbfb   <?xml version="1.0"
      <soap:Body><QueryInventory xmlns="
_   )
öuuid:8400274a-a8e6-4229-8b39-86869ebcaa57   image/gif   GIF89a$
{... binary image information removed ...}

Notice that the first few bytes are not readable. These are a binary start record and record navigation bytes that DIME uses to find the attachments. Notice, also, that the entire SOAP message has been encapsulated in the DIME message. It is the job of the receiver to decode the DIME message and do something useful with the attachments.

This is a simple example using a binary attachment. It is also possible to create a DIME package with just the SOAP document package. If we are using HTTP (as shown here), DIME does not provide much of an advantage. However, if we were to send this package directly over TCP, then DIME would provide some performance improvements.
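The reason a DIME receiver can find the attachments so cheaply is that each record carries explicit length fields, so the parser jumps from record to record instead of scanning the payload for delimiters. The Python sketch below uses a simplified, invented framing to show the principle; it is not the actual DIME header layout, which packs version, chunking, and first/last-record flags into fixed 32-bit words.

```python
import struct

def pack_record(media_type: bytes, payload: bytes) -> bytes:
    # Simplified framing: two length prefixes, then the type, then the
    # payload. (Real DIME headers are richer; this shows the principle.)
    return struct.pack(">HI", len(media_type), len(payload)) + media_type + payload

def unpack_records(message: bytes):
    # Walk the message record by record using the length prefixes;
    # no scanning for delimiters is ever needed.
    records, offset = [], 0
    while offset < len(message):
        type_len, payload_len = struct.unpack_from(">HI", message, offset)
        offset += 6
        media_type = message[offset:offset + type_len]
        offset += type_len
        payload = message[offset:offset + payload_len]
        offset += payload_len
        records.append((media_type, payload))
    return records

# A SOAP envelope record followed by a binary image record, as in the trace.
msg = (pack_record(b"text/xml", b"<soap:Envelope>...</soap:Envelope>")
       + pack_record(b"image/gif", b"GIF89a..."))
for media_type, payload in unpack_records(msg):
    print(media_type, len(payload))
```

Length-prefixed framing like this is also why DIME beats MIME-style boundary scanning for large binary payloads.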


In this section, we have seen that there are standards and tools designed to ease the implementation of advanced Web service processing. Rather than design your own security structures, there are standards that leverage existing standards for authentication and encryption. Rather than try to figure out how to get a message from one party to another through an intermediary, there are standards that exist.

Of course, a standard without an implementation is just a good idea. Microsoft's Web Services Enhancement code provides easy add-ons to Web services created using Visual Studio .NET. Adding functionality is as simple as instantiating a couple of objects and setting some properties.

There are still many standards that are in progress. For example, Microsoft and other industry players have developed specifications concerning the following efforts:

  • Events and Notification. What happens to a client that subscribes to a Web service and the service changes? There needs to be a standard way for servers to notify clients of changes that will affect their implementation. As of this writing, Events and Notification is still being developed.
  • Addressing. A Web service document might go through several different places before it gets to its final destination. This path might include firewalls, routers, and gateways. We need a standard that provides a transport-neutral mechanism for addressing Web services and messages.
  • Coordination. As the universe of Web services expands, services will coalesce into computational units composed of a large number of participants. We need some standard way to coordinate the efforts of these computational units to get the most effective use out of the group. A coordination standard defines the roles of the participants so they can work together efficiently.
  • Workflow. A Web service transaction might include more than a simple request-response transaction. A transaction might include many different point-to-point messages involving many coordinated members, with decisions made between them. Transactions could take place over a period of days or months.

I expect the Web service extensions to keep on coming over the next few years. Of course, some of them will get silly and pretty specialized, but I am confident that the most important, critical standards will be completed and implemented to enable true, enterprise-strength Web services systems.

In the next and final section in this project, I will cover issues involved with going live. (To see an overview of the entire project, read FoodMovers: Building Distributed Applications using Microsoft Visual Studio .NET.)

Section 6, Going Live: Instrumentation, Testing, and Deployment

Once the architecture is designed and the code framework is created using Visual Studio, it is time to describe our plan for deployment and administration. In addition, there are several areas of implementation that need to be addressed before a robust, reliable, and secure architecture is deployed.

First, we need to develop a plan for "instrumentation." By placing "sensors" in our application, we can use instruments to provide a dashboard of our deployed system. Then we need to exercise the system in two areas, test and staging, before finally deploying the system in a production environment.

In this section, I detail a plan for exception and event management, and introduce the concept of "exception mining," which provides a method for wading through the information stream coming from the application to find events that need attention.

About the Author

Brian Travis is Chief Technical Officer and Founder of Architag International Corporation, a consulting and training company based in Englewood, Colorado. Brian is an expert in real-world XML implementations. Since founding Architag in 1993, he has created intelligent content management systems and e-business solutions for Architag clients around the world. Brian is also a noted instructor and popular lecturer in XML and related standards. In his role as principal instructor for Architag University, he has been teaching clients about XML in the U.S., Europe, Africa, and Asia.

Brian has lectured at seminars around the world and has written about XML, Web services, and related technologies. His most recent book, Web Services Implementation Guide is a guide for IT architects and developers who need to integrate internal systems and external business partners. The book provides the basis for understanding the goals of Web services for the Enterprise Architect, Project Architect, Deployment Architect, developer, and manager. It outlines the steps an organization must take in order to align itself with the new world of Web services.