Employ Strong Encryption in Your Apps with Our CryptoUtility Component

Michael Stuart and J Sawyer

This article discusses:

  • A handy cryptographic utility
  • Background on cryptographic algorithms
  • Protecting your keys
  • Potential pitfalls
This article uses the following technologies:
Visual Basic, COM+, .NET

Code download available at: CryptoUtility.exe (1,491 KB)


CryptoUtility Architecture
Cryptographic Decisions
How Symmetric Algorithms Work
Options for Symmetric Algorithms
Creating Strong Keys
Creating Strong Salt
"Hygienic" Cryptography
Storing the Key
Protecting the Key with ACLs
Protecting the Key with DPAPI
Mutual Authentication and Key Obfuscation
Protecting the Crypto Application Itself
COM Security
Putting It All Together

We've all been warned not to store secret information on computers. You know that storing passwords, credit card numbers, or Social Security numbers on your system is risky and can open the door for someone to steal that information. However, you may well have the need to store such sensitive information on computer systems.

Let's say you're running a bank payment processing system. Such a system needs to store credit card numbers to process recurring charges, to reverse charges, and to perform account audits. In this scenario, a non-reversible methodology (such as a hash algorithm applied to a password) is simply not appropriate. Additionally, a scenario like this requires encryption and decryption on multiple, independent machines.

In this article, we will examine some of the issues involved with developing strong encryption components for applications. The components will be usable in scenarios like the one we described. In addition, we have some other goals for a program we created called CryptoUtility. First, we want the highest possible level of security that is practical for server-side Web applications. Second, setup should be relatively easy from an administrator's point of view, allowing deployment to multiple servers without wading through 10 pages of installation instructions. Third, reversible encryption (which we'll get into later) must be available, with the keys protected and stored securely. Of course, since this is a server application, it needs to be highly scalable and run for months without administrative babysitting or rebooting. Finally, this component should be accessible to legacy COM applications (such as classic ASP) as well as .NET-based applications (such as Web services and Web applications running on ASP.NET).

There are potential vulnerabilities of some kind in every application. With something that handles extremely sensitive information, these dangers are magnified, and you'll need to understand them as thoroughly as possible. You must determine how identifiable threats interact with each other and how several independent issues could combine to constitute a vulnerability. With this knowledge, you can identify security hazards, prioritize their potential peril, and decide on acceptable levels of risk.

We'll explore threats using the STRIDE and DREAD models, as outlined by Michael Howard in Writing Secure Code, Second Edition (Microsoft Press®, 2003). STRIDE is a model to categorize threats; it stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. DREAD is a model to calculate the total risk associated with a threat; it stands for Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability. Threat modeling and analysis are beyond the scope of this article, but we'll refer to terms associated with the STRIDE and DREAD models as we discuss system security measures.

CryptoUtility Architecture

Architectural decisions affect every solution. Our first decision point is whether to implement CryptoUtility as an in-process DLL, a Windows® service, or a Serviced Component using Enterprise Services. Developing the component as an in-process DLL presents several security issues. First, the CryptoUtility would run under the ASP.NET account. Even though we haven't yet decided on a key management strategy (that will come later), we know that we don't want the ASP.NET account to have access to the key. This would create the possibility that an attacker could exploit a vulnerability and obtain confidential information through the Web site. While the odds are against this, the data that we are protecting is such that the damage potential is extremely high. This kind of intrusion could affect all users, so we ruled out this option.

The second option, the Windows service, looks promising. Since a Windows service runs under its own identity, we don't have the concern that we had with the in-process DLL. But the most realistic way to communicate with the server would be .NET Remoting. While Remoting is an excellent technology for inter-appdomain communications, in the Microsoft® .NET Framework 1.x secure access control is not built into the Remoting API, nor is it accessible from COM clients. While it is true that you can build your own Remoting channel sink components to add a layer of security, this is somewhat outside our current scope. (See Stephen Toub's MSDN®Magazine article "Secure Your .NET Remoting Traffic by Writing an Asymmetric Encryption Channel Sink". Also, see ".NET Remoting Authentication and Authorization Sample—Part II".)

We could use Code Access Security (CAS) with Remoting, but the lack of support for COM clients makes it a poor choice. We will, however, be examining CAS later in this article.

Using a serviced component in Enterprise Services has the same benefit as using a Windows service—it runs under its own identity with its own credentials. Moreover, Enterprise Services is built on COM+ Component Services, which provides strong access control and easy access to COM clients. Therefore, we made CryptoUtility a serviced component.

Cryptographic Decisions

Nineteenth-century Dutch cryptographer Auguste Kerckhoffs proposed a simple but powerful rule: the security of a cryptographic system should depend entirely on keeping the key secret. The lesson to developers is: do not develop your own encryption algorithm (in fact, avoid implementing even a well-known algorithm yourself except as an educational exercise). Unless you are a mathematical genius focusing specifically on cryptographic theory, you'll almost certainly make mistakes trying to do it yourself. Commercial implementations of established algorithms have been vetted through extensive certification, such as the Federal Information Processing Standards issued by the National Institute of Standards and Technology (NIST).

There are two types of cryptographic algorithms available: symmetric and asymmetric encryption. Symmetric (or private-key) encryption uses a single key for both encryption and decryption. A compromised key can lead to complete failure of the security of a system. With symmetric algorithms, the security of the key is inversely proportional to the number of people who have the key.

With asymmetric (or public-key) encryption, there are two mathematically related but independent keys, a public key and a private key. Information encrypted with the private key can only be decrypted with the corresponding public key, and information encrypted with the public key can only be decrypted with the corresponding private key.

Since we won't be distributing the key to anyone, we'll use symmetric encryption for our utility. It has the advantage of being significantly faster than an asymmetric algorithm, but we will need to take some precautions to protect the key.

How Symmetric Algorithms Work

Symmetric algorithms of the type discussed in this article are block ciphers. They break cleartext up into blocks of a fixed size (in the case of the Rijndael algorithm, 16, 24, or 32 bytes) and perform iterative rearrangement and substitution on successive blocks. In the .NET Framework, you have a choice of several symmetric algorithms: Data Encryption Standard (DES, the previous Federal standard), 3DES (or Triple-DES), RC2, and Rijndael. The Data Encryption Standard has fallen out of favor as hardware has advanced because, with only a 56-bit key, it is susceptible to brute-force attacks. Triple-DES, which is DES three times over, is still relatively secure and possibly a good choice for encryption as it is widely supported on most platforms.

We have chosen Rijndael for the CryptoUtility because it offers the greatest key length of the algorithms available natively from .NET—256 bits. It is also the new Advanced Encryption Standard, accepted by the Federal Government in 2001 after a long and thorough review of several candidate algorithms in a contest sponsored by NIST to replace DES.

Options for Symmetric Algorithms

Symmetric algorithms offer a number of options to control their operation. In most cases, you don't have to set these specifically as the chosen defaults are the most secure. However, it helps to understand what they're doing. The options for symmetric algorithms include the following:

Mode sets the cipher mode. For Rijndael this is either Cipher Block Chaining (CBC) or Electronic Code Book (ECB):

  • CBC, the .NET default, is the most secure cipher mode. CBC performs an XOR operation on each block of cleartext with the previous cipher block before enciphering it. It also requires an Initialization Vector (IV), a random block of the same length as the algorithm's block size. The IV is used as a stand-in to perform Cipher Block Chaining on the first block of cleartext, since at that point there is no previous block. The IV ensures that repetition in the first block of cleartext does not result in similar repetition of the first block of ciphertext when the same key is used.
  • ECB, the less-secure option, has each block enciphered independently. Repetition in the cleartext may produce patterns in the ciphertext, thereby weakening security.

Padding sets the padding mode. Since a given message's length may not be an exact multiple of the algorithm's block size, you have to pad the end of the message to fill the last block. Padding values are as follows:

  • PKCS7 pads the last block with integers, each of which is the number of bytes used to pad the message. For example, if the message required 2 bytes of padding, the padding would be "0x02, 0x02".
  • None. No padding is added. The message length must be an exact multiple of the block size.
  • Zeros. The last block is padded with zeros.
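The PKCS7 behavior described above is easy to see in a short sketch. This is an illustration only (the article's component uses the .NET Framework's built-in padding, not hand-rolled code):

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    # Each padding byte holds the number of padding bytes added.
    # If the data is already an exact multiple, a full block is added.
    n = block - (len(data) % block)
    return data + bytes([n] * n)

def pkcs7_unpad(data: bytes) -> bytes:
    # The last byte tells us how many bytes to strip.
    n = data[-1]
    return data[:-n]

# A 14-byte message in a 16-byte block needs 2 bytes of padding: 0x02, 0x02
assert pkcs7_pad(b"\x01" * 14)[-2:] == b"\x02\x02"
assert pkcs7_unpad(pkcs7_pad(b"hello")) == b"hello"
```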

Key size sets the length of the key used for encryption and decryption. Longer keys are generally more secure because brute-force attacks on them are less likely to be successful. Rijndael allows for keys of sizes 128, 192, and 256 bits. The key size is independent from the block size.

Block size sets the length of each individual block. With Rijndael, the block sizes can be 128, 192, and 256 bits.

Note that Rijndael can be used with all nine permutations of block and key sizes; however, the IV size must be the same as the chosen block size. There is an advantage to using larger block sizes because Rijndael may use from 10 to 14 rounds internally during encryption, with a larger number of rounds for larger block and key sizes. Thus, the more rounds that Rijndael uses internally, the more resistant the cipher is to cryptanalysis. For more information on Rijndael, see James McCaffrey's article on the subject from the November 2003 issue of MSDN Magazine.

We chose to use cipher block chaining and 256-bit keys and block sizes with Rijndael to ensure the greatest level of security for the ciphertext. The CBC mode and initialization vector are critically important; many credit card numbers start with similar leading digits. If the initial ciphertext were the same for these, it would be much easier for someone to gain access to encrypted credit card numbers should he manage to steal the database!
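The chaining behavior that makes CBC resistant to this kind of repetition can be sketched in a few lines. This is a toy illustration of the CBC pattern only; the XOR "cipher" below is a stand-in, not Rijndael, and offers no real security:

```python
BLOCK = 16  # one of Rijndael's block sizes, in bytes

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher (NOT secure).
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(cleartext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(cleartext) % BLOCK == 0, "pad the message first"
    prev = iv  # the IV stands in for the "previous cipher block" of block 0
    out = b""
    for i in range(0, len(cleartext), BLOCK):
        # The CBC step: XOR each cleartext block with the previous
        # cipher block before enciphering it.
        mixed = bytes(b ^ p for b, p in zip(cleartext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(mixed, key)
        out += prev
    return out

key = bytes(range(1, BLOCK + 1))
iv = bytes(BLOCK)  # all-zero IV, for illustration only
ciphertext = cbc_encrypt(b"A" * BLOCK * 2, key, iv)
# Two identical cleartext blocks produce different ciphertext blocks,
# which is exactly why repetition in card numbers doesn't show through.
assert ciphertext[:BLOCK] != ciphertext[BLOCK:]
```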

Creating Strong Keys

Predictable cryptographic keys, like predictable passwords, badly compromise the security of your application. Don't generate these keys by hand—humans are poor random number generators. Using the System.Random pseudo-random number generator is also not sufficient since its random sequences are deterministic and repeatable (hence the name "pseudo-random").

You can use RNGCryptoServiceProvider to generate strong keys. It generates cryptographically strong random numbers and is suitable for key and salt/IV generation. While RNGCryptoServiceProvider is technically a pseudo-random generator, it is NIST-certified as cryptographically strong. The following code is an example in Visual Basic® .NET that uses the RNGCryptoServiceProvider to get 32 random bytes of data:

Dim rng As RNGCryptoServiceProvider = New RNGCryptoServiceProvider()
Dim key As Byte() = New Byte(31) {}
rng.GetBytes(key)
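For comparison, here is a hedged Python analogue using the standard library's CSPRNG (an illustration only, not part of the component):

```python
import secrets

# Python's secrets module wraps an OS-level cryptographically strong
# RNG, analogous in purpose to RNGCryptoServiceProvider.
key = secrets.token_bytes(32)   # 32 random bytes = a 256-bit key
assert len(key) == 32
```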

Another approach is to generate a cryptographic key from a hashed password. Starting with a plain-text, easily memorized passphrase, you can use the PasswordDeriveBytes class to generate a cryptographic key. The chief advantage of this method is that the passphrase is easy to remember. However, it is more involved, requires more code, and compounds the key storage problem because you have to also store the salt. This method is mainly useful when a user will input a passphrase, which will immediately perform encryption or decryption and then be discarded. This method won't work for the CryptoUtility since it's a server-side application. It could, however, be used to protect a key stored outside the application, as shown in the following lines of code:

Dim entropy As Byte() = ...
Dim pwd As PasswordDeriveBytes = New PasswordDeriveBytes(txtKey.Text, _
    entropy)
Dim key As Byte() = pwd.GetBytes(32)
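The shape of passphrase-based key derivation can also be sketched with Python's standard library. This is a hedged analogue: Python exposes PBKDF2 rather than .NET's PasswordDeriveBytes, but the idea is the same—passphrase plus salt in, fixed-length key material out:

```python
import hashlib
import os

# The salt must be stored alongside the ciphertext, which is the
# "compounded key storage problem" noted above.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"easily memorized passphrase",
                          salt, 100_000, dklen=32)

# The same passphrase and salt always reproduce the same 256-bit key.
assert key == hashlib.pbkdf2_hmac("sha256", b"easily memorized passphrase",
                                  salt, 100_000, dklen=32)
```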

Creating Strong Salt

Salt provides additional entropy (degree of disorder) to the cryptographic algorithm. We'll use the same method for generating the IV as we did for the key, the RNGCryptoServiceProvider random number generator, as shown here:

Dim rng As RNGCryptoServiceProvider = New RNGCryptoServiceProvider()
Dim entropy As Byte() = New Byte(15) {}
rng.GetBytes(entropy)

The encryption algorithm uses this IV as a stand-in first block, which perturbs every subsequent block and obscures any regularity in the cleartext. In CBC mode, each cipher block is fed forward into the next block's encipherment; for the first block, the IV stands in.

With Rijndael, the strength of the cipher is not dependent on the secrecy of the IV. But the fact that the same cleartext encrypted with the same key and IV produce the same ciphertext could be used to guess a message's contents. For example, if your cleartext was limited to the words yes and no, it would be trivial to decode these messages if the same key and IV were used each time, as the ciphertext would repeat. Adding a random IV each time produces different ciphertext, thus foiling this guessing game.

The IV need not be private; in our examples, we prepend the IV to the ciphertext before storing it. On decrypting, we must use the same IV as was used during encryption, which we can easily grab from the beginning of the message, as shown below. Some have advocated storing the IV separately from the ciphertext. However, CBC mode provides sufficient protection without having to keep the IV secret.

' now append the IV to the beginning of the ciphered text
output = New Byte(IV_SIZE + data.Length - 1) {}
Buffer.BlockCopy(newIV, 0, output, 0, newIV.Length)
Buffer.BlockCopy(data, 0, output, IV_SIZE, data.Length)

' strip salt (IV) off first 16 bytes of input
Buffer.BlockCopy(cipherBlob, 0, iv, 0, IV_SIZE)
' put actual ciphertext back into cipherText array
Buffer.BlockCopy(cipherBlob, IV_SIZE, cipherText, 0, _
    (cipherBlob.Length - IV_SIZE))
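The same prepend-and-strip pattern, sketched in Python for illustration (the ciphertext bytes here are a placeholder, not real cipher output):

```python
import os

IV_SIZE = 16
iv = os.urandom(IV_SIZE)            # fresh random IV per message
ciphertext = b"\xde\xad\xbe\xef" * 4  # placeholder for real cipher output

# Store the IV with the message: prepend it to the ciphertext.
blob = iv + ciphertext

# On decryption, peel the IV back off the front of the blob.
recovered_iv, recovered_ct = blob[:IV_SIZE], blob[IV_SIZE:]
assert recovered_iv == iv and recovered_ct == ciphertext
```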

"Hygienic" Cryptography

As strong as the algorithm might be, there is always a window of opportunity during which we know the cleartext: the entire chain of code leading up to the encryption. During this window, the cleartext is sitting unprotected in memory, waiting to be encrypted. Even after the data has been encrypted, the original cleartext may still hang around in memory for an indeterminate time.

If the cleartext was ever a System.String object, it is immutable and will remain in memory until it is overwritten by a garbage collection cycle. TextBoxes, Labels, and a host of other controls store their .Text property as a System.String. This changes somewhat in the .NET Framework 2.0, which introduces the SecureString class specifically for storing strings securely. (See the Visual Studio® 2005 Beta 1 documentation for information about SecureString.) However, that does not solve the problem of cleartext embedded in memory from multiple other sources such as TextBoxes and the like. If the cleartext in any form (byte array, string, char array, and so on) was written to the swap file by the operating system, it is on the hard drive. Furthermore, if the system crashes and produces a memory dump while your cleartext is in memory, the cleartext will be written to disk.

The attack mode for these vulnerabilities ranges from attaching a debugger to your process and reading the cleartext or key (high privilege, complex, and low exploitability) to bribing the driver of the truck that moves backup tapes to the off-site storage facility and reading your dump files (low-tech social hack, but still not highly exploitable). The best approach to these paranoid scenarios is to know the value of the data and then elevate the difficulty of compromising it beyond that value.

When analyzing such threats to your data, a detailed threat tree can show if these represent the culminations of several sub-threats that, on their own, are not highly exploitable, or repeatable. For example, to attach a debugger to a process, you must have debugging privileges on the Web server and must search the memory on the machine to find the pertinent information. While certainly possible, this is so unlikely that were it to actually transpire, it would mean that you have much bigger problems.

Still, you can follow some hygienic code practices to minimize the risks of these attacks. First, minimize the time the key itself is in cleartext in memory by storing it encrypted with the Data Protection API (DPAPI) in a registry key protected by an access control list (ACL). Decrypt the key immediately before use, then clear the byte array with Array.Clear, which zeroes out the bytes. Never store the key in a String object, even Base64-encoded; remember that Base64 is not encryption. Note also that clearing the byte array in this fashion might not completely solve the problem as the garbage collector could move the byte array in memory before it is cleared. A workaround for this scenario involves pinning the byte array at a specific memory location, thus preventing the collector from moving it. For more on ACLs see "Safety in Windows: Manage Access to Windows Objects with ACLs and the .NET Framework" by Mark Pustilnik in this issue.
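The "clear after use" practice can be sketched as follows. This is a hedged illustration of the idea rather than the component's code: a mutable buffer (like a .NET Byte array) can be zeroed in place, while an immutable string cannot. Note that Python, like the garbage-collected CLR, gives no pinning guarantee, so copies may still linger:

```python
# A mutable buffer holding key material.
key = bytearray(b"\x5a" * 32)

# ... the key would be used for a cryptographic operation here ...

# Analogue of Array.Clear: overwrite every byte with zero.
for i in range(len(key)):
    key[i] = 0

assert all(b == 0 for b in key)
```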

Second, minimize the number of places and length of time that your cleartext is in memory. It is impossible to avoid this entirely; after all, the code needs to see the cleartext to encrypt it. But you can minimize the number of String objects containing your cleartext by converting it to a byte array and then clearing it when you have finished the encryption. Unfortunately, keeping cleartext out of String objects entirely is nearly impossible; as mentioned, most controls store their Text property as a System.String.

Storing the Key

The most important part of cryptography in a real-world application is the storage of the key. Bruce Schneier, a security technologist and the author of Applied Cryptography (John Wiley & Sons, 1996), neatly summarized the pitfalls of relying wholly on strong cryptography for security by comparing it to a picket fence with just one picket a mile high. Since the key is the "secret" in symmetric cryptography, how do you protect the key? From an attacker's perspective, compromising the key is a better avenue than brute-forcing the ciphertext. Why climb the mile-high picket fence when you can just walk around it?

For this project, we're working with a server-based application, probably exposed as a Web site or Web service, which must encrypt and decrypt data on behalf of client applications. This presents different constraints than if you were designing the cryptography for a desktop application. In a desktop application, you should always assume that someone can debug your process; after all, a serious hacker is always the admin on his own box. The Web application may be run in a Web farm across multiple front-end servers, and multiple machines will need to share this secret information. We also want to design cryptography modules that can be used apart from the rest of the application, thus encouraging their reuse.

We could store the key in the database, but then we'd have the circular problem of retrieving the key if we wanted to use an encrypted database connection string. We could store the key in a local file, which is a workable option since we can control access with ACLs. We could store the key in a COM+ constructor string, but this limits us to one key and requires careful, trusted administration. After considering all these options, we chose to store the key in the registry, keyed to the application name. This allows us to set ACLs to control access, store a key for each application using CryptoUtility, and easily set the key from the administration utility. Registry keys also provide auditing capabilities so that we can see exactly who has read and written to the registry keys in the security audit log.

In CryptoUtility there is an option for caching the decrypted key in memory which allows for higher performance at the expense of slightly less secure key storage. If your app does not require the utmost security, you might choose to cache the key as well. Accessing the registry and using DPAPI to decrypt the key on each call is expensive, but not enough to matter unless you anticipate hundreds of hits per second.

Protecting the Key with ACLs

We can protect the key in two ways. First, ACLs limit access to the key in the registry. Setting ACLs is not difficult to do manually, but every manual step is a possible compromise waiting to happen, especially when deploying an application to a Web farm that may consist of dozens of servers. Therefore, we strongly advocate programmatically setting ACLs on the registry where the key is stored.

By developing a code-based utility to set registry key ACLs, we can incorporate key setup into either the installer itself or into a simple command-line utility that automates the setup and securing process. Unfortunately, the .NET Framework 1.x does not expose a way of setting ACLs on registry keys natively. But thanks to the hard work of Renaud Paquet, there is an extremely useful managed library that can be used to set ACLs on just about anything. You can find Renaud's ACLs in .NET library on GotDotNet.

Using his Win32® Security library, we were able to restrict access to the key in the registry (see Figure 1). Now, retrieving the key requires the attacker to either assume the ACLed identity or become administrator on the machine.

Figure 1 Securing a Registry Key

' Create the DACL.
Dim regDacl As Microsoft.Win32.Security.Dacl = _
    New Microsoft.Win32.Security.Dacl()
regDacl.AddAce(New AceAccessAllowed(Sids.Admins, _
    AccessType.SPECIFIC_RIGHTS_ALL Or AccessType.STANDARD_RIGHTS_ALL, _
    AceFlags.CONTAINER_INHERIT_ACE))
regDacl.AddAce(New AceAccessAllowed(New Sid(_userName), _
    AccessType.GENERIC_READ, AceFlags.CONTAINER_INHERIT_ACE))

' Open the registry key
Dim hKey As IntPtr
Dim rc As Integer = Win32.RegOpenKey( _
    New IntPtr(CType(Microsoft.Win32.RegistryHive.LocalMachine, Integer)), _
    regKey, hKey)

' Set the DACL for the registry key
If rc = Win32.SUCCESS Then
    Dim sd As SecurityDescriptor = _
        SecurityDescriptor.GetRegistryKeySecurity(hKey, _
        SECURITY_INFORMATION.DACL_SECURITY_INFORMATION Or _
        SECURITY_INFORMATION.GROUP_SECURITY_INFORMATION Or _
        SECURITY_INFORMATION.OWNER_SECURITY_INFORMATION)
    sd.SetDacl(regDacl, False)
    sd.SetRegistryKeySecurity(hKey, _
        SECURITY_INFORMATION.DACL_SECURITY_INFORMATION)
    Win32.RegCloseKey(hKey)
End If

Protecting the Key with DPAPI

While we have limited access to the key with an ACL, we still have a cleartext encryption key stored in the registry and hence somewhere on the file system. This brings us to the task of encrypting the encryption key itself using DPAPI. Essentially, DPAPI provides "keyless" symmetric encryption by using parts of the currently loaded user or machine profile as a key. (For the full source of a DPAPI wrapper class, see the DpapiCryptographer.vb file in the sample code.)

In USER mode, DPAPI uses a loaded user profile to generate the key. With anything encrypted in DPAPI's user mode, only that particular user account can decrypt the encrypted data. This requires a profile to be loaded for a particular account, something that is not the default with a component in COM+. Additionally, to allow for encryption and decryption across multiple machines, roaming profiles must be enabled and used by all clients, which can be a management challenge.

Figure 2 CryptoUtility in Context

MACHINE mode DPAPI means that any code running on the machine has access to DPAPI's key and therefore can decrypt any secret encrypted in MACHINE mode. We want to limit access to the key, and DPAPI allows us to pass a unique salt to inject additional entropy into the key generation process. We will store this salt in the registry with the encrypted key, limited by ACL to the same user identity as the key. This is not a perfect solution; you may want to store the DPAPI salt elsewhere. The attack for this particular solution involves mounting code on the machine, gaining access to the ACLed registry key, and being able to run DPAPI with the stolen salt to decrypt the key. Considering the low exploitability of this attack, this seemed to be an acceptable risk. Figure 2 shows CryptoUtility in context with COM+, the client application, and the surrounding subsystems DPAPI and Registry.

Mutual Authentication and Key Obfuscation

You will notice that certain overloads of the Crypto class accept a partial key and others do not. In another application, we wanted to use mutual authentication, and we provide these overloads to satisfy both requirements.

On the one hand, storing the entire key encrypted and ACLed in the registry is a good thing: you know where it is, how it is protected, and what it will take to steal it. On the other hand, it is easily discoverable, and once an attacker has leaped the hurdles necessary to steal the key, he has complete access to the encrypted data.

For one of the applications using the CryptoUtility, we'll add another layer to the key storage: the client application itself. That application uses one of the overloads that accepts a partial key. The application stores half the key, and CryptoUtility stores the rest in the registry. The application obfuscates its half of the key by hiding it in multiple places: innocuous-sounding method names and enumerations, plainly named resource file strings, simple mathematical operations, and the like.
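The half-and-half idea can be sketched briefly. How the halves are combined and obfuscated is not shown in the article; the simple concatenation below is an assumption for illustration only:

```python
# Stand-ins for the two halves (illustration only): the client
# application holds one obfuscated 128-bit half, and CryptoUtility
# holds the other half, DPAPI-encrypted and ACLed in the registry.
client_half = bytes(16)
registry_half = bytes(range(16))

# Assumption: halves are concatenated; the real scheme may differ.
full_key = client_half + registry_half
assert len(full_key) * 8 == 256   # a full-length Rijndael key
```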

We use mutual authentication because the application passes half of the key; the caller is not completely anonymous, and we know, to some degree, who is calling us. To decrypt information, the application has to know its half of the key.

Obfuscation's "security through obscurity" has been rightly reviled as a poor practice; it violates Kerckhoffs' principle. Bluntly put, obscurity is not security. However, it takes time to attack a system and discover everything about it. If we assume an attacker works over the network, then he must spend some time downloading our assembly and decompiling it to understand how we derived our half of the key. In that time, we hope that the intrusion detection systems will alert us to unusual activity. Security by obscurity, if it is the only defense, is woefully insufficient. It is, however, a smart part of an overall defense-in-depth strategy when it supplements stronger practices.

The worst-case scenario for the partial-key approach is an inside attacker who already knows the system (one of its creators, for instance) and who has network access to the machine. If this worst-case attacker is also an administrator on the machine, all bets are off—he will now own your system.

However, if your organization is following good practices and does not allow developers administrative access to production servers, the hurdle is again quite high. The knowledgeable inside attacker must gain access to the encrypted, ACLed registry key to obtain the half of the cryptographic key he doesn't know, raising the bar to the same level as the full key-in-the-registry scenario. In addition, with auditing in place, you may well be able to track back to the internal miscreant and catch him in the act. Remember that even the most secure systems are vulnerable to inside attacks. Technology must be supplemented by the right people and processes to ensure true security.

Another objection to the partial-key scenario is that, if the attacker elucidates the code-obfuscated partial key, the remaining key is only half-strength, making a brute-force attack exponentially easier. But since we are using the full 256-bit Rijndael, the 128-bit half-key still provides very high security. If storing half the key in the client application does not make sense for your scenario, then it's perfectly reasonable to store the whole key in the registry.

Protecting the Crypto Application Itself

We've discussed two layers of protection: protecting the private data itself with strong cryptography via Rijndael, and protecting the key to that data using ACLs and DPAPI. But there is a third layer that needs protection: the CryptoUtility itself!

Storing the entire key in the registry also means that any assembly with the right to call the Crypto assembly can use its encrypt/decrypt methods without supplying additional credentials, so the previous two layers are useless if an anonymous caller can get CryptoUtility to decrypt data on its behalf.

To help prevent this, we've used a feature of CAS called the StrongNameIdentityPermission (SNIP) LinkDemand. This declarative CAS feature allows us to decorate a method or class with an attribute containing the public key of the assemblies allowed to call our assembly. For example:

<StrongNameIdentityPermission(SecurityAction.LinkDemand, PublicKey:= _
    "002400000480000094000000060200000024000052534131000400000100010" & _
    "085525e9438e9fae122f71ec7124443bf2f9f57f5f3760b3704df168493004b" & _
    "9ef68413f500d54fa9fa3869b42b1e2365204826e54b618d56e7e575f27f675" & _
    "f0eae3ea8458a8ee1e92dc3f4bfc34fbe23851afa9d2c28fc8cd5b124f60a03" & _
    "a06bfb598bc3acbd8c4380aef02cc58bdf955d140390f740a7e115c59e3b3b5" & _
    "758ca")>

At run time, the .NET Framework confirms that the assembly calling our assembly has been signed with the exact Strong Name demanded by the attribute. If it has not, the call will fail. This applies to attempts to reflect over the assembly as well.

There are some weaknesses in this approach, however. The main weakness of the SNIP LinkDemand is that a sufficiently privileged user can simply bypass all such security checking by, for example, using the Code Access Security Policy (CASPOL) tool to modify the security settings in the .NET Framework. Keith Brown discusses such issues in his April 2004 Security Briefs column. This approach also fails if your SNK file is compromised; this could result from, say, an insider attack by someone on or near the team with access to the Strong Name key files.

Last, but not least, this particular approach fails in a predictable but slightly surprising place: COM. Since COM has no concept of Strong Names, it also has no understanding of StrongNameIdentityPermission LinkDemand.

COM Security

COM does not obey .NET CAS rules. If we choose to mount CryptoUtility as a COM+ application, it is accessible to COM to the same degree as any other COM component on the machine. This means that every bit of our carefully planned security system can be compromised by a simple, four-line VBScript snippet:

set crypto = CreateObject("CryptoUtility.Crypto")
original = "this is the secret message."
cipher = crypto.Encrypt("ourApp", original)
clear = crypto.Decrypt("ourApp", cipher)

Figure 3 Require Authorization

But all is not lost. COM provides strong, time-tested protections against illicit access. In order to limit access from COM callers, you should enable the COM+ authorization for the package, as shown in Figure 3 and Figure 4.

Keep in mind that these security checks apply to .NET callers as well, so this offers yet another level of security for .NET clients.

Putting It All Together

The sample code for this article includes CryptoUtility and its administrative application, CryptoAdminUtility. We covered some aspects of CryptoAdminUtility, but the full source code does much more, including adding Windows users, setting their group membership, creating and configuring COM+ applications, and placing DPAPI-encrypted keys in the registry.

Figure 4 Set Security Level on COM+

CryptoUtility consists of six classes. The main class, Crypto, provides the only externally visible API through its interface ICrypto. Since CryptoUtility is a COM+ ServicedComponent, we followed best practices with its design, including explicitly implementing an interface. The Crypto class makes use of several internal helper classes which perform the actual work of encrypting. These include SymmetricCryptographer, DpapiCryptographer, Hasher (for hashing and comparing hashes), and a utility class for reading strings from a resource file.

The Crypto class manages key caching and allows client applications to specify whether to use caching. Internally, Crypto uses a contained class called CachedKey to hold cached keys. Crypto accesses these cached keys through an internal collection. Lastly, Crypto contains internal methods to fetch data from the registry, and in turn uses DpapiCryptographer to decrypt those keys.

Figure 5 illustrates a static diagram depicting the CryptoUtility classes. The encryption and decryption sequences through CryptoUtility are very similar. The sequence diagram in Figure 6 shows the encryption process, but the decryption process functions exactly the same way, just performing decryption where the diagram depicts encryption.

Figure 5 The CryptoUtility Classes

Figure 6 Encryption with CryptoUtility


While there is no such thing as perfect security, it is possible to use cryptography in a secure manner in a real-world application. The .NET Framework provides excellent support for building such secure applications out of the box. There are trade-offs in every architecture and design; we've described the ones we made so that you can make better choices for yourself. A large part of successful architecture and software design lies in understanding these trade-offs and then choosing wisely.

Michael Stuart is a senior consultant with Microsoft Consulting Services. He reminds you that Halo 2 is coming in November.

J Sawyer is a Developer Evangelist with the Microsoft Central Region Developer & Platform Evangelism team working in Houston. Previously, J was a consultant with MCS and has been developing solutions on the Microsoft platform for over 10 years.