Henrik Walther

Q Our organization's messaging infrastructure is based on Exchange Server 2007. We have a relatively strict message-size limit of 12MB set throughout the organization.

We have observed a strange behavior that seems to be related to the size of files attached to a message. When sending an e-mail message to an external user with, let's say, an 11MB attachment, the message is delivered to the recipient as expected. But if this message (including the attachment) is forwarded back to the sender on the internal network, the sender gets a non-delivery report (NDR), indicating that the message is larger than the current system limit or that the recipient's mailbox is full.

After taking a close look at the issue, we can see that at some point after the message leaves the organization, the size of the attachment increases by approximately 30%. The question is, why do attachment sizes increase while sending and receiving e-mail messages through the Internet? And more important, is this expected behavior?

A The short answer is yes. This is often expected behavior, not only for Exchange Server 2007 but also for earlier versions of Exchange Server as well as any other messaging system that supports MIME (Multipurpose Internet Mail Extensions) and uses Base64 to encode attachments. When an internal Exchange user sends a message to a recipient inside the Exchange organization, the message doesn't require any content conversion. This means that you won't see the message or attachments increase in size when they are delivered. Messages sent to external recipients, on the other hand, may require content conversion.

A standard SMTP message (also known as a plain-text message) consists of a message envelope and the message contents—the message header and message body. These elements are based on plain 7-bit US-ASCII text. When a message contains elements that are not plain US-ASCII text, the elements must be encoded. When dealing with such non-text content, including attachments, MIME is used for encoding. Both Exchange 2007 and earlier versions of Exchange Server use the Base64 algorithm to encode attachments. And the disadvantage of Base64 is that it bloats attachments by 33%.
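The 33% figure follows directly from how the encoding works: Base64 maps every 3 input bytes to 4 ASCII characters (MIME line breaks add a little more on top). A quick sketch in Python, using the 11MB attachment size from the question:

```python
import base64

# Simulate an 11MB attachment (the size from the question above).
attachment = bytes(11 * 1024 * 1024)

# Base64 maps every 3 input bytes to 4 output characters.
encoded = base64.b64encode(attachment)
overhead = len(encoded) / len(attachment) - 1

print(f"{overhead:.0%}")  # prints "33%" (before MIME line breaks)
```

In practice the MIME line breaks inserted every 76 characters push the total closer to the roughly 30-37% growth observed in the question.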

In Exchange 2007, except when Outlook Web Access is used, most transport-related content conversion is performed on the Hub Transport server. For a detailed explanation, see the topic "Understanding Content Conversion" at technet.microsoft.com/library/bb232174.

Q We are in the middle of a transition from Exchange 2003 to Exchange 2007. We have moved all user mailboxes to Exchange 2007 mailbox servers, all of which have been configured using Cluster Continuous Replication (CCR). We are currently replicating all public folders from our legacy Exchange 2003 public folder server to a CCR-based mailbox server. However, we have discovered during testing that when a lossy failover occurs on the CCR cluster, the public folder database isn't brought online on the other node. We also cannot mount it manually after the failover.

We have a lab environment that mirrors our Exchange 2007 infrastructure in the production environment, and testing shows that the issue occurs here as well. We don't see this issue with any of the mailbox databases on any of the CCR clusters on which a lossy failover occurs, so it seems to relate strictly to public folder databases on CCR clusters. Since we want to achieve true redundancy for all databases, including the public folder database, do you have any insight into what would cause this behavior?

A The replication methods used by CCR and by public folder replication in Exchange 2007 are two very different beasts. Because of this, it is not recommended that you keep multiple public folder databases in an Exchange organization when one of them is hosted on a CCR-based mailbox server. You can do this temporarily during a transition; the Exchange product group supports having a public folder database hosted on a CCR-based mailbox server alongside one hosted on, for instance, a legacy Exchange 2003 server. But it's highly recommended that you remove the public folder database on the non-CCR-based mailbox server once all public folder data has replicated.

What you're experiencing in your Exchange messaging environment is normal. When you have multiple public folder databases and one of them is hosted on a CCR-based mailbox server, the public folder database on the CCR-based mailbox server will not be brought online during a lossy failover (that is, an unscheduled outage).

In fact, the public folder database can't be brought online until the previously active node is brought back up. Furthermore, all transaction log files for the storage group in which the public folder database is hosted must be available.

If this isn't an option, the first line of defense should be to restore the public folder store from the last good backup, play through the available logs, and then reseed the other node from the restored database. Alternatively, the public folder store could be created from scratch. In this case, the original active node must be recovered, and a new public folder database must be created and have public folder data replicated from another public folder server in the Exchange organization.
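Once the failed node is back (or a restored copy is in place), reseeding the public folder database copy to the other node is done with the Update-StorageGroupCopy cmdlet in the Exchange Management Shell. A sketch, with the clustered mailbox server and storage group names assumed:

```powershell
# Suspend replication for the storage group that hosts the public folder
# database (the names "CCRCLUSTER" and "PFSG" are assumed for illustration).
Suspend-StorageGroupCopy -Identity "CCRCLUSTER\PFSG"

# Reseed the passive copy, discarding any existing (diverged) database files.
Update-StorageGroupCopy -Identity "CCRCLUSTER\PFSG" -DeleteExistingFiles

# Verify that the copy is healthy once the seed completes.
Get-StorageGroupCopyStatus -Identity "CCRCLUSTER\PFSG"
```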

What may seem odd is that when a lossless (scheduled) outage is performed, the public folder database is brought online. This too is expected behavior. For more information, see the "Cluster Continuous Replication and Public Folder Databases" section of the "Planning for Cluster Continuous Replication" topic on TechNet.

Q All the mailbox servers within our Exchange 2007-based messaging infrastructure are configured using CCR. We are very satisfied with the way CCR works, but have a question we hope you can answer.

When online maintenance is run each night, one of the tasks is the online defragmentation. How do we ensure the databases on the passive node in a CCR cluster get defragmented during online maintenance?

A The online defragmentation task, which deletes any items marked for removal and turns the space those items used into white space, generates new transaction log files as it runs. Any transaction log files generated on the active CCR node are replicated to the passive node, so the same changes are applied to the databases on the passive node as well.

With this in mind, please make sure you schedule the online maintenance window so that it doesn't conflict with your backup window, since a backup forces the online defragmentation to halt. Not that the defragmentation necessarily needs to complete every day, every week, or even every second week for that matter. In the past, guidance from the Exchange product group specified having online defragmentation complete at least every second week. But that has changed with Exchange Server 2007 SP1, as each organization's environment is different. For more details on this new guidance, see the post on the Microsoft Exchange Team Blog.
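The online maintenance window is set per database, so it can be steered clear of the backup window with Set-MailboxDatabase (or Set-PublicFolderDatabase for a public folder database). A sketch, with the database identity and times assumed:

```powershell
# Run online maintenance from 1:00 AM to 5:00 AM on selected days, outside
# the backup window (the identity and schedule values are assumed examples).
Set-MailboxDatabase -Identity "CCRCLUSTER\SG1\DB1" `
    -MaintenanceSchedule "Sun.1:00 AM-Sun.5:00 AM, Wed.1:00 AM-Wed.5:00 AM"
```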

Q We're planning to use Exchange 2007 CCR in order to achieve true redundancy for our mailbox servers. Currently, we're looking into how the transport dumpster is used in combination with CCR in order to ensure that no messages are lost during a lossy failover from the active CCR node to the passive node. Do you know of any transport dumpster gotchas we should be aware of?

A The transport dumpster ensures minimal data loss during a lossy failover from one node to the other on an Exchange 2007 mailbox server that uses CCR. It accomplishes this by redelivering messages that were recently submitted to the mailbox server. During a lossy failover, there's a real chance you will lose some transaction log files and, because of this, actual data as well. The transport dumpster redelivers the recently submitted messages and thereby restores the data lost during the lossy failover. However, since only messages that pass through a Hub Transport server are held in the transport dumpster, data that never transits the transport pipeline, such as tasks and calendar items created shortly before the lossy failover, will be lost.
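One gotcha is sizing: the transport dumpster is configured organization-wide on the Hub Transport side, and a common rule of thumb is to set MaxDumpsterSizePerStorageGroup to roughly 1.5 times your maximum message size. A sketch (the values below are assumptions based on the 12MB limit from the first question):

```powershell
# Inspect the current transport dumpster settings.
Get-TransportConfig | Format-List MaxDumpsterSizePerStorageGroup, MaxDumpsterTime

# With a 12MB message-size limit, 1.5 x 12MB = 18MB per storage group;
# keep redelivery candidates for up to 7 days (the default).
Set-TransportConfig -MaxDumpsterSizePerStorageGroup 18MB -MaxDumpsterTime 07.00:00:00
```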

Q We are currently planning a cross-forest migration from an Exchange 2003 organization to an Exchange 2007 organization in a new Active Directory forest. We have extensively studied the cross-forest migration documentation, "How to Transition from Single Forest to Cross-Forest", which indicates that you should create a forest trust and not an external trust between the forests. Why is it that an external trust can't be used?

A Although the Exchange 2007 documentation on TechNet says you should use a forest trust rather than an external trust, it doesn't mean that you can't use an external trust. In fact, an external trust works just fine for a cross-forest Exchange migration, but there is one disadvantage. You must specify an account with the appropriate permissions to access a domain controller in the trusted forest when creating a linked mailbox (see Figure 1), since you cannot use the credentials of the logged-on user, no matter what permissions that user has been assigned.


Figure 1 Specifying an account on the Master Account page when creating a Linked Mailbox
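The shell equivalent of the wizard page in Figure 1 is New-Mailbox with the linked-mailbox parameters; with an external trust, -LinkedCredential supplies the account mentioned above. A sketch in which all names, servers, and forests are assumed for illustration:

```powershell
# Prompt for an account with permission to read the trusted (account) forest.
$cred = Get-Credential "ACCOUNTFOREST\svc-exchange"

# Create a linked mailbox for a user whose logon account lives in the
# trusted forest (all identities below are illustrative).
New-Mailbox -Name "Anna Lidman" -Alias annal `
    -UserPrincipalName annal@resourceforest.local `
    -Database "MBX01\SG1\DB1" `
    -LinkedMasterAccount "ACCOUNTFOREST\annal" `
    -LinkedDomainController dc01.accountforest.local `
    -LinkedCredential $cred
```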

Q Our organization just transitioned to Exchange 2007, and so far we are very pleased with the new version, with one possible exception. Back when we were using Exchange 2003 SP2, we were able to configure our environment so that the simple display name of a user mailbox was shown as the sender of an outgoing message. To our dismay, we have not been able to find a similar feature in Exchange 2007. Please don't tell us that this feature is missing in Exchange 2007!

A This feature was, in fact, missing from Exchange 2007 RTM, right up until Exchange 2007 SP1 Rollup Update 4 (RU4) was released back in October 2008. With SP1 RU4, you can once again, just as with Exchange 2003 SP2, configure Exchange to show the simple display name on outgoing messages. This task can be accomplished using the Windows PowerShell Set-RemoteDomain cmdlet with the -UseSimpleDisplayName parameter. For example, to enable simple display names on outgoing messages that are sent to the contoso.com domain, use the command shown in Figure 2.


Figure 2 Using Simple Display names for outgoing messages
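The command in Figure 2 takes this general form (the remote domain name "Contoso" is assumed; create the entry first with New-RemoteDomain if one doesn't already exist for contoso.com):

```powershell
# Create a remote domain entry for contoso.com if one doesn't already exist.
New-RemoteDomain -Name Contoso -DomainName contoso.com

# Show the simple display name on messages sent to that domain.
Set-RemoteDomain -Identity Contoso -UseSimpleDisplayName $true
```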

Q What is the best practice for defragmenting the database copies on the passive node of an Exchange 2007 CCR-based mailbox server? Will Exchange become confused if the databases on one node in the CCR cluster are defragmented, but those on the other node aren't?

A If an offline defragmentation is required, it should always be performed on the active node in the CCR cluster, never on the passive node. Bear in mind as well that if you do an offline defragmentation of one or more databases on the active node, a full reseed of the particular databases to the passive node is required.

This means, for instance, that if you have a 200GB database (when using CCR, the recommended maximum database size is 200GB when replicating over a 1Gbps network), it will take several hours to defragment it (a good rule of thumb is 2-4GB per hour). And after the defragmentation process has completed, you will also need to replicate 200GB of data to the passive node. If log file shipping occurs over the public network, this could impact the overall network performance experienced by your end users.
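The offline defragmentation itself, performed on the active node, follows the usual eseutil pattern, after which the passive copy must be reseeded in full. A sketch with assumed identities and paths, run during a maintenance window:

```powershell
# Dismount the database on the active node before defragmenting it.
Dismount-Database -Identity "CCRCLUSTER\SG1\DB1"

# Offline-defragment the database file; eseutil builds a new, compacted
# copy, so the volume needs free space of roughly 110% of the database size.
eseutil /d "D:\SG1\DB1.edb"

# Mount the database again, then fully reseed the passive node's copy.
Mount-Database -Identity "CCRCLUSTER\SG1\DB1"
Update-StorageGroupCopy -Identity "CCRCLUSTER\SG1" -DeleteExistingFiles
```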

In most cases, the reason for performing an offline defragmentation is to remove white space in the database in order to reduce the size of the database file. But this is rarely necessary, since white space will be reused before a database grows further. And it makes little practical difference whether the free space sits inside the database or on the disk itself, does it?

If you have many gigabytes of white space in a database and you want to remove it, a much better approach is to move all mailboxes out of the database and into a new one.
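That move-mailbox approach can be scripted in one pipeline (the server and database names below are assumed):

```powershell
# Move everything from the bloated database into a freshly created one;
# the white space stays behind, and the old database can then be removed.
Get-Mailbox -Database "MBX01\SG1\DB1" |
    Move-Mailbox -TargetDatabase "MBX01\SG2\DB2"
```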

Henrik Walther is a Microsoft Certified Master: Exchange 2007 and Exchange MVP with more than 14 years of experience in the IT business. He works as a Technology Architect for Interprise Consulting (a Microsoft Gold partner based in Denmark) and as a Technical writer for Biblioso Corporation (a U.S.-based company that specializes in managed documentation and localization services).