Playing with Exchange in a Sandbox

Greg Taylor and Simon Shepherd


At a Glance:

  • Calculating IOPS
  • Configuring JetStress
  • Gathering data with Exchange Profile Analyzer
  • Exchange Server 2003 Load Simulator
  • Exchange Server Stress and Performance tool

"Looks great! Now prove it works before we deploy it." If you’ve ever planned and pitched an Exchange Server solution, you’ve probably heard something like this. Many a messaging architect has quickly responded that they can and will prove the design—throwing around names like LoadSim, JetStress, IOPS, Heavy Profile and Medusa for good measure. And then they proceed to either not run the tests properly or, worse, not run them at all. There’s a reason why that happens: these tools are not always easy to use!

Designing servers for use in anything but the smallest of Exchange deployments is complicated. Availability and reliability are key in Exchange design. So how can you be sure that the servers you have specified can cope with the load that will be placed on them if you haven’t been able to realistically test them over any period of time? Deploying into production should never be the time at which testing takes place!

This article focuses on validating and testing mailbox servers. These tend to rely on fast and reliable storage and also consume more memory the more users are serviced. We’ll discuss some important considerations when planning your tests and a number of handy tools to help you perform them. For more on the performance characteristics of Exchange Server, be sure to read "The Exchange Server 2003 Performance and Scalability Guide". And for some advice on determining the number and speed of processors you’ll need for your deployment, read the sidebar "Calculating Processor Power."

Simulating Real World Users

Sure, you can set up a test environment. But you need "users" to put the environment through its paces. How do you test and validate your designs under realistic conditions without recruiting a few thousand Outlook® users to log on to your test environment, send a few e-mails, and make a few appointments? Simple: you simulate the work. The problem is configuring these tools to represent your users. There are defaults, but how many default users do you have? If you don’t know what load your users place on the server or how many mails they typically send and receive, you can’t accurately simulate their activity.

This article will help you gather and calculate data for configuring the test tools beyond the defaults, so they better simulate your real users. We’ll look at how you can measure the impact users have on the disks of a server, which ultimately relies on how many messages each user sends and receives. We’ll discuss how you can calculate the number of inbound SMTP mails received each day. And we’ll show you how to use the new Exchange Profile Analyzer to look inside existing mailboxes to accurately see how users actually use the messaging system. Then we’ll move on to using these numbers for testing.

None of these calculations are an exact science. But by taking the right approach and properly interpreting the data, you should be able to profile and simulate your users’ impact with confidence, making your tests more accurate and reliable.

Measuring the Impact of Users

There are two ways to profile your users: one relies on actual measurement of existing data, the other relies on educated guesswork. If you already have Exchange, you can use the real data from that implementation to help you plan the new deployment. If you currently use a different messaging system, you’ll have to use a less accurate approach.

Real Data Approach There are several ways to measure performance and work load. Probably the best way to calculate load is to measure disk I/O operations per second, per mailbox (otherwise known as IOPS). This is a measure of the raw database usage. It’s also the key input parameter to JetStress (which we’ll discuss), so working this number out is essential to configuring the tool accurately.

The time of day at which you observe your data will significantly affect the results. For a fair representation of day-to-day usage, we recommend that you gather data during peak usage—Monday morning between 9 A.M. and 11 A.M., for example.

To measure IOPS per mailbox (IOPS/mailbox):

  • Select a production server with a typical user load.

  • Use the Performance Monitor tool (perfmon.msc) to monitor the Logical Disk\Disk Transfers/sec counter on the database drives over the peak two hours of server activity.

  • Calculate your current IOPS/mailbox using the following formula:

    IOPS/mailbox = (average disk transfer/sec) 
    / (number of mailboxes)

For example, if your server averages 2,100 disk transfers per second over the two hour period, and it hosts 4,000 mailboxes, the calculation would be:

2100 / 4000 = 0.525 IOPS per Mailbox
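
The arithmetic is trivial, but it is worth scripting if you will repeat it across many servers. A minimal sketch in Python, using the example figures from the text:

```python
def iops_per_mailbox(avg_disk_transfers_per_sec, mailbox_count):
    """IOPS/mailbox = average Disk Transfers/sec over the peak
    window divided by the number of active mailboxes."""
    return avg_disk_transfers_per_sec / mailbox_count

# Example from the text: 2,100 transfers/sec across 4,000 mailboxes.
print(iops_per_mailbox(2100, 4000))  # → 0.525
```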

In this case, users are causing Exchange to perform 2,100 disk read/write operations per second. That sounds like a lot, and it is. This is why Exchange solutions generally require a lot of disks—not for capacity, but for performance.

Knowing the number of mailboxes to divide by is important. Counting the number of mailboxes on a server should be simple, but often isn’t. Many companies have a large number of dormant mailboxes sitting on servers—for former employees, for example. Should you count these? It depends—if those mailboxes still receive mail, then yes. If not, you can rule them out.

We use the logical rather than the physical disk counters. If the server is clustered, the physical disk counters would report on all the disk resources in all the resource groups, whether or not they are hosted on the active node. The counters for inactive resources contain no useful data, but they unnecessarily inflate the amount of data being collected.

You can’t just get a number from your disk vendor for how many IOPS their disks can provide and divide that by the average per user to come up with the number of users you can support on that disk. It’s important to factor in disk latency, RAID penalty, the read/write ratio, and throughput when calculating storage requirements. These factors are outside the scope of this article, as they relate more to the design of the storage. For now we just need to see how many IOPS each user generates so we can use that as input for our testing tools. For more information, see How to Calculate Your Disk I/O Requirements.

Finger in the Air Approach Sometimes, measuring real data just isn’t an option. For instance, there may be so many other applications running on the Exchange Server that you can’t isolate the work Exchange is doing. In situations like this, you have to guess. But we can give you some guidance to help you make more accurate estimates.

There are four fairly tried and tested Exchange Server 2003 profiles that have equivalent IOPS numbers, based on Outlook/MAPI users. Figure 1 provides typical numbers for each of these profiles. It is important to note that IOPS will increase as mailbox size increases—we’ll come back to that later.

Figure 1 User Profiles and Usage Patterns

User Type  Database Volume IOPS  Sent/Received Per Day  Mailbox Size
Light      0.5                   20 sent/50 received    50MB
Average    0.75                  30 sent/75 received    100MB
Heavy      1.0                   40 sent/100 received   200MB
Large      1.5                   60 sent/150 received   500MB

By choosing one of these profiles and using the numbers shown, you may be able to estimate the IOPS for your environment. This number will be useful when configuring LoadSim and JetStress. When using an estimate, however, it is particularly important to test your solution thoroughly before going anywhere near production.
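
If you go the estimation route, the per-user figures in Figure 1 scale linearly to a server-level estimate. A quick sketch (the 4,000-user count is just an example):

```python
# Per-user database IOPS for the four Exchange 2003 profiles (Figure 1).
PROFILE_IOPS = {"Light": 0.5, "Average": 0.75, "Heavy": 1.0, "Large": 1.5}

def estimated_server_iops(profile, users):
    """Rough server-level IOPS estimate: per-user profile IOPS x users."""
    return PROFILE_IOPS[profile] * users

# For example, 4,000 users matching the Average profile:
print(estimated_server_iops("Average", 4000))  # → 3000.0
```

Remember this is an estimate of a baseline only; the factors discussed later (concurrency, growth, mailbox size) still apply.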

Measuring Mail Flow

You need to measure mail flow so you can configure the Exchange Server Stress and Performance tool (ESP) to send the right number of SMTP mails inbound to the test environment. Measure on the SMTP Gateway server that accepts mail coming in from the Internet. You don’t need to measure outbound mail, since LoadSim cannot be configured to send mail to recipients that are not other LoadSim users.

To measure mail flow:

  • Select an SMTP gateway server with a typical load.

  • If you are able to restart the SMTP service, do so before you start measuring; this zeroes the counters and makes the calculations easier.

  • Use the Performance Monitor tool to monitor the SMTP Server\Messages Received Total counter over the peak two hours of server activity.

  • Calculate messages per second as described in the following formula:

    messages per second = 
    (total number of messages) / (number of seconds measured)

For example, if your server receives 15,654 messages over the two hour period, the calculation would be:

15,654 / 7200 = 2.17 messages per second

The number you come up with will be used to configure the instances field in ESP.
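
The same calculation in script form, using the example figures above:

```python
def messages_per_second(total_messages, seconds_measured):
    """Inbound SMTP rate: total messages received over the
    measurement window divided by its length in seconds."""
    return total_messages / seconds_measured

# Example from the text: 15,654 messages over a two-hour window.
print(round(messages_per_second(15654, 2 * 60 * 60), 2))  # → 2.17
```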

More Factors to Consider

There are a few more factors to consider before you use these numbers to configure the testing tools.

Concurrency You don’t necessarily need to design a system for 100 percent concurrency. If you know for sure that only 10 percent of the users will ever be logged in at any one time, adjust your figures accordingly. Be sure to allow for fluctuations.

Increase in Usage The number of employees may go up, e-mail usage will likely increase, and the average size of e-mail attachments will probably increase over time. When designing your system, account for these changes.

Large Mailboxes Large mailboxes require more IOPS. The ratio is pretty constant. You can assume that when the mailbox size doubles the IOPS also double.

Remote Device Usage When devices like Smartphones and Blackberries are used to access mailboxes, they affect the IOPS (to varying degrees). You need to assess the effect that a growing mobile workforce will have on your deployment.
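
The adjustments above can be folded into the IOPS estimate as simple multipliers. This sketch is purely illustrative; the parameter names and defaults are ours, not part of any official Exchange sizing formula:

```python
def adjusted_iops(base_iops_per_mailbox, users, concurrency=1.0,
                  growth_factor=1.0, mailbox_size_multiplier=1.0):
    """Illustrative sizing adjustment combining the factors above.

    concurrency: fraction of users active at peak (e.g. 0.10).
    growth_factor: headroom for usage growth (e.g. 1.25 for 25%).
    mailbox_size_multiplier: per the text, IOPS roughly doubles when
    mailbox size doubles, so 2.0 models mailboxes doubling in size.
    """
    return (base_iops_per_mailbox * users * concurrency
            * growth_factor * mailbox_size_multiplier)

# 4,000 users at 0.525 IOPS each, full concurrency, 25% growth
# headroom, and mailboxes expected to double in size:
print(adjusted_iops(0.525, 4000, 1.0, 1.25, 2.0))  # → 5250.0
```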

The Tools of the Trade

Now let’s take a look at the tools you use to simulate the workload. We’ll look at four, each of which performs a different task. These are outlined in Figure 2.

Figure 2 Simulation Tools

Tool Name What It Does When You Would Use It
JetStress Used to test the performance and stability of the disk subsystem. Storage performance validation; storage reliability testing; end-to-end testing of storage components.
Exchange Profile Analyzer Used to measure actual user profiles, which can be used as input to LoadSim. LoadSim fine tuning; real-world data analysis.
Load Simulator 2003 (LoadSim) A benchmarking tool used to test the response of servers to mail loads. Comparison tests; scalability tests; benchmarking; MAPI load generation; end-to-end testing with third-party solutions.
Exchange Server Stress and Performance (ESP) Used to test stress and performance of Internet protocols (SMTP, IMAP4, POP3, OWA, and so on). Comparison tests; scalability tests; benchmarking; POP3, IMAP4, SMTP, DAV (OWA) load generation; end-to-end testing with third-party solutions.

The Exchange Server 2003 Load Simulator (or LoadSim) is probably the most well known and commonly used of these tools. You can use it to simulate MAPI clients. JetStress, which is also well known and has been updated recently to add a GUI, lets you test storage components. The Exchange Server Stress and Performance tool is used to send SMTP e-mails. It’s less commonly employed, probably because it’s very difficult to configure. The Exchange Profile Analyzer (EPA) is not used for simulating workload. It gathers data about your real environment, which you can then use to customize LoadSim for more accurate testing.

This article will not give you step-by-step instructions on utilizing these tools (you can get that from the documentation). Instead, we’re going to help you configure the tools for better results and introduce some important points you might not have considered.

Configuring JetStress

JetStress is used solely to test the storage subsystem of your Exchange servers. Typically, this is the first test you will run. Before getting started, you need to hear some warnings. Do not use JetStress on a production server, do not run it on a machine that already has Exchange running on it, and do not use the other testing tools at the same time. JetStress isn’t interested in users, mail flow, or any of those things—it just wants to beat up on the disks in the same way Exchange does, so test your system using JetStress, remove it, and then move on to the other tools.

Now let’s look at what you should do. Here are 15 of the most important points you need to know for accurately configuring JetStress. If you bear all these points in mind and run the tests several times you should get consistent, reliable results.

  • Download the latest version. There have been several important updates to JetStress over the last year or so. Most noticeably a UI was added!
  • Read the documentation that comes with the tool—it is packed full of valuable information.
  • Disable Storage Area Network (SAN) data replication while tuning JetStress. Once you have reached a steady state you can reenable SAN replication if required.
  • If other applications share the same disks (not recommended, by the way) or SAN, then use a tool like Iometer to simulate the additional load.
  • Make sure the versions of the ese* files you use are the same Service Pack version as those you will use in production. Some significant advances were made in SP2 for Exchange 2003.
  • Create databases equivalent in size to those you will use in production. Creating databases at just 10 percent of the planned production size is no longer recommended, because disk latency increases as the disk fills up.
  • The number of disks field refers to the number of logical disks (mount points included) used in the storage group. One logical drive with five databases on it is one disk. If each database gets its own drive letter, the number of disks would be five.
  • Databases and logs must go on separate disks. If you have to run them on the same disk, check the NAS storage checkbox to bypass the drive logic check or use the command-line version of the tool.
  • The automatic tuning feature works well and should always be used, unless you really need to fine-tune the process or if JetStress reports that auto tuning won’t work in your configuration.
  • Don’t use the automatic tuning on several machines using shared storage at the same time.
  • If you have to manually configure the tool, start with a back-of-the-envelope calculation of 1 thread being able to deliver 100 IOPS across 4 storage groups, each with 1 database. Then you can build from there.
  • When manually configuring parameters like the number of threads, run several iterations of the tests in order to get those numbers right.
  • If a stress test will last 24 hours, you need to account for database growth and initialize your databases at approximately 50 percent of the actual limit you would use in production.
  • The report at the end of a test run will provide a summary. You can validate the conclusions yourself by using the Performance console to look at the .blg file produced during the test run.
  • Finally, use the backup facility (built into the latest versions). You will run the test a few times to get your results and using backups will save you a significant amount of time.
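
For the manual-tuning starting point mentioned in the list (roughly 100 IOPS per thread), a trivial helper can pick an initial thread count; treat the result only as a first iteration to tune from:

```python
import math

def jetstress_threads(target_iops, iops_per_thread=100):
    """Starting point for manual JetStress tuning, using the text's
    back-of-the-envelope figure of ~100 IOPS per thread (assuming
    4 storage groups with 1 database each). Refine over several
    test iterations."""
    return max(1, math.ceil(target_iops / iops_per_thread))

# Targeting the 2,100 IOPS measured earlier:
print(jetstress_threads(2100))  # → 21
```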

Since JetStress is all about disks, the counters you need to analyze are all disk related. Figure 3 outlines the performance counters you should be most concerned with and the values you should anticipate. Remember that averages can be deceptive. If the Performance Monitor tool gives you an average number, don’t just use it. Look at the chart and take a visual average, asking whether it agrees with the calculated average. Eliminate peaks and valleys from your calculations, instead looking at trends. Adjust the timeline to cut out the early test peaks to see how that affects the system-generated numbers.

Figure 3 Key Performance Counters

Object        Counter              Instance  Definition / Expected Value
Logical Disk  Disk Transfers/sec   All       The amount of disk I/O per second. Total of reads and writes—this is the actual IOPS figure.
Logical Disk  Disk Reads/sec       All       The rate of read operations on the disk. No expected value, since this varies widely based on environmental factors.
Logical Disk  Disk Writes/sec      All       The rate of write operations on the disk. No expected value, since this varies widely based on environmental factors.
Logical Disk  Avg. Disk sec/Read   All       The latency of each read from disk. At all times, this value should remain below 20ms for the database drives and 10ms for log file drives.
Logical Disk  Avg. Disk sec/Write  All       The latency of each write to disk. This value should be below 10ms for the database and log file drives at all times.

Using Exchange Profile Analyzer

LoadSim generates workload against Exchange to simulate Outlook client activities. The difficulty is in accurately specifying that workload. LoadSim provides several predefined profiles—Medium and Heavy being the recommended profiles for pre-deployment scalability testing. The profiles represent collections of Outlook tasks, such as reading and sending e-mail, creating meetings, and browsing public folders. To get the most out of LoadSim, you need to customize the workload profiles. This is where the EPA comes in.

The EPA generates statistical usage data from Exchange by analyzing existing mailboxes. The results can then be interpreted and used to create a custom LoadSim workload profile.

We recommend that you run EPA remotely and use it to profile Exchange 2000 SP3 or Exchange 2003. The tool is intended for use in production environments, which introduces some potential concerns regarding security and system performance.

EPA accesses mailboxes directly using WebDAV. EPA requires access to every mailbox it analyzes, and this is explicitly denied by default to members of the Enterprise and Domain Administrators groups. Therefore, you must create an account for EPA that is not a member of either of these groups and grant the account Exchange View Only permissions at the Administrative Group or Organization level using Exchange System Manager (ESM). You also need to grant Receive As and Send As permissions to the account on each server with mailboxes being profiled. No changes are made to the mailboxes being profiled. Unread items, for example, remain unread. The only change is the last logon time.

EPA profiles mailboxes at a rate of about 0.5 to 1.0MB per second. You may want to target only a representative subset of Exchange users. The impact of EPA is similar to a cached mode full sync. Due to the impact EPA can have on a mailbox being profiled, we suggest you run the tool outside of normal hours. (A profiling job can be scheduled to run using a configuration file and the command-line version of the tool. But in this article, we’re concentrating on the GUI version of EPA.)
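
Given the quoted 0.5 to 1.0MB per second profiling rate, you can roughly estimate how long a full EPA run would take, which makes the case for profiling only a subset. A sketch, using 0.75MB/sec as an assumed midpoint:

```python
def epa_runtime_hours(total_mailbox_mb, mb_per_sec=0.75):
    """Rough wall-clock estimate for an EPA run, based on the
    0.5-1.0 MB/sec rate quoted in the text (0.75 is our assumed
    midpoint, not an official figure)."""
    return total_mailbox_mb / mb_per_sec / 3600

# For example, 4,000 mailboxes averaging 100MB each:
print(round(epa_runtime_hours(4000 * 100), 1))  # → 148.1
```

Roughly 148 hours for 4,000 mailboxes of 100MB each; profiling a representative subset, outside normal hours, is clearly the practical choice.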

An EPA job can be targeted at the store level, and even at only those mailbox items created within a particular timeframe. You could tell EPA to look at all the mailbox stores in Storage Group One, but only items from the last 72 hours. This flexibility can help you limit the impact on the production environment.

For your first look at EPA, consider using LoadSim to initialize some test mailboxes using the built-in LoadSim profiles. Create two storage groups and initialize the first with some Medium users and the second with the Heavy profile. Then install EPA, fire it up, and select Connect to Active Directory. The tool offers a number of configuration options, some of which are detailed in Figure 4.

Figure 4 Configuration Options in EPA

Option Description
Active Directory You can specify a Global Catalog (GC) server in the forest that is hosting Exchange or you can leave this blank to use a GC in the current forest.
Open Configuration Use this to load a previous configuration. The GC and login account can be saved to an XML file for reuse.
Logging\Stats Option This lets you specify logging level. You can specify whether to collect statistics on individual mailboxes, which will increase the size of the output log file. By default, data is reported anonymously with each mailbox represented by an ID number.
Specify a time frame Use this to limit the analysis to items from a particular time frame. By default, it looks only at yesterday, so it will not pick up LoadSim data you created today.
Select the mailbox stores to be profiled EPA will display the Admin Groups, servers, and mailbox stores that can be profiled. This lets you select a representative set of information stores. Bear in mind the time it can take to profile mailboxes. And note that the System Attendant mailbox will generate an exception as this URL can’t be crawled.

Once you’ve specified all your configuration options, you can click Start Collect. If you have multiple stores with the same name on the same server, the respective report can’t be viewed at the individual store level. Be sure to rename any stores with duplicate names. For example, if you have multiple storage groups, each with a Mailbox Store One, you will need to rename them to something like Mailbox Store One (First Storage Group) and Mailbox Store One (Second Storage Group).

An EPA run produces three files in the folder below C:\Documents and Settings\<User Name>\Application Data\Microsoft\EPA. The log file records details of mailbox access during profiling. The logging selection found on the Configuration tab lets you control the level of detail. An XML file contains the raw data collected during the profiling process, and an HTML file contains the actual report. This includes details like the average mailbox size, the average number of messages and folders, and the like—all of which can be used to customize LoadSim.

You can analyze the report on many levels. For example, the Overall Statistics option gives you a summary for all servers and stores profiled. Information Store summarizes just the selected information store.

Customizing LoadSim

Now that you know how to use EPA to gather some profile data, let’s customize LoadSim. The workload can be customized at three levels: Topology, Initialization, and Profile. Let’s look at each level, discussing how some of the data you produced using EPA can be mapped into LoadSim to increase workload accuracy.

Topology The LoadSim topology defines the high-level structure of the simulated Exchange organization. The mailboxes required for the test are allocated to each store, and Distribution Lists (DLs) and Public Folder requirements are configured. Active Directory® is populated with the required mailbox-enabled accounts and DLs when the topology is generated.

It is only necessary to include the servers, storage groups, stores, and mailboxes required for the LoadSim test. Even if the production environment contains more servers, adding them to LoadSim will not make the results any more valid.

The DL values are critical, though often overlooked. The default values for DLs are: 1000 per site, Membership, Minimum 2, Average 10, and Maximum 20. For many organizations, the LoadSim defaults are not representative of the production system DLs. (One thousand DLs is quite a lot for a small or medium sized organization.) EPA can’t help you examine the production DLs in your organization, so you will need to perform a Directory export to get at that data. To export this information from Active Directory, use the Windows® Support tool csvde.exe and the following command line:

csvde.exe -d "ou=loadsim users,dc=forest,dc=net"
    -r "(objectclass=group)" -f dl_export.csv

The resulting export file (dl_export.csv is just an example name) can be manipulated in Microsoft Excel® to identify the values you should use to replace the defaults.

Changing the LoadSim default DL values can have quite a dramatic impact on the generated load. For any given user profile, the number of messages delivered will vary according to the Distribution List settings. For example, by using the default average membership of 10, a Medium user profile will result in 141 messages being received per user. If the average and maximum membership values are increased from 10 to 100 and from 20 to 200 respectively, then the number of messages received increases to 5,621. So while the LoadSim test will be submitting the same number of messages per user, the number of messages delivered will have rocketed due to DL expansion.
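
LoadSim's own delivery arithmetic is more involved than this, but a simplified fan-out model illustrates why DL membership dominates delivered volume. All the numbers here are illustrative:

```python
def delivered_per_sent(direct_recipients, dl_fraction, avg_dl_membership):
    """Simplified fan-out model: each sent message reaches its direct
    recipients, plus the expanded membership of any DL addressed on
    the message. Not LoadSim's exact algorithm - just an illustration
    of how delivered volume rockets with DL membership."""
    return direct_recipients + dl_fraction * avg_dl_membership

# 3 direct recipients, with a DL on half of all messages:
print(delivered_per_sent(3, 0.5, 10))   # → 8.0
print(delivered_per_sent(3, 0.5, 100))  # → 53.0
```

Increasing average membership tenfold multiplies the delivered load several times over, while the number of messages submitted stays the same.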

LoadSim is typically run for 8 to 24 hours and can use all of the available DLs during the test cycle. Organizations typically have a small number of large DLs that are used infrequently (composed of all full-time employees or all users in Europe, for example), and this is difficult to model in a short test cycle. The important thing is that the servers can cope with the normal required throughput. If that throughput can be generated with the slightly artificial DL values provided, the test results are valid. The test does not need to run with DL values that exactly match production. Production DLs are typically out of date and may include unused DLs, while LoadSim assumes that housekeeping is fully up to date and that all DLs are active, so matching the production values would not generate a representative load anyway.

Once the topology has been created, any changes to it will require that the initialization steps described in the next section be repeated. And you must ensure that the .sim file used to save the topology creation parameters is the same file you use for initialization. Small changes and mistakes in these areas can seriously affect the success of your test.

Initialization By default, the LoadSim profiles create mailboxes of approximately 60 to 70MB for a Medium profile and 100 to 120MB for a Heavy profile. This might not be appropriate for your testing. When LoadSim initializes the mailboxes, it uses a probability algorithm; so although all the mailboxes will have the same number of items in them, sizes will vary. By tuning the number of messages and folders, you can create mailboxes of any size.

Open the EPA report and look for the values in the left columns from the Overall Statistics, Server Statistics, or Mailbox Store Statistics. These values can be used to perform this tuning. The EPA report values are totals for the servers and stores profiled. The LoadSim values are totals per mailbox. Use the Mailbox: Total Count value from the EPA report to calculate the per mailbox values, remembering that the mailbox size for the stores profiled is shown as Mailbox: Aggregate of mailbox size.
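
The conversion from EPA totals to per-mailbox LoadSim values is a straight division by Mailbox: Total Count. For example (the totals here are invented):

```python
def per_mailbox(epa_total, total_mailbox_count):
    """EPA report values are totals across the stores profiled;
    LoadSim wants per-mailbox values. Divide each EPA total by the
    report's 'Mailbox: Total Count' value."""
    return epa_total / total_mailbox_count

# e.g. EPA reports 1,250,000 total messages across 5,000 mailboxes:
print(per_mailbox(1250000, 5000))  # → 250.0
```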

Figure 5 shows how the EPA values map to LoadSim parameters. This allows you to fine-tune the size of the mailboxes that LoadSim creates.

Figure 5 EPA and LoadSim Initialization Mapping

EPA Report Value LoadSim Parameter
Folder Hierarchy: Total number of user-created folders Initialization: Number of new folders
Message Counts: Total number of messages Initialization: Messages per new folder
Message Counts: Total number of messages in Inbox Initialization: Number of messages in Inbox
Message Counts: Total number of messages in Deleted Items Initialization: Number of messages in Deleted Items
Calendar: Aggregates of appointments per weekday Initialization: Number of Appointments
Contacts: Aggregates of contacts count Initialization: Number of Contacts
Rules: Aggregates of rules count Initialization: Number of rules in Inbox
Folder Hierarchy: Total number of Search Folders Initialization: Number of Smart Folders

Profile A LoadSim profile has many settings to define the actions a "user" will perform during a test run. Figure 6 shows some of the settings and parameters the profile takes. These parameters generate various tasks over the duration of the test. Figure 7 lists the fields that can be mapped from your EPA reports into LoadSim parameters to create profile users that are more like your real users. At the beginning of the test, LoadSim will look at the profile and test duration and create an appropriate workload. Decreasing the duration of the test day length will increase the workload by increasing the task intensity. (Hint: If you want to increase IOPS, simply shorten the day length.) You can view a projection of your customized workload in LoadSim by looking at the Test\Logon tab of the Customize Tasks dialog.

Figure 7 EPA to LoadSim Workload Profile Mapping

EPA Report Value LoadSim Tasks\Parameter
Message Frequency: Aggregates of messages sent per weekday Send Mail: Frequency per day
Recipients: Aggregates of number of recipients across all messages sent out Send Mail: Recipients per message
Recipients: Aggregates of DLs across all messages sent out Send Mail: Add a single DL to X% of messages
Message Frequency: Aggregates of replying messages per weekday Process Inbox: Reply
Message Frequency: Aggregates of forwarding messages per weekday Process Inbox: Forward
Message Frequency: Aggregates of messages deleted per weekday Process Inbox: Delete
Calendar: Aggregates of meetings per weekday Request Meetings: Frequency
Calendar: Aggregates of meeting requests received per weekday
Calendar: Aggregates of appointments per weekday Make Appointment: Frequency
Folder Hierarchy: Total number of Search Folders Smart Folders: Number of Smart Folders
Rules: Aggregates of rules count Rules: Number of rules
Contacts: Aggregates of contacts created per weekday Create Contact: Frequency

Figure 6 LoadSim Workload Profile


The EPA tool does not provide data for every possible task customization in LoadSim, but it does provide sufficient data to improve on the predefined profiles and generate more accurate workloads. Before starting the LoadSim test, let’s look at how you can add ESP to the environment for yet another level of realism.

Exchange Stress and Performance

Now let’s configure ESP to send inbound SMTP mails to your system (that is, to your LoadSim users). You do this configuration at the same time LoadSim is running, throwing some regular old Internet mail into the system. This creates an even more realistic setup for your tests.

To configure ESP you need three things: a list of users to send messages to, a script that actually sends the mail, and some sample e-mail messages. The easiest way to get the list of users is by exporting the SMTP addresses of the users created by LoadSim. Export these to a CSV file using Active Directory Users and Computers, edit the list in Excel so that just the actual addresses are left one per line, name the file SMTP_RECIPIENTS.TXT, and save it to the C:\ESP folder.
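
If you’d rather not hand-edit the export in Excel, a small script can do the reduction. The column name mail is an assumption here; match it to whatever your CSV export actually contains:

```python
import csv

def write_recipients(csv_path, out_path="SMTP_RECIPIENTS.TXT",
                     column="mail"):
    """Reduce an Active Directory Users and Computers CSV export to
    one SMTP address per line for ESP. The column name 'mail' is an
    assumption - adjust it to match your export's header row."""
    with open(csv_path, newline="") as src, open(out_path, "w") as dst:
        for row in csv.DictReader(src):
            addr = row.get(column, "").strip()
            if addr:  # skip rows with no address
                dst.write(addr + "\n")
```

Run it against the exported CSV and save the result to the C:\ESP folder as described above.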

To create the script file, simply copy the following script into Notepad:

MAIL FROM: <testuserRANDNUMBER(1,10000)@domainRANDNUMBER(1,100000)>

Save this file as SMTPSCP.TXT and place it in the C:\ESP folder. The script randomly selects an SMTP recipient from SMTP_RECIPIENTS.TXT and attaches an e-mail file (.eml) to the message.

You can use Outlook Express to create a message. Don’t put any recipients into the message—just save it to the C:\ESP directory as an .eml file. Do this 10 times (if you are using the script we have provided) putting differing amounts of text into each of the files. If you want, you can also place various-sized attachments in some of the messages.

Now fire up ESP and add a host (see Figure 8). You can just use LOCALHOST to target the machine you installed ESP onto. Then click Connect Host and set a time duration (leaving that field blank will result in a test that runs infinitely).

Figure 8 Basic ESP Connection


Right-click the machine you just added in the ESP console, select the Add Module option, and select SMTP. Go into the properties of the SMTP module (see Figure 9) and enter the path to the script file C:\esp\smtpscp.txt. (You may notice the browse button doesn’t seem to work—that’s why we’re explaining this step by step.) Enter the server name or IP address running Exchange, and finally enter the number of script instances. This indicates how many instances of the script will run concurrently—entering 10 means 10 e-mails will be sent in at any one time.

Figure 9 SMTP Script Parameters


ESP can produce an amazing load using very limited resources. A 5-minute duration with 10 instances using the test script we have provided will send nearly 2,500 e-mails. So use the instances parameter and the sleep parameter within the script to control the load being applied.

Each instance of our supplied script takes approximately 1 second to run. As we calculated in the example mail flow calculation earlier, our production system receives approximately 2 messages per second inbound, so to reproduce that load you would configure 2 script instances.
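The arithmetic behind these numbers is worth sanity-checking before you launch a run. The sketch below is our own; the 1.2-second per-iteration figure is an assumption chosen to be consistent with the "nearly 2,500 messages in 5 minutes" observation above, so measure your own script's timing before relying on it.

```python
def esp_throughput(instances, seconds_per_iteration, duration_seconds):
    """Estimate ESP load: each instance loops its script back to back."""
    rate = instances / seconds_per_iteration      # messages per second
    total = rate * duration_seconds               # messages over the whole run
    return rate, total

# 10 instances at an assumed ~1.2s per iteration over a 5-minute run:
rate, total = esp_throughput(10, 1.2, 5 * 60)     # ~8.3 msg/s, ~2,500 messages

# To match the ~2 messages per second measured inbound on production,
# 2 instances of a 1-second script are enough:
match_rate, _ = esp_throughput(2, 1.0, 5 * 60)    # 2.0 msg/s
```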

Testing Methodology

Once you’ve gathered all your data, done your calculations, and configured your tools, you’re ready to start testing. Remember the three tools weren’t designed to all work together. First use JetStress to performance- and stress-test the storage subsystem. This will help you confirm whether the disk configuration you have designed is working as it should. Once you’ve done that, move on to LoadSim and ESP.

LoadSim and ESP provide more of an end-to-end test of your Exchange system, helping you identify bottlenecks and misconfigurations. Be sure you approach your testing in a logical manner—if you plan on using multiple workstations to deliver the load, first configure one and test it and then scale up. Don’t just configure 10 clients with 1,000 users each and then launch them all at once.

Our final piece of advice is this: measure the counters at the right time. Exchange does a lot of optimization as it runs, and the first few hours of your test run will not be representative of how Exchange will run after several days or weeks. Let it settle, give the servers a few hours to adjust themselves, and then when the system is in a steady state start measuring numbers.

We hope this article has shown you why testing is more than just running LoadSim or JetStress in isolation. But we also hope you are now better equipped to use these tools to simulate your environment. Once you do that, you can really prove that your design works exactly as expected.

Calculating Processor Power

A common problem when designing an Exchange solution is determining the speed and number of processors you need in each server. A useful measure for calculating this is megacycles per mailbox, the amount of processor usage consumed by each mailbox. This calculation isn’t used as an input to any of the load simulation tools, but it helps ensure that the processors in the new servers will satisfy the load requirement.

To calculate megacycles per mailbox:

  1. Select a production server with a typical user load.

  2. Use the System Monitor tool to monitor the Processor\% Processor Time counter over the peak two hours of server activity.

  3. Calculate your current megacycles per mailbox as described in the following formula:

    Megacycles per mailbox = ((average CPU usage) × (speed of processors in megacycles) × (number of processors)) / (number of mailboxes)

For example, if your server has two 1.8GHz processors, it hosts 1,500 users, and the processor time counter averaged 70 percent, the formula would be:

((1800 × 2) × 70%) / 1500 = 1.68 megacycles per mailbox (per second)
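The same formula is trivial to script if you want to run it against several production servers. A minimal sketch (the function name is ours):

```python
def megacycles_per_mailbox(avg_cpu_fraction, mhz_per_processor, processors, mailboxes):
    """Megacycles of processor time consumed per mailbox, per second."""
    return (avg_cpu_fraction * mhz_per_processor * processors) / mailboxes

# The worked example: two 1.8GHz processors, 70% average CPU, 1,500 mailboxes
mcpm = megacycles_per_mailbox(0.70, 1800, 2, 1500)   # -> 1.68
```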

Based on this information, if you are planning to buy a new server with four processors, each running at 3GHz, you can calculate the approximate processor utilization these same users will place on the new server by plugging in the appropriate numbers. And, in turn, you can calculate how many users that server can support, at least in terms of processor power. (As a guide, we always aim to load servers to no more than 80 percent, as this allows headroom for spikes in usage.) So, to calculate CPU usage, use the following formula:

CPU Usage = ((number of users) × (current megacycles per mailbox)) / ((number of processors) × (speed of processors in megacycles))

The 4 x 3GHz server gives a processor utilization of just 21 percent:

(1500 x 1.68) / (4 x 3000) = 0.21 (21%)

And the maximum number of users at the 80 percent target is:

Nmax users = 0.80 × (number of processors) × (speed of processors in megacycles) / (current megacycles per mailbox)

0.80 × (4 × 3000) / 1.68 = 5714 users
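Carrying the 1.68 megacycles-per-mailbox figure forward, both calculations can be scripted the same way (again, the function names are ours):

```python
def cpu_usage(users, mcpm, processors, mhz_per_processor):
    """Predicted utilization (0-1) of a target server for a given user count."""
    return (users * mcpm) / (processors * mhz_per_processor)

def max_users(processors, mhz_per_processor, mcpm, target=0.80):
    """Users supportable while staying at or under the target utilization."""
    return int(target * processors * mhz_per_processor / mcpm)

# The article's 4 x 3GHz example, using the 1.68 figure calculated above
usage = cpu_usage(1500, 1.68, 4, 3000)     # -> 0.21 (21%)
ceiling = max_users(4, 3000, 1.68)         # -> 5714
```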

Wow! So the new server should be able to support over 5,000 users with the same workload! Sounds great, but Exchange is about more than processors, as you know. Exchange performance is holistic by nature and the actual performance will depend on the combined performance of all the components. So don’t go loading that server up just yet. First you’ll need to do some thorough testing.


Greg Taylor is a Senior Consultant for Microsoft in the UK. He has been working with Exchange for seven years.

Simon Shepherd is a Principal Consultant for Microsoft based in the UK.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.