Best Practices Common to Multiple Architectures
Before concentrating on the specific best practices for your storage architecture, you should familiarize yourself with best practices that apply across multiple architectures. Regardless of your company's storage architecture, you must consider the following components: hard disks, memory, cache, spindles, RAID configuration, and buses.
Because optimizing hard disks is more involved than optimizing the other components, that topic is discussed first. In general, the following best practices apply to hard disk optimization:
Capacity planning is an important aspect of storage planning. To optimize Exchange server performance, purchase many fast hard disks (disks with higher access speeds) rather than fewer large ones. For more information about planning storage capacity, see "Planning Storage for Each Server Configuration" in the Windows Server 2003 Deployment Guide.
For transaction log volumes (sequential disk access), use disks with faster rotational speeds. For database drives (random disk access), use disks with faster seek time.
Use disk systems that can detect imminent failures and salvage or relocate the affected data. Most disk drives provide this functionality.
Depending on the hardware RAID configuration you use, plan for I/O penalties. In general, for each write request, hardware RAID generates the following I/O:
RAID-0 = 1 write
RAID-1 or RAID-10 = 2 writes
RAID-5 = 4 I/Os (two reads and two writes per logical write)
Use the following formula to calculate the actual disk I/O after the RAID penalty is applied:
(IOPS × read ratio) + ((IOPS × write ratio) × RAID penalty)
For example, if your users generate 1,500 IOPS in total (as calculated using procedures earlier in this guide), your read/write ratio is 66/33 percent (two out of every three requests are reads and the remaining one is a write), and you are using a RAID-1 or RAID-10 array, your actual hardware IOPS is:
(1,500 × 2/3) + ((1,500 × 1/3) × 2) = 2,000
Applying the same scenario on a RAID-5 array, your actual hardware IOPS is:
(1,500 × 2/3) + ((1,500 × 1/3) × 4) = 3,000
If all of your drives are 10,000-RPM disks capable of roughly 100 IOPS each, you will need at least 30 drives to obtain your required IOPS in a RAID-5 configuration. If you implement RAID-1 or RAID-10 instead, you would need at least 20 drives (a RAID-1 or RAID-10 solution cannot have an odd number of disks).
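These calculations can be sketched in Python. The figure of 100 IOPS per 10,000-RPM spindle is the one implied by the drive counts above; exact fractions are used so that the 2:1 read/write ratio does not introduce rounding error:

```python
import math
from fractions import Fraction

def raid_adjusted_iops(host_iops, read_fraction, write_penalty):
    """Back-end disk IOPS generated by host_iops at a given RAID write penalty."""
    write_fraction = 1 - read_fraction
    return host_iops * read_fraction + host_iops * write_fraction * write_penalty

def drives_needed(disk_iops, iops_per_spindle=100, mirrored=False):
    """Spindle count, rounded up; mirrored arrays need an even number of disks."""
    n = math.ceil(disk_iops / iops_per_spindle)
    return n + 1 if mirrored and n % 2 else n

read = Fraction(2, 3)  # two of every three requests are reads
raid10 = raid_adjusted_iops(1500, read, write_penalty=2)
raid5 = raid_adjusted_iops(1500, read, write_penalty=4)
print(raid10, raid5)                                               # 2000 3000
print(drives_needed(raid10, mirrored=True), drives_needed(raid5))  # 20 30
```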
In most cases, you should use Diskpar to align your disk partitions with the physical track boundaries. Because Windows 2000 and Windows Server 2003 limit the maximum number of hidden sectors to 63, the default starting sector for disks that have more than 63 sectors per track is the 64th sector. All partitions created by Windows 2000 and Windows Server 2003 therefore start at the 64th sector, causing one out of every eight blocks of data written to your disk to span two disk tracks. For detailed steps, see "How to Align Exchange I/O with Storage Track Boundaries" in the Exchange Server 2003 Performance and Scalability Guide. Note that Diskpar can be used only with basic disks; it cannot be used with dynamic disks.
The following sections summarize the best practices for each of the remaining storage components.
Memory
Because Windows pages memory to disk as physical memory becomes limited, make sure that you have a sufficient amount of physical memory available. When memory is scarce, more pages are written to disk, resulting in increased disk activity. For more information about tuning virtual memory, see Microsoft Knowledge Base article 815372, "How to optimize memory usage in Exchange Server 2003."
Also, make sure to set the paging file to an appropriate size. For information about how to set the paging file, see "Evaluating Memory and Cache Usage" in the Microsoft Windows 2000 Resource Kit.
Cache
More cache can help to offset peaks in disk I/O requests. However, note that more cache seldom solves the problem of having too few spindles, whereas having enough spindles can negate the need for a large cache.
If you have battery-backed cache, enable write caching to improve disk write performance on both the transaction log volumes and the database volumes. Write caching provides a response time of 2 ms for a write I/O request, as opposed to 10 to 20 ms without it. Enabling write caching therefore greatly improves the responsiveness of any operation that a client submits.
Read caching does little to improve performance because it is useful only for sequential disk reads, which occur only in transaction log files. Transaction log files are read only when they are being replayed, such as after a database restore or when a server was not shut down properly.
Larger caches allow for more data to be buffered, meaning that longer periods of saturation can be accommodated.
If your controller allows you to configure the cache page size, you should set it to 4 KB. A larger size, such as 8 KB, results in wasted cache because a 4 KB I/O request takes up the entire cache page of 8 KB, thereby cutting your usable cache in half.
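The effect of cache page size can be illustrated with a quick calculation; the 512 MB cache size here is hypothetical:

```python
def usable_cache_pages(cache_mb, page_kb):
    """Number of cache pages available, given the controller's cache page size."""
    return cache_mb * 1024 // page_kb

# Each 4 KB I/O request consumes a whole cache page, so configuring
# 8 KB pages halves the number of requests the same cache can hold.
print(usable_cache_pages(512, 4))  # 131072
print(usable_cache_pages(512, 8))  # 65536
```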
Spindles
Spindles are more important than capacity, and Exchange performance improves when the spindles can support a high number of random I/O requests.
If you are using RAID-1+0, you can calculate the number of spindles by using the following formula and rounding up to the next even number:
1.25 × (Mailboxes × IOPS per mailbox / IOPS per spindle) × (%Read I/O + (%Write I/O × 2)) = Spindles
This formula takes into account planning for no more than 80 percent total utilization and ensuring enough available I/O, even in the case of a spindle failure.
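Under those assumptions (a write penalty of 2 for mirrored writes, and the 1.25 multiplier for the 80 percent utilization target), the spindle calculation can be sketched as follows; the mailbox count, per-mailbox IOPS, and per-spindle IOPS figures are hypothetical:

```python
import math
from fractions import Fraction

def raid10_spindles(mailboxes, iops_per_mailbox, iops_per_spindle,
                    read_fraction, write_penalty=2, headroom=1.25):
    """Spindles for a RAID-1+0 set, sized for ~80% utilization (the 1.25 factor)."""
    write_fraction = 1 - read_fraction
    disk_iops = mailboxes * iops_per_mailbox * (
        read_fraction + write_fraction * write_penalty)
    n = math.ceil(headroom * disk_iops / iops_per_spindle)
    return n + (n % 2)  # mirroring requires an even number of disks

# 2,000 mailboxes at 0.75 IOPS each, 100-IOPS spindles, 2:1 read/write mix:
print(raid10_spindles(2000, 0.75, 100, Fraction(2, 3)))  # 26
```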
RAID
The RAID solution you use should be based on the cost and performance trade-offs that are appropriate for your company. As a result, in many cases, more than one type of RAID solution may be appropriate for a particular data storage requirement.
Buses
Higher bus throughput provides better performance. In general, SCSI buses provide better throughput and better scalability than IDE or ATA buses.
You can use the following equation to determine the theoretical throughput limit for your bus:
(Bus width (in bits) / 8 bits per byte) × Operating speed (in MHz) = Throughput (in MB/s)
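As a quick sketch of this equation, the bus widths and clock speeds below are typical PCI configurations of the era:

```python
def bus_throughput_mb_s(bus_width_bits, clock_mhz):
    """Theoretical throughput: bytes per transfer × million transfers per second."""
    return bus_width_bits / 8 * clock_mhz

# A 64-bit PCI bus at 66 MHz and a 32-bit PCI bus at 33 MHz:
print(bus_throughput_mb_s(64, 66))  # 528.0 MB/s
print(bus_throughput_mb_s(32, 33))  # 132.0 MB/s
```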
You can also improve performance by placing multiple drives on separate I/O buses.