Physical storage recommendations (Office SharePoint Server)

Applies To: Office SharePoint Server 2007

This Office product will reach end of support on October 10, 2017. To stay supported, you will need to upgrade. For more information, see Resources to help you upgrade your Office 2007 servers and clients.


Topic Last Modified: 2016-11-14

The disks and arrays that you choose, and how you place data on them, can significantly affect system performance. If you are unfamiliar with redundant array of independent disks (RAID), familiarize yourself with RAID concepts before you continue.

This topic primarily describes SQL Server 2008, although Microsoft SQL Server 2005 and Microsoft SQL Server 2000 are also supported.

Use appropriate disks and RAID arrays

The following list provides some best practices and recommendations for choosing the best RAID level and hard disks:

  • More and faster disks or arrays improve performance. The key is to maintain low latency and queuing on all disks.

  • For high availability and performance (random read/write), configure your array for RAID 10.

  • Consult your storage hardware vendor or documentation before configuring RAID arrays. Consider whether a database benefits more from faster random read response time (for example, static Web content, where RAID 5 and RAID 10 provide similar performance) or from faster random write response time (for example, a collaboration site with mixed read-write usage, where RAID 10 has the advantage).

  • When you configure a RAID array, it is crucial to align the file system to the offset that is supplied by the vendor. In the absence of vendor guidance, refer to SQL Server Predeployment I/O Best Practices.

For more information about provisioning RAID and the SQL Server I/O subsystem, see the SQL Server Best Practices articles.

Before you deploy a new farm, we recommend that you benchmark the I/O subsystem by using the SQLIO disk subsystem benchmark tool. For details, see SQLIO Disk Subsystem Benchmark Tool.

Proactively manage the growth of data and log files

  • Pre-size the data and log files.

  • Do not rely on autogrowth alone; manage the growth of data and log files manually. You can leave autogrowth enabled as a safety measure, but pre-size files proactively so that autogrowth rarely occurs.

  • Configure autogrowth settings to meet the needs of your deployment.

    • When you are planning content databases that exceed the recommended size (100 GB), set the database autogrowth value to a fixed number of megabytes instead of to a percentage. This is to reduce the frequency with which SQL Server increases the size of a file. Increasing file size is a blocking operation that involves filling the new space with empty pages.


      SQL Server 2008 running on Windows Server 2003 or later supports instant file initialization, which can greatly reduce the performance impact of a file growth operation. For more information, see Database file initialization.

    • When you are planning content databases that will remain smaller than the recommended limit (100 GB), set the maximum database size to 100 GB when the databases are created by using the MAXSIZE option of the ALTER DATABASE statement.

    • If disk space is limited or databases cannot be pre-sized, configure autogrowth as a percentage for smaller databases and as a fixed number of megabytes for larger ones. For example, set autogrowth to 10 percent for databases under 500 GB and to a fixed number of megabytes for databases that exceed 500 GB.

  • Maintain a level of at least 25 percent available space across disks to allow for growth and peak usage patterns. If you are managing growth by adding disks to a RAID array or allocating more storage, monitor disk size closely to avoid running out of space.
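As an illustrative sketch, the pre-sizing, autogrowth, and maximum-size recommendations above can be applied with Transact-SQL. The database name, logical file names, sizes, and growth increments here are hypothetical; substitute values appropriate to your deployment.

```sql
-- Hypothetical content database named WSS_Content with default logical file names.
-- Pre-size the data and log files and set autogrowth to a fixed number of megabytes.
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = N'WSS_Content', SIZE = 50GB, FILEGROWTH = 500MB);

ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = N'WSS_Content_log', SIZE = 10GB, FILEGROWTH = 500MB);

-- Cap the data file at the recommended 100-GB limit.
ALTER DATABASE [WSS_Content]
MODIFY FILE (NAME = N'WSS_Content', MAXSIZE = 100GB);
```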

Limit content database size to improve manageability

Plan for database sizing that will improve manageability and performance of your environment.


The limits we recommend apply only to a server that is running SQL Server 2008 and hosting Microsoft Office SharePoint Server 2007, and are not general guidance for SQL Server 2008.

  • In most circumstances, to enhance the performance of Office SharePoint Server 2007, we discourage the use of content databases larger than 100 GB. If your design requires a database larger than 100 GB, follow this guidance:

    • Do not use a database larger than 100 GB for more than a single site collection.

    • Use a differential backup solution, such as SQL Server 2008 or Microsoft System Center Data Protection Manager 2007, instead of the built-in backup and recovery tools.

    • Test the server that is running SQL Server 2008 and the I/O subsystem before you move to a solution that depends on a 100-GB content database.

  • Limit content databases that contain multiple site collections to approximately 100 GB.
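To see how close databases are to the 100-GB guideline, you can query file sizes from the SQL Server system catalog. This is a generic sketch that reports every data file on the instance; filter it to your content databases as needed.

```sql
-- Report the current size of each database's data files, largest first,
-- so that content databases approaching the 100-GB guideline stand out.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file_name,
       size * 8 / 1024      AS size_mb          -- size is stored in 8-KB pages
FROM   sys.master_files
WHERE  type_desc = 'ROWS'
ORDER  BY size_mb DESC;
```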

Separate and prioritize your data among disks

Ideally, place the tempdb database, content databases, and SQL Server 2008 transaction logs on separate physical hard disks.

The following list provides some best practices and recommendations for prioritizing data:

  • When you prioritize data among faster disks, use the following ranking:

    1. Tempdb data files and transaction logs

    2. Database transaction log files

    3. Search database

    4. Database data files

    In a heavily read-oriented portal site, prioritize data over logs.

  • Testing and customer data show that Office SharePoint Server 2007 farm performance can be significantly impeded by insufficient disk I/O for tempdb. To avoid this issue, allocate dedicated disks for tempdb. If a high workload is projected or monitored — that is, the average read operation or the average write operation requires more than 20 milliseconds — you might have to ease the bottleneck by either separating the files across disks, or by replacing the disks with faster disks.

  • For best performance, place the tempdb on a RAID 10 array. The number of tempdb data files should equal the number of core CPUs, and the tempdb data files should be set at an equal size. Count dual core processors as two CPUs for this purpose. Count each processor that supports hyper-threading as a single CPU. For more information, see Optimizing tempdb Performance.

  • Separate database data and transaction log files across different disks. If files must share disks because the files are too small to warrant a whole disk or stripe, or you have a shortage of disk space, put files that have different usage patterns on the same disk to minimize simultaneous access requests.

  • Consult your storage hardware vendor for information about how to configure all logs and the search databases for write optimization for your particular storage solution.

  • Allocate dedicated spindles for the search database.
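As a hedged sketch of the tempdb guidance above, the following Transact-SQL gives tempdb one data file per core on a hypothetical four-core server, with all files sized equally. The file names, drive letters, and sizes are illustrative; place the additional files on the dedicated tempdb disks in your environment.

```sql
-- Resize the default tempdb data file (logical name 'tempdev') to match the others.
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 2GB, FILEGROWTH = 200MB);

-- Add three more equally sized data files, one per remaining core.
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb2.ndf', SIZE = 2GB, FILEGROWTH = 200MB);

ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3', FILENAME = N'T:\tempdb3.ndf', SIZE = 2GB, FILEGROWTH = 200MB);

ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev4', FILENAME = N'T:\tempdb4.ndf', SIZE = 2GB, FILEGROWTH = 200MB);
```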

Use multiple data files for content databases

For improved performance for large content databases, consider using multiple data files.


  • The use of multiple data files for databases other than content databases is not supported.

  • The use of SQL Server partitioning is not supported for SharePoint Products and Technologies databases. Use only simple data files.

Follow these recommendations for best performance:

  • Create files only in the primary filegroup for the database.

  • Distribute the files across separate disks.

  • The number of data files should be less than or equal to the number of core CPUs. Count dual core processors as two CPUs for this purpose. Count each processor that supports hyper-threading as a single CPU.

  • Create data files of equal size.
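The recommendations above can be sketched in Transact-SQL. This hypothetical example creates a content database with four equally sized data files, all in the primary filegroup, with each file on a separate disk; the database name, drive letters, and sizes are illustrative.

```sql
-- Four equally sized data files in the primary filegroup, one per core,
-- each on its own disk; the log file is on a separate disk as well.
CREATE DATABASE [WSS_Content_Large]
ON PRIMARY
    (NAME = N'WSS_Content_Large_1', FILENAME = N'E:\Data\WSS_Content_Large_1.mdf', SIZE = 25GB),
    (NAME = N'WSS_Content_Large_2', FILENAME = N'F:\Data\WSS_Content_Large_2.ndf', SIZE = 25GB),
    (NAME = N'WSS_Content_Large_3', FILENAME = N'G:\Data\WSS_Content_Large_3.ndf', SIZE = 25GB),
    (NAME = N'WSS_Content_Large_4', FILENAME = N'H:\Data\WSS_Content_Large_4.ndf', SIZE = 25GB)
LOG ON
    (NAME = N'WSS_Content_Large_log', FILENAME = N'L:\Logs\WSS_Content_Large_log.ldf', SIZE = 10GB);
```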


Although the backup and recovery tools that are built into SharePoint Products and Technologies can back up and recover multiple data files when you restore to the same location, they cannot restore multiple data files to a different location. For this reason, we strongly recommend that you use SQL Server backup and recovery tools when you use multiple data files for a content database. For more information about backing up and recovering Office SharePoint Server 2007, see Choose backup and recovery tools (Office SharePoint Server).

For more information about creating and managing filegroups, see Physical Database Files and Filegroups.

Use multiple filegroups for the SSP search database

For the search database, we recommend that filegroups be used to segregate the tables that are used primarily for crawl and query processing. The filegroup that hosts the tables that are most affected by crawling should be moved to a different set of spindles from the primary filegroup to provide the most reduction in impact on the I/O subsystem.

A specific set of tables in the search database is primarily related to crawling.
Transact-SQL scripts are available for moving these tables to a filegroup. These scripts are the only supported mechanism for moving the tables that are related to crawling. Splitting the filegroup into multiple files is not supported. The scripts are included in the blog post SQL File groups and Search, which is posted on the Microsoft Enterprise Search blog.

Follow these recommendations for best performance for search databases:

  • Move the tables out of the primary filegroup for the database.

  • Distribute the files across separate disks.
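Creating the additional filegroup itself is ordinary Transact-SQL; the database name, filegroup name, drive letter, and size below are hypothetical. Moving the crawl-related tables into the filegroup must still be done with the scripts from the Enterprise Search blog, as described earlier.

```sql
-- Add a filegroup for the crawl-related tables and place its data file
-- on a separate set of spindles from the primary filegroup.
ALTER DATABASE [SSP_Search]
ADD FILEGROUP [CrawlFG];

ALTER DATABASE [SSP_Search]
ADD FILE (NAME = N'SSP_Search_Crawl', FILENAME = N'S:\Data\SSP_Search_Crawl.ndf', SIZE = 10GB)
TO FILEGROUP [CrawlFG];
```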


The process of moving tables to a new filegroup is very expensive and can take hours to complete because it involves dropping and re-creating many clustered indexes. Assume that your database will be offline during the move.

Known issues

The following sections describe known issues with using filegroups for the search database.

Backup and restore

Backup and restore in SharePoint Products and Technologies are not filegroup-aware. There is no way to indicate where the new filegroup should be restored to. The restore process tries to put the crawl filegroup on the same drive that it existed on when you made the backup. To restore, you must have drives for the crawl and primary filegroups with the same drive letters as the drives that were used when the backup was made.

Future updates, service packs, and hotfixes

Each update, service pack, and hotfix that you apply to the server has the potential to modify the index that was moved to the crawl filegroup, or to add a new index to one of the tables that was moved to the filegroup. If one of these events happens, the index will be moved back to the primary filegroup or will be re-created in the primary filegroup. 

Because of the risk of index modification, after you apply any update you should repeat the process of moving the tables to the filegroup by running the scripts that are provided on the Enterprise Search blog.

Requires at least SQL Server 2005; SQL Server 2008 preferred

The product team script that is used to move the indexes uses features that were released in SQL Server 2005 and are continued in SQL Server 2008. This optimization can be performed only if you are running SQL Server 2005 or later.

Follow vendor configuration recommendations

For optimal performance when you configure a physical storage array, adhere to the hardware configuration recommendations supplied by the storage vendor instead of relying on the operating system’s default values.

If you do not have guidance from your vendor, we recommend that you use the DiskPart.exe disk configuration utility to configure storage for SQL Server 2008. For more information, see Predeployment I/O Best Practices.

Download this book

This topic is included in a downloadable book for easier reading and printing.

See the full list of available books at Downloadable content for Office SharePoint Server 2007.