Performance of Internet Information Services 5.0: The Internet Service Provider Scenario


Internet Information Services (IIS) 5.0, included with Windows 2000, offers significant improvements over the already high-powered IIS 4.0 Web server. With IIS 4.0, Internet Service Providers (ISPs) raised the question: "How many Web sites can I run efficiently on one server?" After talking with our ISP customers, we found the answer to be around 250 Web sites for a single IIS 4.0 server; this is a problem for ISPs who need to host thousands of sites on a single server. The IIS team addressed this problem in IIS 5.0 and greatly improved the performance of IIS in the hosting scenario. Now, thousands of sites can run on one server. With IIS 5.0, the answer to the IIS 4.0 question lies in a new question: "What load is this server expected to handle?" The number of sites an IIS 5.0 server can host is directly proportional to the expected load.

Expected load is the expected number of requests per second to an individual server, and it is affected by the content type: the proportion of dynamic pages to static pages. Over the last few years, published Web content has become increasingly dynamic, which introduces new performance considerations. With this trend in mind, we created a test to analyze the performance of dynamic and static pages served by an IIS 5.0 server in an ISP setting, with the dynamic content built from ASP scripts and Perl scripts.

The Test

We designed this test to analyze the performance of IIS 5.0 in an ISP setting. Web sites containing dynamic and static content were created to determine the number of requests per second a server could handle. As the number of sites increased, the performance impact was measured.

The following hardware was used as the Web server:

  • DELL PowerEdge 6350 Enterprise Server (4 Xeon III 550 MHz processors, 4 Gigs of RAM, 100 megabit Ethernet NIC)

  • DELL 850F external disk array connected via Fibre Channel

We followed the performance suggestions found in the "Windows 2000 Performance Tuning and Benchmarking" white paper. The disk arrays were not connected via any hardware RAID scheme; instead, each disk was assigned a drive letter and the content for the Web sites was evenly distributed across the disks. We used the Windows 2000 boot options "NumProcs" and "MaxMem" to show the impact of RAM and the number of CPUs when dealing with thousands of sites.
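These boot options are set in the boot.ini file. As a sketch only: the switch names below (/NUMPROC and /MAXMEM) are the standard Windows 2000 boot.ini forms of these options, and the sample values are illustrative, not the exact entries used in any particular test run.

```ini
[operating systems]
; Limit this boot to 2 processors and 512 MB of RAM (illustrative values)
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Server (2 CPU, 512 MB)" /NUMPROC=2 /MAXMEM=512
```

Varying these switches between boots lets the same physical server stand in for several smaller hardware configurations.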

With the large number of files we created to mimic the ISP setting, the IIS 5.0 Static File Cache became a possible performance hindrance: consuming resources while receiving very few requests. Hence, we created two test scenarios:

  • IIS Static File Cache ON

  • IIS Static File Cache OFF

Because requests were distributed evenly across all sites, we used the "DisableMemCache" global registry setting to disable the IIS Static File Cache. (The "NoCache" metabase property could also have been used.)
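As a sketch, a global registry setting like this can be applied by importing a .reg file. The value name below comes from the text above; the key path shown is an assumption for illustration only (consult the IIS 5.0 documentation for the authoritative location).

```
Windows Registry Editor Version 5.00

; Disable the IIS Static File Cache globally.
; Value name from the text; key path assumed for illustration.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters]
"DisableMemCache"=dword:00000001
```

Setting the value back to 0 (or deleting it) restores the default caching behavior.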

We divided the test into three stages. First, we created 5,000 sites, each containing 1,010 files: 910 static files and 100 dynamic files. Each of the sites contained the following breakdown of static content:

[Table: static files per site, by File Size (in Bytes) and Number of Files; the individual rows were not preserved.]
This produces a total of 20,000,000 bytes for each site. Dynamic files were divided into two scripting methods and distributed as follows:

  • 50 VBScript-based ASP pages

  • 50 Perl scripts using ActiveState Perl for ISAPI

For each language, five of the dynamic pages were Guest Book applications, and the other forty-five simply parsed the query string and output static text. Each dynamic page outputs approximately one kilobyte of content. All scripts were hosted in the new "Pooled" application model, since we expect Web service providers to adopt this model for the higher Web server reliability it provides.

Second, we applied load to 250 sites, then incremented the number of loaded sites by 250 until we reached a total of 5,000. Load was applied using the WebCAT stress tool, configured with 180 concurrent connections to the Web server and no delay between connections. The requests were distributed as follows:

  • 70% of all requests went to static content

  • 30% of all requests went to dynamic content (ASP or Perl).

Also, within each site, 90% of all requests went to 10% of the files, such that 91 of the static files and five of the dynamic files within each Web site received 90% of the requests.
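The request distribution above can be sketched in a few lines. This helper is illustrative only (it is not part of the actual WebCAT configuration); the constants come from the numbers in the text.

```python
# Sketch of the request mix used in the test (numbers from the text).
STATIC_FILES = 910      # static files per site
DYNAMIC_FILES = 100     # dynamic files per site (50 ASP + 50 Perl)

def request_mix(total_requests):
    """Split a request count by content type and compute the 'hot' file set."""
    static = round(total_requests * 0.70)    # 70% of requests go to static content
    dynamic = total_requests - static        # 30% go to dynamic content (ASP or Perl)
    hot = round(total_requests * 0.90)       # 90% of requests hit 10% of the files
    hot_static = round(STATIC_FILES * 0.10)  # 91 "hot" static files per site
    return static, dynamic, hot, hot_static

static, dynamic, hot, hot_static = request_mix(1000)
print(static, dynamic, hot, hot_static)  # 700 300 900 91
```

The skewed 90/10 split matters because it determines which files a static file cache could usefully hold.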

The Results

Customers have raised concerns about the scalability of the IIS 4.0 implementation of the ASP programming model. Though dynamic content always has a larger impact on the CPU than static content, the following charts show that performance remains good even with tens of thousands of ASP scripts being served along with static data. Since these graphs are based on requests per second, it is important to note that every 100 requests contain approximately 1.2 megabytes of data. The lines of the graphs represent the different processor and RAM configurations used for this test.

The first two graphs show the results for the ASP-based dynamic content. In the first of these, we see that as the number of sites increases, adding CPUs has less impact on performance. Though performance degrades as the number of sites increases, IIS 5.0 can still deliver five megabytes of information per second.


In the next graph, we again see a similar trend. With a small number of sites, adding more CPUs gives a significant increase in performance; however, the increase is smaller when a large number of sites is under load.


Contrasting the two graphs above shows that an improperly tuned static file cache can be a hindrance to performance. With a small number of sites, the number of CPUs determines performance. As the number of sites increases, performance becomes closely tied to both the number of CPUs and the amount of memory available; performance suffers when CPU power is not balanced by a matching amount of memory. Though it is not shown here, a properly tuned cache (caching only sites that are hit often) is expected to increase performance.

The next two graphs show the performance results using the Perl-based dynamic content. Once again, the number of CPUs is a key factor in performance, so only the 4-gigabyte RAM configurations are shown. It should be noted that in both of these graphs we see data points close to 900 requests per second (approximately 10 MB of data per second); at these times the load approaches the capacity of the network.

The first Perl graph clearly shows that, for a small number of sites, a small increase in the number of CPUs dramatically affects overall performance. We also see how matching the CPU power with a sufficient amount of memory further improves performance.


In the following graph, we again see the 4 CPU-4 Gig configuration performing considerably better than the other configurations.


The Perl graphs show that turning on the static file cache has a positive performance impact when enough CPU power is available and a small number of sites is being hosted. However, as the number of sites grows, the cache cannot grow large enough to sustain this very high performance.


The results show that with IIS 5.0, one Web server can host considerably more sites than the IIS 4.0 perceived limit of 250. Though each graph shows a performance decrease as the number of sites increases, most of the hardware configurations can host 5,000 Web sites while serving more than 17 million requests per day (with the 4-CPU/4-Gig configuration handling from 30.8 million to 46.2 million requests per day). We see that the number of sites created on a single server is not the limiting factor for performance; instead, the limiting factor is the number of sites receiving load. When hosting a small number of sites, a clear relationship exists between site performance and the number of CPUs. When hosting a large number of sites, this relationship weakens and the amount of RAM becomes a factor as well. Also, without proper tuning, the IIS 5.0 static file cache can hinder performance; proper tuning (even if that tuning means disabling the cache) improves the performance of the Web server.
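The per-second and per-day figures quoted above relate by simple arithmetic. The helpers below are an illustrative back-of-the-envelope sketch, not part of the test tooling; the 1.2 MB per 100 requests figure comes from the results section above.

```python
# Convert between the per-second and per-day throughput figures in the results.
SECONDS_PER_DAY = 24 * 60 * 60      # 86,400
BYTES_PER_100_REQUESTS = 1.2e6      # ~1.2 MB of data per 100 requests

def daily_requests(requests_per_second):
    """Requests per day at a sustained request rate."""
    return requests_per_second * SECONDS_PER_DAY

def throughput_mb_per_sec(requests_per_second):
    """Approximate data throughput in megabytes per second."""
    return requests_per_second * BYTES_PER_100_REQUESTS / 100 / 1e6

print(daily_requests(200))                   # 17280000 -> "more than 17 million per day"
print(round(throughput_mb_per_sec(900), 1))  # 10.8 -> "approx. 10 MB of data per second"
```

So roughly 200 requests per second sustains the 17-million-per-day figure, and the 900-requests-per-second data points correspond to the near-network-capacity throughput noted for the Perl graphs.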

Comparing the four graphs shows that the performance of Perl for ISAPI is on par with that of ASP. Perl has generally been viewed as a poor fit for IIS because of its poor performance under the IIS 4.0 CGI process model. With the introduction of IIS 5.0 and ActiveState Perl for ISAPI, customers can easily port existing Perl scripts from other platforms and run them on the IIS/Windows 2000 platform. Using ActiveState Perl for ISAPI, IIS 5.0 users will experience the same high performance they expect from their ASP applications.

Additional Resources

IIS Tuning Guide:

IIS 5.0 Technical Overview:

Windows 2000 Web and Application Services:

The ActiveState Web site:

The DELL Web site: