Web Hosting with IIS 5.0


This module covers the Web hosting features of Microsoft® Internet Information Services 5.0 (IIS), the Web server built into the Windows® 2000 Server operating system.


Because IIS is an established, critical technology, Microsoft made some very deliberate design decisions in version 5.0:

  • IIS needed to be integrated fully into Windows 2000.

  • It had to be more reliable. One of the biggest concerns customers expressed to Microsoft was that numerous events could take an IIS 4.0 server down.

  • Microsoft wanted to polish the product—that is, make it meet the industry standards (such as HTTP 1.1) and enhance its usage and functionality.

  • It's very important to understand that with version 5.0, new features were not the focus; the primary goal was stability. Any new features that made it into the product were bonuses, not the stated goal of the IIS 5.0 project.


Unlike previous versions of IIS, version 5.0 is fully integrated with the operating system. When you installed the Windows NT® 4.0 operating system, you could choose whether or not to install IIS 2.0. The product worked closely with the operating system, but it was not part of it. IIS 4.0 introduced the concept of the Metabase, which meant that a number of the IIS settings were taken out of the registry and saved in the Metabase.

One of the things that IIS 5.0 does is "come back to the fold." That is, IIS 5.0 is reintegrated with the operating system and therefore takes advantage of the features of the operating system.

For example, IIS 5.0 supports Kerberos and the Active Directory™ service. One way that you can use this integrated support for Kerberos is to have the IIS server impersonate the client and use the client's credentials to make calls to a back-end Microsoft SQL Server™ database.


Because Microsoft received feedback that reliability was a problem, the goal of IIS 5.0 development was 365-day-a-year, 24-hour-a-day uptime, as well as reliability of 99.999 percent.

A lot of things could hang an IIS server, and a lot of tasks required you to shut it down and restart it. The development team's goal was to eliminate server hangs and server shutdowns.

Another focus was protecting the server and applications from crashes. With Internet Information Server 4.0, the system could be running fine, but a poorly written Active Server Pages (ASP) file could take the whole system out. The development team wanted to try to isolate those runaway applications so that they didn't affect the entire Web server.


Internet Information Services 5.0 also supports some protocol updates. HTTP 1.1 includes support for compression and WebDAV (Web distributed authoring and versioning), features that IIS 4.0 did not take advantage of. Therefore, a development goal of IIS 5.0 was to bring it into full compliance with HTTP 1.1, including compression and WebDAV support. IIS 5.0 also supports FTP restart, an important feature for FTP sessions. (These technologies are covered in more depth later in this module.)


Scalability was another goal; specifically, Microsoft wanted to see IIS 5.0 support more sites per box.

The ultimate aim was that customers could run Windows 2000 for all their solutions—you shouldn't have to run Windows 2000 for most services but bring in another product to run your Web services. To this end, IIS 5.0 was developed to run hundreds of applications on a single box. And it has better load balancing and Web farm support.

Customers also wanted to move applications that could potentially misbehave away from critical IIS 5.0 files. Version 5.0 addresses this issue with a concept called Out-of-Process Pooled Applications. This enables you to put ASP applications into a separate pool. If one of the applications goes awry, it may take out the others, but it won't take out your Web server.


The next important feature of IIS 5.0 is its setup process. There are basically three types of IIS 5.0 installs:

  • The default is the clean installation, which creates the basic IIS 5.0 directories. This is the process to follow if you have just set up a Windows 2000–based server and you want to add IIS 5.0 services to it. The clean installation essentially sets all the defaults.

  • The next option is an upgrade over IIS 3.0 and earlier. This option is more of a replacement than an upgrade, because IIS 3.0 is very different from later versions. For example, there was no Metabase in version 3.0 and earlier.

  • Finally, you have the option of upgrading over IIS 4.0. Not only is this the cleanest and easiest upgrade option, but also it was highly tested during development.

Microsoft engineers did test the upgrade from versions 3.0 and 2.0, but not nearly as extensively as they tested the upgrade from IIS 4.0.


The first thing a fresh install does is create the directories: INETSRV, INETPUB, IISHELP, and IISADMIN. It then copies the files, primarily the DLL files that work with IIS. Next, it creates the Metabase.bin file and populates it with the defaults.

If you are familiar with IIS 4.0, you know that the Metabase is essentially the repository for all of the IIS settings. During installation, those settings are taken out of the registry and placed into the Metabase. Setup then creates two accounts:

  • The IUSR_computername account, the anonymous access user account

  • The IWAM_computername account, the account that allows you to run ASPs, CGIs, and so on


In an upgrade over version 3.0, Internet Information Services 5.0 purges all of the IIS 3.0 settings from the registry. It then creates the IIS 5.0 directories, copies the IIS 5.0 files, and populates the Metabase.bin file with defaults. Then, referring back to the registry entries that were purged, it makes any necessary changes to update the settings to version 5.0 and creates the IWAM_computername account. (The IUSR_computername account already exists, so it is not necessary to create it.)

In short, in an upgrade to IIS 5.0 from IIS 3.0 and earlier, your data is retained; it's just brought up-to-date with the software.


When upgrading from IIS 4.0, Internet Information Services 5.0 creates a couple of new directories and updates the version 4.0 files. The two new directories are:

  • IISADMIN, which holds Active Server Pages that are used with the HTML administrator. The HTML administrator provides a Web interface that allows you to administer the Web server from another machine.

  • IISHELP contains the IIS online documentation. This is really just a minor structural change.

Some directories are the same as they were in IIS 4.0:

  • INETPUB contains your WWW root, your FTP root, and so on.

  • INETSRV holds the system files, subdirectories, and the backup of the Metabase.

IIS 5.0 also uses the Metabase.bin file that version 4.0 created. IIS 4.0 did not have quite as much coverage in the Metabase as version 5.0 does, so the IIS 5.0 installation strips practically everything relating to IIS out of the registry and puts it all into the Metabase. No accounts need to be created, because the IUSR_computername and IWAM_computername accounts already existed in IIS 4.0.


What effect does IIS 5.0 have on the registry? First, version 5.0 is the final step in the migration to the IIS Metabase. IIS 5.0 relies very little on the registry, if at all. Only data for legacy applications is kept there.

However, there are three keys that IIS may still rely on that are left in the registry:

  • HKLM\SOFTWARE\Microsoft\InetMgr: If you have certificate services set up on this machine, this key holds the local server certificate information.

  • HKLM\SYSTEM\CurrentControlSet\Services\InetInfo, which holds some legacy information for Performance Monitor.

  • HKLM\SYSTEM\CurrentControlSet\Services\IISAdmin, which holds data required for backward compatibility with IIS 4.0. This allows you to administer an IIS 4.0 server using Microsoft Management Console on your IIS 5.0 server.


Internet Information Services 5.0 makes no changes to groups; it doesn't add any new groups, nor does it change memberships.

As was mentioned earlier, it does add two new user accounts. IUSR_computername handles anonymous connectivity, and IWAM_computername is used to launch ASP and CGI applications.


IIS 5.0 gives you a couple of options for administrative tools:

  • The Internet Service Manager MMC console. If you've worked with IIS 4.0, you will have seen the Microsoft Management Console (MMC) component of Windows. MMC is the default console that Windows 2000 Server uses for much of its system administration. One of its big advantages is that it gives you a common user interface to handle a lot of different tasks. It's important to keep in mind that the Internet Service Manager MMC is only usable on a local LAN or WAN; it cannot cross proxies or firewalls.

  • The HTML Administrator. If you have to go across a proxy or firewall, you can use the HTML administrator, which has been greatly improved in IIS 5.0. In fact, it can now do about 98 percent of what MMC can do, plus it works across proxies and firewalls.


If you are upgrading, you will want to be aware of a couple of issues.

First, remember that upgrading to IIS 5.0 from version 3.0 and earlier is essentially a replacement. The registry settings are not migrated because they really don't apply anymore. However, the data directories are left intact, and existing permissions are kept.

Also, custom code can break. For example, ASP code that was poorly written or that takes "shortcuts" can cause IIS 5.0 to break. Also, some of your old directory access code and old ActiveX® Data Objects (ADO) code may break.


Upgrades from IIS 4.0 are the most common upgrades. Extensive testing by Microsoft indicates that they are also the most stable. There is only one known issue with upgrading from version 4.0, and that is caused by the old ADO code mentioned on the previous page.


The next part of the module discusses features that are new in Internet Information Services 5.0. These new features fall into three categories:

  • IIS features that were developed for the specific purpose of addressing IIS 4.0 issues.

  • Updates that enable Internet Information Services to take advantage of new security functionality and make IIS security easier to use.

  • Updates for technology—for example, extensions to HTTP 1.1 are now fully utilized in IIS 5.0.

Note: The following pages mark features as "New" or "Improved" to help clarify which IIS 5.0 features are brand-new rather than just enhancements to established features.


The features that target specific IIS 4.0 issues include both new and updated features.

New Features

  • The clean IIS restart makes it possible to restart IIS while keeping the system up; it's not necessary to shut the whole system down.

  • Process accounting and process throttling allow you to monitor how much CPU bandwidth different Web sites use. You can then throttle the CPU resources allocated to a particular Web site to prevent it from taking over the entire server.

  • Pooled Out-of-Process Applications give you added stability.

Updated Features

  • Backup and restore of the Metabase has been improved. This is very important, because if you lose your Metabase, you've effectively lost all your setup.

  • Better custom error messages mean that you can return more informative messages to users when things go wrong.

  • The remote administration tool—the HTML Administrator—has been improved dramatically.


IIS 4.0 didn't stop and restart well. One of the big problems with IIS 4.0 is that it had a large number of dependent services. If you wanted to stop your IIS server, you had to stop all the various services that depended on it, in addition to any ASPs and DLLs that might be executing.

This worked fine if everything was registered properly. But that rarely happens in the real world, so what would happen is that upon shutdown, services would fail to stop. Then you couldn't bring IIS back up, because it was waiting for other services to stop. The only way out of this situation was to reboot the machine, which wasn't an elegant solution.


IIS 5.0 works with the Service Control Manager to track the job objects that make up your IIS 5.0 environment. In this way, IIS detects the status of dependent processes. Therefore, when you shut down the IIS server, IIS is able to track down and kill services that are hanging.

This also relates to better resource recovery. Another big problem of IIS 4.0 was that if an application stopped, the resources that application was using were still held. Even if you got your IIS server back up again, you'd already lost those resources until the next time you rebooted. With IIS 5.0's ability to kill a job and a task, you will actually get those resources back.


This screen shot shows the Stop/Start/Reboot dialog box in the Internet Information Services Manager. From this screen, you can carry out a variety of functions, such as stopping, starting, and rebooting the server or restarting IIS.

Note: Keep in mind that you want to restart IIS from the Internet Service Manager MMC, not from the Services MMC, which is normally the way that most other Windows services are restarted. The Internet Service Manager is aware of dependent services, but the Services MMC is not.


As was mentioned earlier, application restart is more reliable because IIS 5.0 has the ability to kill and restart a hung application, without requiring reboots.

Another new feature is that the user interface works locally or remotely. The restart functionality now works through the Web interface and through remote administration.

Finally, IIS 5.0 has full command-line support through a program called IISReset.exe. This command-line version of the clean IIS restart allows you to run batches and other scripting.


Process accounting tracks CPU usage per Web site. You must be using the W3C Extended log file for this to work.

Process accounting helps you determine the optimal hardware needs of a site as well as find rogue or malfunctioning scripts and processes. For example, if you've got a Web site that's utilizing 60 percent of your processor resources, now you can track down the problem and fix it. It is important to note that CPU usage is not tracked by application, but by Web site. This is so that you can monitor each of your customers' sites separately.

Finally, process accounting can help you determine whether processor throttling is in order.


In order for process accounting to work, you've got to use the W3C log format; if you're using any of the other log formats, you're not going to get the information you need.

Process accounting and throttling allow you to limit the amount of processor resources a Web site gets in a day, which can be very useful if you have sites that tend to overrun your servers. Keep in mind that because this feature is applied at the Web site level, not at the application level, you can't limit a particular ASP. You've got to throttle the whole site or nothing at all.

And process throttling will actively monitor and kill processes if necessary. First it will attempt to lower the priority of the process that's burning resources. Then it will halt the service and allow it to be restarted later. Finally, if the service is still causing trouble, process throttling will kill the service and refuse a restart.

Obviously this feature has to be exercised with care—it's probably not a good idea to routinely kill the processes of a really big customer, for example.


You control CPU usage by setting a percentage for a 24-hour period. For example, if you specify that a certain process is allowed 10 percent of the CPU, that Web site will get roughly 2.4 hours of CPU usage a day and no more.
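The arithmetic behind that allotment is straightforward. A quick sketch (the percentage and the 24-hour window come from the example above):

```python
def daily_cpu_allotment(percent_limit, window_hours=24.0):
    """Convert a CPU-usage percentage into hours of CPU time per window."""
    return window_hours * (percent_limit / 100.0)

# 10 percent of a 24-hour day is roughly 2.4 hours of CPU time.
print(round(daily_cpu_allotment(10), 1))   # 2.4
```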


This screen shows the user interface for logging process usage. When you right-click Properties for a Web site and choose Extended Properties, you get this dialog box. From here you can enable process accounting and choose which events you want to monitor.


The Extended Logging Properties dialog box offers you a number of options. You can choose to monitor:

  • Process events, to find out the type of event that took place. Was it a warning that the site was stopped, started, or paused?

  • Process types, to find out which type of application was involved in the event. Was it a CGI? An ASP?

  • EventLog Limit, which is an event that is logged if the processor reaches its daily maximum.

  • Faults, which includes page restarts, crashes, and other activities.


IIS can be set to kill a rogue application. It was mentioned earlier, however, that this capability is best kept for occasional use, such as when you have a problem to troubleshoot.

How does IIS enforce limits? When a particular Web site hits 100 percent of the CPU utilization that you've set for it, an event is recorded to the Event Log. At 150 percent of CPU utilization, IIS records another event and lowers the thread priority. At 200 percent, IIS records an entry, kills the process, and refuses a restart.
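The escalation just described can be modeled as a small decision table. This is an illustrative sketch of the behavior above, not the actual IIS implementation:

```python
def enforcement_actions(used_fraction):
    """Map CPU use, expressed as a fraction of the configured limit,
    to the escalating actions described above: log at 100 percent,
    lower thread priority at 150 percent, kill and refuse restart at 200."""
    actions = []
    if used_fraction >= 1.0:
        actions.append("log event")
    if used_fraction >= 1.5:
        actions.append("lower thread priority")
    if used_fraction >= 2.0:
        actions.append("kill process and refuse restart")
    return actions

print(enforcement_actions(1.6))  # ['log event', 'lower thread priority']
```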

Of course, this method is not true CPU throttling; it's more of an after-the-fact approach. But another IIS capability, discussed on the next page, addresses this limitation.

Note: For more information on how true CPU throttling is implemented in Windows 2000 Server, see the "CPU Throttling" module.


The solution to the limitation discussed on the previous page is an IIS 5.0 capability called early enforcement of limits.

In this approach, Internet Information Services takes a sample of the CPU every 10 seconds and extrapolates from current and past usage. For example, IIS may determine that usage at a certain point in time indicates the process is likely to exceed its allotted 2.4 hours in a 24-hour period. IIS 5.0 then may take several steps, depending on the severity of the situation. First, IIS can lower the process's priority; then it may stop or even kill the process.

In this way, early enforcement of limits is closer to actual CPU throttling because it is ongoing. It offers a nice safety net for rogue applications that start chewing up CPU time.
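The extrapolation itself can be sketched as a simple linear projection; this is an illustrative model of the idea, not the sampling logic IIS actually uses:

```python
def projected_daily_cpu_hours(elapsed_s, cpu_s_used):
    """Linearly extrapolate CPU-hours per 24-hour day from a sample:
    (CPU seconds used / wall-clock seconds elapsed) * 24 hours."""
    return (cpu_s_used / elapsed_s) * 24

# After one hour of wall time, a process has burned 9 minutes of CPU.
# Projection: 0.15 * 24 = 3.6 CPU-hours, well over a 2.4-hour allotment,
# so enforcement would begin by lowering the process's priority.
projection = projected_daily_cpu_hours(3600, 540)
print(projection > 2.4)  # True
```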


To enable CPU throttling, right-click the Web site you want to configure, choose Properties, and then Performance. This dialog box will let you enable process throttling by percentage of CPU use. Choosing this option alone will set up IIS to make log entries as thresholds are hit, but it will not enforce anything or shut off any processes.

Clicking the Enforce limits button activates enforcement; this is the option that actually takes steps to prevent processes from going awry.


To understand some of the new ASP features in IIS 5.0, it helps to review the history of Active Server Pages.

IIS 3.0 introduced Active Server Pages. All ASP pages ran in the InetInfo.exe process. They were very fast, but unstable; in fact, an ASP application that went bad could bring down IIS.

To alleviate this problem, IIS 4.0 let an ASP run in the InetInfo process if you wanted it to, or run in its own memory space and its own process. But if you chose the latter option, every ASP you started got its own set of resources, which could very quickly exhaust the available server resources. In short, your options were to run ASPs in InetInfo and hope that they wouldn't take down your IIS server, or to run them all in separate memory spaces, which was stable but required you to buy a really, really big machine.

IIS 5.0 aims to deliver the best of both worlds with a concept called Pooled Out-of-Process Applications. Applications run outside of the InetInfo process. But instead of each process running in a separate memory space, all ASPs run in a pooled memory space. In this way, one misbehaving ASP might take out the other ASPs, but it won't take down your Web server.


To set up pooled out-of-process applications:

  1. Open the Default Web Site Properties dialog box.

  2. Choose the Home Directory tab.

  3. In the Application Protection list box, choose the level of pooling you want.


Backing up your Metabase is crucial to protecting your site from malfunctioning Web applications and rogue services. Therefore, IIS 5.0 backup and restore is faster and offers you a few more options.

Keep in mind that this is still a simple Metabase backup. It's only backing up IIS configuration settings, not data. And it can't be used to mirror a site. You can't, for example, configure a server, set up IIS, do a backup, and then restore the backup on another server.

The primary use for IIS 5.0 backup and restore is recovery: if your Metabase.bin file goes bad, you can recover it without reinitializing.


The Configuring Backup/Restore dialog box is found in the IIS MMC.


One of the annoying things about the Web has been that when something goes wrong, the server issues a generic error code that doesn't tell you much, just that something was broken. These generic errors can be very frustrating for customers.

IIS 5.0 allows you to customize your error codes so that you can provide useful information and even common troubleshooting suggestions to users when one of their applications goes bad. These custom error updates take advantage of Microsoft Internet Explorer 5.0 features, so they work best if your customers are using Internet Explorer 5.

Now ASP also has custom error messages. What's more, ASP now includes a 500-100 error-handling page to which it can route errors. That page may be able to diagnose and resolve the problem without the user's awareness or intervention.


Remote administration has been enhanced in IIS 5.0. The HTML Administrator interface almost completely mirrors the MMC interface, and it delivers 98 percent of MMC's functionality. Plus, it is platform and browser independent. These improvements mean, for example, that you can now administer 98 percent of the functionality of your IIS Web server from your UNIX server through the Web browser interface.


This graphic shows what the new HTML Administrator interface looks like.

Note: HTML Administrator is set to the local host by default. You can allow administration from other machines, but you have to enable that when you set it up. In this way, you can tightly control which machines are used to administer your sites.


Internet Information Services 5.0 introduces a number of new security features:

  • Wizards for certificates and certificate trust lists make it a lot easier to administer certificates.

  • The Permissions wizard simplifies the process of setting up security settings.

  • Support for server-gated cryptography allows a server to determine what level of cryptography it's going to use, such as 56-bit, 128-bit, and so on.

  • Digest authentication is a relatively new form of authentication in which the two parties prove to each other that they know a secret without actually transmitting the secret across the wire. This is, of course, one of the fundamental underpinnings of Kerberos. And as previously mentioned, IIS 5.0 supports Kerberos integration with Windows 2000 Server.

  • IIS 5.0 supports Fortezza, a security protocol suite developed for the U.S. government.

  • Certificate storage has been improved in IIS 5.0.

Note: For more information on Windows 2000 Server security features, see the "Security" module.


The first updated technology standard that IIS 5.0 supports is HTTP compression, the process by which Web content is compressed before it is put out on the wire. This technology makes better use of your network bandwidth, but it does draw on CPU bandwidth to do the compression. Many people consider this an acceptable trade-off, because the price of CPU bandwidth is dropping while the cost of network bandwidth remains high.

HTTP compression works with static content as well as dynamic content.

Because HTTP compression requires both server and client support, you need to be sure you are using a compatible client if you want to take advantage of it. Both IIS 5.0 and Internet Explorer 5 support HTTP compression.

The compression itself, however, is a server-level setting only, so from the user's perspective it happens transparently. You take care of everything on the back end.


Currently, IIS 5.0 ships with two compression standards: Gzip and Deflate. Both are implemented in the Gzip.dll file that ships with IIS 5.0, which handles both compression and decompression.

The Windows 2000 Platform Software Development Kit includes information on creating customized compression filters. In this way, you can build your own compression filters that use a different compression algorithm or implement security. For example, if your customer wants to send compressed data with some encryption built in, you can develop a filter to do this.
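To see the two algorithms in action outside of IIS, Python's standard library exposes both schemes (gzip framing and the zlib/Deflate stream). This sketch simply demonstrates that content compressed with one scheme must be decompressed with the same scheme:

```python
import gzip
import zlib

page = b"<html><body>" + b"Hello, IIS 5.0! " * 100 + b"</body></html>"

# Gzip wraps a Deflate stream in a small header and checksum.
gz = gzip.compress(page)
# "Deflate" as commonly implemented is the zlib-wrapped Deflate stream.
df = zlib.compress(page)

assert gzip.decompress(gz) == page
assert zlib.decompress(df) == page
print(len(page), len(gz), len(df))  # both compressed copies are far smaller
```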


These are the steps that make up a compressed session:

  1. The client connects to the server over TCP/IP in the usual way.

  2. The client sends an HTTP packet that includes the compression options (Gzip or Deflate) in a header. In this way, the client informs the server that it is compression-enabled. This is why the client needs compression support: it must send this header in order to elicit a compressed response from the server.

  3. That header triggers the compression DLL on the server. The requested content will either be retrieved from the compressed cache or compressed on the fly using the Gzip.dll file.

  4. The content is then returned with a header indicating which method was used to encode the content so that the client knows how to handle it. If the client expects Deflate and gets Gzip, it's not going to know what to do with the compressed file.
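The negotiation in steps 2 and 4 can be sketched as a small helper. The function name is hypothetical; the header semantics follow the steps above:

```python
def negotiate_encoding(accept_encoding, supported=("gzip", "deflate")):
    """Pick the first encoding the client offers that the server supports.
    accept_encoding is the value of the client's Accept-Encoding header.
    The server must echo the choice back in Content-Encoding (step 4)
    so the client knows how to decode the body."""
    offered = [token.strip().split(";")[0] for token in accept_encoding.split(",")]
    for encoding in offered:
        if encoding in supported:
            return encoding
    return None  # no match: send the content uncompressed

print(negotiate_encoding("gzip, deflate"))   # gzip
print(negotiate_encoding("br"))              # None
```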


HTTP compression uses an approach called compress on demand, in which dynamic and static content are handled separately:

  • Dynamic content is compressed in one pass and then pushed out onto the wire.

  • Static content is compressed in two passes. The first time a client requests particular information, that first copy goes out to the client uncompressed. The Gzip.dll compresses that data and stores it in a compressed file cache. The next time a user requests the same content, the compressed content will be dispatched out of the compressed file cache.

The reason why these two types of content are handled differently is that dynamic data is usually requested in response to a particular set of queries. Therefore it is rarely, if ever, requested more than once. On the other hand, static data such as Web pages will be requested by many users over and over again.
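The two-pass handling of static content can be modeled with a tiny cache. This is a sketch of the behavior described above (first hit goes out raw, later hits are served compressed), not IIS internals:

```python
import gzip

class StaticCompressionCache:
    """First request for a path returns the raw bytes and stores a
    compressed copy; subsequent requests are served from the cache."""
    def __init__(self):
        self._cache = {}

    def serve(self, path, raw_bytes):
        if path in self._cache:
            return self._cache[path], "gzip"   # second pass: cached compressed copy
        self._cache[path] = gzip.compress(raw_bytes)
        return raw_bytes, None                 # first pass: uncompressed

cache = StaticCompressionCache()
body, enc = cache.serve("/index.htm", b"<html>hello</html>")
print(enc)   # None: the first request goes out uncompressed
body, enc = cache.serve("/index.htm", b"<html>hello</html>")
print(enc)   # gzip: now served from the compressed file cache
```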


The second update to supported technology in IIS 5.0 is FTP Restart.

FTP Restart is the ability to recover a disconnected FTP download by noting how far the download has progressed and restarting it where it left off instead of all the way back to the beginning.

The three types of FTP transfer—block, compressed, and stream—are discussed on the following pages.

FTP Restart has one limitation: It will not work with multiple requests, only single requests. For example, if you're issuing a multiple file transfer, FTP will resume transferring only the file it was working on when the download was disconnected. You must re-enter the additional files you had requested.


As the previous page noted, there are three types of FTP transfers: block, compressed, and stream. FTP Restart works slightly differently, depending on the type of transfer.

With block or compressed content, the data is encoded before the transfer, so the data is sent in a standard, known format. A progress marker is placed at periodic intervals through that data. If the download fails, the client just restarts the transfer and sends back the last progress marker, which the server uses to determine the point at which to pick up and continue the transfer.


FTP Restart works a little differently with streamed content. Because streamed content is in a sense dynamic, it can't be encoded before it is pushed out onto the wire. Therefore, the streamed files are sent as a raw byte stream, without encoding and with no progress markers being placed.

Instead, the client is counting the bytes as they come in. If the download fails, the client counts up what it has received and transmits back to the server that it received, for example, 11 kilobytes of the file. The server then knows to start retransmitting on the twelfth kilobyte.
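The byte-counting recovery can be sketched as follows. This is an illustration of the arithmetic, not the FTP wire protocol itself (on the wire, the client reports its offset with the REST command):

```python
def resume_transfer(file_bytes, bytes_received):
    """Return the portion of the file still to be sent, given how many
    bytes the client says it has already received."""
    return file_bytes[bytes_received:]

file_bytes = bytes(range(256)) * 64   # a 16-KB "file"
received = 11 * 1024                  # the client counted 11 KB before the drop
remainder = resume_transfer(file_bytes, received)

# The server picks up at the twelfth kilobyte; nothing is re-sent.
print(len(remainder))                 # 5120
assert file_bytes[:received] + remainder == file_bytes
```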


The third updated technology that IIS 5.0 supports is WebDAV (Web distributed authoring and versioning). WebDAV is a proposed extension to the HTTP protocol, and Microsoft is contributing to the standard. (The Microsoft FrontPage® Web site creation and management tool was a forerunner for WebDAV.)

WebDAV is essentially simple file input/output across the HTTP protocol. It allows you to create, modify, and delete documents and document collections, and provides simple file locking and security methods.


A very powerful technology, WebDAV makes it easy for your customers to create Web pages and post them to your Web without having to code their own ASP, HTML, and so on. It also enables them to handle tasks like creating, editing, and deleting files and directories.

WebDAV works by adding seven new HTTP verbs to the protocol, including:

  • PropFind searches for properties describing a particular object, such as date and size.

  • PropGet retrieves a particular property from the object.

  • PropPatch updates a property on a server.

  • MKCOL and DelCOL make a collection and delete a collection, respectively.

  • Lock and Unlock are for locking and unlocking files.


The WebDAV header notifies IIS that a packet contains WebDAV instructions. The headers include information about depth and translate:

  • Depth is very important. It tells a server exactly how far down in a file tree an operation is going to be performed. In other words, if the header sends a delete collection, the depth communicates how far down the delete extends. Is everything supposed to be deleted? Or just a subset?

  • Translate renders the file in raw form. That is, if you have all the programming built into a file, translate shows not the delivered version of the file, but rather the code behind it.

    Translate is currently not part of WebDAV. Microsoft is working to add it to the WebDAV RFC.
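To make the verbs and headers concrete, here is what a minimal PropFind request might look like on the wire. The helper name and the XML body are illustrative (WebDAV exchanges properties as XML); the Depth values follow the description above:

```python
def build_propfind(path, host, depth):
    """Assemble a raw PROPFIND request. A depth of 0 means the resource
    itself; 1 adds its immediate children; 'infinity' walks the whole tree."""
    body = (
        '<?xml version="1.0"?>'
        '<propfind xmlns="DAV:"><allprop/></propfind>'
    )
    return (
        f"PROPFIND {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Depth: {depth}\r\n"
        f"Content-Type: text/xml\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

request = build_propfind("/docs/", "www.example.com", depth=1)
print(request.splitlines()[2])   # Depth: 1
```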


WebDAV does raise some potential security concerns.

  • Remember that Internet Information Services provides only application-level security. If you allow a right to one user, you've allowed that right to all users. Therefore, if you want to secure your Web site, you'll want to use the NTFS file system to lock down directories and subdirectories so that only certain users have access to them.

  • Authentication isn't enforced, but it is highly recommended. When users authenticate to a WebDAV server, they're often sending across a user ID and password. You will probably want some reliable authentication method in place to make sure that the user sending data is, in fact, the user you're expecting data from.

    There are some attacks to be aware of with WebDAV:

    • Denial of service attacks. For example, a hacker could start firing a lot of heavy files at your system, essentially clogging and bringing down the network connection. Or, someone could request a lock and keep it indefinitely.

    • Privacy attacks. Your customers create a lot of documents and post them. These documents may contain personal information about themselves and their companies. If you don't have the proper locks in place, it may be possible for a hacker to make a request on the property (PropGet, for example) and pull down information that those users may not want out in the field.

These few concerns are serious enough that you want to make sure that you have the appropriate security measures in place if you're using WebDAV.


Enabling WebDAV support doesn't require you to select any check boxes or options. Instead, WebDAV capabilities are installed by the Httpext.dll file, an ISAPI (Internet Server application programming interface) extension that is loaded by default in IIS.

Note that, once you've done this, the standard properties—such as read, write, and directory browsing—all become WebDAV enabled. In this way, read access becomes read WebDAV access, write access becomes write WebDAV access, and so on.


The final topic of this module is scaling Internet Information Services 5.0.

When you're building a very large Web server, you have three main scalability strategies to choose from:

  • You can upgrade your hardware to increase RAM and processors.

  • You can upgrade the network bandwidth to realize greater throughput.

  • You can create Web farms and implement load balancing by adding more servers and then sharing the load among multiple servers.

Obviously you don't want to start implementing these potentially costly strategies without understanding what kind of content you're pushing across the wire. So first of all you need to understand how IIS runs. You also need to understand how static content differs from dynamic content and how IIS works with other Windows 2000 services.


First consider static content. An average page—an 8.5-by-11-inch paper with writing on it—takes up about 5 KB. Transferring this 5-KB page requires:

  • 200 bytes for the TCP connection

  • 300 bytes for the client's GET request

  • Approximately 110 bytes of overhead for the header, coding, and that sort of information.

Therefore, to transfer that 5-KB page you'll need about 5,600 bytes, or roughly 5.6 KB.

If the page contains a 2-inch-by-2-inch picture, that will add another 10 KB to the file, resulting in an additional 11,000 bytes on the wire. It is easy to see how your static content can grow large quickly, even though it might start out quite small.
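The arithmetic above can be sketched as a quick estimate. The overhead figures are the approximate values quoted in the text, not exact protocol constants:

```python
# Back-of-the-envelope bytes-on-the-wire estimate for a static page,
# using the approximate per-request overheads quoted above.

def wire_bytes(content_bytes, tcp=200, request=300, headers=110):
    """Approximate total bytes transferred for one static resource."""
    return content_bytes + tcp + request + headers

text_page = wire_bytes(5_000)                # about 5,600 bytes
with_image = text_page + wire_bytes(10_000)  # the image is its own request
```

Because each resource carries its own request overhead, the picture costs more on the wire than its 10-KB file size alone.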


Now consider dynamic content. Although you'll have roughly the same data output as in the previous example (5 KB), more resources will be needed for processing because you're actually running an application. Input/output threads, RAM usage, calls to the ASP subsystem, and roundtrips to the client for data all require additional processing resources.

Plus, a Web application involves the client making additional requests. For example, imagine that the application has to go to SQL Server for more information; that same 5-KB file suddenly requires 6,500 bytes because it's being handled through ASP. And, as in the previous example, if a 2-inch-by-2-inch picture is added, the network resources requirement grows drastically.

In short, Web applications utilize a great deal of network bandwidth.


Network bandwidth, usually measured in bits per second, is the amount of data that can be sent in a given time interval.

So, using the 5.6-KB static Web page from the previous example, you can figure that you can send:

  • Approximately one-half of a page per second with a 28.8-kbps modem

  • Approximately one page per second with a 56-kbps modem

  • Approximately two pages per second with a 128-kbps ISDN connection

  • Approximately 11 pages per second with a 640-kbps ADSL connection

As you can see, the download time required varies widely depending on the connection speed, even for the exact same piece of data. Therefore, the faster the connection, the faster you can deliver data to your customer.
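Those figures can be sanity-checked with a quick calculation. A 5.6-KB page is 44,800 bits; dividing the link rate by that number gives a theoretical ceiling, and the approximate figures above sit a bit below it because of protocol overhead and latency:

```python
# Theoretical pages-per-second ceiling for the 5.6-KB page at various
# link speeds. Real-world throughput is somewhat lower, which is why
# the approximate figures in the text are smaller than these values.

PAGE_BITS = 5_600 * 8  # the 5.6-KB page expressed in bits

def pages_per_second(link_kbps):
    return link_kbps * 1_000 / PAGE_BITS

for speed in (28.8, 56, 128, 640):
    print(f"{speed:6} kbps -> {pages_per_second(speed):.2f} pages/sec")
```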


Another IIS scalability issue is RAM and processor usage.


RAM Utilization

InetInfo requires 2.5 MB of RAM to start. This is nonpageable RAM: IIS allocates it and keeps it, so it can never be swapped out to the page file.

Plus, every client connection requires 10 KB of RAM, which adds up to another megabyte for every 100 connected users. The file and object cache starts at 1 MB; as files and objects are added to the cache, more RAM is required. Each log file also needs 64 KB, and that's mapped to memory.
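A rough RAM estimate can be built from those figures. The function below is my own sketch of that arithmetic, not an IIS tool:

```python
# Rough RAM estimate for an IIS 5.0 server, using the figures above:
# 2.5 MB nonpageable for InetInfo, 10 KB per client connection, a file
# and object cache starting at 1 MB, and 64 KB per memory-mapped log.

KB = 1024
MB = 1024 * KB

def iis_ram_estimate(connections, log_files=1, cache_bytes=1 * MB):
    """Approximate IIS 5.0 memory footprint from the per-item figures."""
    base = int(2.5 * MB)           # InetInfo's nonpageable startup RAM
    conns = connections * 10 * KB  # 10 KB per connected client
    logs = log_files * 64 * KB     # each log file is mapped into memory
    return base + conns + logs + cache_bytes

# 100 connected users add roughly another megabyte:
extra = iis_ram_estimate(100) - iis_ram_estimate(0)
```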

Processor Utilization

A number of IIS 5.0 services consume processor resources, including the InetInfo process itself, as well as every user connection and ASP and CGI request. DLLHost and Out-of-Process applications also utilize processor resources.


When examining Internet Information Services 5.0 scalability, you also need to keep in mind that service response time is a factor.

Connecting to an external service, such as SQL Server or an application server, takes time. The client has to wait for its request to go into the server, out to another server, come back to the server, and return to the client.

Response time slows down if the service is running on the same box, because making a request or running an ASP page takes processor resources and memory away from InetInfo.

But that response time is affected even more if the service is on another box: InetInfo on the IIS server is still using RAM and processor resources, only now network bandwidth is added on top of that.


Discussions about network bandwidth need to include not only your Web server but your network hardware as well. Service is affected not only by the client's connection to you, but also by your connection to your backbone service provider. In the example of the 5-KB document:

  • A T1 line can send 26 of those 5-KB pages per second

  • A 10-megabit Ethernet can send 136 pages per second

  • A T3 can send 760 pages per second

  • A 100-megabit Ethernet can send 1,300 pages per second

Clearly, you can have all the RAM and processors in the world, but if you have a lot of data to transfer, you're going to have a bottleneck in the network.
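The same page-per-second arithmetic applies to the backbone links. This sketch computes the theoretical ceiling from each medium's nominal link rate; the figures in the list above are lower, reflecting real-world protocol overhead:

```python
# Theoretical page-throughput ceiling for common backbone links, for the
# 5.6-KB page. The list in the text quotes lower, more realistic numbers;
# this only shows where the hard upper bound sits for each link rate.

PAGE_BITS = 5_600 * 8  # the 5.6-KB page in bits

LINKS = {  # nominal link rates in bits per second
    "T1": 1_544_000,
    "10-Mb Ethernet": 10_000_000,
    "T3": 44_736_000,
    "100-Mb Ethernet": 100_000_000,
}

def max_pages_per_second(link_bps):
    """Theoretical ceiling; actual throughput is lower due to overhead."""
    return link_bps / PAGE_BITS

for name, bps in LINKS.items():
    print(f"{name:16} {max_pages_per_second(bps):8.0f} pages/sec ceiling")
```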

Therefore, when selecting hardware, it's imperative that you know your customers. How many customers will you have? How often will they connect? What are they running? What kind of content will they be accessing? For example, is it a lot of data that will be sent through a small pipe? Or is the content more likely to place demands on memory and processor resources?

And to understand how your customers will access the content, you have to examine the content itself. Is it static, ASP, or a mixture of both?

Finally, you need to choose the right hardware for your unique needs:

  • A predominance of dynamic content means that processor speed is important for faster application processing.

  • RAM is always important. RAM is needed for application memory as well as for a large cache. (However, RAM isn't going to help much with a small pipe.)

  • If you need to pump a lot of pages out onto the network at any given time, network bandwidth is critical.

  • Obviously, a fast hard disk accelerates content retrieval.