Deployment and Migration Guide

This section of the Microsoft Hotmail Migration Technical Case Study covers the deployment aspects of successfully migrating FreeBSD and Apache servers to Windows 2000 and Internet Information Services.

On This Page

Introduction
Scheduling/Logistics
"Cron" Jobs/ Task Scheduler
Server Migration Process
Deployment Options
Hotmail Migration Methodology
Application Migration

Introduction

As mentioned earlier, it is imperative that you understand what the end-state server configuration will be before undertaking the technical engineering of the automated process. The method for determining the end-state configuration is beyond the scope of this technical case study; assume that the end-state configuration has been determined and tested. This configuration is known as the reference implementation. The reference implementation is the desired server configuration, less the server-unique settings, of every server functioning in a particular role (see footnote F1 at the end of this page). It is also known as the master image that the other servers should be configured to match.

This section describes:

  1. The high-level process used at Hotmail for rapidly deploying the reference implementation across the Web farm.

  2. Alternatives for the basic aspects of the deployment.

  3. A detailed example of a variation of the unattended automated method used at Hotmail.

This variation from the actual Hotmail migration (detailed explicitly in Appendix 1) is presented for two reasons:

  1. The example contains the basic elements that are common to all the approaches described in this technical case study. Whether the solution leverages (i) boot from network, (ii) boot from an alternate partition, (iii) a boot loader, or (iv) a virtual floppy loaded on a diagnostic board or network interface card, each is a variation on a virtual boot floppy. The example uses a boot floppy that never needs to be removed and therefore never requires an operator to visit the server after it is physically placed in the rack on the computer room floor, unless a component or the entire server must be replaced because of hardware failure.

  2. The example represents a method that is effective for both new deployments (no installed base of servers) and migrations. The detailed example provided in Appendix 1 is applicable to a greater number of deployment scenarios. The variations, where they exist, were used for and proven effective with several large customer implementations.

Scheduling/Logistics

A detailed plan of which servers will be migrated is essential for attaining the objectives of the migration, including "zero impact on users and uptime" and "minimizing operations impact".

The design and service level agreements (SLAs) of most data center services require fault tolerance to be an integrated part of the product. Most service offerings are designed so that servers are pooled in "service groups" or "clusters". Each service group may have hundreds or thousands of front-end servers that are load balanced by using network load balancing or clustering. Load balancers detect failures and simply route traffic to servers that are still functioning. This fault tolerance should be leveraged when deploying a new operating system, because each server will be unavailable for some period of time (75 minutes in the case of the Hotmail FreeBSD to Windows 2000 conversion).

Highly available data center service offerings are designed so that a percentage of servers or system components can fail before users, uptime, or SLAs are affected. Server migration activities can leverage this fault tolerance by selecting a small percentage of servers in the service group to be re-imaged during each batch of updates. The Hotmail migration plan was designed so that service availability would not be compromised, even in worst-case scenarios. The migration plan should also provide a mechanism for rapid rollback to the previous state if major problems are encountered.

During setup, Windows 2000 generates a number of log files. These logs contain information about the installation process and can help resolve any problems that occur after setup completes. Two logs are especially helpful for troubleshooting: the action log and the error log. The action log (setupact.log) describes the actions that setup performs, and the error log (setuperr.log) describes any errors that occur during setup. In a large deployment, these logs should be copied to a centralized directory or console for operational verification. This can be accomplished through a script file that executes at the end of setup.
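
As an illustration of such a script, the following is a minimal sketch in Perl (the scripting language already in use at Hotmail) that copies the two setup logs from a freshly built server to a central share. The share name \\opsconsole\setuplogs is an assumption for this example, not part of the actual Hotmail build.

    #!/usr/bin/perl
    # Hypothetical post-setup step: copy the Windows 2000 setup logs to a central
    # share for operational verification. The share \\opsconsole\setuplogs is an
    # assumption; substitute the appropriate console or directory.
    use strict;
    use warnings;
    use File::Copy;
    use Sys::Hostname;

    my $windir = $ENV{SystemRoot} || 'C:\\WINNT';            # %windir% on the target server
    my $dest   = '\\\\opsconsole\\setuplogs\\' . hostname(); # one folder per server

    mkdir $dest unless -d $dest;
    for my $log ('setupact.log', 'setuperr.log') {
        copy("$windir\\$log", "$dest\\$log")
            or warn "could not copy $log: $!\n";
    }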

Hotmail deployed around the clock, leveraging shifts to monitor the migration activities and ensure that a sufficient number of servers remained online per cluster so that service would not be impacted. In practice, it was not necessary to use the log files mentioned above on the target machines for debugging. When a server failed, the recovery process was to retry. If that did not work, the old operating system was restored. If that also failed, it usually indicated a hardware or network failure, and the server was replaced.

"Cron" Jobs/ Task Scheduler

Administrators generally benefit from porting "cron" jobs to Windows Task Scheduler events. Both Microsoft Interix 2.2 and SFU allow administrators to port "cron" files to Windows 2000, in most cases without any changes, so scheduled events and scripts can be transitioned gradually without impacting operations; that is, at migration time, scheduled events can still run as "cron" jobs. After the migration, the "cron" jobs can be migrated to Windows Task Scheduler events. The Windows Task Scheduler has better integration with the event logs.
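
As a sketch of what that later step might look like, the following hypothetical Perl helper translates simple crontab entries into Windows 2000 AT commands (AT jobs are serviced by the Task Scheduler). It handles only the common "minute hour * * day-of-week command" form and flags anything else for manual review; it is an illustration, not a tool used during the Hotmail migration.

    #!/usr/bin/perl
    # Hypothetical helper: convert simple crontab lines to Windows 2000 AT commands.
    # Only entries of the form "MIN HOUR * * DOW command" are handled here.
    use strict;
    use warnings;

    my @days = qw(Su M T W Th F S);    # AT /EVERY day abbreviations, Sunday first

    while (my $line = <>) {
        chomp $line;
        next if $line =~ /^\s*(#|$)/;                        # skip comments and blanks
        my ($min, $hour, $dom, $mon, $dow, $cmd) = split /\s+/, $line, 6;
        if (!defined $cmd or $dom ne '*' or $mon ne '*') {
            warn "manual review needed: $line\n";
            next;
        }
        my $every = $dow eq '*'
            ? join(',', @days)
            : join(',', map { $days[$_] } split(/,/, $dow));
        printf "AT %02d:%02d /EVERY:%s \"%s\"\n", $hour, $min, $every, $cmd;
    }

For example, the crontab line "0 2 * * 1,3,5 /usr/local/bin/rotate_logs" would be reported as AT 02:00 /EVERY:M,W,F "/usr/local/bin/rotate_logs".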

Server Migration Process

Preplanning Steps and Requirements

In order to migrate servers from one operating system to the other without affecting operations, some basic information is needed. Given the number of servers to migrate, an automated process to gather information such as server name and network information is required. The automated process should generate the information and place it in a format that can be used automatically when migrating the server, that is, pre-provisioning database files. These files contain the unique settings, such as server names, network information, server role, and so on, and can be used during the unattended install.
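
A minimal sketch of such a gathering step, run once on each FreeBSD server, might look like the following. The record layout, the fxp0 interface name, and the shared file /share/preprov.txt are illustrative assumptions; the actual Hotmail pre-provisioning format is not reproduced here.

    #!/usr/bin/perl
    # Hypothetical pre-provisioning collector, run once per FreeBSD server.
    # Appends one record (name|ip|netmask|gateway|role) to a shared file.
    use strict;
    use warnings;
    use Sys::Hostname;

    my $host        = hostname();
    my ($ip, $mask) = `ifconfig fxp0` =~ /inet (\S+) netmask (\S+)/;   # NIC name assumed
    my ($gw)        = `netstat -rn`   =~ /^default\s+(\S+)/m;

    defined $ip && defined $gw or die "could not determine network settings\n";

    open my $db, '>>', '/share/preprov.txt' or die "preprov.txt: $!";
    print $db join('|', $host, $ip, $mask, $gw, 'web'), "\n";          # role defaults to "web"
    close $db;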

There are two basic methods, plus a hybrid of the two, that are supported out of the box for unattended automated server installations (see footnote F2 at the end of this page). These are:

  • Native setup by using winnt.exe/winnt32.exe, unattend.txt files (answer files), scripts for application software installation, and the enterprise deployment tools. See the Windows 2000 deployment guide for a detailed explanation of these tools. This method uses a file with a fixed format, called the uniqueness database file (UDF), combined with an unattend.txt answer file to automate the server build (see the sketch following this list).

  • Cloning completely installed server configurations. Use the Windows 2000 System Preparation (Sysprep) Utility and an image copy utility. (Sysprep is included on the Windows 2000 product compact disc (CD). The image copy utility is not). This method uses a file called the sysprep.inf file as the answer file and is explained in greater detail later in this document.

  • The hybrid approach. Use the system preparation tool (sysprep), an image copy utility, and the Windows 2000 enterprise deployment tools to create the basic server build, which could consist of the operating system, application software, system utilities, backup agents, virus protection software, and so on, which will be on every server of a particular role. This is identical to the second method above. Then, use quiet (scripted) versions of the applications' native setup functionality to install the additional applications that break when cloned, or that are not installed on every server and therefore would not make sense to include in the base image.
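
As an illustration of how the first method is typically launched, the sketch below starts an unattended installation with an answer file and a uniqueness database file. The distribution share \\distsrv\i386, the file names, and the server ID are placeholders; only the /s, /unattend, and /udf switches of winnt32.exe are being shown.

    #!/usr/bin/perl
    # Hypothetical kick-off of a native unattended install (the first method).
    # The share \\distsrv\i386, unattend.txt, and servers.udf are placeholders.
    use strict;
    use warnings;

    my $id  = shift or die "usage: $0 <server-id>\n";
    my @cmd = ('\\\\distsrv\\i386\\winnt32.exe',
               '/s:\\\\distsrv\\i386',
               '/unattend:unattend.txt',
               "/udf:$id,servers.udf");

    system(@cmd) == 0 or die "winnt32 failed: $?\n";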

The most noticeable difference between the first method and the other two is the time needed to install the image on the target machine. The cloning approach typically can be done in a shorter period of time. For example, a variation of option 3 was used at Hotmail. The server build time was 60 minutes total, plus 15 minutes for the final customizations to complete. Option 1 would have taken at least 90 minutes, plus 15 minutes for the final customizations to complete.

Systems management packages such as Microsoft Systems Management Server can provide data to help generate UDF files based on asset management databases. Many companies store asset information such as server name and network information in a proprietary database, or use network devices to store information in BOOTP or Dynamic Host Configuration Protocol (DHCP) tables. Depending on the number of servers involved in a migration, it may be cost-effective to create a utility that extracts the information from these databases or BOOTP tables and automatically creates UDF files. In the case of the Hotmail implementation, a Perl script was used to dynamically build the sysprep.inf at the beginning of the migration process.
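
The following is a minimal sketch of what such a script might look like: it looks up one server in a pipe-delimited pre-provisioning file and writes a sysprep.inf for that server. The record layout, file locations, and the particular answer-file keys shown are assumptions based on the Windows 2000 unattended setup documentation, not a reproduction of the Hotmail script; verify the key names against Unattend.doc before use.

    #!/usr/bin/perl
    # Hypothetical sysprep.inf generator: read a record (name|ip|netmask|gateway|role)
    # from the pre-provisioning file and emit an answer file for mini-setup.
    use strict;
    use warnings;

    my $name = shift or die "usage: $0 <servername>\n";

    open my $db, '<', '/share/preprov.txt' or die "preprov.txt: $!";
    my ($rec) = grep { /^\Q$name\E\|/ } <$db>;
    close $db;
    die "$name not found in pre-provisioning file\n" unless $rec;

    chomp $rec;
    my (undef, $ip, $mask, $gw, $role) = split /\|/, $rec;

    open my $inf, '>', "$name-sysprep.inf" or die "$name-sysprep.inf: $!";
    print $inf <<"EOF";
    ; generated for $name (role: $role)
    [Unattended]
    OemSkipEula = Yes

    [GuiUnattended]
    OemSkipWelcome = 1
    OemSkipRegional = 1
    TimeZone = 4

    [UserData]
    ComputerName = $name
    FullName = "Hotmail Operations"
    OrgName = "Microsoft"

    [Identification]
    JoinWorkgroup = WORKGROUP

    [Networking]
    InstallDefaultComponents = No

    [NetAdapters]
    Adapter1 = params.Adapter1

    [params.Adapter1]
    INFID = *

    [NetProtocols]
    MS_TCPIP = params.MS_TCPIP

    [params.MS_TCPIP]
    DNS = No
    AdapterSections = params.TCPIP.Adapter1

    [params.TCPIP.Adapter1]
    SpecificTo = Adapter1
    DHCP = No
    IPAddress = $ip
    SubnetMask = $mask
    DefaultGateway = $gw
    EOF
    close $inf;

(Note that if the script is copied out of this document, the heredoc terminator "EOF" and the answer-file lines must be flush with the left margin of the script.)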

A greater discussion of the native setup approach, beyond what has been covered, is out of the scope of this paper. For additional information, see the Windows 2000 Server deployment guide.

No matter which of the options above is selected, the same basic set of data must be gathered in advance.

Using Images for Deployment

Migration of a large number of servers is most efficiently accomplished through the use of imaging software, such as Microsoft Remote Installation Services (RIS), Norton Ghost, Altiris eXpress, or StorageSoft ImageCast, among many others. Imaging allows the administrator to "blast" images from the network or from CDs to an unprovisioned server more rapidly than a manual or native unattended installation of Windows 2000 (that is, the winnt.exe/winnt32.exe approach).

Creating a Base Image (Building the Reference Machine)

Creating the base image is key to the sysprep/imaging approach to mass server deployments. In most cases, a generic image may be deployed to the majority of servers in a data center. Further customization relative to the role of the server is accomplished by automatically launching scripts at the end of the mini-setup portion of the server build. Care must be taken to ensure that diverse types of hardware can use the same image, or the hardware-specific settings can alternatively be specified in UDF files. However, with sysprep 1.1 the only requirement for commonality between the hardware used for the reference server build and the hardware on the target servers is that both must use the same Windows 2000 HAL. Windows 2000 supports the Advanced Configuration and Power Interface (ACPI) specification and, for backward compatibility, many vendors' implementations of Advanced Power Management (APM); ACPI is the follow-on and replacement specification for APM. The Windows 2000 Server/Advanced Server operating system (OS) will therefore detect all other hardware differences between the reference and target servers and install the applicable device drivers. However, to ensure an unattended installation, the necessary device driver files need to be provided in the %systemdrive%\sysprep\$OEM$ directory structure.

Base images usually include all base software, such as operating system, management software, and custom software. After an image is installed, post-operating system scripts may be used to install software and settings unique to each machine.

Images are generally used for the initial rollout of servers, deployment of major updates such as service packs, redeployment of boxes with software or hardware failures, and reprovisioning of a server for a different role. It is critical that each image be created consistently over time, and it is recommended that images on the reference server be created by using answer files. Answer files not only automate server creation, but they also rule out human error during the image creation process. For more information about answer files, see the Microsoft Windows 2000 Guide to Unattended Setup (Unattend.doc) provided in \Docs of the Sysprep Update package.

Sysprep

This section provides a basic general description of the sysprep utility in relation to the deployment options section, which follows.

Windows 2000 System Preparation (Sysprep) Utility is a component that ships in \Support\Tools\deploy.cab on the Windows 2000 product CD. Riprep is a utility similar to sysprep that prepares images for use by Microsoft RIS.

Note: Microsoft RIS does not support installing Windows 2000 Server or Advanced Server. RIS also requires user intervention to provide basic information and cannot be used for fully automated installations.

Sysprep is a simple utility that prepares a PC's hard disk for:

  • Disk duplication: Sysprep allows you to copy fully installed systems when the hardware is similar. It removes/modifies the local computer security identifier (SID) so that the image can be copied, and it modifies the operating system startup environment so that the mini-setup wizard is launched the next time the server is powered on.

  • Automating mini-setup: Sysprep creates a shortened GUI-mode setup, called mini-setup, that takes 3 to 10 minutes instead of 20 to 30 and prompts the end user only for required and user-specific information, such as accepting the Microsoft license agreement, entering the Product Key, and supplying user and company names.

  • Auditing: Sysprep allows you to audit the system and then return the system to a ship-ready state.

Sysprep removes the unique settings for each server and prepares the hard drive for disk duplication. When a target server boots for the first time, the mini-setup wizard is invoked. The mini-setup wizard asks a subset of the questions that a manual setup process asks; however, all of these questions can be answered in advance by providing an answer file (sysprep.inf), thereby making the restore process unattended. After the image is placed on the proper partition of the local server, the server reboots and setup looks for a sysprep.inf on the A: drive first. If one exists on A:\, mini-setup uses it; otherwise, mini-setup (setupcl.exe) looks in the %systemdrive%\sysprep\ folder that is included in the image.

One approach to automating the mini-setup portion of an installation is to place a generic boot floppy disk in each server, boot from the floppy disk, restore the "sysprep-ed" image to the target machine, and then manipulate the boot order so that the boot floppy disk is last in the sequence. Another approach is to include a sysprep.inf template file with each image. If the latter is selected, you may need to consider formatting the image partition as FAT32 and converting it to the NTFS file system as a post-OS task after setup has run. This allows the sysprep.inf on the partition to be updated prior to invocation of the mini-setup wizard.

Note: The floppy disk drive should be removed or placed last in the server boot order after the target server is built. If the server needs to be re-provisioned or repurposed at some time in the future, the boot order can once again be modified to make the A:\ drive first in the boot order.

After an operating system is "sysprep-ed," it is ready to be imaged by a variety of third-party imaging tools. Delivery of images to remote boxes can be accomplished through various methods including network (unicast or multicast) or CD. For more information about sysprep and answer files, see the Microsoft Windows 2000 Guide to Unattended Setup (Unattend.doc), which is provided on the Windows 2000 Server product CD.

Deployment Options

Windows 2000 deployments and migrations may include several thousand servers in decentralized data centers, necessitating "noninteractive" installation of the operating system, middleware, and application software. Customers can choose from many technologies, such as boot managers, Preboot Execution Environment (PXE) servers, or unattended setup with answer files on CD-ROM or floppy disk.

Boot managers leverage separate partitions on hard drives that allow administrators to change boot parameters or perform events prior to the operating system loading. Events may include re-imaging a server, changing basic input/output system (BIOS) settings, or editing server configurations such as network parameters.

PXE-enabled network interface cards (NICs) can serve much the same purpose as boot managers, except that they boot from the network, and the server BIOS must support "boot from network".

Floppy disk or CD deployment has been the traditional means of deployment for small numbers of servers or workstations, and it can be used for server deployment by utilizing a customized answer file for each server.

Regardless of the type of server boot mechanism, the premise of server deployment is the same: use the boot mechanism to take control of the server, blast a "sysprep-ed" image to it, and use a sysprep.inf to configure the unique settings for each server and install the applicable applications based on server role.

Boot Managers

Because many existing servers may not include PXE-enabled network cards, it is necessary to use an alternative method of controlling the server prior to OS load. A boot manager is a small partition on the local hard drive that is booted every time the machine starts. Boot managers may run a variety of operating systems, including FreeBSD, Linux, Windows 98, and MS-DOS. They generally load an IP stack and connect to a centralized management server or boot server.

The boot manager announces itself to the management server and determines if there are any events that need to be run prior to local hard drive start-up. Events can include running scripts (including unattended Windows 2000 installs) or installing new images on the hard drives. If no events are waiting, the boot manager boots from the primary operating system partition.
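
To make this decision loop concrete, the following is a minimal sketch written in Perl for readability; an actual pre-boot client, such as the autoexec.bat referenced in Appendix 1A, would be a DOS batch file. The management server name, port, and event names are invented for this illustration.

    #!/usr/bin/perl
    # Illustrative sketch of a boot-manager client: ask the management server
    # whether any pre-boot events are pending, run them, then boot normally.
    # The server name "bootsrv", port 4011, and the REIMAGE/SCRIPT event names
    # are assumptions, not part of the Hotmail design.
    use strict;
    use warnings;
    use IO::Socket::INET;
    use Sys::Hostname;

    my $mgmt = IO::Socket::INET->new(PeerAddr => 'bootsrv', PeerPort => 4011,
                                     Proto    => 'tcp',     Timeout  => 10);

    if ($mgmt) {
        print $mgmt "EVENTS " . hostname() . "\n";
        my $event = <$mgmt>;
        $event = '' unless defined $event;
        chomp $event;

        if    ($event eq 'REIMAGE') { system('restore_image.cmd'); }   # blast a new image
        elsif ($event eq 'SCRIPT')  { system('run_pending.cmd');   }   # run a queued script
    }

    # No events waiting (or no management server reachable): boot the primary OS partition.
    exec 'boot_primary.cmd';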

The following flow diagram illustrates how a boot manager works.

[Flow diagram: boot manager pre-boot process]

Note: An example of an autoexec.bat is located in Appendix 1A; it may need to be modified to match the drive letters of the hard drives when boot manager partitions are used.

PXE

PXE deployments are very similar to boot partitions, but the physical boot partition on the hard drive is conceptually replaced by a boot programmable read-only memory (PROM) that is integrated into the NIC, allowing the use of the entire hard drive. Many companies have built products based on Intel's Wired for Management (WFM) initiative, which includes PXE (https://www.intel.com/design/archives/wfm/); these products utilize a NIC with a boot ROM that attaches to a PXE server to determine how to boot the server. Products such as Altiris eXpress version 5.0 (www.altiris.com) or Symantec Ghost (www.symantec.com) support PXE and allow the operator to install an image automatically during a network boot. These products also allow the operator to create floppy disks or install a small boot manager partition on the local hard drive if a diverse set of hardware is being used and a PXE-enabled network card is not present in each server.

The Preboot Execution Environment (PXE) is part of the Intel Wired for Management (WFM) specification. PXE technology makes it possible to configure or reconfigure a system remotely, even with a blank hard disk drive. The computer system has a universal service agent loaded locally in the BIOS. This agent allows the system to interact with a remote server to dynamically retrieve the requested boot image across the network, making it possible to install the operating system and user configuration of a new system without a technician present.

If the machine supports Network Boot settings in the BIOS and has a PXE-enabled NIC, a software package, such as Altiris eXpress (www.altiris.com), can be used to deploy large numbers of servers simultaneously through multicast technology.

The PXE boot image generally loads an IP stack and then connects to a centralized management server. The client announces itself to the management server and determines if there are any events that need to be run prior to local hard drive start-up. Events can include running scripts (including unattended Windows 2000 installs) or installing new images on the hard drives. If no events are waiting, the client computer boots from the primary operating system partition.

The following flow diagram shows how PXE works.

[Flow diagram: PXE boot process]

There may be advantages to using PXE to set up servers in the near future. Most PXE servers have a centralized console that allows the administrator to specify server settings, such as the name and network settings. Consoles may be able to generate sysprep.inf files and place them in a location that can be read during the image load. This would allow administrators to manage all information from a centralized database and automate deployment.

Over-the-Network Boot Floppy Disk or CD-ROM

The over-the-network boot floppy approach to automated deployments is still a valid and easy way to automate server deployments. Historically, the drawback to this approach has been that an operator must insert the over-the-network boot floppy disk in the server prior to build time and remove it after the server build process is underway.

However, this is no longer an issue for many servers from various hardware manufacturers. Enhancements have been made that allow the device boot order to be modified in a remote, unattended fashion, or that allow the boot floppy to be loaded as a virtual device, so that a boot floppy (whether real or virtual) can remain loaded all the time.

When a server is initially installed in the rack, or when a new image or pre-OS task must be applied, the boot floppy disk is set as the first device in the boot order; it is not necessarily first at every boot. At initial server power-on, the server boots from the floppy disk. The floppy disk loads the IP stack and determines whether there is an event waiting. If there are none, the boot order is modified so that the local hard drive precedes the floppy disk drive in the boot sequence. The server is automatically rebooted and boots normally from the C: drive. The next time that a server needs to be re-imaged, the boot order is modified remotely to make the A: drive the first device in the boot order and a reboot is automatically scheduled. The server boots from the A: drive, at which point a new image is downloaded.

The boot floppy disk would include a unique sysprep.inf for each server, which could be accomplished through either of the following two alternatives:

  1. All floppy disks are created with a unique sysprep.inf prior to imaging. Each server has a unique floppy disk.

  2. Each server contains the same generic over-the-network boot floppy disk, which, on boot, queries the serial number of the server or some other parameter that is unique to that server. Assume for the purposes of this example that the server serial number is queried. The value returned is then used as a key for a query of the pre-provisioning database, which spawns a process to generate a unique sysprep.inf file for that server. The sysprep.inf is copied to the A: drive of the server, where it is used as the answer file for the mini-setup part of sysprep.

If the floppy disk is always present in the drive, the server can be re-provisioned at any time and unique settings applied.

To avoid the floppy boot each time, some hardware vendors such as Compaq allow the administrator to script the BIOS and remove the floppy disk from the boot order when it is not needed. This saves boot time, and may prevent accidental reimaging of servers.

An example of a generic over-the-network boot floppy autoexec.bat is located in Appendix 1A. All the necessary files and the network distribution share can be downloaded from here. This boot floppy disk and the associated distribution share execute a complete unattended sysprep restore to a server.

Note: The actual image to be restored is not included. However, you should be able to walk through the autoexec.bat file, command files, and script files called on the server to understand how a completely unattended remote installation of Windows 2000 Server/Advanced Server with applications and utilities can be performed through a sysprep image restore.

The only functional difference between the detailed example given in this case study and the process that was used at Hotmail is that Hotmail used one of the boot manager approaches outlined previously to load the pre-execution operating system. This invoked the restore of the sysprep image, rather than using the generic boot floppy disk. The boot server mechanism was already implemented and in use for FreeBSD, so it was a straightforward decision to continue to use it to deploy Windows 2000.

Hotmail Migration Methodology

Preface

The following outlines the overall process used to migrate the FreeBSD servers to Windows 2000 Server.

Reference Server: Image Creation Process (Master Server Image)

  1. Create reference server:

    1. Install Windows 2000 Server plus base system software, that is, service packs and security alerts. Configure Microsoft Internet Information Server 5.0.

    2. Install and configure Microsoft Services for UNIX, Interix 2.2, miscellaneous resource kit tools, and Windows 2000 Support tools.

    3. Run the Windows 2000 and Internet Information Services (IIS) version 5 automated security scripts (See appendix 4).

  2. Prepare the reference server for cloning.

    Create the %systemdrive%\sysprep directory. Copy sysprep.inf and setupcl.exe into it, and create the $OEM$ directory structure, the cmdlines.txt file, and so on (a sketch of steps 2 and 3 follows this list). See https://www.microsoft.com/windows2000/downloads/deployment/sysprep/default.asp

  3. Run sysprep on the reference server.

  4. Copy the image to the remote boot server by using an image copy utility.
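
Steps 2 and 3 can themselves be scripted on the reference server. The following is a hedged sketch: the staging share \\buildsrv\sysprep is a placeholder, and the -quiet switch should be verified against the Sysprep 1.1 documentation referenced above.

    #!/usr/bin/perl
    # Hypothetical staging script for the reference server: create
    # %systemdrive%\sysprep, copy in the deployment files, and run sysprep.
    use strict;
    use warnings;
    use File::Copy;
    use File::Path qw(mkpath);

    my $dir = "$ENV{SystemDrive}\\sysprep";
    mkpath("$dir\\\$oem\$");                          # sysprep directory plus $OEM$ tree

    for my $file (qw(sysprep.exe setupcl.exe sysprep.inf)) {
        copy("\\\\buildsrv\\sysprep\\$file", "$dir\\$file")
            or die "copy $file: $!\n";
    }

    # Seal the installation; by default sysprep shuts the server down when it
    # completes, leaving the disk ready to be captured by the image copy utility.
    system("$dir\\sysprep.exe -quiet") == 0
        or die "sysprep failed: $?\n";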

Target server: FreeBSD\Apache to Windows 2000\IIS 5 Rebuild Process

  1. The rebuild requires three boots (60 minutes total process time).

  2. Use the kickstart server to schedule the image and script file distribution. The script file:

    1. Formats and partitions the hard drive, and modifies the boot kernel on the target machines for a given batch of servers.

    2. Auto-generates the sysprep.inf through a Perl script.

    3. Schedules the shutdown/restart to occur at a predetermined time to invoke the sysprep image restore to the server.

    4. At the designated time, the reboot scheduled in the prior step occurs.

  3. After the first reboot, the sysprep image is downloaded from the kickstart server, a unique sysprep.inf is copied to the %systemdrive%\sysprep directory, and the server is automatically rebooted (second reboot).

  4. On system startup:

    1. The scripted mini-setup wizard runs, using the pre-created sysprep.inf file to answer all of the mini-setup questions, such as host name, IP address, and so on.

    2. The boot order is modified so that the floppy disk drive is placed last in the boot sequence.

    3. The server automatically reboots at the end of mini-setup (third reboot).

  5. On system startup:

    1. Windows 2000, IIS 5, SFU, and Interix are fully functional and ready for rdist distribution of script files if necessary.
  6. Rdist could be called automatically from setup to further deploy customized scripts.

Implement Quality Assurance and Test the Windows 2000 Server\IIS 5 Build Image

Hotmail has three preproduction test configurations in addition to a performance testing and functionality lab. Software is distributed to the test environments through the same mechanism as is used for the production implementation (RDIST, "cron", script files). This provides the added benefit of testing the software distribution functionality and familiarizes staff with the software distribution method before it is used in production. The state of the code determined the appropriate test environment: initial testing beyond the lab occurred in environment 1; after the code proved stable in environment 1, it was tested in environment 2, and so on.

Test environments:

  • Environment 1 - test front-end Web servers (all Windows 2000), test storage servers.

  • Environment 2 - test front-end Web servers (50% Windows 2000, 50% Apache), test storage servers.

  • Environment 3 - test front-end Web servers (50% Windows 2000, 50% Apache), production storage servers.

Perform testing (functionality and performance):

  • QA the servers built from the remote boot process/sysprep image.

  • Detect bugs, implement fixes on the reference machine.

  • Redistribute the image through the remote boot process.

  • Reiterate through the process. In addition to the obvious quality testing, this iterative process of build, deploy, and test provided a mechanism to rehearse the actual migration.

Stakeholder release sign-off:

There is a formal process and sign-off required by the key stakeholders prior to scheduling changes to be distributed into the production environment.

  • Operations / customer service / development.

  • Security (internal Corporate Security was solicited to help with any break-in attempts).

Distribute the Gold Image to Production

Use the identical remote-boot process as used in test and development.

Application Migration

All of the Hotmail Web servers are dual Pentium processor servers. Originally, these servers were built with FreeBSD running Apache as the Web server. Most of the Web pages were generated by Perl-based CGIs. The version of Apache that was being used was not multithreaded, so each request was handled by a separate Apache process spawned by the parent process. Spawning a new process is costly, and Perl is an interpreted language, so the performance of these machines was not optimal.

One of the first tasks undertaken by the development team when Microsoft purchased Hotmail was to convert all the CGIs from Perl to C++. This was done for several reasons, the most important of which was performance. After this was completed, a couple of developers were tasked with getting the code to build and run on the Windows NT operating system. This was done because of the need for better debugging tools.

After this port was done, Windows NT and Microsoft Visual C++ became the development environment. The code was written and debugged on Windows NT and then built and tested under FreeBSD. This made debugging much easier. At this point the production code was still built as individual CGIs with gcc on FreeBSD.

Eventually, a number of technical and performance issues were compelling enough to consider moving the live site from FreeBSD to Windows 2000. The number of front-end machines was growing at an alarming rate to keep up with Hotmail's growth, and it was becoming financially, operationally, and physically difficult to maintain the rate at which machines were being added. So, it became necessary to look at ways of squeezing more performance out of the servers. Several FreeBSD alternatives were investigated, including wrapping the code in an Apache module (the Apache equivalent of an Internet Server API [ISAPI] extension), switching to Zeus (a multi-threaded Web server for UNIX that supports ISAPI), and a couple of other Web servers. At the same time, more languages needed to be supported. Supporting the number of languages being discussed necessitated the use of Unicode, and the Unicode support available under FreeBSD was inadequate and would have taken a lot of development time to meet Hotmail's needs. Windows 2000 had just gone beta, so it was considered as a possible solution to both of these problems. The globalization team investigated the Windows 2000 Unicode support and determined that it would do what was required. The next step was performance testing.

Perfmon showed a high number of context switches. Further investigation showed that this was due to threads blocking while allocating and freeing memory from the process heap. To resolve this, a private heap was allocated for each thread; each thread creates and destroys its heap on each request. This reduced the context switches tremendously and also eliminated the biggest stability problem: memory leaks. All the memory management calls in the code are overridden, so each time new, delete, malloc, or a similar call is made, the memory is allocated or freed from the thread's private heap. Because that heap is thrown away between requests, the memory leaks went away.

The next big performance increase came from eliminating the need to load and parse the Hotmail-specific configuration files for every request. Every CGI had to load and parse these files each time that it ran. Instead, the configuration files are loaded and parsed when the ISAPI extension is loaded, and the data is kept in shared memory. With these major changes and a few other minor ones, the number of requests per second that could be handled nearly doubled relative to what the live site was experiencing. There was also an approximately 30 percent speed increase in the execution of the old CGI scripts, because invocations no longer had to parse the configuration files or fork processes.

The net result is that significantly better throughput was achieved on the same hardware, without a drastic reengineering of the code base, with just a few months' work. Almost all of the application logic is the same as it was in the FreeBSD version of the code base.

In general, it must be noted that the development group is considerably more productive and satisfied using a modern interactive development environment to create the Hotmail front-end code base. Using WinDBG on the live site to trace through and diagnose problems as they happen in real time is much better than attaching gdb to a transient CGI process or relying on "printf debugging"; bugs can now be identified and fixed in minutes rather than days. Under FreeBSD, bugs and memory leaks would often go undetected because of the lack of tools. With Windows 2000 and IIS 5, the tools exist to optimize performance and truly understand exactly what the code is doing at all times.

F1 - The server-unique settings are parameters such as host name, IP configuration, domain membership, local language settings, regional settings, and so on. The role of the servers being migrated at Hotmail was Web server and, as mentioned earlier, three flavors of Web server were migrated: login servers, OE servers, and WEB servers that host the Hotmail email experience (reading, composing, deleting mail, and so on).

F2 - Out-of-the-box in this instance refers to the fact that the tools are included on the Windows 2000 Server and Advanced Server distribution media from Microsoft.

F3 - UDF or "uniqueness database file" is an example of the pre-provisioning database file and has a fixed file format that is used by option 1, winnt/winnt32.exe. There is not a prescribed fixed file format for the pre-provisioning database file when using the sysprep approach. The sysprep process uses an answer file called sysprep.inf for specifying server specific parameters during setup. See the sysprep section later in this document for more detail.

F4 - There are a variety of excellent products, a few of which are mentioned later in this document.
