Chapter 8: Developing Phase: Deployment Considerations and Testing Activities

This chapter discusses the key aspects of deploying and testing a migrated application on Microsoft® Windows® operating systems.

You can use the information provided in this chapter to identify the implementation requirements, such as environment variables, database connectivity, and migration of scripts, for creating the migrated environment. You will also be able to identify the deployment requirements, such as packaging and deploying of tools and administering the deployed Microsoft Win32® applications. This chapter also discusses various testing activities that you need to carry out in the Developing Phase.


On This Page

Deployment Considerations
Testing Activities
Interim Milestone: Internal Release n
Closing the Developing Phase

Deployment Considerations

To ensure smooth deployment in the Deploying Phase, you need to address the following topics in the Developing Phase:

  • Process environment

  • Migration of scripts

  • Database connectivity

  • Building the application

  • Deployment

  • Configuration

  • Packaging tools and installation

  • Deploying applications

  • Managing applications

The process for deploying the migrated application is discussed in detail in Volume 5, Deploy-Operate of this guide.

Process Environment

The process environment includes several key elements, which are explained in this section. The notable differences between these elements in Windows are also described briefly. This section discusses the Portable Operating System Interface (POSIX) environment in general because the deployment environment varies with respect to vendor and version of UNIX. This section provides you with the necessary information to set up or retrieve various environment-specific details in the UNIX and Windows environments.

Environment Variables

Every process has an environment block associated with it. An environment block is a block of memory allocated within the address space of the process. Each block contains a set of name-value pairs. Both UNIX and Windows support process environment blocks, although the details vary with the supplier and version of UNIX.

Note For information on conducting this comparison, refer to the MSDN article, "Changing Environment Variables," at

A summary of the notable differences between environment variables in Windows and POSIX is also provided at this URL.

Differences Between UNIX and Windows Environment Variables

Windows supports an ANSI version of the environment functions as well as a Unicode variant. The Unicode variants carry a _w prefix (for example, _wgetenv), and the generic-text versions carry a _t prefix (for example, _tgetenv), which resolves to the correct variant when the application is compiled with the _UNICODE or _MBCS preprocessor definitions. In addition to the ANSI functions putenv and getenv, the Windows application programming interface (API) also supports the GetEnvironmentVariable, GetEnvironmentStrings, ExpandEnvironmentStrings, and SetEnvironmentVariable functions.

The following is a simple example of accessing the environment block. This example works equally well in both the UNIX and Windows APIs. This example shows only the ANSI functions. Using the ANSI functions provides you with the simplest method of converting your code from UNIX to the Windows API.

UNIX/Windows example: Accessing the environment block

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[])
{
  char *var, *value;
  if(argc == 1 || argc > 3) {
    fprintf(stderr,"usage: environ var [value]\n");
    exit(1);
  }
  var = argv[1];
  value = getenv(var);
  if(value)
    printf("Variable %s has value %s\n", var, value);
  else
    printf("Variable %s has no value\n", var);
  if(argc == 3) {
    char *string;
    value = argv[2];
    string = (char *)malloc(strlen(var)+strlen(value)+2);
    if(!string) {
      fprintf(stderr,"out of memory\n");
      exit(1);
    }
    strcpy(string, var);
    strcat(string, "=");
    strcat(string, value);
    printf("Calling putenv with: %s\n",string);
    if(putenv(string) != 0) {
      fprintf(stderr,"putenv failed\n");
      exit(1);
    }
    value = getenv(var);
    if(value)
      printf("New value of %s is %s\n", var, value);
    else
      printf("New value of %s is null??\n", var);
  }
  return 0;
}

(Source File: W_GetEnvVar-UAMV3C8.01.c)

Temporary Files

Both UNIX and Windows APIs support functions that create temporary files.

The tmpnam() function returns a pointer to a temporary file name. The _tempnam() function does this as well, but also lets you specify the directory and a file name prefix.

Computer Information

At times, it is necessary to obtain information about a computer. This is particularly important when an application is designed to support multiple users or different types of hardware and operating systems. Some of the pieces of information that applications require are as follows:

  • Host name

  • Operating system name

  • Network name of the computer

  • Release level of the operating system

  • Version number of the operating system

  • Hardware platform name

In UNIX, you use a combination of the gethostname and uname functions to obtain this information. On Windows, gethostname is available, but uname is not part of the standard Windows API. A POSIX implementation of uname can be added by installing Windows Services for UNIX 3.5; otherwise, applications that use this function need to be rewritten to use a different set of services.

The Platform SDK provides functionality to obtain a set of information similar to that provided by the uname function. The Platform SDK mappings are covered in this text, but it is recommended that you also consider the Windows Management Instrumentation (WMI) API. The WMI interface is a superset of the Windows API for obtaining information about the computer. It is highly extensible and supports not only static information about a platform, but also dynamic information such as configuration and performance data. Another source to consider is the Active Directory Service Interfaces (ADSI), a COM interface that facilitates access to information stored in the Microsoft Active Directory® directory service database for the enterprise. Both these interfaces represent the preferred mechanism for gathering information about Windows Server™ 2003.

Note For a complete list of the system information functions provided by the Platform SDK, you can refer to "System Information Functions" in the online platform SDK documentation on the MSDN Web site at

The two functions GetVersionEx and VerifyVersionInfo are used to get extended information about the operating system and to compare the operating system versions on Windows.

UNIX example: Using system information

#include <unistd.h>
#include <stdio.h>
#include <sys/utsname.h>
int main()
{
  char computer[256];
  struct utsname uts;
  if(gethostname(computer, 255) != 0 || uname(&uts) < 0) {
    fprintf(stderr, "Could not get host information\n");
    return 1;
  }
  printf("Computer host name is %s\n", computer);
  printf("System is %s on %s hardware\n", uts.sysname, uts.machine);
  printf("Nodename is %s\n", uts.nodename);
  printf("Version is %s, %s\n", uts.release, uts.version);
  return 0;
}

(Source File: U_SysInfo-UAMV3C8.01.c)

Win32 example: Using system information

#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <stdlib.h>
#include <stdio.h>
void errabt(char *msg)
{
  fprintf(stderr, "%s", msg); // use GetLastError for more detailed info
  exit(1);
}
void main()
{
  DWORD nSize = 255;
  char computer[256];
  char nodename[256];
  SYSTEM_INFO siSysInfo;   // Struct for hardware info
  OSVERSIONINFO siVerInfo; // Struct for version info
  GetSystemInfo(&siSysInfo); // Get hardware OEM information
  // Get major and minor version numbers
  ZeroMemory(&siVerInfo, sizeof(OSVERSIONINFO));
  siVerInfo.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
  if (!GetVersionEx((OSVERSIONINFO *) &siVerInfo))
    errabt("Could not get OS Version info\n");
  nSize = 255;
  if (GetComputerNameEx(ComputerNameNetBIOS, computer,
      &nSize) == FALSE)
    errabt("Could not get NETBIOS name of computer\n");
  nSize = 255;
  if (GetComputerNameEx(ComputerNameDnsFullyQualified,
      nodename, &nSize) == FALSE)
    errabt("Could not get FQDNS Name of computer\n");
  printf("Computer host name is %s\n", computer);
  printf("System is %u on %u hardware\n",
      siVerInfo.dwPlatformId, siSysInfo.dwProcessorType);
  printf("Nodename is %s\n", nodename);
  printf("Version is %d.%d %s (Build %d)\n",
      siVerInfo.dwMajorVersion, siVerInfo.dwMinorVersion,
      siVerInfo.szCSDVersion,
      siVerInfo.dwBuildNumber & 0xFFFF);
}

(Source File: W_SysInfo-UAMV3C8.01.c)

Logging System Messages

Logging diagnostic messages in UNIX is carried out by writing formatted output to the system logger. The message is written to system log files, displayed on the consoles of logged-on users, or forwarded to the appropriate computer. If a log daemon process is not running, the log information may be written to a standard log file such as /var/adm/log/logger.

The daemon syslogd in UNIX contains numerous levels of logged information, as listed in Table 8.1.

Table 8.1. UNIX Logging System Messages

Priority level: Description

LOG_EMERG: A panic condition.

LOG_ALERT: A condition that should be corrected immediately.

LOG_CRIT: Critical conditions, such as hard device errors.

LOG_ERR: Errors.

LOG_WARNING: Warning messages.

LOG_NOTICE: Non-error–related conditions.

LOG_INFO: Informational messages.

LOG_DEBUG: Messages intended for debug purposes.

In contrast, the Windows event log supports logging levels, as listed in Table 8.2.

Table 8.2. Windows Event Logging Messages

Event type: Description

Error: Error events indicate significant problems that the user should know about. Error events usually indicate loss of functionality or data. For example, if a service cannot be loaded as the system starts, it can log an error event.

Warning: Warning events indicate problems that are not immediately significant, but may indicate conditions that can cause problems in the future. Resource consumption is a good candidate for a warning event. For example, an application can log a warning event if the disk space is low. If an application can recover from an event without loss of functionality or data, it will generally classify the event as a warning event.

Information: Information events indicate infrequent but significant successful operations. For example, when Microsoft SQL Server™ successfully loads, it may be appropriate to log an information event stating that "SQL Server has started." Note that while this is appropriate behavior for major server services, it is generally inappropriate for a desktop application (for example, Microsoft Excel®) to log an event each time it starts.

Success audit: Success audit events are security events that occur when an audited access attempt is successful. For example, a successful logon attempt is a success audit event.

Failure audit: Failure audit events are security events that occur when an audited access attempt fails. For example, a failed attempt to open a file is a failure audit event.

As you can see, the Windows event logging mechanism supports a smaller selection of event priorities than UNIX. You can augment the priority status of event messages by including category information and binary data in the event log. This additional event information is part of the Windows example (following the UNIX example).

UNIX example: System logging

#include <syslog.h>
#include <stdio.h>
int main()
{
  FILE *fp;
  fp = fopen("Bad_File_Name","r");
  if(!fp)
    syslog(LOG_INFO|LOG_USER,"error - %m\n");
  return 0;
}

(Source File: U_SysLog-UAMV3C8.01.c)

On a typically configured Linux system, this message would be logged to /var/log/messages, and on a Solaris system, the message would be logged to /var/adm/messages. For more specific information, consult the /etc/syslog.conf file. Specifically, a *.info entry will specify the file where the message is going to be logged.

Windows example: System logging

#include <windows.h>
#include <stdlib.h>
void main()
{
  HANDLE h;
  LPSTR mstr = "This is an error from my sample app.";
  h = RegisterEventSource(NULL, // uses local computer
       TEXT("BILLSamplApp"));   // source name
  if (h == NULL)
    exit(1);
  ReportEvent(h,             // event log handle
      EVENTLOG_ERROR_TYPE,   // event type
      0,                     // category zero
      0,                     // event identifier
      NULL,                  // no user security identifier
      1,                     // one substitution string
      0,                     // no data
      (LPCSTR*)&mstr,        // pointer to string array
      NULL);                 // pointer to data
  DeregisterEventSource(h);  // close the event log handle
}

(Source File: W_SysLog-UAMV3C8.01.c)

In the preceding example, the source name to the RegisterEventSource call is not available in the system registry. As a result, you will not see valid mapping or lookup data when you view the event log with the Event Viewer. After running this code, Eventvwr.exe would display a window as shown in Figure 8.1.

Figure 8.1. Windows Event Viewer

Double-clicking the error line opens a detailed view of the event (depicted in Figure 8.2).

Figure 8.2. Details of an event in Windows Event Viewer

The preceding example is a very simple example of generating log information and posting it to the Windows event log. A complete application would use more of the Platform SDK facilities to create an application entry in the registry or perhaps create an entirely separate event log file.

Note For a complete discussion of the details and complexities of event logging in Windows, refer to “Set event logging options” on the TechNet Web site at

Migrating Scripts

This section describes the process of porting UNIX shell scripts to the Windows environment. Following are the steps involved in the porting process:

  1. Evaluating the script migration tasks.

  2. Planning for fundamental platform differences.

  3. Considering the target environments.

The steps in the process are described in more detail later in this section. This section helps you choose the appropriate porting approach and the target scripting language in the Windows environment.

Scripts fall into the following two basic categories:

  • Shell scripts, such as Korn and C shell.

  • Scripting language scripts, such as Perl, Tcl, and Python.

Shell and scripting language scripts tend to be more portable than compiled languages, such as C and C++. A scripting language such as Perl is compatible with most platform features. However, the original developer might have used easier or faster platform-specific features, or just might not have taken cross-platform compatibility into consideration.

The choice of porting approach depends on the source script type and whether the target environment is Windows only, Windows plus Interix, or uses CGI scripts.

In the Windows-only environment, a solution is to write all common scripts in Perl because there are several versions of Perl available. If software is to be maintained on UNIX and Windows-based systems, writing all-new scripts in Perl, and even converting some existing shell scripts to Perl, is a good strategy.

Evaluating the Script Migration Tasks

Before script migration begins, all required tasks need to be considered. To identify script migration tasks, consider the following questions:

  • What are the scripting languages being used?

  • Does the script rely on the syntax of the shell?

  • Does the script use substantial external programs?

  • Does the script use any platform-specific services?

  • Does the script use extensions that rely on third-party libraries?

  • Does the script use or rely on nonportable concepts for essential functionality?

  • Can a quick port be done now, with a rewrite later?

  • Does the developer understand enough of the original code to quickly locate the issues and then make the changes necessary to port to a new platform?

By answering these questions, script migration tasks can be evaluated and defined. Redesigning and rewriting portions of the application might be easier than porting because it is more efficient to take advantage of native features.

Planning for Fundamental Platform Differences

While porting scripts, the code must address some inevitable fundamental differences between the platforms. The following areas, which are described in more detail in later sections, are often sources of script migration issues:

  • File system interaction

  • Environment variables

  • Shell and console handling

  • Process and thread execution    

  • Device and network programming

  • User interfaces (UIs)

File System Interaction

UNIX and Windows-based systems interact differently with the file system. The UNIX path separator is a forward slash (/), whereas Windows uses the backslash (\). The root of UNIX files is represented by the forward slash (/), but Windows uses locally mounted drives ([A-Z]:\) and network-accessible drives using the Universal Naming Convention (\\ServerName\SharePoint\Dir\).

The first things you should correct in any code to be migrated are hard-coded file paths. These paths are commonly used to find initialization or configuration files (that is, to set up environment variables or application paths). One common mistake during the initial porting work is to refer to a Windows-based file in the native form. The problem is that the backslash (\) is also the common escape character. As a result, the path C:\dir\text.txt is translated as C:dir ext.txt. (The space is a single tab character.)

In most cases, Windows can handle the forward slash (/) as a path separator. However, when building cross-platform paths, scripting language compilers can misinterpret even correctly used file path separators or methods.

Unlike UNIX file systems, Win32 file systems are not case-sensitive. They may preserve the case of file names, but the same directory cannot contain two different files where only the case of the file name letters differs (for example, file.txt and FILE.txt). Windows also does not allow users to create a file with the same name as the directory in which it is created.

Note When hard-coding paths in a script, be aware that the names of certain Windows directories depend on the language version of Windows. For example, the directory named C:\Program Files\ in the English version of Windows is named C:\Programme\ in the German version.

The exact names for paths and other information that may be critical in porting your code are often found in the Windows registry. For example, the correct path for the Program Files directory can be found in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\ProgramFilesDir.

The Windows registry is a central database of information about your Windows system. The registry contains information such as what hardware is present on the system, how the hardware and system are configured, and what applications are installed on the system. The registry provides fine-grained security: each registry key can be protected with an access control list (ACL) in exactly the same way that files can be protected.

You can refer to the registry when other platform-independent methods are not available. Use the regedit command to peruse the Windows registry. Some of the information stored in the registry is also available by using language APIs, which are safer to use.

Environment Variables

Both Windows and UNIX use environment variables. Although Windows maintains an environment array, its contents are not identical to those of UNIX. The Windows environment array is not case-sensitive, so the environment variables PATH, path, and PaTh all refer to the same item. The PATH variable is similar in purpose across platforms (for example, shells search the directories specified in the PATH environment variable for executables and scripts), but Windows uses a semicolon (;) as a separator, whereas UNIX uses a colon (:). Fortunately, scripting languages usually have features that handle the differences in usage of the PATH variable.

Commonly used UNIX environment variables are HOME, PATH, USER, and TEMP. Windows also has the PATH and TEMP variables. To determine which environment variables are used in a Windows installation, use the abstractions in a scripting language or look in the Windows registry. As an alternative, you can use the following technique.

To see the full contents of the environment

  1. Right-click My Computer, and then click Properties.

  2. In the System Properties dialog box, click the Advanced tab.

  3. Under Environment Variables, click Environment Variables.

  4. In the Environment Variables dialog box, view and modify the environment.

Note that Windows has separate user and system environments. Administrator rights are required to modify the system environment.

Scripts commonly require a temporary data file, which is usually hard-coded to reside in /tmp on UNIX. On Windows, use the TEMP environment variable instead to locate an acceptable temporary file directory. Some scripts also rely on environment variables beginning with LC_, which carry the locale information for the system.

Files are not always the same at the binary level. For example, Windows uses CRLF (carriage return/linefeed or characters \015\012) at the end of a line, whereas UNIX uses only LF. Script environments provide methods for handling this transparently. Another discrepancy is that ^Z (character \032) represents the end-of-file character. A UNIX script with this character embedded in code might ignore it, and Windows might stop reading the file at that point.

Shell and Console Handling

The shell is found on all UNIX desktops. Windows provides a command shell, and Windows Server 2003 stores the path to it in the COMSPEC environment variable. Developers interact with the command shell during testing, but it can interfere or act in unexpected ways during the ordinary operation of a script. Some languages can run without any attachment or specific connection to a terminal. For guidance on how to make the console behave as required, refer to the language specifics.

Some scripts call the shell to reuse existing commands, such as cat, ls, sendmail, date, and grep. Relying on the shell is not recommended because it not only reduces processing power by creating external process execution overhead, it is also highly nonportable. To avoid portability issues, it is better to rely on the methods that the language provides.

For example, the following example might not be portable:

set date [exec date "+%D %H:%M"]

The following example is portable:

Note: The line has been split into multiple lines for readability. However, when trying it out on a system, you must enter it as one line without breaks.

set date [clock format [clock seconds]
-format "%m/%d/%y %H:%M"]

Note that invoking commands from the shell automatically uses wildcard expansion (usually referred to as globbing). When the script relies on globbing, you should use the language methods for file globbing to expand file names. Where calling the shell is unavoidable, it is important to note that the Windows command shell has different native commands and quoting rules.

Process and Thread Execution

The script might need to deal with process manipulation, especially if external system calls are unavoidable. In a language that supports process manipulation, the features are usually portable to Windows Server 2003. However, it is still necessary to evaluate all uses of process manipulation to ensure that the application code is manipulating the correct Windows processes.

It is common in UNIX to manage processes by passing signals, especially for daemon processes and system administration tasks. Signal handling, when handled by the language, is similar to process manipulation. Some uses of signal handling are portable from UNIX to Windows Server 2003, but not all signals are relevant. Windows uses an event-passing model, and a UNIX daemon process ported to Windows needs to respond to these events. When porting a UNIX daemon to Windows, it is necessary to create a Windows service that provides essentially the same functionality.

It is important to note that a fork command can have a different behavior in UNIX, depending on the language. If the fork command is used in UNIX, it is highly recommended that you look at alternative techniques for achieving the same result on Windows. The best solution is to switch to using threads.

Device and Network Programming

Many applications built today use a client/server model or must follow network or interprocess communication (IPC) protocols, such as HTTP, TCP/IP, and UDP. Scripting languages provide varying levels of abstraction over the standard system mechanism for communicating with files and sockets. Because some are more portable than others, it is important to examine socket handling when porting code. Methods for IPC outside socket programming or communicating through a pipe should be avoided because they are normally nonportable. A well-known remote procedure call (RPC) mechanism that works well across platforms and fits well into Web server applications is the Simple Object Access Protocol (SOAP), which most scripting languages already support.

An application that communicates with the serial port or other system device can use the same protocol for interacting with the device, but must often address the device differently. For example, a serial device on UNIX can be addressed as the special file /dev/ttya. On Windows, it is addressed as COM1.

User Interfaces (UIs)

Many scripting languages have access to one or more graphical user interface (GUI) toolkits. If the language used in the script has a GUI toolkit, it is important to determine the portability of that toolkit across platforms.

Tk is a GUI toolkit common to Tcl, Perl, and Python. It is fully cross-platform compatible between UNIX and Windows. Some of the finer points of cursor and font handling can vary between these systems because of the underlying operating system differences.

Scripting Environment

The Common Gateway Interface (CGI) protocol is the standard interface used by Web servers to run programs and scripts that handle dynamic content. Usually, CGI portability is not an issue because CGI is a standardized interface available under all major Web servers. CGI is falling out of favor and is being replaced by other techniques that cost the operating system less and scale better. Any language can be used as a CGI language if it supports reading and writing STDOUT and STDIN console handles, and chances are that many existing scripts are CGI-based. In recent years, many Web server plug-ins have been written for scripting languages to work around performance limitations in CGI, although using these plug-ins sometimes requires minor changes to the CGI script itself. Apache has direct language plug-ins for Perl (mod_perl), PHP (mod_php), and Tcl (mod_tcl). Through the Internet Server API (ISAPI), Internet Information Server (IIS) has a direct language plug-in for Perl called PerlEx.

Database Connectivity

This section describes various database connectivity mechanisms compatible with UNIX and Windows applications and provides an overview of each of these mechanisms.

Microsoft offers many data access technologies for various database management systems (DBMSs). Microsoft Data Access Components (MDAC) includes ActiveX® Data Objects (ADO), OLE DB, and Open Database Connectivity (ODBC). Data-driven applications can use these components to easily integrate information from a variety of sources—both relational (SQL) and nonrelational.

Note Detailed information on MDAC is available at

As stated, MDAC includes:

  • ADO. ADO provides consistent, high-performance access to data and supports a variety of development needs, including the creation of front-end database clients and middle-tier business objects that use applications, tools, languages, or Internet browsers.

    ADO provides an easy-to-use interface to the OLE DB, which provides the underlying access to data. It uses the COM automation interface available from all leading rapid application development (RAD) tools, database tools, and languages.

Note Additional information is available at

  • OLE DB. Microsoft OLE DB is a set of COM-based interfaces that expose data from a variety of relational and nonrelational data providers. OLE DB interfaces provide applications with uniform access to data stored in diverse information sources.

    OLE DB comprises a programmatic model consisting of:

    • Data providers. These contain and expose data.

    • Data consumers. These use data.

    • Service components. These process and transport data (such as query processors and cursor engines).

    In addition, OLE DB includes a bridge to ODBC to enable continued support for the broad range of ODBC relational database drivers.

    The following OLE DB providers are available:

    • OLE DB provider for ODBC

    • OLE DB provider for Oracle

    • OLE DB provider for SQL Server

Note Additional information on these OLE DB providers is available at

The Microsoft OLE DB core services and the Microsoft SQL Server OLE DB provider support 64-bit Windows.

Note For more information on this, refer to MSDN Web site at

  • ODBC. ODBC is a C programming language interface that makes it possible for applications to access data from a variety of DBMSs. Using ODBC, an application can access data in diverse DBMSs through a single interface. The application is independent of any DBMS from which it accesses data. Users of the application can add software components called drivers, which create an interface between an application and a specific DBMS.

    ODBC drivers provide access to the following types of data sources:

    • Microsoft Access

    • Microsoft Excel

    • Paradox

    • DBASE

    • Text

    • Oracle

    • Visual FoxPro®

Note Additional information on ODBC drivers is available at

The ODBC headers and libraries shipped with MDAC 2.7 SDK allow programmers to write code for the new 64-bit platforms. An application with code that uses the ODBC defined types in the ODBC libraries of MDAC 2.7 can use the same source code both for 64-bit and 32-bit platforms.

Note Additional information on this is available at

Building the Application

This section describes the Visual Studio® .NET 2003 integrated development environment (IDE) that can be used to build and debug Windows applications. IDEs typically provide all the development tools needed for programming, including compilers, linkers, and project/configuration files that generate complete applications, create new classes, and integrate those classes into the current project. IDEs also include file management for sources, headers, documentation, and other material to be included in the project. IDEs can also support creating UI elements and resources such as icons, bitmaps, and cursors, and maintaining language resources such as string tables. Other typical capabilities include debugging the application and integrating any other program needed for development by adding it to a Tools menu.

Visual Studio .NET 2003 includes a complete set of development tools for building reusable Win32/Win64 applications. With Visual Studio .NET 2003, you can:

  • Do programming through wizards, perform drag-and-drop editing, and reuse program components from any of the Visual Studio .NET languages. Because of the use of programming wizards, you need to write less code.

  • Write code more quickly by minimizing errors with syntax and programming assistance within the editor.

  • Integrate dynamic HTML, script, and components into Web solutions.

  • Manage Web sites from testing to production by means of integrated site management tools.

  • Create and debug Active Server Pages (ASP).

  • Use design-time controls to visually assemble data-driven Web applications.

Visual Studio .NET 2003 also includes the Windows 2000 Developer’s Readiness Kit, which contains developer training and technical resources.

Note Additional information on using Visual Studio .NET is available at


Deployment

The following sections discuss how you can configure, package and install, deploy, and manage your application. They specifically cover using the Microsoft Windows Installer service and other tools for packaging your application. You can use the information provided in this section to identify critical parameters necessary for deployment, such as the best packaging tool and the most suitable mechanism for deploying your Win32/Win64 application.


Configuration

Occasionally, it is desirable to store information on a user's computer. The reason for doing so may be to store program settings that should persist from one invocation of the program to the next, registration details, or the connection string for a database.

The registry is a system-defined database in which applications and system components store and retrieve configuration data. The data stored in the registry varies according to the version of Windows. Applications use the registry API to retrieve, modify, or delete registry data.
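As a sketch of the idea, the following Python fragment persists a per-user setting under HKEY_CURRENT_USER. The vendor and application names ("Contoso", "PayrollApp") are placeholders, and the registry write itself, via the standard winreg module, is attempted only on Windows.

```python
import sys

def settings_key_path(vendor, app):
    """Return the conventional per-user registry subkey for an
    application's settings (relative to HKEY_CURRENT_USER)."""
    return "Software\\%s\\%s" % (vendor, app)

def save_setting(vendor, app, name, value):
    """Persist a string value; on Windows this writes the registry,
    elsewhere it is a no-op so the sketch stays portable."""
    if sys.platform == "win32":
        import winreg  # standard library, Windows only
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                               settings_key_path(vendor, app))
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)
        winreg.CloseKey(key)
```

A call such as `save_setting("Contoso", "PayrollApp", "ConnStr", "DSN=payroll")` would make the setting survive from one invocation of the program to the next.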

Note Additional information on the registry API is available at

Packaging Tools and Installation

This section explains how to package your migrated Win32/Win64 application and install it into a Windows environment. The standard method of packaging applications in a Win32/Win64 environment is to use the Windows Installer service. This section covers the Windows Installer and some of the tools that you can use to package your application.

Windows Installer Service

The Windows Installer service uses the Windows Installer package, which is now the standard way that application developers deliver software for the Windows platform. Using the format and a common set of actions standardized by Microsoft, tool vendors can add value in creating, editing, and distributing customized Windows Installer packages supporting such features as self-repair, rollback of the installation, and installation of the selective features of the application.

The Windows Installer package includes the libraries that are implicitly linked in the application. If the application links libraries explicitly, however, those libraries must be added to the package explicitly.

The .msi file format contains all of the instructions that a program needs to install itself, which includes locations of files, movements or deletions of existing files, creation of shortcut icons, a Start menu entry, registry settings, ACL changes, Windows service installation or changes, and COM component registration. The program files may be contained in the .msi or in one or more .cab (compressed) files. The .msi uses database tables to describe the features and components of the product, the relations between the two, and all of the actions required to install, upgrade, or uninstall the application.

The installer will copy the files of the application to their correct locations from an embedded file or from the accompanying .cab file or files. Usually, the Windows Installer is invoked when a computer starts and logs on to the domain (by using an Active Directory computer account) or when a user logs on to the computer. The Group Policy feature of Active Directory is used to attach .msi installation packages to these events, although you can also invoke the installer locally by using other means such as scripting or the scheduler service.
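Invoking the installer locally amounts to running msiexec.exe with the appropriate switches (/i to install, /qn for no UI, /L*v for verbose logging). The sketch below only composes such a command line; the package and log file names are placeholders.

```python
def msiexec_args(package, quiet=True, logfile=None):
    """Build an msiexec command line for installing a Windows
    Installer package: /i installs, /qn suppresses the UI,
    /L*v writes a verbose log."""
    args = ["msiexec", "/i", package]
    if quiet:
        args.append("/qn")
    if logfile:
        args += ["/L*v", logfile]
    return args

# On a Windows system, this could be handed to subprocess.run:
# subprocess.run(msiexec_args("app.msi", logfile="install.log"))
```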

The installation process leaves a copy of the .msi instructions on the local computer. Each time an application is launched, it checks the feature and component listings in the local .msi to see if everything is still intact. Missing or corrupt components can trigger an automatic repair known as self-healing. Self-healing can also be triggered during the operation of an application when a component is dynamically loaded. This feature is useful when .msi files are well architected by the software developer, but it is often turned off by administrators when the .msi has been repackaged. This is because many large applications are, by necessity, built as single feature packages, and self-healing would force the administrator to reinstall the entire application.

Installation on Demand

Install-on-Demand is one of the most useful features of the Windows Installer service. It is the feature that, in a product such as Microsoft Office, prompts you for the installation media when you select an item that you have not used before. By using Install-on-Demand, you can leave components that you do not access frequently (grouped into feature sets) uninstalled and on the server until a user invokes them. You can apply this to entire applications in the case of Group Policy software advertisements, where only a desktop shortcut is deployed initially, or you can apply it to application features such as spelling checkers, graphic libraries, or charting. To use this feature properly, you must understand how to divide the application into independent chunks (or components), and you must then choose which components to share across an entire installation and which components to hold back until they are needed. The local .msi database maintains a list of installation points where it can find the necessary files at the time of installation. You can edit this list by using subsequent update (.msp) files to update network locations, such as Distributed File System (DFS) shares.
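The division into features and shared components can be pictured as a dependency map. In this hypothetical sketch (all feature and component names are invented), resolving a feature selection yields the components that must be installed now and those that Install-on-Demand can leave on the server.

```python
# Hypothetical feature -> component map for an application package.
FEATURES = {
    "core":     {"app.exe", "common.dll"},
    "spelling": {"speller.dll", "common.dll"},
    "charting": {"charts.dll"},
}

def components_needed(selected):
    """Union of the components required by the selected features."""
    needed = set()
    for feature in selected:
        needed |= FEATURES[feature]
    return needed

def deferred(selected):
    """Components that can stay on the server until first use."""
    all_components = set().union(*FEATURES.values())
    return all_components - components_needed(selected)
```

Note that common.dll is shared across features, which is exactly the decision the text describes: which components to share across the installation and which to hold back.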

Installation Rollback

As the installation occurs, the service backs up overwritten files and keeps track of any changes that are made to the system, such as registry entries or ACL changes. If the setup is not run to completion, an automatic rollback restores the original state of the system. You can run Windows Installer with verbose logging options to record these events. However, there are no built-in alert mechanisms in Group Policy. Most large enterprises using Active Directory as their only means of distributing software have developed scripts to query the .msi log files or interrogate the .msi local database by using Windows Management Instrumentation (WMI).
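The rollback behavior described above can be modeled as a journal of undo actions that is replayed in reverse when a step fails. This is an illustration of the concept only, not the installer's actual implementation:

```python
class Transaction:
    """Apply a list of (do, undo) steps; if any step fails, run the
    undo actions of the completed steps in reverse order, restoring
    the original state."""
    def __init__(self, steps):
        self.steps = steps

    def run(self):
        done = []
        try:
            for do, undo in self.steps:
                do()
                done.append(undo)
        except Exception:
            for undo in reversed(done):  # rollback
                undo()
            return False
        return True

# Simulate an installation into a dict standing in for the file
# system, with a step that fails partway through:
state = {}
def fail():
    raise RuntimeError("disk full")
tx = Transaction([
    (lambda: state.update({"app.exe": "v2"}), lambda: state.pop("app.exe")),
    (fail, lambda: None),
])
completed = tx.run()
```

After the failed run, `state` is empty again: the partially written file was backed out, which is the guarantee installation rollback provides.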

Installation Auditing

The following Visual Basic Scripting Edition (VBScript) code interrogates the local .msi database and lists installed packages. You can use this, in conjunction with remote scripting, as the basis of a simple installation audit.

Windows example: Installation auditing

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

Dim installer, product
Dim version
Dim productList, productString
productList = ""
Set installer = Wscript.CreateObject("WindowsInstaller.Installer")
For Each product In installer.Products
    version = CLng(installer.ProductInfo(product, "Version"))
    version = (version\65536\256) & "." & _
              (version\65536 Mod 256) & "." & _
              (version Mod 65536)
    productString = installer.ProductInfo(product, "ProductName") _
                    & vbCrLf & " ID: " & product _
                    & " Version: " & version & vbCrLf
    productList = productList & productString & vbCrLf
Next
If productList <> "" Then
    productList = "Found " & installer.Products.Count & _
                  " applications" & vbCrLf & vbCrLf & productList
Else
    productList = "No .msi applications listed."
End If
WScript.Echo productList

The following is an example of output from this script:

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

E:\>cscript ListMSIDB.vbs
Microsoft (R) Windows Script Host Version 5.1 for Windows
Copyright (C) Microsoft Corporation 
1996-1999. All rights reserved.
Found 7 applications
Windows Server 2003 Administration Tools
 ID: {B7298620-EAC6-11D1-8F87-0060082EA63E} Version: 5.0.0
Microsoft Windows Services for UNIX
 ID: {E8A81EF0-40DB-4B5B-ABE8-558D69CE2F09} Version: 7.0.1620
Hummingbird Exceed
 ID: {CFBD3858-2164-42B0-84A2-576C18C85082} Version: 7.1.0
Microsoft Office XP Professional with FrontPage
 ID: {90280409-6000-11D3-8CFE-0050048383C9} Version: 10.0.2627
 ID: {6F716D8C-398F-11D3-85E1-005004838609} Version: 9.0.3501
Windows Server 2003 Support Tools
 ID: {242365CD-80F2-11D2-989A-00C04F7978A9} Version: 5.0.2072
Microsoft Windows Server 2003 Resource Kit
 ID: {4E1F3FCF-B205-427F-B52B-D13BDFB6526C} Version: 5.0.2092
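The arithmetic in the script above unpacks the DWORD in which Windows Installer stores a product version: the major number in the top byte, the minor number in the next byte, and the build number in the low word. The same decoding, restated in Python:

```python
def decode_product_version(packed):
    """Unpack an MSI ProductVersion DWORD into 'major.minor.build':
    major is the top byte, minor the next byte, build the low word."""
    major = packed // 65536 // 256
    minor = (packed // 65536) % 256
    build = packed % 65536
    return "%d.%d.%d" % (major, minor, build)

# 83888152 is (5 << 24) | (0 << 16) | 2072, i.e. version 5.0.2072,
# matching the Support Tools entry in the sample output above.
version = decode_product_version(83888152)
```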
Security Rights and the Windows Installer Service

The Windows Installer service, when invoked by Active Directory Group Policy, runs a managed installation. This process runs under the Local System account, which has administrative rights. This allows applications to be installed on systems that are locked down (that is, systems on which the end users have limited rights and abilities).

Update Files

Another aspect of the Windows Installer service is the .msp, or update, file. This is a specially formatted .msi that can identify and update existing installations of a product by its unique product and version numbering. Many organizations create build images by using the .msi format so that future updates and upgrades can be deployed by using Group Policy in Active Directory.

Windows Installer Service Transforms

If you have worked with Microsoft Office 2003 or Microsoft Office XP and are familiar with the Resource Kit Custom Installation Wizard, you have seen the transform technology of the Windows Installer. Transforms allow you to amend the installation instructions in the .msi file on the fly. These are usually authored using one of the commercial editing tools such as WinInstall or InstallShield.

A limited variety of UI widgets are available to the Windows Installer to gather user input during the installation and to look up information in outside files.

Creating New Windows Installer Service Packages

There are many tools available for packaging, distributing, installing, and managing applications in the Windows Installer format (that is, .msi files). Microsoft Visual Studio .NET 2003 can create installation packages in the .msi format. In addition, there are third-party tools that help you create and manage Windows Installer packages. These products typically provide the following features:

  • An IDE for developing installation packages.

  • Installation script editors.

  • Installation debuggers.

  • Options for Internet-based installations.

  • Support for password and digital signature security options on installation packages.

  • Support for the Windows Installer update files (.msp files).

  • Support for the Windows Installer transform files (.mst files).

  • Source control integration.

The following are three third-party products that you can also use:

  • InstallShield Developer

Note Information is available at

  • Wise for Windows Installer

Note Information is available at

  • Veritas WinINSTALL

Note Information is available at

Repackaging Applications

If a software installation process that has already been created does not support the Windows Installer (.msi) standard, you can use a repackaging application. At a minimum, a repackaging application will allow you to:

  • Create fully featured Windows Installer setups by capturing installations that are not based on Windows Installer.

  • Allow installations to be customized.

  • Check and resolve any installation conflicts.

The following are two repackaging applications that you can use:

  • InstallShield AdminStudio

Note Information is available at

  • Wise Package Studio

Note Information is available at

Deploying Applications

The following subsections describe major activities during the deployment and the various policies used during the deployment process.

Deploying Applications with Group Policy Objects

Active Directory supports a technology known as Group Policy. You can assign Group Policy objects (GPOs) to users or computers, and you can associate them with any of the hierarchical containers that make up the directory structure. This means, for instance, that you can apply a policy to all the computers in an engineering department at a particular site or even across the organization, while the computers in an accounting department have their own policies. GPOs can be filtered by user groups in Active Directory so that you can keep precise control over which applications each user receives.

GPOs can set and enforce hundreds of settings on desktop computers, including all of the security settings, but the setting applicable here is the software distribution policy setting. You use the software distribution policy to deploy Windows Installer files (.msi files). The Windows Installer service, msiexec.exe, can be set by Group Policy to run with elevated (administrator-level) privileges. Thus an installation program that needs access to resources that a typical user would not have access to (for example, directories and registry entries) can still operate without the user having power user or administrator privileges.

Deploying Applications with Systems Management Server

Applications can be deployed using the Group Policy software distribution feature of Active Directory. However, there are several limitations of using GPOs for software deployment. These are addressed by Microsoft Systems Management Server (SMS). Here is a summary of the main reasons for using SMS instead of GPOs for application deployment:

  • Active Directory Group Policy requires the applications to be in the Windows Installer (.msi) format, whereas SMS can deploy any executable package, including setup programs, scripting, and batch files. Many large applications include legacy setup architectures that are difficult or impossible to replicate in a repackaged .msi installation.

  • Group Policy requires a user to log on or a computer to be restarted to initiate a software deployment policy.

  • SMS has extensive Microsoft SQL Server-based reporting capabilities.

  • SMS includes an extensive hardware and software inventory.

  • SMS allows you to query the client computer before installation to ensure adequate disk space, memory, operating system version, and other software dependencies.

  • SMS does not require Active Directory, although SMS can use Active Directory if it is available.

  • Software installations can be advertised to users through desktop shortcuts, the SMS client icon, and Control Panel. Software installations can also be pushed to a client without user intervention.

  • SMS allows you to define computer groups separately from Active Directory users and groups, based on inventory information.

Deploying Win32/Win64 Applications

This section describes different methods of deploying Win32/Win64 applications and how you can use them.

Deploying Win32/Win64 Applications by Pushing Them to the Desktop

Active Directory Group Policy software deployment or other systems that rely on a user logging on or a computer being restarted might need to have a second method of delivery that can be deployed without user intervention to client desktops.

SMS and other enterprise-level systems can achieve this as part of their typical client-server interaction. Other methods of achieving this include remote scripting or maintaining a service (daemon) on each desktop that checks for updates periodically.

Two-Phase Deployment of Win32/Win64 Desktop Applications

When deploying locally installed applications, you might want to avoid the distribution of large installation images over the network over a short period of time. To do so, you can use a two-stage approach, sometimes referred to as a knife-edge installation. In this scenario, the two stages are as follows:

  1. Deploy the installation image. Deploy an .msi or SMS package that is designed to do nothing more than copy a potentially large installation image to the local disk of each user, possibly in a partition reserved for this purpose.

  2. Schedule the installation job. A second package or job is then scheduled with the actual installation instructions that operate against this local image. This can be an incremental deployment over several days or weeks.

In this way, large numbers of users can simultaneously install a new version of an application without affecting the network and without raising data compatibility issues.
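The two stages reduce to a local copy made ahead of time and a later install step that reads only from that local image. A minimal sketch, using throwaway directories in place of the network share and the user's disk:

```python
import shutil
import tempfile
from pathlib import Path

def stage_image(source_dir, cache_dir):
    """Stage 1: copy the installation image to the local cache.
    This is the only step that touches the network."""
    dest = Path(cache_dir) / "image"
    shutil.copytree(source_dir, dest)
    return dest

def install_from_image(image_dir, target_dir):
    """Stage 2: scheduled later, installs against the local image
    only -- no network traffic at installation time."""
    shutil.copytree(image_dir, target_dir)

# Demonstration with temporary directories standing in for the
# distribution share and the client's disk:
share = tempfile.mkdtemp()
Path(share, "setup.dat").write_text("payload")
cache = tempfile.mkdtemp()
image = stage_image(share, cache)
install_from_image(image, Path(cache, "installed"))
installed_ok = Path(cache, "installed", "setup.dat").exists()
```

Because stage 1 can trickle out over days or weeks, the actual installation in stage 2 can then be triggered for many users at once without a network spike.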

Another benefit of this technique is that multiple versions of the installation image of the application could be stored on the local drive for rapid rollback or piloting new versions. To maintain this rolling cache of images, you need well-tested install and uninstall jobs, packages, and processes.

If a deployment system such as SMS is in place with some additional Wake-On-LAN support, you can deliver the image deployment and installation packages outside ordinary working hours.

Large package deployments can also use compression technologies such as WinZip to deploy the package without dependence on the .msi format.

Side-by-Side Deployment of Win32/Win64 Applications

Although the Windows platform has been a successful development platform in part because of its built-in component-sharing mechanisms, these same shared components have also caused administrative headaches. Components from Microsoft (which are provided as part of the base operating systems, option packs, service packs, and various add-ins) and numerous third-party sources save developers countless hours. However, true backward compatibility means that shared components must function exactly as they did in previous versions while providing new functionality. In the real world, this is difficult to achieve because all configurations in which the component may be used need to be tested.

The practical functionality of a component is also not easily defined. Applications may become dependent on unintended side effects that are not considered part of the core function of the component. For example, an application may become dependent on an anomaly in the component, which when fixed causes the application to fail. The fact that dynamic-link libraries (DLLs) have been upgraded to newer internal versions while keeping the same names has also caused confusion.

This lack of backward compatibility can result in the inability to deploy a new application without breaking applications that are already deployed or compromising the functionality of the new application. To provide for successful sharing while enhancing application stability, Microsoft introduced side-by-side sharing starting in Windows 98 Second Edition and in Windows Server 2003, creating a way to share components through isolation.

With side-by-side components, multiple versions of the same component can be installed, and applications can use the one version that is most suitable.

Two different processes can load different versions of a Win32/Win64 or COM component at the same time and, independently, unload those components as required.

As outlined in the Windows Server 2003 logo certification guidelines, the best practice is to develop new applications and components with side-by-side use in mind, but there is also a way to selectively isolate the majority of the existing components by using a Windows redirection mechanism. Although redirection does not require changing any code, it does need to be thoroughly tested to ensure that the applications on the system continue to operate normally. Because these components may now be distributed into the directories of many applications, there is also an increase in the complexity of administering the components.

Creating new components for side-by-side use is the best way to guarantee that applications can load them independently. In this case, the component must be developed with careful attention to where global data and state information are stored. Any other factors that can affect having multiple versions of components in memory simultaneously must also be addressed. For instance, instead of storing a particular setting using a registry key such as:

RegKeyName = SomeValue

the component would be better isolated using a version-specific key such as:

VersionNumber\RegKeyName = SomeValue

or even a version- and application-specific key:

VersionNumber\ApplicationName\RegKeyName = SomeValue
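The variants above differ only in how much context is folded into the key path. A small sketch that builds each form (the key, version, and application names are placeholders):

```python
def reg_path(key, version=None, app=None):
    """Build a registry subkey path, optionally qualified by version
    and application for side-by-side isolation."""
    parts = []
    if version:
        parts.append(version)
    if app:
        parts.append(app)
    parts.append(key)
    return "\\".join(parts)
```

Two processes loading version 1.0 and version 2.0 of the same component would then read and write disjoint keys instead of clobbering each other's settings.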

Shared memory structures such as memory-mapped files and named pipes also need to be taken into account and renamed or relocated on a per version basis. Better still, a component should be designed to be as stateless as possible and to let the client application handle state and user-specific data as much as possible. Where the component really does need to store its own state information, it should use a method or property of the client software instead of modifying memory structures or registry settings directly.

Windows Server 2003 allows administrators to take advantage of side-by-side loading with existing components by using a feature called DLL redirection. The operating system changes its default method of locating components if it finds a special file in the directory with the application that is loading the component. The file itself is empty but is specifically named to match the application executable name and has a .local suffix. For instance, myapp.exe would have an empty file next to it named myapp.exe.local. When Windows Server 2003 encounters this file, it looks for a requested component in the directory where the calling application is located or in the subdirectories below it. It will use the version of the component it finds there, no matter what path the system has registered for that component. If it cannot find a version in the directory structure of the application, the system will revert to using the path that is registered. Applications without a .local file continue to use the system registered path.
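The redirection trigger is purely name-based: an empty marker file beside the executable, named after it with a .local suffix. A sketch of deriving that name (the application path shown is illustrative):

```python
from pathlib import PureWindowsPath

def dotlocal_name(exe_path):
    """Name of the empty marker file that enables DLL redirection
    for the given executable."""
    p = PureWindowsPath(exe_path)
    return p.name + ".local"

marker = dotlocal_name("C:\\Apps\\myapp.exe")
```

Creating an empty file with this name in the application's directory is all that is required to switch that application to the local-first component search.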

This method works for most components, but it needs to be tested to ensure that applications that use different versions can actually coexist. Components that store global state in shared memory structures, or in registry keys not tied to a particular application or application version, may not operate correctly side by side. Sometimes this can be overcome simply by not running applications that load different versions at the same time, but each administrator must decide whether that is acceptable to his or her user base. Other components may use relative paths to access system resources or other components, assuming that they are located in a particular directory. Some things can be safely moved or copied to satisfy this, but Windows system components, especially those protected by Windows file protection, should never be moved.

Managing Applications

Windows includes the Windows Management Instrumentation (WMI) component to manage or monitor applications. WMI is a component of the Windows operating system and is the Microsoft implementation of Web-based Enterprise Management (WBEM), which is an industry initiative to develop a standard technology for accessing management information in an enterprise environment. WMI uses the Common Information Model (CIM) industry standard to represent systems, applications, networks, devices, and other managed components. You can use WMI to automate administrative tasks in an enterprise environment.

Additional information is available at
Additional information on WBEM implementations on UNIX is available at

Testing Activities

This section discusses the testing activities designed to identify and address potential solution issues before deployment. Testing starts when you begin developing the solution and ends when the testing team certifies that the solution components meet the schedule and quality goals established in the project plan.

Testing in migration projects involving infrastructure services is focused on finding discrepancies between the behavior of the original application, as seen by its clients, and the behavior of the newly migrated application. All discrepancies must be investigated and fixed.

In the Developing Phase, the testing team executes the test plans for acceptance tests on the application submitted for a formal round of testing on the test environment. The testing team assesses the solution, makes a report on its overall quality and feature completeness, and certifies that the solution features, functions, and components address the project goals.

The inputs required for the Developing Phase include:

  • Functional specifications document.

  • A feature-complete application, which has been unit tested.

The documents that are used during the Developing Phase include:

  • Test plan. The test plan is prepared during the Planning Phase. It should describe in detail everything that the test team, the program management team, and the development team must know about the testing to be done.

  • Test specification. The test specification conveys the entire scope of testing required for a set of functionality and defines individual test cases sufficiently for the testers. It also specifies the deliverables and the readiness criteria.

  • Test environment. The test environment is an exact replica of the production environment; it is used to test the application under realistic conditions. The test environment documentation also describes the software, hardware, and tools required for testing purposes.

  • Test data. The test data is a set of data for testing the application. Test data is usually a diverse set of data that helps test the application under different conditions.

  • Test report. The test report is an error report of the tests done. It includes a description of the errors that occurred, steps to reproduce the errors, severity of the errors, and names of the developers who are responsible for fixing them.

    The test report is updated during the Stabilizing Phase and is also one of the outputs of this phase, along with the tested and stabilized application.

The key deliverables of the Developing Phase include:

  • Application ready to be deployed in the production environment.

  • Application source code.

  • Project documentation and user manual.

  • Test plan, test specification, and test reports.

  • Release notes.

  • Other project-related documents.

Testing begins with a code review of the application and unit testing. In the Developing Phase, the application is subjected to various tests. The test plan organizes the testing process into the following elements:

  • Code component testing

  • Integration testing

  • Database testing

  • Security testing

  • Management testing

You can test the migrated application in all the scenarios using a defined testing strategy. Although each test has a different purpose, together they verify that all system elements are properly integrated and perform their allocated functions.

Code Component Testing

A component may be a class or a group of related classes performing a similar task. Component testing is the next step after unit testing. Component testing is the process of verifying a software component with respect to its design and functional specifications.

Component testing in a migration project is the process of finding the discrepancies between the functionality and output of components in the Windows application and the original UNIX application. Basic smoke testing, boundary conditions, and error test cases are written based on the functional specification of the component.
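A direct way to drive such tests is to treat the original implementation as the oracle: run both versions over the same smoke, boundary, and error inputs and record any input where the outcomes diverge. In the toy sketch below, both functions are invented for illustration, and the migrated version carries a deliberate off-by-one bug so that the comparison finds it.

```python
def original_parse_port(s):
    """Stand-in for the reference behavior of the UNIX component."""
    n = int(s)
    if not 0 <= n <= 65535:
        raise ValueError("port out of range")
    return n

def migrated_parse_port(s):
    """Stand-in for the migrated Windows component, with a
    deliberate off-by-one bug in the upper bound."""
    n = int(s)
    if not 0 <= n <= 65536:  # bug: should be 65535
        raise ValueError("port out of range")
    return n

def discrepancies(cases):
    """Run both implementations over the cases and collect every
    input whose result (or raised error) differs."""
    diffs = []
    for case in cases:
        def outcome(fn):
            try:
                return ("ok", fn(case))
            except Exception as e:
                return ("err", type(e).__name__)
        if outcome(original_parse_port) != outcome(migrated_parse_port):
            diffs.append(case)
    return diffs

# Smoke, boundary, and error cases drawn from the functional spec:
report = discrepancies(["80", "0", "65535", "65536", "-1", "http"])
```

Only the boundary input "65536" is reported, which is exactly the kind of discrepancy between the migrated and original components that this testing round exists to surface.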

The code component testing round tests the components for the following:

  • Functionality

  • Input and output, interactions within and with other components

  • Response to stress

  • Performance

The test cases for component testing cover, either directly or indirectly, constraints on their inputs and outputs (pre-conditions and post-conditions), the state of the object, interactions between methods, attributes of the object, and other components. The code component testing requires the following inputs:

  • Test plan and specification. It provides the test cases.

  • System requirements. These are used to determine the required behaviors for individual domain-level classes. The use case model is also used to determine which parts of a component must be tested for vulnerabilities.

  • Specifications of the component. The specifications are used to build the functional test cases. Information on the component inputs, outputs, and interactions with other components can be derived from here.

  • Design document. The actual implementation of the design provides the information necessary to construct the structural and interaction test cases.

Components must also be stress tested. Stress testing is the process of loading the component to the defined and undefined limits. Each component must be stressed under a load to ensure that it performs well within a reasonable performance limit.

System CPU and memory usage per component can also be measured and monitored to determine the performance of individual components. For this, you can use such tools as the Windows Performance Monitor. For more information, refer to the "Testing and Optimization Tools" section of Chapter 9, “Stabilizing Phase” of this volume.

Integration Testing

Integration testing involves testing the application as a whole, with all the components of the application put together. Component testing is done during the testing performed in the Developing Phase. Integration testing is the process of verifying the application with respect to the behavior of components in the integrated application, interaction with other components, and the functional specifications of the application as a whole. Integration testing in a migration project is the process of finding discrepancies in the interaction between components and the behavior of components in the Windows application and the original UNIX application.

Integration testing tests the components for:

  • Functionality: behavior of the application as a whole and the individual components after integration.

  • Input and output: interactions within and with other components.

  • Response to various types of stresses.

  • Performance.

Test cases for integration testing directly or indirectly include functionality of the components, constraints on their inputs and outputs (pre-conditions and post-conditions), the state of the object, interactions between components, attributes of the object, and other components. Inputs required for integration testing include:

  • Test plan. It provides the details of testing the application.

  • Test specification. It is used to determine the required behaviors for individual domain-level classes. The use case model is also used to determine which parts of the application must be tested for vulnerabilities.

The application must also be stress tested. Stress testing is the process of loading the application to the defined and undefined limits to ensure that it performs well within a reasonable performance limit.

System testing is also performed after completion of integration testing. System testing is the process of ensuring that the integrated application is compatible with all supported platforms and of testing it against its requirements. The system CPU and memory usage for the application can also be measured and monitored to determine performance. For this, you can use such tools as the Windows Performance Monitor.

Note For more information, refer to the "Testing and Optimization Tools" section of Chapter 9, "Stabilizing Phase."

Database Testing

The database component is a critical piece of any data-enabled application. In a migration project, the database may be the same or may have been replaced by another database. In both cases, data must be migrated to the respective database on Windows. Testing of a migrated database includes testing of:

  • Migrated procedural code.

  • Data integration with heterogeneous data sources (if applicable).

  • Customized data transformations and extraction.

Database testing also involves testing at the data access layer, which is the point at which your application communicates with the database. Database testing in a migration project involves:

  • Testing the data and the structure and design of the migrated database objects.

  • Testing the procedures and functions related to database access.

  • Security testing, which tests the database to guarantee proper authentication and authorization so that only users with the appropriate authority can access the database. The database administrator must establish different security settings for each user in the test environment.

  • Testing of the data access layer.

  • Performance testing of the data access layer.

  • Manageability testing of the database.
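As a minimal sketch of the first item in the list above, a test can compare the structure and row counts of a migrated table against the source. SQLite is used here purely as a stand-in for the actual source and target databases, and the table is hypothetical:

```python
import sqlite3

def build_sample_db(conn, rows):
    """Create a table standing in for a migrated database object."""
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", rows)
    conn.commit()

def verify_migration(source_conn, target_conn, table):
    """Check that the migrated table preserves column layout and row count."""
    def columns(conn):
        # PRAGMA table_info returns one row per column; index 1 is the name.
        return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

    def count(conn):
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    return (columns(source_conn) == columns(target_conn)
            and count(source_conn) == count(target_conn))
```

A full database test suite would also compare data values, constraints, indexes, and the behavior of migrated procedural code; structure and counts are only the first gate.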

An application typically maintains the following three databases, which are replicas of each other:

  • Development database. This is where most of the testing is carried out.

  • Deployment database (or integration database). This is where the tests are run prior to deployment to ensure that the local database changes are applied.

  • Live database. This has the live data; it cannot be used for testing.

Database testing is done on the development database during development, and the integrated application is tested using the deployment database.
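To keep tests pointed at the correct replica, the target database can be selected by configuration, with a guard against accidentally running tests against the live database. The connection strings and environment variable below are illustrative only:

```python
import os

# Hypothetical connection strings; names are illustrative only.
DATABASES = {
    "development": "server=devdb;database=app_dev",
    "deployment": "server=stagedb;database=app_stage",
    "live": "server=proddb;database=app",
}

def connection_string(environment=None):
    """Select the database for the current test stage.

    Defaults to the development database so that tests never touch the
    live database by accident; selecting "live" is refused outright.
    """
    env = environment or os.environ.get("APP_DB_ENV", "development")
    if env == "live":
        raise RuntimeError("Tests must not run against the live database")
    return DATABASES[env]
```

The same pattern extends naturally to credentials and schema names that differ between the development and deployment databases.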

Security Testing

Security is about controlling access to a variety of resources, such as application components, data, and hardware. Security testing is performed on the application to ensure that only users with the appropriate authority are able to use the applicable features of the application. Security testing also involves testing the application from the point of view of providing the same security features and measures that were provided by the original application.

To ensure that the application is secure, most security measures rely on the following four concepts:

  • Authentication. This is the process of confirming the identity of the users, which is one layer of security control. Before an application can authorize access to a resource, it must confirm the identity of the requestor.

  • Authorization. This is the process of verifying that an authenticated party has the permission to access a particular resource, which is the layer of security control that follows authentication.

  • Data protection. This is the process of providing data confidentiality, integrity, and nonrepudiation. Encrypting the data provides data confidentiality. Data integrity is achieved through the use of hash algorithms, digital signatures, and message authentication codes. Message authentication codes (MAC) are used by technologies such as SSL/TLS to verify that data has not been altered while in transit.

  • Auditing. This is the process of logging and monitoring events that occur in a system and are of interest to security.
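The data protection concept above can be sketched with a message authentication code. The following uses the standard-library HMAC implementation; the key and payload are illustrative, and in practice the key would come from managed key storage rather than source code:

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # illustrative only; use a managed key in practice

def sign(message: bytes) -> str:
    """Produce a message authentication code (MAC) for the payload."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, mac: str) -> bool:
    """Check that the payload was not altered in transit.

    compare_digest performs a constant-time comparison, which avoids
    leaking information through timing differences.
    """
    return hmac.compare_digest(sign(message), mac)
```

A security test would exercise both directions: a valid MAC must verify, and any tampering with the message must cause verification to fail.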

Note For more information, refer to "Set event logging options" on the TechNet Web site.

The systems engineer establishes different security settings for each user in the test environment. Network security testing is performed to guarantee that the network is secure from unauthorized users. To minimize the risks associated with unchecked errors on the system, you should know the user context in which system processes run, keep the privileges of these accounts to a minimum, and log access to these accounts. Active monitoring can be accomplished using the Windows Performance Monitor for real-time feedback.

All security settings and security features of the application must be documented properly.

More information about security testing, writing secure code, and the "Secure Coding Guidelines for the .NET Framework" is available on the MSDN Web site.

Management Testing

Testing for manageability involves testing the deployment, maintenance, and monitoring technologies that you have incorporated into your migrated application.

Following are some important testing recommendations to verify that you have developed a manageable application:

  • Test Windows Management Instrumentation (WMI). WMI can provide important information about your application and the resources it uses. During the design of your application, you made certain decisions about the types of WMI information that must be provided. These might include server and network configurations, event log error messages, CPU consumption, available disk space, network traffic, application settings, and many other application messages. You must test every source of information and be certain you can monitor each one.

  • Test Network Load Balancing (NLB) and cluster configuration. You can use Application Center 2000 clustering to add a front-end or back-end server while the application is still running. After installing new server hardware on the network, use your monitoring console to replicate the application image and start the server. The new server should automatically begin sharing some of the workload. You can set up the Application Center 2000 Performance Monitor (PerfMon) to track multiple front-end Web servers. After setting up PerfMon, make some requests to generate traffic. PerfMon will show you that there is an increase in traffic in the back-end servers and that the workload is evenly spread across the front-end computers.

Note Additional information about Application Center 2000 is available on the Microsoft Web site.

  • Test change control procedures. An important part of application management is the handling of both scheduled and emergency maintenance changes. Test and validate all of the change control procedures including the automated and manual processes. It is especially important to test all people-based procedures to ensure that the necessary communication, authority, and skills are available to support an error-free change control process.

Note Additional information on testing for manageability is available on the Microsoft Web site.
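The WMI testing recommendation above depends on Windows-specific providers. As a platform-neutral sketch of the kind of health data such a provider exposes (free disk space, CPU consumption), the following uses only standard-library calls; the metric names are illustrative:

```python
import shutil
import time

def collect_health_metrics(path="/"):
    """Gather a few of the metrics a management provider might expose.

    A portable stand-in for a WMI provider: reports free and total disk
    space for the given path and the CPU time consumed by this process.
    """
    usage = shutil.disk_usage(path)
    return {
        "disk_free_bytes": usage.free,
        "disk_total_bytes": usage.total,
        "cpu_seconds": time.process_time(),
    }
```

A manageability test would poll such metrics from the monitoring console and verify that every information source identified during design can actually be observed.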

Interim Milestone: Internal Release n

The project needs interim milestones that can help the team measure their progress in the actual building of the solution during the Developing Phase. Each internal release signifies a major step toward the completion of the solution feature sets and achievement of the associated quality level. Depending on the complexity of the solution, any number of internal releases may be required. Each internal release represents a fully functional addition to the solution’s core feature set, indicating that it is potentially ready to move on to the Stabilizing Phase.

Closing the Developing Phase

Closing the Developing Phase requires completing a milestone approval process. The team documents the results of different tasks that it has performed in this phase and obtains a sign-off on the completion of development from the stakeholders (including the customer).

Key Milestone: Scope Complete

The Developing Phase culminates in the Scope Complete Milestone. At this milestone, all features are complete and the solution is ready for external testing and stabilization. This milestone is the opportunity for customers and users, operations and support personnel, and key project stakeholders to evaluate the solution and identify any remaining issues that must be addressed before beginning the transition to stabilization and, ultimately, to release.

Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the milestone is complete. The sign-off document becomes a project deliverable and is archived for future reference.

Now the team must shift its focus to verify that the quality of the solution meets the acceptance criteria for release readiness. The next phase, the Stabilizing Phase, describes the activities—for example, user acceptance testing (UAT), regression testing, and conducting the pilot—required to achieve these objectives.

