Chapter 8: Developing Phase: Deployment Considerations and Testing Activities

This chapter discusses the key deployment considerations that need to be made in deploying the Microsoft® Windows® Services for UNIX 3.5 migrated application before closing the Developing Phase. This chapter also discusses various testing activities that you need to carry out in the Developing Phase. This chapter will help you identify the activities and milestones required to complete the Developing Phase.

On This Page

Deployment Considerations
Testing Activities
Interim Milestone: Internal Release n
Closing the Developing Phase

Deployment Considerations

The following are the key deployment considerations that need to be made during the Developing Phase to ensure smooth deployment in the Deploying Phase:

  • Process environment

  • Migration of scripts

  • Database connectivity

  • Deploying the application

  • Interoperability with Windows Services for UNIX 3.5

  • Monitoring and supporting the applications

The process for deploying the migrated application is discussed in detail in Volume 5, Deploy-Operate of this guide.

Process Environment

Some of the key elements in the process environment that are different for UNIX and Interix are:

  • Environment variables.

  • Temporary files.

  • Computer information.

  • Logging system messages.

This section discusses each of these key elements and explains how to implement them in Interix.

Environment Variables

An environment block is a block of memory allocated within the process address space. Each block contains a set of name-value pairs. All UNIX variants support process environment blocks. The differences you encounter depend on which UNIX variant you are porting from. For example, some UNIX variants do not support the setenv or unsetenv function calls, whereas Interix does.

There are usually no issues in porting calls to environment variable functions to Interix. However, when porting System V Interface Definition (SVID) code, note that the process environment is defined as extern char **environ rather than as a third argument to main(). To modify the environment for the current process, use the getenv() and putenv() functions. To modify the environment so that it is passed to a child process, use the getenv(), setenv(), and putenv() functions, or build a new environment and pass it to the child using the envp argument of the exec() function.

Note The putenv and setenv functions are only available if _ALL_SOURCE is defined and set to 1.
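For example, the following is a minimal sketch of these calls as they might appear in code ported to Interix; the variable names APP_HOME and APP_MODE are illustrative only:

#define _ALL_SOURCE 1   /* exposes the setenv and putenv prototypes */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Read a variable from the current process environment. */
    char *home = getenv("APP_HOME");
    if (home == NULL)
        printf("APP_HOME is not set\n");

    /* Set a variable; child processes inherit the modified environment. */
    setenv("APP_HOME", "/usr/local/myapp", 1);

    /* putenv is an alternative that takes a single "name=value" string. */
    putenv("APP_MODE=batch");

    printf("APP_HOME=%s\n", getenv("APP_HOME"));
    return 0;
}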

Temporary Files

Interix supports all standard and common functions that create temporary files. You do not need to modify code that uses these functions when migrating to Interix.
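For example, a standard temporary-file idiom such as the following minimal mkstemp() sketch ports without changes:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* mkstemp replaces the XXXXXX template and returns an open descriptor. */
    char name[] = "/tmp/myappXXXXXX";
    int fd = mkstemp(name);
    if (fd == -1) {
        perror("mkstemp");
        return 1;
    }
    printf("temporary file: %s\n", name);

    /* Unlink immediately so the file disappears when the descriptor closes. */
    unlink(name);
    close(fd);
    return 0;
}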

Computer Information

Interix supports functions that obtain information about the computer on which the application is executed. Typically, the ported code does not require any modifications.

The computer information includes:

  • Host name

  • Operating system name

  • Network name of the computer

  • Release level of the operating system

  • Version number of the operating system

  • Hardware platform name

This information can be obtained by using the uname -a Interix shell command. Note that the uname command and application programming interface (API) return information about the installed version of Interix, and not the version of the host Windows operating system. You can get information about the Windows operating system version by adding the Interix-specific -H option to the uname -a command.

For example:

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

$ uname -a
Interix JOE-GX 3.0 SP-7.0.1701.1 x86 
Intel_x86_Family15_Model1_Stepping2
$ uname -aH
Windows JOE-GX 5.1 SP0 x86 
Intel_x86_Family15_Model1_Stepping2
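The same information is available programmatically through the uname() API. The following is a minimal sketch; as noted earlier, it reports the Interix version rather than the Windows version:

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname un;

    /* Fills in the system, host, release, version, and machine names. */
    if (uname(&un) < 0) {
        perror("uname");
        return 1;
    }
    printf("sysname:  %s\n", un.sysname);
    printf("nodename: %s\n", un.nodename);
    printf("release:  %s\n", un.release);
    printf("version:  %s\n", un.version);
    printf("machine:  %s\n", un.machine);
    return 0;
}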
Logging System Messages

Interix provides the standard UNIX syslogd daemon to store and redirect log messages from applications and system services. The configuration file for syslog is located in /etc/syslog.conf. The Interix syslogd daemon handles only those Interix processes that are designed to use the syslog API. It does not handle log messages from the Win32® subsystem. If syslogd is not running, all the messages intended for syslogd are appended to the file /var/adm/log/logger.

The syslog, vsyslog, openlog, closelog, and setlogmask function calls are supported by Interix with the same set of severity levels, including:

  • LOG_ALERT

  • LOG_CRIT

  • LOG_DEBUG

  • LOG_EMERG

  • LOG_ERR

  • LOG_INFO

  • LOG_NOTICE

  • LOG_WARNING

The following set of facility indicators is also supported:

  • LOG_AUTH

  • LOG_CRON

  • LOG_DAEMON

  • LOG_KERN

  • LOG_LOCAL(0-7)

  • LOG_LPR

  • LOG_MAIL

  • LOG_NEWS

  • LOG_USER

  • LOG_UUCP

It is not necessary to modify code that uses syslog calls on Interix.
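For example, the following minimal sketch logs through the standard syslog interface and runs unchanged under Interix; the identifier myapp and the configuration path are illustrative only:

#include <syslog.h>

int main(void)
{
    /* Open a connection to syslogd with an identifying prefix. */
    openlog("myapp", LOG_PID, LOG_USER);

    /* Messages less severe than LOG_INFO (such as LOG_DEBUG) are discarded. */
    setlogmask(LOG_UPTO(LOG_INFO));

    syslog(LOG_INFO, "application started");
    syslog(LOG_ERR, "cannot open configuration file: %s", "/etc/myapp.conf");

    closelog();
    return 0;
}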

Note The syslog daemon is not started by default. Refer to the syslogd manual page for startup instructions.

Migration of Scripts

This section describes the process of porting UNIX shell scripts to the Interix environment. The porting process follows these steps:

  1. Evaluating script migration tasks.

  2. Planning for management of platform differences.

  3. Evaluating source and target environments.

For information about each of these steps, refer to Volume 1: Plan of the UNIX Custom Application Migration Guide.

Scripts fall into the following two basic categories:

  • Shell scripts, such as Korn and C shell.

  • Scripting language scripts, such as Perl, Tcl, and Python.

Shell and scripting language scripts tend to be more portable than compiled languages such as C and C++. A scripting language such as Perl handles most of the platform specifics. However, the original developer may have used easier or faster platform-specific features, or may not have taken cross-platform compatibility into consideration at all.

A large number of UNIX commands are available with an Interix installation. Many UNIX shell scripts run under Interix without conversion because Interix provides both the Korn and Tenex C shells.

Note More information on porting shell scripts is available at https://www.microsoft.com/windows2000/docs/portingshellscripts.doc.

Porting UNIX Shell Scripts to Interix

There are only two significant differences in porting a shell script from an open-system implementation of UNIX (such as System V4 or BSD) to Interix. First, by default, Interix stores binaries in one of the following three directories:

  • /bin

  • /usr/contrib

  • /usr/local/bin

For example, Perl is installed in one of these directories. Second, although Interix has a standard UNIX file hierarchy and a single-rooted file system with the forward slash (/) as the base of the installation regardless of the Windows drive or directory, absolute paths can be different. Absolute paths normally do not need to be converted because you can handle most situations by adding symbolic links. For example, /usr/ucb can be linked to /usr/contrib/bin, and /usr/local/bin can be linked to /usr/contrib/bin.

Additional considerations include:

  • Port scripts that set up either local or environment variables.

  • The Interix C shell initialization process executes the /etc/csh.cshrc and /etc/csh.login files before the .cshrc and .login files in the home directory of the user.

  • Be aware of the current limits of Interix shell parameters so that you can take the appropriate action (a short sketch for querying these limits at run time follows this list). These parameters and their current limits are:

    • Maximum length of $path ($PATH) variable = ARG_MAX (normally not a problem).

    • Maximum (shell) command length = ARG_MAX (normally not a problem).

    • Maximum (shell) environment size = ARG_MAX.

    • Maximum length of command arguments, that is, length of arguments for exec() in bytes, including environ data (ARG_MAX) = 1048576.

    • Maximum length of file path (PATH_MAX) = 512.

    • Maximum length of file name (NAME_MAX) = 255 (normally not a problem).

  • Modify any scripts that rely on information from /etc/passwd or /etc/group (for example, a script that uses grep to find a user name) to use other techniques to obtain information about a user (a minimal enumeration sketch follows this list). Alternatives include:

    • Calls to Interix getpwent(), setpwent(), getgrent(), and setgrent() APIs.

    • Win32 ADSI scripts.

    • Win32 net user commands.
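Rather than hard-coding the shell parameter limits listed above, a program can query them at run time. The following is a minimal sketch, assuming only the standard sysconf() and pathconf() interfaces:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* ARG_MAX bounds the combined size of exec() arguments and environment. */
    long arg_max  = sysconf(_SC_ARG_MAX);

    /* PATH_MAX and NAME_MAX can vary by file system, so query a path. */
    long path_max = pathconf("/", _PC_PATH_MAX);
    long name_max = pathconf("/", _PC_NAME_MAX);

    printf("ARG_MAX  = %ld\n", arg_max);
    printf("PATH_MAX = %ld\n", path_max);
    printf("NAME_MAX = %ld\n", name_max);
    return 0;
}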
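For code that formerly read /etc/passwd directly, the following minimal sketch enumerates accounts through the getpwent() interface instead, which Interix backs with the Windows account database:

#include <stdio.h>
#include <pwd.h>

int main(void)
{
    struct passwd *pw;

    /* Rewind to the start of the account database. */
    setpwent();

    /* getpwent returns one account per call and NULL at the end. */
    while ((pw = getpwent()) != NULL)
        printf("%s uid=%ld home=%s\n",
               pw->pw_name, (long)pw->pw_uid, pw->pw_dir);

    endpwent();
    return 0;
}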

Database Connectivity

This section contains information about Open Database Connectivity (ODBC) and accessing databases from Windows Services for UNIX 3.5.

Open Database Connectivity

Applications use the ODBC interface to access data in a database. ODBC allows applications to access database management systems (DBMS) using Structured Query Language (SQL) as a standard, so the same application can interoperate with different databases. Application end users can then add ODBC database drivers to link the application to their choice of DBMS.

ODBC database drivers are dynamic-link libraries on Windows and shared objects on UNIX. These drivers allow an application to access one or more data sources. ODBC provides a standard interface to allow application developers and vendors of database drivers to exchange data between applications and data sources.

The ODBC architecture has the following four components (a minimal connection sketch follows the list):

  • Application. Processes and calls ODBC functions to submit SQL statements and retrieve results.

  • Driver manager. Loads drivers for the application.

  • Driver. Processes ODBC function calls, submits SQL requests to a specific data source, and returns the results to the application.

  • Data source. Consists of the data, its associated operating system, DBMS, and the network platform (if any) used to access the DBMS.
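As a minimal sketch of how these components fit together, the following fragment connects through the driver manager using the standard ODBC C API; the data source name MyDSN and the credentials are illustrative and must already be configured on your system:

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;

    /* Ask the driver manager for an environment, then a connection handle. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (void *)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* The driver manager loads the driver named by the data source. */
    if (SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"MyDSN", SQL_NTS,
                                 (SQLCHAR *)"user", SQL_NTS,
                                 (SQLCHAR *)"password", SQL_NTS))) {
        printf("connected\n");
        SQLDisconnect(dbc);
    } else {
        printf("connection failed\n");
    }

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}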

Accessing Databases from Windows Services for UNIX 3.5

UNIX applications are usually given database connectivity using ODBC drivers. Windows Services for UNIX 3.5 has no built-in libraries or interfaces for accessing relational databases stored on other platforms. However, the packages listed in Table 8.1 can fill this gap.

Table 8.1. Packages Available for Database Connectivity

  • iODBC. A popular open source ODBC driver manager.

  • unixODBC. A popular open source ODBC driver manager. It also serves as the ODBC-ODBC bridge client from Easysoft, allowing access to any ODBC driver on Windows.

  • FreeTDS. An open source implementation of the Tabular Data Stream (TDS) protocol used to access Microsoft SQL Server™ and Sybase databases, including dblib, ctlib, and an ODBC driver.

  • Perl. One of the most popular scripting languages, complete with the DBI database interface module, DBD::ODBC for connecting through ODBC, and DBD::Sybase for connecting to any Sybase or SQL Server database using FreeTDS.

All of these applications and libraries are built with support for threads enabled by default. Some of these libraries have been ported to Interix and are available at
https://www.interopsystems.com/tools/warehouse.htm.

iODBC

Independent Open DataBase Connectivity (iODBC) is an open source, platform-independent implementation of both the ODBC and X/Open specifications. iODBC lets you develop applications that are independent of a back-end database engine, operating system, and (for the most part) programming language. Although ODBC and iODBC are both C-based APIs, there are numerous cross-language hooks and bridges from languages such as C++, Java, Perl, Python, and Tcl.

Note Additional information is available at https://www.iodbc.org/.

unixODBC

The unixODBC Driver Manager is used in the binary builds of all the Windows Services for UNIX 3.5 database tools. It is also the client for the Easysoft ODBC-ODBC bridge, which allows you to access any database that has a Windows ODBC driver. A single Easysoft ODBC-ODBC bridge license, which incorporates the unixODBC driver manager, gives Windows Services for UNIX or Interix users a universal ODBC solution.

For example, consider a situation in which your Windows Services for UNIX or Interix application needs access to a remote Oracle database on a UNIX-based or Linux-based computer. Oracle does not provide OCI support for Interix. However, there are Oracle drivers available for Windows, so any Windows-based computer can act as both the client and the gateway. The ODBC-ODBC Bridge Server (a commercial product) of Easysoft is installed on the gateway, and the ODBC-ODBC client (unixODBC) is installed on the computer of the end user. In fact, the client and the gateway can be the same physical server. To access another database (for example, Microsoft SQL Server™, Access, or Excel®), just configure the appropriate data source on the gateway. No action is required on the UNIX system.

Figure 8.1 illustrates the ODBC-ODBC bridge.

Figure 8.1. ODBC-ODBC bridge

For a detailed format of the odbc.ini file, refer to
https://www.interopsystems.com/tools/db.htm#odbc_config.

Additional information about the ODBC-ODBC bridge is available at
https://www.unixodbc.org/ and
https://www.easysoft.com.

FreeTDS

The TDS protocol is used to communicate with Sybase or SQL Server databases. FreeTDS is an open source implementation of this protocol and contains several APIs that use it. The /usr/local/etc/freetds.conf configuration file must contain entries for your databases; use the supplied entries as templates for each server type. FreeTDS supports several ways of setting up the configuration file; the method known as "ODBC-Combined" is strongly recommended.

Note Additional information about the ODBC-Combined method is available at https://www.freetds.org/userguide/odbcombo.htm.

The configuration file location can be overridden with the FREETDSCONF environment variable.

Note Additional information about the FREETDSCONF environment variable is available at https://www.freetds.org.

Perl

Perl is especially complex to build correctly under Windows Services for UNIX 3.5. There are several pitfalls that can deter an unwary or inexperienced developer.

The best approach to building Perl under Windows Services for UNIX is to use the supplied, pre-edited config.sh and Policy.sh files, run the following command to rebuild the makefiles, and then run gmake to build a new binary:

$ ./Configure -der

Almost all the tests run by gmake test pass. However, there are a number of special cases under Windows Services for UNIX (for example, when the privileged user does not have UID 0) in which gmake test may hang after all the tests have completed successfully.

Note You can download Perl at https://www.interopsystems.com/tools/warehouse.htm.

Deploying the Application

This section explains the process of deploying your migrated application, including Interix applications, into a Windows environment.

The standard method of deploying applications in a Win32 environment is to use the Microsoft Windows Installer service. This section provides information about the Windows Installer service and some of the tools that you can use to package your application.

When you deploy applications in the Interix environment, you can use the standard UNIX tools available in Interix for file transfer, remote configuration, and scripting. If you used proprietary UNIX software management tools to manage the application before migration, it is unlikely that these tools will be available in Interix. If you used standard UNIX tools, such as the tar or cpio archiving commands and shell scripts, for application management, migrating your deployment tools is relatively easy.

Interix applications reside either on the desktop or on application servers. In the latter case, you may have to rely on networked file systems to access the executable files of your application. Because of differences in how UNIX and Windows network file systems operate, some migrated applications require specific action.

Tools for Deploying Interix Applications

One of the main benefits of migrating applications from UNIX to Interix is the similarity of the two platforms, which makes the migration relatively easy. This similarity also extends to the tools available. If you have created your own deployment scripts using these tools, migrating the deployment tools to Interix is a straightforward process.

However, if you have used deployment tools specific to your UNIX distribution, you must either port these to Interix or write scripts using the available tools.

Berkeley Remote Shell Commands (r Commands)

Distributing your application into the Interix environment is likely to call for the transfer of files over the network. You can use the rsh, rcp, or ftp commands, which are included in Interix, or the ssh and scp commands.

Note The ssh and scp commands are not included in Interix. You can download these commands at https://www.interopsystems.com/tools/warehouse.htm.

Scripts

Interix provides a wide range of scripting languages and tools for the creation of deployment tools. When you install Windows Services for UNIX 3.5, the UNIX Perl tool is installed as a part of the standard installation process. This allows Perl scripts to run on the Interix subsystem.

Note You can download Perl 5.8.3 at https://www.interopsystems.com/tools/warehouse.htm.

You can install ActiveState ActivePerl 5.6 in a custom installation process. This allows you to run Windows-based Perl scripts on the server.

Other languages, such as Ruby, Python, or Tcl/Tk, can also be used.

Note You can download these languages at https://www.interopsystems.com/tools/warehouse.htm.

Interix also includes the C and Korn shells.

Note You can download the bash and zsh shells at https://www.interopsystems.com/tools/warehouse.htm.

Deploying Interix Applications

You can use the approach described in the next section to deploy your applications on Interix.

Deploying Interix Applications by Pushing Them to the Desktop

The push application delivery mechanism in UNIX environments is implemented with scripted ftp or rcp commands that copy the application binaries to a computer running UNIX. The application is then activated by pointing a symbolic link at the new application binary. It is also common to use rdist and rsync to deploy the application across multiple computers. The rdist tool is supported by Windows Services for UNIX 3.5.

Note   The rsync tool can be downloaded at
https://www.interopsystems.com/tools/warehouse.htm.

From a management standpoint, remotely managed computers contain a selection of management scripts (Perl, C, or Korn shell) that can be invoked remotely to initiate an application image deployment, enumerate performance metrics, or run an audit for security.

Using the r Commands for Remote Management in Interix

You can remotely manage Interix systems with the Berkeley r commands such as rsh and rcp. Before you use these commands for remote administration, configure the Interix daemon rshd to permit remote access between computers. This is necessary because rcp uses the remote shell service on the remote computer (the source or destination of the copy) to launch the rcp process with the appropriate arguments.

For example, when rcp is used to manage the system when Interix is the source of the directory copy, use the following:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

rcp -r <Interix hostname>:ProgDir 
<destination hostname>:ProgDir

Interix has the following two processes associated with the rcp command in this case:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

user1 9281 1025 20:41:48 - 0:00.03 sh -c rcp -r -f 
ProgDir
user1 9345 9281 20:41:48 - 0:00.26 rcp -r -f ProgDir

When rcp is used to manage the system when Interix is the destination (sink) of the directory copy, use the following:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

rcp -r <source hostname>:ProgDir <Interix hostname>:
ProgDir

Interix has the following two processes associated with the rcp command in this case:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

user1 9729 1025 20:41:48 - 0:00.03 sh -c rcp -r -t 
ProgDir
user1 9793 9729 20:41:48 - 0:00.26 rcp -r -t ProgDir

Notice that a different argument is passed to rcp depending on whether it is the source (-f) or the destination (-t) of the copy.

To automate the configuring of the host equivalent files

  1. Create the necessary configuration files:

      • /etc/hosts.equiv

      • $HOME/.password (for versions of Interix earlier than 3.0; from Interix 3.0 onward, the regpwd tool is provided to store the password securely in the registry, where it is accessible only to privileged processes)

      • $HOME/.rhosts

  2. Set appropriate permissions, as follows:

#!/bin/csh
# Does the hosts.equiv file exist?
# If not, create an empty one.
if ( ! -f "/etc/hosts.equiv" ) then
  # create /etc/hosts.equiv
  touch /etc/hosts.equiv
  # Set the permissions on /etc/hosts.equiv
  chmod 755 /etc/hosts.equiv
  # Set the owner of /etc/hosts.equiv
  chown +Administrators /etc/hosts.equiv
endif
if ( $?HOME ) then
  # See whether the user's .rhosts file exists.
  # If not, then create it.
  if ( ! -f "$HOME/.rhosts" ) then
    touch $HOME/.rhosts
    chmod 600 $HOME/.rhosts
  endif
endif

Note Use of the r commands may be restricted for security reasons. In this case, you need to modify the preceding script to reflect your security policies. SSH is available for Interix as a secure alternative and can be downloaded at
https://www.interopsystems.com/tools/warehouse.htm.

Installing MSI Packages Remotely with Interix rsh

To remotely launch Win32-based applications from an Interix shell, remember that there are no stdin/stdout/stderr file handles on which to read and write. This is because the remote process is connected to pseudoterminals that do not have any corresponding Win32 object. Therefore, Win32-based applications must be wrapped so that they are provided with stdin/stdout/stderr file handles. The following code shows the remote execution of the ipconfig.exe command:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

$ rsh remotesystem "/dev/fs/C/WINNT/system32
/ipconfig.exe < /dev/null 2>&1 | cat"

You can also execute the Win32 command shell (cmd.exe) commands. The following depicts the execution of a remote dir command:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

$ rsh remotesystem "/dev/fs/C/WINNT/system32/CMD.EXE
/c dir < /dev/null 2>&1 |cat"

When you pass Win32 paths through Interix shells, backslashes (\) used in the path specification must be escaped with another backslash (\). For example, to execute the MSI package installer from an Interix Korn shell, double backslashes (\\) must be passed in the paths given to msiexec.exe. The following example shows this for the installation of the Windows® 2000 Support Tools (for example, <drive>:\SUPPORT\TOOLS\2000RKST.MSI):

$ /dev/fs/C/WINNT/system32/msiexec.exe /I <drive>:\\SUPPORT\\TOOLS\\2000RKST.MSI /Lv C:\\Temp\\msi.log /qn

The remote installation of this MSI package is interesting because the Win32 path backslashes must be escaped once for the local shell and again for the remote shell, which results in the quadrupled backslashes shown. The following command installs the Windows 2000 Support Tools remotely with rsh:

Note: The line has been split into multiple lines for readability. However, while trying it out on a system, you must enter it as one line without breaks.

$ rsh remotesystem "/dev/fs/C/WINNT/system32/
msiexec.exe /I <drive>:\\\\SUPPORT\\\\TOOLS\\\\
2000RKST.MSI /Lv C:\\\\Temp\\\\msi.log /qn < /dev/
null 2>&1 | cat"

(where <drive> is a drive letter available on the remote system).

Code Modification

To add or isolate code that implements Interix-specific features, surround it with #ifdef __INTERIX, as in the following example:

#ifdef __INTERIX
(void) fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
#else /* __INTERIX */
(void) ioctl(fd, FIOCLEX, NULL);
#endif /* __INTERIX */

The __INTERIX macro is automatically defined to be true by the c89 compiler interface and by the gcc compiler. The _POSIX_ macro is defined to be true by c89. (The _POSIX_ macro is a reserved symbol and should not be used or modified.) The c89 interface also passes the /Za option to the Microsoft Visual C++® compiler, which defines __STDC__, unless -N nostdc is specified. When -N nostdc is defined, the ANSI-only mode in the Visual C++ compiler is disabled and Microsoft extensions are allowed. The default mode is ANSI C mode.

The cc compiler interface defines the symbols __INTERIX and UNIX to be true. (The UNIX macro is defined because many applications intended to be compiled on multiple platforms use this macro to call out features found on UNIX systems.) The cc interface also passes the /Ze option, which enables language extensions.

The Interix header files are structured to align with the Single UNIX Specification. For example, string and memory functions that occur in POSIX.1 (Portable Operating System Interface) are in string.h. String and memory functions that are in the Single UNIX Specification, but not in POSIX.1, are in strings.h.
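For example, a minimal sketch that draws one function from each header:

#include <string.h>    /* POSIX.1: strcpy, strchr, memcpy, ... */
#include <strings.h>   /* Single UNIX Specification: strcasecmp, bzero, ... */

int main(void)
{
    char buf[16];

    strcpy(buf, "Interix");                 /* from string.h */
    if (strchr(buf, 'x') != NULL)           /* from string.h */
        return strcasecmp(buf, "INTERIX");  /* from strings.h; 0 = equal */
    return 1;
}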

The include files are also structured to restrict the API namespace. If the macro _POSIX_SOURCE is defined as 1 before the first header file is included, the program is restricted to the POSIX namespace: it can use only those APIs that are specified in the POSIX standards. This can be restrictive.

To get all of the APIs provided with the Interix Software Development Kit (SDK), define the _ALL_SOURCE value as 1 before the first header file is included, as shown in the following example:

#define _ALL_SOURCE 1
#include <unistd.h>

You can also define _ALL_SOURCE in the makefile by adding -D _ALL_SOURCE to the compiler flags section and using those flags in the compilation commands.

This simplifies porting of the source code because this macro exposes all the define statements and prototypes in the requested header files.

If neither macro is defined, the default is the more restrictive _POSIX_SOURCE.

Packaging and Archiving Tools

Table 8.2 lists the archiving and packaging facilities used for UNIX applications and supported by Interix.

Table 8.2. Archiving and Packaging Facilities

  • tar. The tape archive program, which uses a variety of formats. It is one of the most popular archive formats. Interix supports this tool.

  • cpio. Copies files into or out of a cpio or tar archive. It was intended to replace tar, and cpio archives can also be unpacked by pax. Interix supports this tool.

  • pax. The POSIX.2 standard archiver. It reads, writes, and lists the members of an archive file (including tar-format archives), and it also copies directory hierarchies. Interix supports this tool.

  • ar. Creates and maintains groups of files that are combined into an archive. It is not usually used for source archives, but is almost always used for object file archives (static object libraries). Interix supports this tool.

  • pkg_add. Standard installation tool familiar to BSD users and very similar to the Sun and System V tool of the same name. This tool is available for download at https://www.interopsystems.com/tools/pkg_install.htm.

  • rpm. Powerful command-line–driven package management system capable of installing, uninstalling, verifying, and updating packages. Interix has a set of tools to perform these activities: pkg_add (installs a package), pkg_info (views the status of installed packages), pkg_delete (deletes a package), and pkg_update (updates or installs new packages).

Table 8.3 lists the compression formats that are used for UNIX applications.

Table 8.3. Compression Formats for UNIX Applications

  • compress. Creates a compressed file with adaptive Lempel-Ziv coding to reduce the size of files (typically 70 percent smaller than the original file). Interix supports this tool.

  • zip/gzip. Combine a version of Lempel-Ziv coding (LZ77) with a version of Huffman coding in what is often called string compression. Interix supports these tools.

  • pack. Compresses files using Huffman coding. Interix supports this tool.

  • uncompress, zcat. Extract compressed files. Interix supports these tools.

  • gunzip. Decompresses files created by compress, zip, gzip, or pack. The input format is detected automatically. Interix supports this tool.

  • unpack, pcat. Restore files compressed by pack. Interix supports these tools.

  • bzip2. Compresses files using the Burrows-Wheeler block-sorting text compression algorithm and Huffman coding. You can download this tool at https://www.interopsystems.com/tools/warehouse.htm.

Table 8.4 lists the common suffixes for archived and compressed file names.

Table 8.4. Archived/Compressed File Suffixes

  • .a (ar). Created by and extracted with ar.

  • .cpio (cpio). Created by and extracted with cpio.

  • .gz (gzip). Created by gzip and extracted with gunzip.

  • .tar (tar). Created by and extracted with tar.

  • .tgz (tar, gzip). A tar file that has been compressed with gzip.

  • .Z (compress). Compressed by compress; uncompressed with uncompress, zcat, or gunzip.

  • .z (pack or gzip). Compressed by pack and extracted with pcat, or compressed by gzip and extracted with gunzip.

  • .zip (zip). Compressed by zip and extracted with unzip, or compressed by pkzip and extracted with pkunzip.

  • .bz2 (bzip2). Compressed by bzip2 and extracted with bunzip2 or bzip2 -d.

  • .tbz (tar, bzip2). A tar file compressed with bzip2.

Using Libraries

Microsoft linkers usually use the LIB environment variable to specify alternate search locations for libraries. The Interix cc and c89 tools ignore the initial value of LIB to avoid conflicts with Windows tools. You can add directories to the library search path using the -L option, which you can specify multiple times on the command line.

You can also specify the C, lex, math, and yacc libraries with the traditional operands -lc, -ll, -lm, and -ly, respectively.

Many compilers use the INCLUDE environment variable to specify alternate search locations for header files. To avoid conflict with Windows tools, the c89 tool ignores the initial value of INCLUDE. You can add additional directories to the search path with the -I option, which you can specify multiple times on the command line.

Configuring the System

UNIX users generally configure the system by editing the configuration files with any of the available text editors. Many UNIX users and system administrators like the fact that much of the configuration for UNIX is stored in text files. The advantage is that you do not need to learn to use a large set of configuration tools. Familiarity with an editor and a scripting language serves the purpose. The disadvantage is that the information in the files comes in various formats, so you must familiarize yourself with the formats to change the settings. To manage a network, UNIX system administrators often employ scripts to reduce repetition and error. In addition, administrators can use the Network Information Service (NIS) to centralize the management of many standard configuration files. Although many versions of UNIX have GUI management tools, such tools are usually specific to each version of UNIX.

Startup Scripts and Logon/Logoff Scripts

In UNIX, scripts are used during startup to invoke most of the system and user processes. Such scripts include special scripts written by the systems manager, in addition to all the system services (such as networking and printing). The kernel starts init, a special UNIX process that starts all other services and processes; it is configured through the /etc/inittab file. On BSD-style systems, init runs various rc scripts to configure services; on System V–style systems, init runs scripts under the /etc/rc.d directory. The characteristics of any service are configured within /etc/inittab and the rc scripts.

The Interix subsystem supports the init tool. It is the first process that runs when the Interix subsystem starts. It is similar to /etc/init on traditional UNIX systems. However, the Interix version does not use /etc/inittab because Interix runs only at level 2.

When init starts, it executes all scripts in /etc/rc2.d in alphabetical order. The /etc/rc2.d directory contains symbolic links to the actual scripts located in /etc/init.d instead of the scripts themselves. The scripts are typically used to start and stop Interix daemons or to perform other tasks required when the system initializes or shuts down. The administrator can change the names of symbolic links in /etc/rc2.d to do the following:

  • Control the order in which tools are run or daemons are started or stopped.

  • Control whether a tool is run or a daemon is started or stopped.

Using inetd

Windows Services for UNIX 3.5 provides inetd, which behaves like the UNIX inetd. Hence, no changes are required for code that uses inetd.

The Interix inetd daemon is started by the init process and runs in the security context of the local Administrator. The init process is started automatically when the Interix subsystem starts. The services it starts (like telnet) are disabled by default.

After uncommenting the services you want inetd to run, send a SIGHUP signal to the inetd process. You must be the administrator to start inetd because some services need special privileges that only the administrator has (similar to "root" on other UNIX systems).

Additional information is available at
https://www.microsoft.com/technet/interopmigration/unix/sfu/intdrutil.mspx.

Note The inetd program in Interix has an extra -L option, which sets the lockout period, in seconds. This is a security feature to mitigate denial-of-service (DoS) attacks. When the maximum invocation rate of a service (1000, by default) is exceeded, the service becomes unavailable for the interval that you set. The default value is 180 seconds and the minimum value is 30 seconds.

The inetd.conf file is the configuration file for the inetd daemon. The file contains a list of Internet-related services that inetd can invoke when it receives a request from an Internet client.

However, the Windows system has a single networking environment that is shared by Win32 and Interix applications. Therefore, it is not possible to have the Interix telnetd and the Win32 Telnet Service listening on the same standard telnet port at the same time. Hence, services that attempt to open standard ports already used by other enabled Windows services, such as telnet, should not be enabled in inetd.conf.

Note The user entry in inetd.conf should contain the user name of the account under which the server should run. This allows servers to be given fewer permissions than root; the column was originally added in UNIX to enhance security. On Interix, this field is always set to the string NULL and is ignored because it is not implemented in current versions of Interix.

Installation

The default installation of Windows Services for UNIX 3.5 does not install all the useful tools. For example, if you plan to use only the network file system (NFS) portion, you may not want to install Interix, but you should still install user name mapping. If you want to develop UNIX code, you can install both the Interix SDK and the Interix GNU SDK.

Administration

The Domain Name System (DNS) is a hierarchical, distributed database. It stores information for mapping Internet host names to IP addresses and vice versa, mail routing information, and other data used by Internet applications. Clients look up information in the DNS by calling a resolver library, which sends queries to one or more name servers and interprets the responses.

If you encounter DNS errors, note that Interix has resolver routines that provide access to the Internet DNS. The resolv.conf configuration file contains information that is read by the resolver routines the first time a process invokes them. The file is human-readable and contains a list of keywords with values that provide various types of resolver information. For example, you can check whether the /etc/resolv.conf file is configured to point to the right DNS server.

During installation, the file is configured based on information obtained from the system. However, if there is one DNS server for external use and another for internal use, the file might be configured to point to the wrong one. The information in resolv.conf can also become incorrect over time because it is set only during the Windows Services for UNIX installation. Furthermore, for non-English Windows systems, the Windows Services for UNIX script used at installation puts incorrect information in resolv.conf. The updated BIND 9 tool can be used with DHCP and non-English systems because it ignores the contents of the resolv.conf file. The BIND 9 software distribution contains both a name server and a resolver library.

Note Additional information is available at the Microsoft home page for Windows Services for UNIX 3.5 at https://www.microsoft.com/windows/sfu/default.asp. Microsoft support for Windows Services for UNIX 3.5 is available at
https://support.microsoft.com/default.aspx.
You can access a newsgroup for discussion on Windows Services for UNIX 3.5 at
https://communities2.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.servicesforunix.general.
Answers to frequently asked questions and problems are available at the Interop Systems Web site at
https://www.interopsystems.com.

This site also provides a list of tools that can be downloaded to obtain added functionality.

Note Help is also available from the UNIX tools discussion forum at https://www.interopsystems.com/tools/default.aspx.

A lot of information on possible errors is also documented in the Windows Services for UNIX 3.5 Help. You can also refer to the man pages.

Notes:

  1. The event log helps fix administrative problems. Interix provides syslogd, a system message logger; see the log file /var/adm/log/logger.

  2. syslogd is not started automatically when the Interix subsystem starts. If you need to enable this service, remove the comment characters from the following lines in /etc/init.d/syslog, and then either start the service with the command /etc/init.d/syslog start or restart the computer.

        #  ${SYSLOGD}
        #  [ $? = 0 ] && echo "syslogd started"

  3. The sendmail tool, although included with Windows Services for UNIX 3.5, is not supported as a full message transfer agent (MTA) by Microsoft.

Interoperability with Windows Services for UNIX 3.5

This section discusses scenarios in which Interix, Win32/Win64, and .NET applications might be required to interoperate with each other. The details are covered in the following sections:

  • Running Win32-based Programs

  • Encapsulating an Interix Application from a Win32 COM Object

Running Win32-based Programs

The Interix subsystem extends the POSIX subsystem so that you can run Win32-based character-based user interface (CUI) and graphical user interface (GUI) programs.

The Interix environment ships with the runwin32 command, which simplifies the running of Win32 binaries. Shell scripts to invoke the standard built-in Windows command-line programs and CMD.exe are also provided in the /usr/contrib/win32/bin directory.

You can easily add the standard Windows command-line programs to your environment. The /usr/contrib/win32/bin directory is already added to your PATH.

The Interix ksh shell has been enhanced to run case-sensitive searches for Win32 programs as well.

After a Win32-based application is started, it interacts with the Win32-based subsystem, so there is no problem with the data generated inside the application. For example, in a file selection box, the path names are displayed in the Win32 format.

Interactions Between the Subsystems

When you run Win32-based programs from the Interix command line, keep the following in mind:

  • You must specify the Win32-based program either with a complete path name or by adding the Win32-based programs to your PATH. You need to do this only if PATH_WINDOWS is not set appropriately. The path name is case sensitive.

  • The portion of the command line that specifies the Win32-based program must be in the Interix format because it is handled by the Interix subsystem. The portion of the command line passed to the Win32-based program must be in the format that the Win32-based program supports.

  • If the Win32-based program makes use of environment variables, they must be in a format that the Win32-based program supports.

  • If other Interix programs use the environment variables, they must be converted back to a format that the Interix programs can use.

  • Running a Win32-based program involves the interaction of the two different permission models: the Interix POSIX model and the Win32 ACL model.

  • The return value or exit status of a Win32-based program may not have any significance.

  • You cannot run a Win32 interpreter program directly with a #! line in a shell script. Instead, you must use #!/bin/sh or #!/bin/ksh. ActiveState Perl is an exception: it is possible to have #!/Perl/bin/Perl.exe as the first line of the shell script, but note that this works only from the Interix environment; cmd.exe does not support this construct.

  • The Win32-based program may expect any text files to be in the Win32 format (end-of-line marked by CR-LF) instead of the POSIX format (end-of-line marked by LF).

    You can use the flip(1) tool, which is a file interchange program that converts text-file formats between POSIX and other formats (such as MS-DOS® and Apple Macintosh). The flip(1) tool converts lines ending with carriage-return (CR) and linefeed (LF) (MS-DOS) or just carriage-return (Apple) to lines ending with just linefeed, or vice versa.

  • When creating pipelines with Win32 commands, you may need to use the cat32.exe program.

Note For more information on cat32, refer to the Windows Services for UNIX 3.5 Help or the cat32 manual page in the Interix environment.

Adding Win32-based Programs to Your PATH

You can run such programs as ATTRIB.exe, CACLS.exe, and REGEDT32.exe from the Interix shells. You can also redirect the input and output of these programs.

You can add the Win32 commands to your environment by adding the appropriate directories to your PATH. For example:

"${PATH}:/dev/fs/C/WINNT/system32"

Note Add the pathname in the POSIX file name format and not the Win32 format. Note that the case of the directory must exactly match that of the file system.

When you run a Win32-based program, the Interix subsystem converts your PATH variable back to the Win32 format so that the Win32-based program has the current PATH. This is the only environment variable that is converted for you; you must handle all others yourself.

Note Typing the entire file name in the correct case can be difficult. To shorten the frequently used commands, you can use aliases, links, or a shell script.

Path Names

The unixpath2win tool converts a UNIX path name to a Win32 path name. The winpath2unix tool converts a Win32 path name to a POSIX path name.

For example, you can easily convert a path from one format to another. To convert the \\inxsrv\publics path to the Interix format, use the following code:

$ winpath2unix '\\inxsrv\publics'
/net/inxsrv/publics
$ winpath2unix 'C:\WINDOWS\system32'
/dev/fs/C/WINDOWS/system32

To convert the path back to Win32 format, use the following code:

$ unixpath2win /net/inxsrv/publics
\\inxsrv\publics
$ unixpath2win /dev/fs/C/WINDOWS/system32
C:\WINDOWS\system32

Environment Variables

Environment variables are used to store information required by several tools or applications. Often, this information is a path name or a set of command options.

You can change an environment variable in a shell script without fear of conflicts because the environment of the shell script ends when the script does. Conflicts can arise when you change the environment variable in your current environment or when an Interix tool later in the script needs the value of the same environment variable.

The usual practice is to convert the environment variable before calling the Win32-based program, and then convert it back to the UNIX format after the Win32-based program exits. If the environment variable contains a directory, you can use the unixpath2win and winpath2unix tools.

PATH_WINDOWS

The Interix ksh shell has been enhanced to support Windows-style path searching. Using the PATH_WINDOWS environment variable, it is possible to find .exe, .bat, and other files just as at the CMD.exe prompt. You can use this variable to specify the directories that require a case-insensitive searching and suffix-matching mechanism.

The Interix profile file sets the PATH_WINDOWS environment variable in which you can specify a suffix matching order.

Inherited Environment Variables

The environment of your Interix shell session is built by both the Interix subsystem and your startup files. When the Interix subsystem starts, it converts the Win32 environment in the following ways:

  • All environment variable names are converted to all uppercase—for example, Path becomes PATH.

  • The contents of the PATH environment variable are converted from the Win32 format to the POSIX format.

  • A new HOME environment variable is built from the HOMEDRIVE and HOMEPATH variables. If you already have HOME defined in your system control box, it is ignored.

The global startup file /etc/profile adjusts the environment in the following ways:

  • The old PATH is stored in PATH_ORIG and a new PATH is constructed.

  • TMPDIR is set to the first of $OPENNT_TMPDIR, $TMPDIR, $TMP, or $TEMP that actually exists. TMP and TEMP are not converted because they may be used by Win32-based programs.

  • TERM, TERMCAP, EDITOR, VISUAL, and FCEDIT are set.

  • SHELL is unset.

Redirecting Standard Input, Output, and Error

Win32-based programs can accept input redirected from standard input (as in < file). You can also redirect their standard output (with > file) and their standard error.

Win32-based programs can also be used in command pipelines. However, Win32-based programs do not always behave as expected in terms of the three standard file streams. To provide more robust behavior when piping to and from Win32-based programs, Interix includes a Win32 tool, cat32.exe, which is simply a "better behaved" filter. If you experience problems with a particular Win32-based program in a pipeline, try inserting cat32 into the pipeline. For example:

$ net.exe users | cat32 | more

Useful Tools

Two tools provided with Interix make it easier to handle the interaction between the two environments:

  • runwin32. The runwin32 tool runs a Windows command. The command can be any file with a .exe extension found in the directories $WINDIR (typically /dev/fs/C/WINDOWS on Windows XP and Windows Server 2003 or later) or $WINDIR/system32, or any built-in command of CMD.exe. You can run the Win32 commands directly from the Interix shells, but using runwin32 eliminates the need to change your PATH environment variable to include the Windows or system32 directories on your computer.

  • wvisible. The wvisible tool returns true if the current window station is visible and returns false if it is not. The term "window station" is peculiar to Windows. Every Win32 process is associated with a window station object. If the window station is visible, Win32 windows are displayed on the screen of the computer, and the user can interact with these windows using the keyboard and mouse. If the window station is not visible, Win32 windows are invisible and noninteractive.

Encapsulating an Interix Application from a Win32 COM Object

The Component Object Model (COM) is a binary-level specification that allows you to build applications from interchangeable components. Because the specification is binary, you can write language-independent components in C++, Java, C, or even the Korn shell. Client applications that make use of COM are largely Win32-based (although there is nothing in COM itself that restricts it to a Windows environment). Applications that use COM to bind components together can upgrade or customize those components dynamically. This section describes a method for encapsulating a UNIX application in a Win32 DLL (as a COM object).

For example, consider an application that writes to standard output. You must define a COM interface to pass this information. The encapsulating DLL is written to invoke the Interix application (your UNIX code that is ported to Interix) with the correct command-line options. The routine interprets the output of the application and passes it to the client application.

Defining the interface and writing the DLL are not trivial, but the basic concept is simple. On an Interix system, the Win32 world can invoke an Interix application as arguments to the POSIX.exe program. POSIX.exe serves a variety of purposes in the Interix environment, but its primary role is to serve as the access mechanism for the Win32 subsystem of Windows NT® to start an Interix process.

Figure 8.2 shows the architecture of the completed system.

Figure 8.2. Architecture of Interix COM application

The client makes a call to the Interix COM DLL module. The DLL invokes POSIX.exe (and thereby, the Interix subsystem) to run the Interix C application. The DLL captures the output and passes it back to the Microsoft Visual Basic® GUI front-end application. The DLL does not have exclusive use of the application, and it can be invoked by another Interix user (for instance, logged on over telnet) while it is being called by the Visual Basic GUI front-end application.

To build a DLL that wraps a UNIX application, you need to think like both a UNIX programmer and a COM programmer. The UNIX programmer has to think about getting the command to produce the correct output. The COM programmer has to think about capturing that output. The difficult part is to get the POSIX.exe command line correct.

To build the Interix COM DLL module

  1. Build the Interix application.

  2. Define the COM interface.

  3. Implement the interface in the DLL. The basic idea is to invoke POSIX.exe to run the Interix application, capture the standard output, and pass that data back. For this, you need to get the application command line correct for POSIX.exe; command-line quoting, if any, must be handled carefully. (A minimal sketch follows this list.)

  4. Build the DLL.
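The core of step 3 is ordinary Win32 pipe plumbing. The following is a minimal sketch, not the complete COM DLL: it launches an Interix program through POSIX.exe and captures that program's standard output. The posix.exe location, the /c option, and the /u/bin/myapp program name are assumptions for illustration:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(SECURITY_ATTRIBUTES), NULL, TRUE };
    HANDLE rd, wr;
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char buf[256];
    DWORD n;
    char cmd[] = "C:\\WINDOWS\\system32\\posix.exe /c /u/bin/myapp";

    /* Create an inheritable pipe: the child writes, this process reads. */
    CreatePipe(&rd, &wr, &sa, 0);
    SetHandleInformation(rd, HANDLE_FLAG_INHERIT, 0);

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = wr;
    si.hStdError  = wr;

    /* POSIX.exe starts the Interix process; quoting must be exact. */
    if (CreateProcess(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)) {
        CloseHandle(wr);   /* close our copy so ReadFile sees end-of-file */
        while (ReadFile(rd, buf, sizeof(buf) - 1, &n, NULL) && n > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);   /* a real DLL would hand this to its client */
        }
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    CloseHandle(rd);
    return 0;
}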

Note Additional information about building the DLL is available at https://www.microsoft.com/technet/interopmigration/unix/sfu/intrxcom.mspx.

Interacting with a Win32 Application Using Memory-Mapped Files

Memory mapping is the technique of making a part of the address space appear to contain a file or device so that ordinary memory accesses act on it.

Memory mapping uses the same mechanism as used by virtual memory to "trap" accesses to parts of the address space so that data from the file or device can be paged in (and other parts paged out) before the access is completed.

An Interix application can interact with a Win32 application using the same memory-mapped file. The following example shows how a Win32 program and a Windows Services for UNIX 3.5 program communicate with each other using a memory-mapped file. Compile both programs and execute one with argument 1 and the other with argument 2.

Following is a Windows Services for UNIX example:

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>    /* exit */
#include <unistd.h>    /* write, close, sleep */
#include <errno.h>
#include <string.h>
#include <sys/mman.h>
#define BufferSize 256
char *Messages[][2]={
    /* Talk of '1' */            /* Talk of '2' */
 {"Hello '2', How are you?",        "Not bad '1', how 
 about yourself?"},
 {"What's up '2'?",            "Not much '1'"},
 {"It could be a lot worse!",        "Yup!"},
 {"Well, I guess that is that, '2'!",    "I guess so 
 '1'..."},
 {0,0}
};
typedef struct ShareData_tag{
 char P1_speaks;    /* 'T'=true, 'F'=false */
 char P2_speaks;    /* 'T'=true, 'F'=false */
 char Mess_p1_to_p2[BufferSize];
 char Mess_p2_to_p1[BufferSize];
} SHARED_DATA;
SHARED_DATA SharedDataInit={
 'F','F',{0},{0}
};
const char *pMemMapFileName="/dev/fs/C/common.mem";
int main( int argc, char *argv[]){
 int Step=0, fd, Done=0;
 SHARED_DATA *pSharedData;
 void *pMem;
 if( argc<2){
  printf("Please invoke with either '1' or '2'\n");
  exit(1);
 }
 if( -1 != (fd=open( pMemMapFileName, O_RDWR | O_CREAT
  , 0777 ))){
        /* mmap seems to fail without this write on 
                empty file */
  write( fd, &SharedDataInit, sizeof(SHARED_DATA));
  pMem=mmap( 0, sizeof(SHARED_DATA), PROT_READ | 
PROT_WRITE, MAP_SHARED, fd, 0);
  if( (void*)-1 != pMem){
   pSharedData=(SHARED_DATA *)pMem;
   while( !Done){
        /*************************
         * Personality of '1':
         * - Will talk if both are silent
         * - Will stop talking if both are talking
         */
    if( '1'==argv[1][0]){
     if( 'F'==pSharedData->P1_speaks){
      if( 'F'==pSharedData->P2_speaks){
       strcpy( pSharedData->Mess_p1_to_p2, 
       Messages[Step][0]);
       pSharedData->P1_speaks='T';
      }
     }
     else{
      if( 'T'==pSharedData->P2_speaks){
       printf("[1,%d] P2 says: %s\n",Step,pSharedData
       ->Mess_p2_to_p1);
       pSharedData->P1_speaks='F';
       Step++;
       if( !Messages[Step][0]) Done=1;
      }
     }
    }
        /*************************
         * Personality of '2':
         * - Will talk if other was talking and it was silent
         * - Will stop talking if other is talking
         */
    else{
     if( 'F'==pSharedData->P2_speaks){
      if( 'T'==pSharedData->P1_speaks){
       printf("[2,%d] P1 says: %s\n",Step,pSharedData
       ->Mess_p1_to_p2);
       strcpy( pSharedData->Mess_p2_to_p1, 
       Messages[Step][1]);
       pSharedData->P2_speaks='T';
       Step++;
       if( !Messages[Step][0]) Done=1;
      }
     }
     else{
      if( 'F'==pSharedData->P1_speaks){
       pSharedData->P2_speaks='F';
      }
     }
    }
    sleep(1);
   }
        /* Wait for other process to finish */
   sleep(2);
   pSharedData->P1_speaks='F'; 
   /* For next time we use the same file */
   pSharedData->P2_speaks='F';
   munmap( pSharedData, sizeof(SHARED_DATA));
  }
  else{
   perror( "Could not map memory\n");
  }
  close( fd);
 }
 else perror( "Could not create memory mapping file\n");
}

(Source File: InteropSFU-UAMV2C8.01.c)

Win32 example:

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

#include <windows.h>
#include <winioctl.h>
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>    /* exit */
#include <string.h>    /* strcpy */
#define BufferSize 256
char *Messages[][2]={
    /* Talk of '1' */            /* Talk of '2' */
 {"Hello '2', How are you?",        "Not bad '1', how 
 about yourself?"},
 {"What's up '2'?",            "Not much '1'"},
 {"It could be a lot worse!",        "Yup!"},
 {"Well, I guess that is that, '2'!",    "I guess so '1'
 ..."},
 {0,0}
};
typedef struct ShareData_tag{
 char P1_speaks;    /* 'T'=true, 'F'=false */
 char P2_speaks;    /* 'T'=true, 'F'=false */
 char Mess_p1_to_p2[BufferSize];
 char Mess_p2_to_p1[BufferSize];
} SHARED_DATA;
SHARED_DATA SharedDataInit={
 'F','F',{0},{0}
};
const char *pMemMapFileName="C:\\common.mem";
int main( int argc, char *argv[]){
 int Step=0, Done=0;
 SHARED_DATA *pSharedData;
    HANDLE hFile;
    HANDLE hMem;
 if( argc<2){
  printf("Please invoke with either '1' or '2'\n");
  exit(1);
 }
#ifdef XYZ  /* Original Interix code, compiled out here for comparison. */
 if( -1 != (fd=open( pMemMapFileName, O_RDWR | O_CREAT, 0777 ))){
  write( fd, &SharedDataInit, sizeof(SHARED_DATA));
  pMem=mmap( 0, sizeof(SHARED_DATA), PROT_READ | PROT_WRITE,
             MAP_SHARED, fd, 0);
  if( (void*)-1 != pMem){
   pSharedData=(SHARED_DATA *)pMem;
#endif
/************************************************************/
 if (( hFile = CreateFile (pMemMapFileName /* szTmpFile */,
            GENERIC_WRITE | GENERIC_READ,
            FILE_SHARE_WRITE | FILE_SHARE_READ,
            NULL,
            OPEN_ALWAYS /* CREATE_ALWAYS */,
            FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
            NULL)) == INVALID_HANDLE_VALUE){
  printf("Could not create file %s\n", pMemMapFileName /* szTmpFile */);
  exit(1);
 }
                /* Create file mapping. */
 if (!( hMem = CreateFileMapping ( hFile,
              NULL,
              PAGE_READWRITE,
              0,
              sizeof(SHARED_DATA),
              NULL))){
  if ( hFile){
   CloseHandle ( hFile);
   hFile = NULL;
  }
  printf("Mapping file failed\n");
  exit(1);
 }
 if( (pSharedData = (SHARED_DATA *)MapViewOfFile( hMem, 
                 FILE_MAP_WRITE, 
                 0, 
                 0, 
                     sizeof(SHARED_DATA)))){
  *pSharedData=SharedDataInit;
  FlushViewOfFile( pSharedData, sizeof(SHARED_DATA));
/************************************************************/
   while( !Done){
        /*************************
         * Personality of '1':
         * - Will talk if both are silent
         * - Will stop talking if both are talking
         */
    if( '1'==argv[1][0]){
     if( 'F'==pSharedData->P1_speaks){
      if( 'F'==pSharedData->P2_speaks){
       strcpy( pSharedData->Mess_p1_to_p2, 
       Messages[Step][0]);
       pSharedData->P1_speaks='T';
       FlushViewOfFile( pSharedData, sizeof(SHARED_DATA));
      }
     }
     else{
      if( 'T'==pSharedData->P2_speaks){
       printf("[1,%d] P2 says: %s\n",Step,pSharedData
       ->Mess_p2_to_p1);
       pSharedData->P1_speaks='F';
       FlushViewOfFile( pSharedData, sizeof(SHARED_DATA));
       Step++;
       if( !Messages[Step][0]) Done=1;
      }
     }
    }
        /*************************
         * Personality of '2':
         * - Will talk if other was talking and it was silent
         * - Will stop talking if other is talking
         */
    else{
     if( 'F'==pSharedData->P2_speaks){
      if( 'T'==pSharedData->P1_speaks){
       printf("[2,%d] P1 says: %s\n",Step,pSharedData
       ->Mess_p1_to_p2);
       strcpy( pSharedData->Mess_p2_to_p1, 
       Messages[Step][1]);
       pSharedData->P2_speaks='T';
       FlushViewOfFile( pSharedData, sizeof(SHARED_DATA));
       Step++;
       if( !Messages[Step][0]) Done=1;
      }
     }
     else{
      if( 'F'==pSharedData->P1_speaks){
       pSharedData->P2_speaks='F';
       FlushViewOfFile( pSharedData, sizeof(SHARED_DATA));
      }
     }
    }
    Sleep(1000);
   }
        /* Wait for other process to finish */
   Sleep(2000);
/************************************************************/
  UnmapViewOfFile ( pSharedData);
 }
 else printf("Could not obtain mem pointer\n");
 if( hMem){
  CloseHandle ( hMem);
  hMem=NULL;
 }
 if ( hFile){
  CloseHandle ( hFile);
  hFile = NULL;
 }
/************************************************************/
#ifdef XYZ  /* Original Interix cleanup code, compiled out here for comparison. */
   pSharedData->P1_speaks='F'; 
   /* For next time we use the same file */
   pSharedData->P2_speaks='F';
   munmap( pSharedData, sizeof(SHARED_DATA));
  }
  else{
   perror( "Could not map memory\n");
  }
  close( fd);
 }
 else perror( "Could not create memory mapping file\n");
#endif
 return 0;
}

(Source File: InteropWin-UAMV2C8.021.cpp)

Note FlushViewOfFile writes a byte range within a mapped view of a file to disk. Flushing a range of a mapped view causes any dirty pages within that range to be written to disk. Dirty pages are those whose contents have changed since the file view was mapped. Because the file system caches the data that backs memory-mapped files, this call may not be required; however, there is no harm in making it unless high performance is a critical factor.

Monitoring and Supporting the Applications

System and application administrators usually want to use tools and scripts to manage migrated applications. With migrations to Interix, there is an option to also migrate any of the management tools that were used in the UNIX environment.

This section discusses the options available for managing and supporting migrated applications. In this case, Windows Services for UNIX 3.5 provides the tools that are necessary to port Perl and UNIX shell scripts to Win32, including those employing standard UNIX tools such as awk, sed, and grep.

Deploying Interix Applications Using the Berkeley "r" Commands

The script described in this section copies an updated image of an application to a remote Interix computer and modifies the startup symbolic link of the application to point to the new image. The installation script is copied (using rcp) to the target computer and is then executed (using rsh) to copy a specified application directory from the source computer to the target computer, where the script runs. The script performs the following checks:

  • Whether the platform is Interix.

  • What the current application version is, if any.

  • Whether there is enough disk space at the target computer.

The arguments to this script include:

  • The directory or file on the target computer where the application will be installed.

  • Disk space (in kilobytes) needed for the file or directory in which the application will be installed.

  • The directory or file that must be removed. This must be different from the first argument (the installation directory or file).

  • The source computer on which the installation directory of the application exists.

  • The path of the application on the source computer.

If you specify -f, the application directory or file is removed if it already exists, and the installation is then performed. If this option is not specified and the application directory or file exists, it is not removed and the script exits with an error. Following is an example of such a script with the relevant error messages:

Script for Remote Deployment of Applications in Interix

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

#!/bin/ksh
# ------------------------- Start of Functions -------------------------
CHECK_DISK_SPACE ()
{
 TARGET_DIR="$1"
 SPACE_REQ="$2"
 AVAILFILE1="$3"
 AVAILFILE2="$4"
 if [ ! -d "$TARGET_DIR" ]; then
  return 2
 fi
 cd "$TARGET_DIR"
 let AVAILSPACE=`/bin/df -k . | /bin/tail -1 | /bin/awk '{print $4}'`
 TARGET_DIRFILESYSTEM=`/bin/df -k . | /bin/tail -1 | /bin/awk '{print $1}'`
 if [ -a "$AVAILFILE1" ]; then
  cd `dirname "$AVAILFILE1"`
  AVAILFILE1FILESYSTEM=`/bin/df -k . | /bin/tail -1 | /bin/awk '{print $1}'`
  if [ "$AVAILFILE1FILESYSTEM" = "$TARGET_DIRFILESYSTEM" ]; then
   let AVAILSPACE="$AVAILSPACE"+`/bin/du -skx "$AVAILFILE1" 2>/dev/null | /bin/awk '{print $1}'`
  fi
 fi
 if [ -a "$AVAILFILE2" ]; then
  cd `dirname "$AVAILFILE2"`
  AVAILFILE2FILESYSTEM=`/bin/df -k . | /bin/tail -1 | /bin/awk '{print $1}'`
  if [ "$AVAILFILE2FILESYSTEM" = "$TARGET_DIRFILESYSTEM" ]; then
   let AVAILSPACE="$AVAILSPACE"+`/bin/du -skx "$AVAILFILE2" 2>/dev/null | /bin/awk '{print $1}'`
  fi
 fi
 if [ "$AVAILSPACE" -lt "$SPACE_REQ" ]; then
  return 9
 fi
 return 0
}
# ------------------------- End of Functions ---------------------------
if [ `uname` != "Interix" ]; then
 echo "The operating system is not Interix."
 exit 1
fi
FORCE="n"
if [ "$6" = "-f" ]; then
 FORCE="y"
fi
#dir/file where application will be installed
INSTALL_LOC="$1"
#disk space (in KB) needed in `dirname $INSTALL_LOC`
SPACE_REQ="$2"
#dir/file to be removed (should not be same as $INSTALL_LOC)
REMOVE_LOC="$3"
if [ -a "$INSTALL_LOC" -a "$FORCE" = "n" ]; then
 echo "The product (same or different level) is already
 installed."
 exit 2
fi
TARGET_DIR=`dirname "$INSTALL_LOC"`
CHECK_DISK_SPACE "$TARGET_DIR" "$SPACE_REQ" 
"$INSTALL_LOC" "$REMOVE_LOC"
FLAG="$?"
if [ "$FLAG" = "9" ]; then
 echo "Not enough disk space is available."
 exit 3
elif [ "$FLAG" != "0" ]; then
 echo "The available disk space cannot be ascertained."
 exit 4
fi
rm -rf "$INSTALL_LOC"
rm -rf "$REMOVE_LOC"
#Source machine DNS name where the application's installation directory exists
SOURCE_HOST="$4"
#Directory path of application on source machine
SOURCE_LOC="$5"
rcp -r "$SOURCE_HOST":"$SOURCE_LOC" "$INSTALL_LOC"
Using the Remote Deployment Script

You can deploy your application to a remote computer by following the instructions below. The remote computer is called targetmachine in these instructions.

To deploy your applications to a remote computer

  1. Copy the script install.ksh to the target computer by using rcp.

  2. Execute the install script as follows:

    Note: The following command has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

    $ rsh targetmachine /dev/fs/C/tmp/install.ksh
        /dev/fs/C/tmp/test1 200 /dev/fs/C/appdir/app1
        sourcehost /dev/fs/E/SFU/app1

Where the arguments are as follows:

  - **/dev/fs/C/tmp/test1**
    
    The directory or file in which the application will be installed on the target computer.

  - **200**
    
    The disk space (in KB) needed for the file or directory in which the application will be installed.

  - **/dev/fs/C/appdir/app1**
    
    The directory or file to be removed (must be different from the first argument, the installation location).

  - **sourcehost**
    
    The source computer on which the installation directory of the application exists.

  - **/dev/fs/E/SFU/app1**
    
    The directory path of the application on the source computer.

If you attempt to run the script twice, the following is displayed:

**Note:** The following command has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

<pre IsFakePre="true" xmlns="https://www.w3.org/1999/xhtml">$ rsh targetmachine /dev/fs/C/tmp/install.ksh /dev/fs

/C/tmp/test1 200 /dev/fs/C/appdir/app1 sourcehost /dev/fs/E/SFU/app1 $ "The product (same or different level) is already installed."

This is because the application already exists on the target computer. If you use the **-f** option, the command succeeds even though the application is already installed, because the existing copy is removed before the installation is performed:

**Note:** The following command has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

<pre IsFakePre="true" xmlns="https://www.w3.org/1999/xhtml">$ rsh targetmachine /dev/fs/C/tmp/install.ksh /dev/fs

/C/tmp/test1 200 /dev/fs/C/appdir/app1 sourcehost /dev/fs/E/SFU/app1 –f

Testing Activities

This section discusses the testing activities designed to identify and address potential solution issues prior to deployment. Testing starts when you begin developing the solution and ends when the testing team certifies that the solution components meet the schedule and quality goals established in the project plan.

Testing in migration projects involving infrastructure services is focused on finding discrepancies between the behavior of the original application, as seen by its clients, and the behavior of the newly migrated application. All discrepancies must be investigated and fixed.

In the Developing Phase, the testing team executes the test plans for acceptance tests on the application submitted for a formal round of testing on the test environment. The testing team assesses the solution, makes a report on its overall quality and feature completeness, and certifies that the solution features, functions, and components address the project goals.

The inputs required for the Developing Phase include:

  • Functional specifications document.

  • A feature-complete application, which has been unit tested.

The documents that are used during the Developing Phase include:

  • Test plan. The test plan is prepared during the Planning Phase. It should describe in detail everything that the test team, the program management team, and the development team must know about the testing to be done.

  • Test specification. The test specification conveys the entire scope of testing required for a set of functionality and defines individual test cases sufficiently for the testers. It also specifies the deliverables and the readiness criteria.

  • Test environment. The test environment document describes the software, hardware, and tools required for testing. The test environment itself is an exact replica of the live environment, so that the application is tested under realistic conditions.

  • Test data. The test data is a set of data for testing the application. Test data is usually a diverse set of data that helps test the application under different conditions.

  • Test report. The test report documents the errors found during testing. It includes a description of the errors that occurred, steps to reproduce them, their severity, and the names of the developers responsible for fixing them.

    The test report is updated during the Stabilizing Phase and is also one of the outputs of this phase, along with the tested and stabilized application.

The key deliverables of the Developing Phase include:

  • Application ready to be deployed on the production environment.

  • Application source code.

  • Project documentation and user manual.

  • Test plan, test specification, and test reports.

  • Release notes.

  • Other project-related documents.

Testing begins with a code review of the application and with unit testing. In the Developing Phase, the application is subjected to various tests. The test plan organizes the testing process into the following elements:

  • Code component testing

  • Integration testing

  • Database testing

  • Security testing

  • Management testing

You can test the migrated application in all the scenarios using a defined testing strategy. Although each test has a different purpose, together they verify that all system elements are properly integrated and perform their allocated functions.

Code Component Testing

A component may be a class or a group of closely related classes performing a similar task. Component testing is the next step after unit testing. Component testing is the process of verifying a software component with respect to its design and functional specifications.

Component testing in a migration project is the process of finding discrepancies between the functionality and output of components in the Windows application and in the original UNIX application. Basic smoke tests, boundary-condition tests, and error test cases are written based on the functional specification of the component, as in the sketch that follows.
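The following is a minimal sketch of such a component smoke test. The component function format_message, its inputs, and its expected outputs are hypothetical placeholders; in practice, the expected values would be a baseline captured from the original UNIX application.

/* Minimal component smoke-test sketch (hypothetical component). */
#include <stdio.h>
#include <string.h>

/* Hypothetical component under test; replace with the real interface. */
static void format_message(const char *name, char *out, size_t outlen)
{
    snprintf(out, outlen, "Hello %s!", name);
}

/* Expected outputs, captured as a baseline from the UNIX application. */
static const struct { const char *input; const char *expected; } cases[] = {
    { "1", "Hello 1!" },
    { "2", "Hello 2!" },
    { "",  "Hello !"  },   /* boundary case: empty input */
    { 0, 0 }
};

int main(void)
{
    char buf[256];
    int i, failures = 0;

    for (i = 0; cases[i].input; i++) {
        format_message(cases[i].input, buf, sizeof(buf));
        if (strcmp(buf, cases[i].expected) != 0) {
            printf("FAIL case %d: got \"%s\", expected \"%s\"\n",
                   i, buf, cases[i].expected);
            failures++;
        }
    }
    printf("%s: %d failure(s)\n", failures ? "FAILED" : "PASSED", failures);
    return failures ? 1 : 0;
}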

The code component testing round tests the components for:

  • Functionality.

  • Input and output, interactions within and with other components.

  • Response to stress.

  • Performance.

The test cases for component testing cover, either directly or indirectly, constraints on their inputs and outputs (pre-conditions and post-conditions), the state of the object, interactions between methods, attributes of the object, and other components. The code component testing requires the following inputs:

  • Test plan and specification. It provides the test cases.

  • System requirements. These are used to determine the required behaviors for individual domain-level classes. The use case model is also used to determine which parts of a component must be tested for vulnerabilities.

  • Specifications of the component. The specifications are used to build the functional test cases. Information on the component inputs, outputs, and interactions with other components can be derived from here.

  • Design document. The actual implementation of the design provides the information necessary to construct the structural and interaction test cases.

Components must also be stress tested. Stress testing is the process of loading the component to the defined and undefined limits. Each component must be stressed under a load to ensure that it performs well within a reasonable performance limit.

System CPU and memory usage per component can also be measured and monitored to determine the performance of individual components. For this, you can use tools such as the Windows Performance Monitor, or sample the values programmatically, as in the sketch that follows. For more information, refer to the "Testing and Optimization Tools" section in Chapter 9, “Stabilizing Phase” of this volume.
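The following Win32 sketch illustrates sampling the CPU times and working-set size of the current process with GetProcessTimes and GetProcessMemoryInfo (link with psapi.lib). It is offered only as an illustration of programmatic measurement, not as part of the guide's tool set.

/* Sketch: sample CPU time and memory usage of the current process. */
#include <windows.h>
#include <psapi.h>    /* GetProcessMemoryInfo; link with psapi.lib */
#include <stdio.h>

/* Convert a FILETIME to a 64-bit count of 100-ns intervals. */
static unsigned __int64 FileTimeTo64(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main(void)
{
    FILETIME ftCreate, ftExit, ftKernel, ftUser;
    PROCESS_MEMORY_COUNTERS pmc;
    HANDLE hProc = GetCurrentProcess();

    if (GetProcessTimes(hProc, &ftCreate, &ftExit, &ftKernel, &ftUser)) {
        printf("Kernel CPU: %I64u ms, User CPU: %I64u ms\n",
               FileTimeTo64(ftKernel) / 10000, FileTimeTo64(ftUser) / 10000);
    }
    if (GetProcessMemoryInfo(hProc, &pmc, sizeof(pmc))) {
        printf("Working set: %lu KB, Peak: %lu KB\n",
               (unsigned long)(pmc.WorkingSetSize / 1024),
               (unsigned long)(pmc.PeakWorkingSetSize / 1024));
    }
    return 0;
}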

Integration Testing

Integration testing involves testing the application as a whole, with all the components of the application put together; it follows the component testing performed earlier in the Developing Phase. Integration testing is the process of verifying the application with respect to the behavior of its components in the integrated application, their interaction with other components, and the functional specifications of the application as a whole. In a migration project, integration testing is the process of finding discrepancies in the interaction and behavior of components between the Windows application and the original UNIX application.

Integration testing tests the components for:

  • Functionality, behavior of the application as a whole and the individual components after integration.

  • Input and output, interactions within and with other components.

  • Response to various types of stresses.

  • Performance.

Test cases for integration testing directly or indirectly include functionality of the components, constraints on their inputs and outputs (pre- and post-conditions), the state of the object, interactions between components, attributes of the object, and other components. The application must also be stress tested. Inputs required for integration testing include:

  • Test plan. It provides the details of testing the application.

  • Test specification. It is used to determine the required behaviors for individual domain-level classes. The use case model is also used to determine which parts of the application must be tested for vulnerabilities.

Stress testing must also be performed. Stress testing is the process of loading the application to the defined and undefined limits to ensure that it performs well within a reasonable performance limit.

System testing is also performed after integration testing is complete. System testing is the process of ensuring that the integrated application is compatible with all platforms and meets its requirements. The system CPU and memory usage of the application can also be measured and monitored to determine its performance. For this, you can use tools such as the Windows Performance Monitor.

Note   More information on these tools is available at "Testing and Optimization Tools" in Chapter 9, “Stabilizing Phase” of this volume.

Database Testing

The database component is a critical piece of any data-enabled application. In a migration project, the database may be the same or may have been replaced by another database. In both cases, data must be migrated to the respective database on Windows. Testing of a migrated database includes testing of:

  • Migrated procedural code.

  • Data integration with heterogeneous data sources (if applicable).

  • Customized data transformations and extraction.

Database testing also involves testing at the data access layer, which is the point at which your application communicates with the database. Database testing in a migration project involves:

  • Testing the data and the structure and design of the migrated database objects.

  • Testing the procedures and functions related to database access.

  • Security testing, which tests the database to guarantee proper authentication and authorization so that only the users with the appropriate authority access the database. The database administrator must establish different security settings for each user in the test environment.

  • Testing of the data access layer. (A minimal connectivity sketch follows this list.)

  • Performance testing of data access layer.

  • Manageability testing of the database.
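The following is a minimal sketch of a data access layer connectivity check using ODBC. The DSN name MigratedAppDb is a hypothetical placeholder; substitute the data source, credentials, and queries that your application actually uses.

/* Sketch: ODBC connectivity smoke test against the development database.
   Link with odbc32.lib. */
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main(void)
{
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLRETURN rc;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* "MigratedAppDb" is a hypothetical DSN for the development database. */
    rc = SQLDriverConnect(dbc, NULL,
                          (SQLCHAR *)"DSN=MigratedAppDb;", SQL_NTS,
                          NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
    if (SQL_SUCCEEDED(rc)) {
        printf("Data access layer smoke test: connection succeeded.\n");
        SQLDisconnect(dbc);
    } else {
        printf("Data access layer smoke test: connection FAILED.\n");
    }

    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return SQL_SUCCEEDED(rc) ? 0 : 1;
}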

Typically, an application maintains the following three databases, which are replicas of each other:

  • Development database. This is where most of the testing is carried out.

  • Deployment database (or integration database). This is where the tests are run prior to deployment to ensure that the local database changes are applied.

  • Live database. This has the live data and cannot be used for testing.

Database testing is done on the development database during development, and the integrated application is tested using the deployment database.

Security Testing

Security is about controlling access to a variety of resources, such as application components, data, and hardware. Security testing is performed to ensure that only users with the appropriate authority can use the applicable features of the application. Security testing also involves verifying that the migrated application provides the same security features and measures as the original application.

To ensure that the application is secure, most security measures are based on the following four concepts:

  • Authentication. This is the process of confirming the identity of the users, which is one layer of security control. Before an application can authorize access to a resource, it must confirm the identity of the requestor.

  • Authorization. It is the process of verifying that an authenticated party has the permission to access a particular resource, which is the layer of security control following the authentication layer.

  • Data protection. It is the process of providing data confidentiality, integrity, and nonrepudiation. Encrypting the data provides data confidentiality. Data integrity is achieved through the use of hash algorithms, digital signatures, and message authentication codes. Message authentication codes (MACs) are used by technologies such as SSL/TLS to verify that data has not been altered while in transit.

  • Auditing. It is the process of logging and monitoring the security-relevant events that occur in a system. (A minimal event-logging sketch follows the note below.)

Note For more information, refer to "Event Logging" on the TechNet Web site at https://technet2.microsoft.com/WindowsServer/en/Library /0473658c-693d-4a06-b95b-ebe8a76648a91033.mspx.

The systems engineer establishes different security settings for each user in the test environment. Network security testing is performed to ensure that the network is secure from unauthorized users. To minimize the risks associated with unchecked errors on the system, you should know the user context in which system processes run, keep the privileges of these accounts to a minimum, and log access to these accounts. Active monitoring can be accomplished by using the Windows Performance Monitor for real-time feedback.

All security settings and security features of the application must be documented properly.

Note

More information about security testing is available at
https://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsent7/html/vxcontestingforsecurability.asp.
More information on how to make your code secure is available at
https://msdn.microsoft.com/security/securecode/.
More information on secure coding guidelines for the .NET Framework is available at
https://msdn.microsoft.com/security/securecode/bestpractices/default.aspx?pull=/library/en-us/dnnetsec/html/seccodeguide.asp.

Management Testing

Testing for manageability involves testing the deployment, maintenance, and monitoring technologies that you have incorporated into your migrated application.

Following are some important testing recommendations to verify that you have developed a manageable application:

  • Test Windows Management Instrumentation (WMI). WMI can provide important information about your application and the resources it uses. During the design of your application, you made certain decisions about the types of WMI information that must be provided. These might include server and network configurations, event log error messages, CPU consumption, available disk space, network traffic, application settings, and many other application messages. You must test every source of information and be certain you can monitor each one.

Note More information on usage of WMI in applications is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/wmisdk/wmi/wmi_reference.asp.

  • Test Network Load Balancing (NLB) and cluster configuration. You can use Application Center 2000 clustering to add a front-end or back-end server while the application is still running. After installing new server hardware on the network, use your monitoring console to replicate the application image and start the server. The new server should automatically begin sharing some of the workload. You can set up the Application Center 2000 Performance Monitor (PerfMon) to track multiple front-end Web servers. After setting up PerfMon, make some requests to generate traffic. PerfMon will show an increase in traffic on the back-end servers and that the workload is evenly spread across the front-end computers. (A sketch showing how to collect such counters programmatically follows this list.)

Note More information about Application Center 2000 is available at https://www.microsoft.com/applicationcenter/.

  • Test change control procedures. An important part of application management is the handling of both scheduled and emergency maintenance changes. Test and validate all of the change control procedures including the automated and manual processes.

    It is especially important to test all people-based procedures to ensure that the necessary communication, authority, and skills are available to support an error-free change control process.

    Note   More information on this is available on the MSDN Web site at
    https://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsent7/html/vxcontestingformanageability.asp.
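To automate the PerfMon checks described in this list from a test harness, you can collect the same counters programmatically through the Performance Data Helper (PDH) API. The following sketch samples total processor utilization (link with pdh.lib); the counter path shown is standard, but the one-second sampling interval is an arbitrary choice.

/* Sketch: sample "% Processor Time" through the PDH API.
   Link with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) {
        printf("PdhOpenQuery failed\n");
        return 1;
    }
    PdhAddCounter(query, TEXT("\\Processor(_Total)\\% Processor Time"),
                  0, &counter);

    /* Rate counters need two samples; the one-second gap is arbitrary. */
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                    NULL, &value) == ERROR_SUCCESS) {
        printf("CPU utilization: %.1f%%\n", value.doubleValue);
    }
    PdhCloseQuery(query);
    return 0;
}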

Interim Milestone: Internal Release n

The project needs interim milestones to help the team measure their progress in the actual building of the solution during the Developing Phase. Each internal release signifies a major step toward the completion of the solution feature sets and achievement of the associated quality level. Depending on the complexity of the solution, a number of internal releases may be required. Each internal release represents a fully functional addition to the solution’s core feature set, indicating that it is potentially ready to move on to the Stabilizing Phase.

Closing the Developing Phase

Closing the Developing Phase requires completing a milestone approval process. The team documents the results of different tasks that it has performed in this phase and obtains a sign-off on the completion of development from the stakeholders (including the customer).

Key Milestone: Scope Complete

The Developing Phase culminates in the Scope Complete Milestone. At this milestone, all features are complete and the solution is ready for external testing and stabilization. This milestone is the opportunity for customers and users, operations and support personnel, and key project stakeholders to evaluate the solution and identify any remaining issues that must be addressed before beginning the transition to stabilization and, ultimately, to release.

Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the milestone is complete. The sign-off document becomes a project deliverable and is archived for future reference.

Now the team must shift its focus to verifying that the quality of the solution meets the acceptance criteria for release readiness. The next phase, the Stabilizing Phase, covers the activities (for example, user acceptance testing (UAT), regression testing, and conducting the pilot) required to achieve these objectives.
