Chapter 2: Operations

This chapter addresses how to operate your UNIX-to-Microsoft® Windows® migration solution after the migration project has been completed. Much of the relevant guidance can be found in existing documentation on Windows operations. The guidance provided here supplements that documentation and addresses three areas:

  • Aspects of the Windows environment that will be modified in some respect as a result of the migration.

  • Aspects of migrated solutions that are not covered by existing guidance.

  • Interoperation issues.

After the application has been successfully deployed, the operations team is responsible for maintaining the solution on a continuous basis, which allows the team to proactively deal with problems. The team performs all the daily maintenance procedures and provides low-level routine change management support for the solution. The success of ongoing operations depends heavily upon the "people and process" elements as well as the technology.

In this guide, the guidance for completing the technical elements of the migration project is organized according to the Process Model phases outlined in Microsoft Solutions Framework (MSF). But the operations guidance (the “people and process” elements) offered in this chapter is organized according to Microsoft Operations Framework (MOF), which is a complementary framework to MSF.

The development and deployment of an IT solution typically involves two teams: the project team and the operations team. The project team is mainly responsible for the MSF Planning, Developing, and Deploying Phases of the migration. MSF provides a flexible and scalable way to plan, design, develop, and deploy successful IT solutions. In contrast, the operations team is permanent and is responsible for the solution’s daily operations and management. MOF is designed to guide the operations teams to achieve mission-critical system reliability, availability, supportability, and manageability of IT solutions.

The two frameworks are complementary, minimizing the time to value—that is, the time between recognition of the need and delivery of the service. Consistency of terminology and concepts between the two frameworks also supports the delivery of a high-quality service. The two frameworks are also well integrated. MSF and MOF both include guidance for team roles and processes to ensure a successful deployment into the production environment.

A brief overview of MOF is provided in the next section of this chapter. More information on its basic models and key concepts is provided in the appendix of the UNIX Migration Project Guide.

Note   More detailed information on MOF is available at https://www.microsoft.com/mof.


Microsoft Operations Framework

Microsoft Operations Framework (MOF) is a collection of best practices, principles, and models. MOF offers a suite of operational guidance in the form of white papers, operations guides, assessment tools, operations kits, case studies, templates, support tools, and services. MOF principles and guidance are organized around three core models: the MOF Process Model, the MOF Team Model, and the MOF Risk Management Discipline. These models are discussed in the following sections. You can use this information to understand MOF best practices and apply them to your IT environment to effectively manage and operate the migrated applications.

MOF Process Model

This section provides an overview of the MOF Process Model and its approach to IT operations. The MOF Process Model provides guidelines on how to plan, deploy, and maintain IT operational processes in support of mission-critical service solutions. The MOF Process Model organizes the service management life cycle into four quadrants, with each quadrant having a specific focus and set of tasks that are carried out through its corresponding set of service management functions (SMFs). (MOF defines more than 20 SMFs.) The key components of the MOF Process Model are:

  • Quadrants

  • Operations management reviews

  • Service management functions

Figure 2.1 depicts the MOF Process Model.

Figure 2.1. MOF Process Model and its quadrants

The following sections explain these components in detail.

Quadrants

Typical operational activities are defined and grouped into quadrants, each of which focuses on a particular set of processes and tasks. The MOF Process Model divides these activities into four quadrants, described in the following sections.

Changing Quadrant

The Changing Quadrant describes processes, responsibilities, reviews, and best practices that help organizations manage changes to their IT infrastructure. MOF identifies three key sets of activities, or SMFs, within this quadrant:

  • Change Management

  • Configuration Management

  • Release Management

In addition to these SMFs, MOF recommends two specific milestone reviews, the Change Initiation Review and the Release Readiness Review, as part of the change management process. These reviews ensure that the new solutions or other significant changes align with the business and the operational requirements for operability.

Note   More information on each of these components of the Changing Quadrant is available at https://www.microsoft.com/technet/itsolutions/cits/mo/mof/mofchange.mspx.

Operating Quadrant

The Operating Quadrant is the collection of processes and IT functions dedicated to the ongoing maintenance, monitoring, control, and protection of IT infrastructure or applications. MOF identifies seven key sets of activities, or SMFs, within this quadrant:

  • Directory Services Administration

  • Job Scheduling

  • Network Administration

  • Security Administration

  • Service Monitoring and Control

  • Storage Management

  • System Administration

In addition to these SMFs, MOF recommends the periodic completion of an Operations Review in order to permit IT service managers to analyze their performance and processes.

Note   More information on each of these components of the Operating Quadrant is available at https://www.microsoft.com/technet/itsolutions/cits/mo/mof/mofopman.mspx.

Supporting Quadrant

Activities and processes that are performed to resolve user- and system-generated queries, issues, or problems are in the domain of the Supporting Quadrant. MOF identifies three key sets of activities, or SMFs, within this quadrant:

  • Incident Management

  • Problem Management

  • Service Desk

In addition to these SMFs, MOF recommends the periodic completion of a Service Level Agreement (SLA) Review, which analyzes past performance and future commitments.

Note   More information on each of these components of the Supporting Quadrant is available at https://www.microsoft.com/technet/itsolutions/cits/mo/mof/mofsupport.mspx.

Optimizing Quadrant

The Optimizing Quadrant encompasses processes and IT functions dedicated to planning and implementing enhancements to the IT environment through a continuous cycle of process improvement. Recommendations originating in the Optimizing Quadrant generally are reflected as changes in the IT infrastructure and are instituted through the change management process, which is a part of the Changing Quadrant.

MOF identifies eight key sets of activities, or SMFs, within this quadrant:

  • Availability Management

  • Capacity Management

  • Financial Management

  • Infrastructure Engineering

  • IT Service Continuity Management

  • Security Management

  • Service Level Management

  • Workforce Management

Changes recommended by the SMFs in this quadrant are implemented through the change management process, which begins with a Change Initiation Review.

Note   More information on these components of the Optimizing Quadrant is available at https://www.microsoft.com/technet/itsolutions/cits/mo/mof/mofoptimize.mspx.

Operations Management Reviews

The operations management reviews (OMRs) are a unique feature of MOF; they assess operational activity for efficiency and effectiveness. MOF makes these reviews an explicit part of the Process Model and provides detailed guidance on how to conduct them. The OMRs are specifically labeled on the Process Model diagram because they warrant senior management attention and can be used as a regularly reported "health check" on the state of operations activity. The operations management reviews are:

  • Release Readiness Review

  • Operations Review

  • SLA Review

  • Change Initiation Review

The Process Model incorporates two types of management reviews: release-based and time-based. Two of the four OMRs, the Change Initiation Review and the Release Readiness Review, are release-based; they occur, respectively, at the initiation of a change and at the final installation of a release into the target environment. The remaining two OMRs, the Operations Review and the SLA Review, occur at regular intervals to assess internal operations as well as performance against customer service levels.

Service Management Functions

Service management functions (SMFs) are the underlying processes and activities within each MOF quadrant that support the mission of service for that quadrant. These SMFs are at the core of the MOF Process Model. Although all of the SMFs are cross-functional (and cross-quadrant) in nature, each SMF is assigned a "home" or "primary" quadrant that aligns the functions performed with the mission of service for that quadrant. (The specific SMFs applicable to each quadrant of MOF are listed in the preceding sections.) This natural alignment between each quadrant and its corresponding SMFs allows the IT manager to see all the key SMFs that are required to effectively run the operations environment.

MOF Team Model

The MOF Team Model organizes the activities of IT operations into seven distinct role clusters that represent areas, or functional roles, within IT operations.

The seven role clusters of the Team Model define the general categories of activities and processes, common ways to identify roles and responsibilities, and common goals for each specialist function team required for a successful operations management organization.

Table 2.1 describes the Team Model quality goals and lists the key functional roles responsible for achieving them.

Table 2.1. MOF Team Quality Goals and Team Roles

  • Release: Well-developed release and change management processes and accurate inventory tracking of all IT services and systems.

  • Service: Portfolio of business-aligned IT services.

  • Infrastructure: Efficient management of physical environments and infrastructure tools.

  • Support: High-quality, cost-effective customer support and a service culture.

  • Operations: Predictable, repeatable, and automated day-to-day system management.

  • Security: Protected corporate assets, controlled access to systems and information, and proactive planning for emergency response.

  • Partner: Efficient, cost-effective, and mutually beneficial relationships with service and supply partners.

MOF Risk Management Discipline

The MOF Risk Management Discipline applies proven risk-management techniques to the challenges that operations staff members face every day. The MOF Risk Management Discipline offers greater value than many other risk management models through its key principles, consistent terminology, and structured and repeatable six-step process. Following is a brief introduction to the six steps of the risk management process.

  • Identify. Risk identification surfaces potential problems so that the operations staff becomes aware of them before they cause harm.

  • Analyze and prioritize. Risk analysis and prioritization enables operations to analyze and prioritize the possible risks and commit resources to manage them.

  • Plan and schedule. Risk planning takes the information obtained from risk analysis and uses it to formulate strategies, plans, change requests, and actions. Risk scheduling ensures that these plans are approved and then incorporated.

  • Track and report. Risk tracking monitors the status of specific risks and the progress in their respective action plans. Risk reporting ensures that the operations staff, service manager, and other stakeholders are aware of the status of top risks and the plans to manage them.

  • Control. Risk control is the process of executing risk action plans and their associated status reporting. Risk control also includes initiating change control requests when changes in risk status or risk plans could affect the availability of the service or SLA.

  • Learn. Risk learning formalizes the lessons learned and uses tools to capture, categorize, and index that knowledge in a reusable form that can be shared with others.

Operating a Mixed Environment

This section explains the different operations mechanisms that you can use when managing a migrated application post-deployment, and describes tools and software available for these tasks and how they can be used to manage application environments.

For many organizations, a migrated application functions in an environment containing a mixture of UNIX and Windows operating systems. Such organizations need some interoperation between the two operating systems. To some extent, interoperation is also required where Interix and Windows systems coexist.

The following are the main topics that you will need to address in mixed environments:

  • Windows-to-UNIX connectivity

  • User authentication and authorization

  • Resource and data sharing

  • Tools and scripts

Windows-to-UNIX Connectivity

In a mixed live environment in which users and support staff access applications running on Windows-based and UNIX-based computers, you will require tools to connect between the operating systems. The client software usually available on Windows desktops provides character or graphical (X Windows) access to UNIX applications, either from the command line or from a graphical user interface (GUI).

Command-line access tools include telnet, Berkeley r access commands (rsh, rexec, and rcp), and secure shell (ssh), which have the following features:

  • The telnet client allows the user to operate in the multiuser environment provided by the host, with access provided by the telnet server.

  • The rsh command allows the user to run commands remotely from a local environment.

  • The ssh command provides extra authentication and encryption features. Its functionality is similar to the rsh command, except that ssh can be made very secure.

The GUI access tool is X Windows, which has the following functionality:

  • An X Windows server provides a graphical interface to local or remote applications over a TCP/IP connection to the UNIX-based computer.

Developers and users of UNIX applications typically use one or more of these access methods. The access method that is used affects both the migration strategy and the migration process itself.

Users require UNIX connectivity when the migrated application continues to use UNIX-style interfaces, such as in some migrations to the Interix environment.

The Windows operating system includes a basic telnet client and supports the rsh, rexec, and rcp client commands.

With an X Windows server on the desktop, developers can also use a GUI to connect to UNIX applications. Windows operating systems do not provide X Windows. For X Windows connectivity, developers need a third-party X Windows server.

The following sections describe the main connectivity solutions available.

Terminal Emulation and Command-Line Connectivity

UNIX developers use the tools described in this section to access UNIX development servers from the command line. If the UNIX application has a command-line interface, these tools are also helpful to users.

Windows Telnet Client

The Windows telnet client provides basic command-line access to UNIX systems. Its command syntax is identical to that of the telnet clients found on UNIX operating systems. The Windows telnet client runs in a window. Users can copy text between the UNIX command line and Windows-based applications. Users run the client from the command prompt or use the Run option on the Start menu.

The Microsoft HyperTerminal application also includes a telnet client. HyperTerminal offers more configuration options than the command-line telnet client, and its supported terminal types include ANSI, VT52, and VT100. It is simple to copy text between HyperTerminal and Windows-based applications.

A third-party telnet client may be required for command-line tools that use terminal characteristics beyond those provided by HyperTerminal, such as function keys or full-screen operation.

Windows Telnet Server

Developers who need to connect from UNIX-based computers to Windows-based servers can install the telnet server included in the Microsoft Windows Server™ 2003 operating system.

Remote Shell (rsh), Remote Copy (rcp), and Remote Execute (rexec)

The Berkeley r commands simplify remote access to UNIX servers. Authentication occurs only once, at logon to the UNIX system; after that, a user does not need to log on again to connect to other systems. This makes it easy to run remote shells (rsh), run remote commands (rexec), and copy files between remote computers (rcp).

The remote clients are included in Windows. The syntax for the Windows commands is:

Note: Some of the lines in the following code have been displayed on multiple lines for better readability.

rsh [Host] [-l UserName] [-n] [Command]
rcp [{-a | -b}] [-h] [-r] [Host][.User:][Source]
    [Host][.User:][Path\Destination]
rexec [Host] [-l UserName] [-n] [Command]

On Windows-based clients, the UserName parameter is required if the names on UNIX and Windows are different.
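As an illustrative sketch, the following invocations show these commands in use. The host name, account, and paths are hypothetical, and because the commands need live r-services on the UNIX side, each one is passed through a small print wrapper here instead of being executed:

```shell
# Hypothetical host "unixhost", account "jdoe", and paths; "show" prints each
# command instead of running it, because rsh/rcp/rexec need a live server.
show() { printf '%s\n' "$*"; }

show rsh unixhost -l jdoe "ls /export/reports"    # run one remote command
show rcp -b "unixhost.jdoe:/export/reports/jan.dat" "C:\reports\jan.dat"
show rexec unixhost -l jdoe "df -k"               # rexec prompts for a password
```

In a real session you would drop the wrapper and run the commands directly; as noted above, the -l UserName parameter is only needed when the UNIX and Windows user names differ.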

The Windows 2000 Resource Kit includes an rsh service, Rshsvc.exe, which accepts incoming rsh connections on a Windows-based server.

Secure Shell (ssh)

The secure shell provides extra authentication and encryption features. It is often used to replace telnet and the Berkeley r access commands where security is an issue. The ssh command functionality is similar to the rsh command except that ssh can be made very secure. (Client and user information is not revealed to third parties.)

Windows operating systems do not include a secure shell client or server. However, third-party products do provide the ssh functionality.
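As a hedged sketch, the two invocations below contrast rsh with a typical third-party ssh client. The host and account names are hypothetical, and the commands are printed through a wrapper rather than executed, since they require a live server:

```shell
# Hypothetical names; commands are printed, not executed, because they need a
# live server. The two lines are functionally similar, but ssh authenticates
# the host and encrypts the entire session.
show() { printf '%s\n' "$*"; }

show rsh unixhost -l jdoe uptime    # clear text; relies on .rhosts trust
show ssh -l jdoe unixhost uptime    # encrypted; key- or password-based logon
```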

GUI Access Tools

Users who require GUI access to a UNIX-based server from a Windows-based desktop usually use an X Windows server.

X Windows

Windows operating systems do not include an X Windows server. The Interix environment includes X Windows clients, which can be used to develop applications that use an X Windows user interface. However, Interix does not include an X Windows server.

Several third-party products provide Windows-based X Windows servers that are well integrated with the Windows desktop; users of these products can copy text and graphics between X Windows and Windows, and the products support most X Windows standards. Each product also includes extra features. For example, MKS Toolkit includes more than 300 UNIX commands that can be used within Windows.

Remote Desktops

Windows operating systems handle multiple users in a different way than UNIX does. A single user at a time logs on to a Windows-based computer. However, using Windows 2000 Terminal Services is analogous to using X Windows on UNIX. By using Terminal Services, a Windows-based server can provide a desktop to remote users on clients spread over a network. This is often referred to as a "thin client" because the client only has to handle input and output while the application runs on the server.

For UNIX-to-Windows connectivity, client implementations are available on UNIX and Linux that use the RDP protocol of Terminal Services.

Alternatively, to connect to a Windows-based server from a UNIX client using a remote desktop, you can use Citrix MetaFrame. Citrix MetaFrame has clients for a wide range of operating systems, including UNIX and Linux; on UNIX, the client also requires an X Windows interface.

Note   More information on Citrix MetaFrame is available at https://www.citrix.com/products/.

User Authentication and Authorization

In any heterogeneous network, such as a network being migrated to Windows, users need to work across systems, and the systems need to interoperate. Your migration project needs to take into account the differences between the UNIX and Windows security models. Security-related questions that were addressed as part of planning need to be implemented here. For example:

  • The number of user databases that administrators need to manage.

  • How to manage user security on shared resources, such as network file system (NFS) shares.

Resource and Data Sharing

Migration from UNIX to Windows requires the two environments to share data and resources. In the simplest case, files are simply copied from UNIX-based servers to Windows-based servers. More typically, UNIX-based servers and Windows-based servers share data and resources, such as printers, during and after the migration.

There are some less-integrated methods for transporting files between systems, such as file transfer protocol (FTP) and removable media (for example, tape). These methods are not considered in this guide because they do not provide interoperability. However, removable media and FTP can be very useful during the migration itself; tape is particularly effective for transferring volumes of data that are too large to feasibly transfer across a network.

Developers or application users must be able to use the files that they need in their target environment (Windows or Interix) transparently. They do not need to know whether the files are on a UNIX-based server or on a Windows-based server. However, UNIX-based systems and Windows-based systems use different network protocols, functionality, and naming conventions for file sharing.

Tools and Scripts

Your system and application administrators will usually want to use tools and scripts to manage your migrated application. With migrations to Microsoft Interix and Microsoft Win32®, you have the option to migrate any management tools that were used previously in your UNIX environment. In a Win32 migration, you can also use some of the Windows-based management tools, such as Microsoft Systems Management Server (SMS).

It is likely that you will consider porting your UNIX management tools to Interix because it gives a UNIX-like environment. Consider porting the tools if your application has been migrated to Win32 and continues to use the same configuration files and scripting tools as it did on UNIX. In this case, Windows Services for UNIX 3.5 provides the tools necessary to port Perl and UNIX shell scripts to Win32, including those employing standard UNIX tools such as awk, sed, and grep. These topics are covered in detail in Chapter 2, “Developing Phase: Process Milestones and Technology Considerations” of the respective build volumes (Volume 2, Volume 3, and Volume 4) of this guide.
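As a small illustration of the kind of script that can move between UNIX and the Windows Services for UNIX environment unchanged, the fragment below uses only standard text tools (grep, awk, sort, uniq). The log file and its format are made-up examples:

```shell
# Made-up application log; count ERROR lines per component using only the
# standard UNIX text tools that Windows Services for UNIX 3.5 also provides.
printf '%s\n' \
    'ERROR disk timeout' \
    'INFO  job started' \
    'ERROR net unreachable' \
    'ERROR disk full' > /tmp/app.log

# Components with the most errors first.
grep '^ERROR' /tmp/app.log | awk '{print $2}' | sort | uniq -c | sort -rn
```

Because Interix supplies the same shell and tools, a fragment like this typically needs no translation during a migration.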

Operating in the Windows Environment

After successfully deploying the migrated applications in the Windows environment, you should follow certain key activities to operate and maintain the deployed applications. These key activities are:

  • Service monitoring and control.

  • Exception monitoring.

  • Notification.

  • Event monitoring.

The following subsections describe these activities in detail.

Note   More information on operating applications in a Windows environment is available at
https://www.microsoft.com/technet/prodtechnol/windowsserver2003/operations/default.mspx and
https://www.microsoft.com/technet/prodtechnol/winxppro/maintain/default.mspx.

Service Monitoring and Control

This section provides an overview of different monitoring mechanisms that can be adopted to monitor the migrated applications in the operating stage.

The monitoring process for an application involves:

  • Monitoring the health and status of enterprise systems.

  • Alerting administrators to problems and consolidating the monitoring data in a single place for ease of administration.

The monitoring tools can monitor individual servers or network components, or they may focus on application services like e-mail, transaction processing, or Web services.

Predictive event management is an essential goal for Windows operations. Instead of reacting to failures, monitoring services report potential errors and failures before they happen so that corrective action can be taken. For example, a predictive exception monitoring tool might notice an increasing frequency of disk write errors on a particular disk and notify the system manager that the disk is likely to fail in the near future. The specifics of prediction will necessarily vary from site to site.

The Web-Based Enterprise Management (WBEM) initiative is an effort to provide a unifying mechanism for describing and sharing management information. WBEM provides a general mechanism that allows any application or service to supply the raw data needed for prediction.

Note   More information on WBEM is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwmi/html/wmioverview.asp.

A recommended tool for service monitoring and control for Windows is Microsoft Operations Manager (MOM) 2005. It provides:

  • Comprehensive event management.

  • Proactive monitoring and alerting.

  • Reporting and trend analysis.

Note   More information on MOM is available at https://www.microsoft.com/mom/techinfo/productdoc/default.mspx.

Exception Monitoring

Exception monitoring tools monitor conditions that fall outside a predefined range of conditions. An administrator would want to know when something unusual happens. Exceptions are usually generated when the amount of a system resource like free disk space falls below a preset threshold or when a monitored device reports a failure or anomaly. Exceptions may be advisory (for example, "This disk is down to 25 percent free space") or they may be critical (for example, "The drive at SCSI ID 3 just went offline"). The response to any exception depends on its severity.

There are several potential sources of exception data. The Microsoft Windows Server 2003 event log records notifications from applications, drivers, and system services, and the log can be used as the basis for generating exceptions.

Note   More information on how to diagnose system problems with Event Viewer in Windows Server 2003 is available at https://support.microsoft.com/default.aspx?scid=kb;en-us;302542&sd=tech.

Simple Network Management Protocol (SNMP) and performance monitoring tools are also available for this purpose. SNMP is a popular protocol for network management. It is used to collect information from, and to configure, network devices, such as servers, printers, hubs, switches, and routers on an Internet Protocol (IP) network. Windows Server 2003 provides SNMP agent software that works with third-party SNMP management software to monitor the status of managed devices and applications. SNMP managers can generate exceptions when they receive traps from their managed devices, and BackOffice components like Microsoft Exchange and Microsoft SQL Server™ can trigger exceptions when service-specific events occur, for example, when a remote mail server does not respond to messages within a predefined interval.

System Monitor is a tool that tracks a range of processes and gives a real-time graphical display of the results on a Windows Server 2003 system. This tool can assist you with planning upgrades, tracking processes that must be optimized, monitoring the results of tuning and configuration scenarios, and understanding a workload and its effect on resource usage so that you can identify bottlenecks. To open the monitor, use the Performance icon in the Administrative Tools folder in Control Panel. Alternatively, you can open it from the Start menu or by typing perfmon.msc in the Run dialog box.

System Monitor in Windows Server 2003 supports an Alert view, in which designated parameters are continually monitored and exception messages are generated when any parameter goes out of limits. Administrators can also use Windows Script Host (WSH) or any other supported scripting mechanism to write flexible scripts that monitor the system and send exception messages when needed.

Exception monitoring in UNIX is usually done by administrators, who write scripts to monitor their applications. Such scripts are often not portable among variants of UNIX, and third-party applications usually require the purchase of their own proprietary monitoring components.
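A threshold script of the kind described above can be sketched in a few lines of portable shell. The function name, thresholds, and message texts are illustrative assumptions; a real script would extract the free-space percentage from df output and run from cron or a scheduler:

```shell
# Minimal sketch of a disk-space exception check; thresholds and wording are
# illustrative only. $1 = volume name, $2 = percent of free space remaining.
check_free_space() {
    if [ "$2" -le 10 ]; then
        echo "CRITICAL: $1 is down to $2 percent free space"
    elif [ "$2" -le 25 ]; then
        echo "ADVISORY: $1 is down to $2 percent free space"
    else
        echo "OK: $1 has $2 percent free space"
    fi
}

check_free_space /export 22    # prints an advisory exception
check_free_space /var 8        # prints a critical exception
```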

Notification

Notification is the process by which administrators or automated systems are notified that some condition has gone out of bounds. These messages can travel on a variety of paths, the most common of which are:

  • Alphanumeric pagers.

  • E-mail.

  • On-screen alerts.

  • Event log entries.

The Windows Server 2003 Resource Kit includes tools for sending exception alerts through e-mail from a command line or script. These tools can be used with the built-in exception capabilities of the event log and System Monitor. Alphanumeric paging messages can be sent using freeware or commercial third-party tools that dial into a paging service and upload the message.

In addition, BackOffice components and many third-party applications include their own notification mechanisms. For example, Microsoft Exchange Server can be used to send e-mail, display on-screen alerts, or route exceptions to an external application. These capabilities are in addition to, and not a replacement for, the standard system notification paths.

Both UNIX and Windows support a variety of third-party notification products. These products usually combine exception monitoring with flexible rule sets that determine who gets which notification messages.
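The routing logic behind such rule sets can be sketched as a simple severity switch. The addresses below are made up, and the mailer function just prints its arguments so the sketch runs anywhere; a real script would call a command-line mail-sending tool at that point:

```shell
# Sketch of severity-based notification routing; addresses are hypothetical
# and "mailer" echoes instead of sending, so the sketch is self-contained.
mailer() { echo "send to $1: $2"; }

notify() {
    # $1 = severity (critical or advisory), $2 = message text
    case "$1" in
        critical) mailer "oncall-pager@example.com" "$2" ;;  # page on-call staff
        *)        mailer "ops-team@example.com" "$2" ;;      # routine e-mail
    esac
}

notify critical "The drive at SCSI ID 3 just went offline"
notify advisory "This disk is down to 25 percent free space"
```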

Event Monitoring

The following scenario illustrates event monitoring and notification in a mixed environment. A network services and consulting firm provides dial-up Internet access for government customers who, for security reasons, cannot have dedicated local access. The organization depends on its servers running Windows NT® and UNIX for its own internal network, including file and print services, the organization's Web page, and its sales database.

The network administrator and data center managers each carry a two-way pager. The UNIX systems run custom-written shell scripts that monitor various conditions and send an e-mail to the pager's address when a significant event occurs. For example, the daily backup script sends a success message to the administrators' regular mail accounts but sends failure messages directly to the pager of the on-call administrator.

The Windows NT-based systems use pager notification through System Monitor and Exchange Administrator. Both of these programs allow an external program to be launched when a monitored parameter goes out of limits. Exception messages are routed to the pagers by a small command-line mail-sending tool. Windows NT-based servers also use ksh shell scripts (using the Interix UNIX-compatibility toolkit) to monitor some of the same conditions as the UNIX servers. For example, the Windows NT® backup script is largely identical to the UNIX script.

The notification system allows administrators to receive continual updates of ongoing processes like backups, as well as instant notification of problems with the network or servers. In many cases, problems can be resolved before the customer encounters them.

Advantages of a Migrated Application to .NET

This section describes the various features available with .NET Framework that help in maintaining or managing the application after it is migrated from UNIX to Windows. This section acquaints you with .NET features and their usage in the operations activities of the migrated application. After migrating your application to Windows Services for UNIX 3.5 or Win32, you can take advantage of the new functionality exposed by the common language runtime (CLR) and reuse existing components from the managed code that you develop.

The interoperability features of .NET enable you to work with existing unmanaged code (that is, code running outside the CLR) in Component Object Model (COM) components as well as Microsoft Win32 DLLs. It also allows the use of managed components from the unmanaged, COM-based code.

The .NET Framework provides the following three forms of interoperability:

  • Platform Invoke (P/Invoke). This feature allows the use of managed languages, such as C# and Microsoft Visual Basic® .NET, to call libraries within standard Microsoft Windows DLLs (such as the Win32 API) or custom DLLs.

  • It Just Works (IJW). This feature allows the use of Managed Extensions for C++ to directly call libraries within standard Windows DLLs. IJW provides C++ programmers with a more straightforward way to call functions in standard Windows DLLs. The appropriate header file should be included for the unmanaged APIs and then linked to the associated import library.

  • COM Interop. This feature allows the use of managed languages to activate and interact with COM objects through COM interfaces. COM interop simplifies creating and instantiating COM components. You can generate (or acquire) an interop assembly to use COM interop and to make early-bound calls to COM components. An interop assembly is an assembly that contains managed types that provide the capability to program indirectly against unmanaged COM types.

Note   More information on improving interop performance is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt07.asp.

Advantages of a Migrated Windows Services for UNIX-based Application to .NET

UNIX and Win32 calls cannot be mixed in a single process when building applications using the Interix environment in Windows Services for UNIX 3.5. However, the following alternate ways can be used by which Windows Services for UNIX 3.5 applications can make use of the .NET environment:

  • Encapsulate the Windows Services for UNIX 3.5 application from a Win32 COM object. This is described is Chapter 7, “Development Considerations for Deployment” of Volume 2: Migrate Using Windows Services for UNIX 3.5 of this guide. .NET applications can access the resultant COM object through interoperability mechanisms (PInvoke and others), discussed in Chapter 3, “.NET Interoperability” of Volume 4: Migrate Using .NET of this guide.

  • Use Web services on Windows Services for UNIX 3.5. XML Web services provide a clean way to exchange data between Windows Services for UNIX 3.5 and .NET applications without dependencies on the architectural differences between the two platforms. The steps are as follows:

    • Install a Web server on Windows Services for UNIX 3.5. Apache is an open-source Web server that is available for Windows Services for UNIX 3.5. Use third-party components to create a Web service wrapper around your Windows Services for UNIX 3.5 code. Many of these third-party components are available as open source components.

    • .NET applications can now “consume” the Web service. The functionality provided by the Windows Services for UNIX 3.5 Web service can be integrated into any .NET or Win32 application.

  • Use Web services on .NET. Write a C# code to run the Windows Services for the UNIX 3.5 application and capture its output. Translate this output into XML using the .NET Framework classes. Wrap all of this within a .NET Web service. Other .NET applications can now make use of the functionality of the Windows Services for UNIX 3.5 application through the Web service. The benefit of this approach is that it takes the advantages of the Web services infrastructure into the .NET Framework.

    Some of the ways in which C# code can access Windows Services for UNIX 3.5 application are as follows:

  • Use command-line invocation through posix.exe, which is easy but not very efficient.

  • If the UNIX application exposes an RPC server (for example, using ONC RPC), the C# code could make an RPC call to the UNIX-based server. You could build some unsafe C# code that maps a file into memory, and build the UNIX application to map the same file. Shared memory is particularly useful when you need to quickly move large amounts of data between Windows Services for UNIX 3.5 and .NET.

Advantages of a Migrated Win32/Win64-based Application to .NET

An application that has been migrated to Win32/Win64 using the Windows API (application programming interface) can be migrated to .NET in the following ways:

  • Interop Services. .NET can call Windows API using platform Interop Services, which resides at the System.Runtime.InteropServices namespace. The Windows API resides in the User32.dll, GDI32.dll, and Shell32.dll in the Windows system directory. The Interop Services of .NET work with external DLLs. Therefore, by using the System.Runtime.InteropServices namespace in the application, you can make calls to external applications.

    Note   More information on this is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/csref/html/vcwlkPlatformInvokeTutorial.asp.

  • COM objects. .NET also provides interoperability with COM applications. A COM object can be called from .NET. When the COM object is called from .NET, the runtime generates a runtime callable wrapper (RCW). The RCW acts as a proxy for the unmanaged object and is responsible for handling all interactions between the .NET client code and the COM component.

Note More information on COM and .NET interoperability is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/cominterop.asp and
https://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconExposingCOMComponentsToNETFramework.asp.

Conclusion

This marks the conclusion of the five volumes of UNIX Custom Application Migration Guide. You should now possess sufficient knowledge to migrate UNIX applications to Windows as well as know about all of the related processes that require special attention in order to obtain the best possible solution. The guide has aligned the various activities in the UNIX-to-Windows migration process with the MSF/MOF Process Model in the following volumes:

  • Volume 1: Plan. This volume described the Windows and UNIX architectures; development of the solution architecture, project plan, and infrastructure; and preparation of the development and testing environments. This volume is aligned with the Envisioning and Planning Phases of the MSF Process Model.

  • Volume 2: Migrate Using Windows Services for UNIX 3.5. This volume described Windows Services for Unix 3.5, the Interix environment, and architectural differences between Interix and the UNIX environments. It also described the migration activities involved in migrating the UNIX applications to the Interix environment. This volume aligned with the Developing Phase of the MSF Process Model. It provided instructions to developers for the migrating UNIX applications to Windows using Windows Services for UNIX 3.5.

  • Volume 3: Migrate Using Win32/Win64. This volume provided an overview of the Microsoft Win32 and Win64 architectures and described the steps involved in migrating UNIX applications to the Win32/Win64 environments using the Windows Platform Software Development Kit (SDK). This volume aligned with the Developing Phase of the MSF Process Model.

  • Volume 4: Migrate Using .NET. This volume described the .NET architecture, configuration of the .NET environment, and the steps involved in migrating UNIX applications to the .NET environment using Microsoft Visual Studio® .NET 2003 and .NET Framework. This volume aligned with the Developing Phase of the MSF Process Model.

  • Volume 5: Deploy and Operate. This volume described the steps involved in deploying the migration and post-deployment support activities. This volume aligned with the Deploying Phase of the MSF Process Model and operations activities of the MOF Process Model.

This guide has explained the various possible alternatives of migrating UNIX applications to the Windows environment. You can choose the best possible approach based on your business requirements. The guide also provides an assessment tool that can be used for determining the best migration method based on answers to the questionnaire.

Roadmap for Future Migrations

As technology advances, application migrations from UNIX to the Microsoft Windows operating system need to address how to integrate and interoperate with new features and rising technologies on the Windows platform. This section discusses what to expect for current and future migrations with regard to the following technologies:

  • Windows Server System

  • Visual Studio .NET 2005

  • Microsoft SQL Server™ 2005

  • 64-bit solutions

  • Microsoft patterns and practices

  • High-performance and distributed computing

Windows Server System

The Microsoft Windows Server System provides solutions to a wide-range of application business information management issues, including database management, application deployment and configuration control, business-to-business data interchange, messaging, and workflow management. Following are the some of the products that are part of the Windows Server System:

  • Application Center 2000. Microsoft Application Center 2000 is a deployment and management tool for high-availability Web applications built on the Microsoft Windows 2000 operating system. It allows deploying, monitoring, and maintaining complex distributed applications consisting of Web sites, Web services, and COM components.

  • SQL Server 2000. Microsoft SQL Server 2000 is a relational database engine that can store, retrieve, analyze, and manage large amounts of data. SQL Server meets the availability, reliability, and scalability requirements of demanding e-commerce data storage requirements.

  • BizTalk Server 2000. Microsoft BizTalk® Server 2000 is a suite of tools and services to help you quickly build and deploy data interchange between internal and external Internet applications using W3C-standard Extensible Markup Language (XML) messages.

  • Exchange 2000 Server. Microsoft Exchange 2000 Server is a messaging infrastructure and data store designed to meet the real-time messaging and collaboration needs of both small and large organizations. Exchange Server 2000 features include instant messaging, real-time conferencing, collaboration, document-centric workflow, calendaring, and contact management. The Exchange data store provides a central repository for e-mail, documents, Web content, and applications.

  • Commerce Server 2000. Microsoft Commerce Server 2000 is a customizable e-commerce application for creating an online business. Commerce Server 2000 includes many ready-to-use functions, including catalog display, product and customer service management, and order processing.

  • Content Management Server. Microsoft Content Management Server is a content management system for building, deploying, and maintaining dynamic Web site content for either internal organizational documents or a commercial Web site.

  • Windows 2000 Advanced Server. Microsoft Windows 2000 Advanced Server is the server operating system for e-commerce and internal business applications.

  • Windows 2000 Datacenter Server. Microsoft Windows 2000 Datacenter Server is the server operating system for running mission-critical applications and high-volume, real-time transactions.

  • Host Integration Server 2000. Microsoft Host Integration Server 2000 connects distributed .NET Platform applications to mainframe applications, processes, and data stores. With Host Integration Server, you can access relational data on SQL Server and DB2, and flat-file data on mainframes, AS/400, UNIX, Windows 2000, and Microsoft Windows NT® Server systems.

  • SharePoint Portal Server: Microsoft SharePoint® Portal Server is a suite of tools and services that can help you create and deploy a central point for publishing business information. It provides document storage and management, content searches, and team collaboration features that are fully integrated with Microsoft Office 2000 applications like Word, Excel®, and PowerPoint®.

  • Systems Management Server. Microsoft Systems Management Server (SMS) is a suite of tools for handling operating system installation, providing asset management, automating software and hardware inventory, monitoring server health, tracing network topology, and doing remote troubleshooting.

Note   More information on Windows Server System is available at https://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsent7/html/vxconnetenterpriseservers.asp.

Visual Studio .NET 2005

Visual Studio .NET 2005 is a complete set of development tools for building ASP .NET Web applications, XML Web services, desktop applications, and mobile applications. Visual Basic, Visual C++, Visual C#, and Visual J# all use the same integrated development environment (IDE), which allows them to share tools and facilitates in the creation of mixed-language solutions. It has enhanced features as compared to Visual Studio .NET 2003 in Office application and Web application development. Visual Studio 2005 Team System is a productive, integrated, and extensible software development life-cycle tools platform that helps software teams by improving communication and collaboration throughout the software development process.

Note   More information on about the new features of Visual Studio .NET 2005 is available at https://msdn2.microsoft.com/library/88fx1xy0.aspx.

SQL Server 2005

SQL Server™ 2005 is a comprehensive database platform providing enterprise-class data management with integrated business intelligence (BI) tools. The SQL Server 2005 database engine provides more secure, reliable storage for both relational and structured data, enabling you to build and manage highly available, highly performing data applications. The close integration of SQL Server 2005 with Microsoft Visual Studio, the Microsoft Office System, and a suite of new development tools, including the Business Intelligence Development Studio, helps a developer or database administrator reduce development and administrative time.

The features of SQL Server 2005 include:

  • .NET Framework Integration. With the release of Microsoft SQL Server 2005, database programmers can now take full advantage of the Microsoft .NET Framework class library and modern programming languages to implement functionality within the server. Using common language runtime (CLR) integration, you can code your stored procedures, functions, and triggers in the .NET Framework language of your choice. Microsoft Visual Basic .NET and the C# programming language both offer object-oriented constructs, structured exception handling, arrays, namespaces, and classes.

  • Transact-SQL and Managed Code. You can write stored procedures, triggers, and user-defined functions using T-SQL or a programming language that is compatible with .NET Framework, such as Visual Basic .NET or C#.

  • Web Services. In SQL Server 2005, you can develop XML Web services in the database tier, making SQL Server an HTTP listener. This provides a new type of data access capability for applications that are centralized around Web services.

  • ADO.NET. The next version of ADO.NET supports query change notifications to multiple active result sets (MARS). ADO.NET evolves dataset access and manipulation to achieve greater scalability and flexibility.

  • SQL Management Objects. The SQL Management Objects (SMO) model is the management object model for SQL Server 2005. SMO represents significant design and architectural improvements for the SQL Server management object model. SMO is the primary tool for developing database management applications using .NET Framework.

  • XML Technologies. XML has become a common format for storing and transferring data and is a popular choice for marked-up, structured, or semi-structured information. Microsoft SQL Server 2000 supports the use of XML through Microsoft SQLXML, which allows you to convert relational data to an XML format and store XML data in relational tables.

  • New Application Framework. SQL Server 2005 introduces a new SQL Server application framework, Service Broker. Service Broker is a distributed application framework that provides reliable asynchronous messaging at the database to database level.

Note   More information about the new features of SQL Server 2005 for Visual Studio .NET 2005 is available at https://www.microsoft.com/sql/.

64-Bit .NET Framework

The 64-bit version of.NET Framework 2.0 enables the .NET Framework platform, tools, and applications to run on 64-bit workstations and servers, which provides increased performance and scalability by addressing more memory (16 terabyte versus 4 GB), processing more data (64 versus 32 bits) per clock cycle, and performing faster numeric calculations. The 64-bit version of the .NET Framework 2.0 allows you to take full advantage of the underlying 64-bit hardware platform without having to deal with differences between the 32-bit and 64-bit hardware platforms. In most cases, applications developed using the 32-bit .NET Framework can be ported to the 64-bit version of.NET Framework and be executed as 64-bit native applications without any source code modifications except the floating point code and the native code. The 64-bit version of .NET Framework 2.0 runs on Windows Server 2003 SP1 64-bit or Windows XP 64-bit, both of which support the two mainstream 64-bit processor architectures.

Note   More information on the 64-bit version of .NET Framework is available at https://msdn.microsoft.com/netframework/programming/64bit/.

Microsoft Patterns and Practices

Patterns and practices are recommendations by Microsoft for architects, software developers, and IT professionals responsible for delivering and managing enterprise systems on the Microsoft platform. Patterns and practices are available for both IT infrastructure and software development topics. Patterns and practices are based on real-world experiences that go far beyond white papers to help enterprise IT professionals and developers quickly deliver sound solutions. This technical guidance is reviewed and approved by Microsoft engineering teams, consultants, Product Support Services, and by partners and customers. Organizations around the world have used patterns and practices to:

  • Reduce project cost.

  • Exploit engineering efforts by Microsoft to save time and money on projects.

  • Follow Microsoft recommendations to lower project risks and achieve predictable outcomes.

  • Increase confidence in solutions.

  • Build solutions on proven recommendations from Microsoft for total confidence and predictable results.

  • Deliver strategic IT advantage.

  • Gain practical advice for solving business and IT problems today, while preparing companies to take full advantage of future Microsoft technologies.

There are three types of patterns and practices:

  • Guides. Guides consist of written guidance to obtain a detailed understanding of technical problem domains and engineering practices. Microsoft patterns and practices guides cover topics such as patterns, application architecture, integration, performance, and security.

  • Reference Implementations. Reference implementations are executable sample applications that demonstrate patterns and practices guidance in action. Microsoft patterns and practices reference implementations cover such topics as application integration and interoperability using Web services.

  • Application Blocks or Enterprise Library for .NET Framework. Application blocks are reusable source-code components that provide proven solutions to common development challenges. They can be integrated as is into applications, or they can be extended or customized. Microsoft patterns and practices application blocks address specific recurring problem domains such as data access, logging, user interface process, cryptography, security, configuration, exception handling, and composite user interfaces.

Note More information on Microsoft patterns & practices is available at https://msdn.microsoft.com/practices/.

High-Performance and Distributed Computing

The techniques and technologies for high-performance distributed computing can affect a migration. High-performance computing can change the underlying system architecture because of the requirements for computing power and network bandwidth. In addition, the potential distributed nature of high-performance computing raises issues with the test and deployment teams, both of which now need to consider multiple computers instead of single computers. With all the power now available on desktops and servers, the scientific and engineering community looks for ways to maximize the processing power of this commodity hardware and, potentially, the used cycles of these systems.

Two possible approaches are worthy of consideration:

  • High-performance cluster computing. This refers to a technique where large numbers of workstations or servers (or a combination of the two) work together to perform jobs that, until recently, only supercomputers could perform.

  • Grid computing. This refers to using the network to link multiple computers so that tasks can be shared between them. Again, programmers and users can access significant computing power through the use of grid technologies. Furthermore, good use can be made of such CPU cycles, whereas before they would go to waste.

Note   More information on high-performance computing is available at https://www.microsoft.com/technet/itsolutions/cits/interopmigration/unix/hpcunxwn/default.mspx* *and
https://www.microsoft.com/windowsserver2003/ccs/default.mspx.

Further Reading

You can also refer to the following Microsoft Press® books for more information on specific technology.

Reference Architectures

Microsoft Systems Architecture—Enterprise Data Center 

Microsoft Systems Architecture—Internet Data Center 

Application Architecture for .NET: Designing Applications and Services

Microsoft SQL Server 2000 High Availability Series: Volume 1: Planning

Microsoft SQL Server 2000 High Availability Series: Volume 2: Deployment

Managing and Maintaining a Microsoft® Windows Server™ 2003 Environment

Managing Projects with Microsoft® Visual Studio® 2005 Team System

Microsoft® Systems Management Server 2003 Administrator's Companion

Microsoft Active Directory Branch Office Guide: Volume 1: Planning

Microsoft Active Directory Branch Office Series Volume 2: Deployment and Operations

Reference Building Blocks

Data Access Application Block for .NET

.NET Data Access Architecture Guide

Designing Data Tier Components and Passing Data Through Tiers

Exception Management Application Block for .NET

Exception Management in .NET

Monitoring in .NET Distributed Application Design

Microsoft .NET/COM Migration and Interoperability

Production Debugging for .NET-Connected Applications

Authentication in ASP.NET: .NET Security Guidance

Building Secure ASP.NET Applications: Authentication, Authorization, and

Secure Communication

Development References

Inside Microsoft® SQL Server™ 2005: T-SQL Programming. Itzik Ben-Gan. (Solid Quality Learning) and others. 0-7356-2197-7

Inside Microsoft® SQL Server™ 2005: T-SQL Querying. Itzik Ben-Gan. (Solid Quality Learning) and others. 0-7356-2313-9

Inside Microsoft® SQL Server™ 2005: The Storage Engine. Kalen Delaney. (Solid Quality Learning) 0-7356-2105-5

Inside Microsoft® Visual Studio® .NET 2003

Programming with Managed Extensions for Microsoft® Visual C++® .NET

Download

Get the UNIX Custom Application Migration Guide

Update Notifications

Sign up to learn about updates and new releases

Feedback

Send us your comments or suggestions