Chapter 5: Stabilizing a Custom Solution

On This Page

Introduction and Goals
Testing the Solution
Resolving Issues
Conducting a Pilot Deployment
Stabilizing Phase Major Milestone: Release Readiness Approved

Introduction and Goals

In the Stabilizing Phase described in this chapter, you must test and stabilize the custom solution developed earlier that implements interoperability between the Microsoft® Windows® and UNIX operating systems. The solution that you will test is one of several possible solutions described in Volume 2: Chapter 4, "Developing a Custom Solution," for enabling UNIX clients to access authentication information, or both authentication and authorization information, stored in the Windows Active Directory® directory service.

Depending on which solution your organization plans to deploy, interoperability between UNIX hosts and an Active Directory domain is established as follows:

  • Authentication only. As defined in this guide, an End State 1 solution enables UNIX or Linux clients to use Active Directory Kerberos for authentication but continues to use your current UNIX-based authorization method.

  • Authentication and authorization. As defined in this guide, an End State 2 solution uses Active Directory–based Kerberos authentication and Active Directory–based LDAP authorization to support UNIX or Linux clients.

The End State 1 or End State 2 custom solution that you will test might use native UNIX or Linux operating system components available by default in UNIX or Linux (referred to as a native OS solution), or it might use open source components and free software downloads in addition to the native OS components (referred to as an open source solution). Both native OS and open source solutions described in this volume include solutions designed for Red Hat 9 Linux and for Solaris 9 UNIX.

Developers might have experimented with more than one solution during the Developing Phase as part of the process of determining which solution is best for your organization. However, typically, you perform the tests and pilot deployment described in this chapter only for the solution that you plan to deploy in your production environment.

The purpose of the tests is to confirm that UNIX user accounts and the attributes that define the UNIX users who own those accounts are stored correctly in Active Directory and that the new solution is integrated into your overall network environment. This includes testing authentication during the logon process and testing that LDAP authorization correctly grants or denies a user or computer access to a network resource. For this reason, the tests for End State 2 (authentication and authorization) differ from those for End State 1 (authentication only).

Your goal in the Stabilizing Phase is to improve the solution quality to a level that meets your acceptance criteria for release to production. Effective testing in the Stabilizing Phase emphasizes operation under realistic environmental conditions. This includes developing and testing a prototype installation script for your pilot deployment that you can later adapt for your production deployment. After you determine that the solution is sufficiently stable to be a release candidate, you deploy the solution to one or more pilot groups of users before moving into the Deploying Phase.

Intended Audience

The primary audience for the Stabilizing Phase chapter includes the Test, Release Management, and Development Roles. All team leads should also review this chapter.

Knowledge Prerequisites

Ensure that your team possesses the knowledge requirements stated in "About This Volume" and identified in the "Project Team Skills Template" job aid.

Team leads should be familiar with the following Microsoft methodologies:

  • Microsoft Solutions Framework (MSF)

  • UNIX Migration Project Guide (UMPG)

For more information about MSF and the UMPG, see "About This Volume."

Major Tasks and Deliverables

The major Stabilizing Phase tasks are:

  • Test the solution.

  • Resolve issues.

  • Conduct a pilot deployment.

This chapter provides the technical information needed to enable you to accomplish these tasks. Refer to the UMPG for guidelines about how to organize team members and work processes used to complete this phase.

Testing the Solution

In the Stabilizing Phase, you perform tests that confirm that the solution works as expected and tests that attempt to "break" the solution.

Although developers have already performed some of the tests described in this chapter in their lab during the Developing Phase, you now need to perform more comprehensive testing in a setting that more closely approximates your production environment. Testing should include all logon methods and all applications that perform authentication and authorization.

You should carefully evaluate testing requirements that might arise from any of the following:

  • Service level agreements (SLAs)

  • Regulatory requirements

  • Policies

Many of the tests described for the Stabilizing Phase are applicable to any of the available solutions developed earlier in the Developing Phase. Others vary depending on the type of solution. Each test or procedure in this chapter therefore carries a label that indicates the solutions to which it applies, so that you can tell whether you need to perform it for the solution that you plan to deploy. For example:

[All Solutions]

– or –

[Native OS Red Hat Solutions]

Consider automating as much of the testing as possible to ease verification of functionality during and after deployment.

During testing and during the pilot deployment that follows testing, expect to identify additional ways, not covered in this chapter, in which the solution you plan to deploy interacts with your network infrastructure; these interactions might require additional analysis or testing.

This section includes the following topics:

  • Prepare test lab environment

  • Test infrastructure components

  • Test base functionality

  • Test potential failure cases

  • Extend testing to your full environment

  • Monitor potential long-term failures

  • Develop an installation script for the pilot deployment

Prepare Test Lab Environment

The tests described in this chapter assume that your lab is set up as described in Volume 2: Chapter 4, "Developing a Custom Solution."

This section includes the following topics:

  • Review tests and other tasks performed in the Developing Phase

  • Assess status of test lab used by developers

  • Expand test lab to model your production environment

Review Tests and Other Tasks Performed in the Developing Phase

Although Chapter 4, "Developing a Custom Solution," describes several possible solutions for implementing Active Directory–based authentication and authorization for UNIX clients, members of the Test and Release Management teams who perform the Stabilizing Phase tasks described in this chapter need to be familiar only with the specific solution that your organization plans to deploy.

Test-related procedures described for each type of solution include those in the following subsections in Chapter 4:

  • For all End State 1 and all End State 2 solutions, review the following Developing Phase sections and tests in Chapter 4:

    • Create Test OUs, Users, and Groups

    • Perform a Quick Kerberos Configuration Verification Test

    • Create Test User Data for User test01

    • Perform a Quick Verification Test as User test01

  • For End State 2 solutions only, review the following Developing Phase sections and tests in Chapter 4:

    • Create LDAP Proxy User

    • Add UNIX Attributes to Test Groups and Users

    • Test LDAP Connectivity to the Active Directory Server

    • Create Test User Data for User test02

    • Perform a Quick Verification Test as User test02

    • After configuring Transport Layer Security (TLS) and restarting the LDAP daemons, repeat each user test:

      Perform a Quick Verification Test as User test01

      Perform a Quick Verification Test as User test02

In addition to reviewing the test-related procedures listed earlier from Chapter 4, you should also review all of the procedures for the specific interoperability solution to be deployed. Reviewing all of the procedures used to develop the solution will help you gain an overall understanding of the interoperability solution that you will test.

Assess Status of Test Lab Used by Developers

If the Development team experimented with two or more of the several solutions described in Chapter 4, make sure that you know which solution your organization plans to deploy in your production environment. If the development lab was configured for more than one solution, the lab should be rebuilt from scratch before continuing.

CAUTION   If the Development team experimented with the commercial solutions included in this volume—Centrify DirectControl or Quest Software Vintela Authentication Services (VAS)—you must rebuild your test lab from scratch. Using a lab that was set up earlier for one of the commercial solutions might cause your custom solution to fail. This is true both for the UNIX or Linux clients and for the Active Directory servers.

Assess the current state of the test lab and, if necessary, rebuild it:

  • Use current lab? If the lab used to develop the solution, as described in Chapter 4, is in a known stable state appropriate for testing, use the existing lab infrastructure to perform the steps described in this chapter.

  • Rebuild lab? If the lab used to develop the solution is in an unknown or unstable state, use the procedures described in Chapter 4 to rebuild the lab. The procedures that you need to perform are listed in Table 5.1.

    Table 5.1. If Necessary, Rebuild Test Lab As Described in Chapter 4

    Preparatory steps (all solutions). Perform the steps in Chapter 4 described in the section "Preparing Your Environment."

    Solution steps (your solution). Perform the steps in Chapter 4 described for the particular solution that you plan to deploy. Choose the section that describes how to deploy one of the following solutions:

    • Use Solaris 9 with Native OS Components for End States 1 and 2

    • Use Red Hat 9 with Native OS Components for End States 1 and 2

    • Use Solaris 9 with Open Source Components for End States 1 and 2

    • Use Red Hat 9 with Open Source Components for End States 1 and 2

    Each of these sets of procedures is grouped such that, if your organization plans to deploy an End State 1 solution, you can skip the procedures needed for an End State 2 solution.

Expand Test Lab to Model Your Production Environment

Developing one of the solutions during the Developing Phase as described in Chapter 4 served as an initial proof of concept for the End State 1 or End State 2 solution that your organization plans to deploy in your production environment. Now, in the Stabilizing Phase, you expand the lab to more accurately model your production environment and to address the objectives that you defined during the Planning Phase.

It is important that you understand the scope of testing that modeling your production environment will require before you begin testing infrastructure components, base functionality, and potential failure cases.

This section describes the tasks you will need to perform to expand the test lab to model your production environment:

  • Create a large number and variety of UNIX test users.

  • Include a large number and variety of UNIX or Linux test computers.

  • Identify all access methods used in your production environment.

  • Create a domain structure that simulates your production environment.

  • Include applications for which authentication or authorization functions will be migrated to Active Directory.

  • Include applications that will continue to use old authentication or old authorization data stores.

  • Configure networking to reflect your production environment.

  • Configure NTP and DNS to reflect your production environment.

  • Configure any additional NIS functions.

  • Create a sufficient number of users and computers to simulate your production environment load.

  • Create users, groups, and computers to simulate your rationalization requirements.

In addition to the information in this section about setting up your test lab to model your production environment, see also the series of tests in the section "Extend Testing to Your Full Environment" later in this chapter.

Create a Large Number and Variety of UNIX Test Users

[All Solutions (except where indicated)]

The configuration for the solutions in the development lab described in Chapter 4 included only a handful of Active Directory–enabled UNIX users and groups organized into a simple Active Directory container structure. Before you can deploy your solution into the production environment, you must conduct the tests in this chapter with a large number of UNIX users that reflects the different types of users in your production environment.

See the sections "Baselining the Environment" and "Create the Functional Specification" in Volume 2: Chapter 1, "Choosing the Right Technology and Planning Your Solution" for information about the types of information, including types of users and groups present in your environment, that you need to identify in the Planning Phase.

To create a large number of UNIX users for the test environment

  1. Identify all UNIX user types in your production environment. The test users you create should realistically represent the users in your production environment—including:

    • UNIX users who also have existing Windows accounts. Include UNIX users who will have both a Windows account and a UNIX account at the start of deployment. This group should include both users whose Windows and UNIX user names match and those whose user names do not match.

    • UNIX users who will not have Active Directory accounts. Include UNIX users who will not use Active Directory accounts after the solution is deployed (for example, local UNIX accounts or root accounts that will not be migrated).

    • UNIX users for whom Active Directory accounts will be created during deployment. Include UNIX users who, before deployment starts, have accounts only in data stores other than Active Directory. It is important to test migrating this type of user before the production deployment.

    • UNIX and Windows users stored in different containers. Your test container structure—Active Directory organizational units (OUs) and groups—should match the container structure in your production environment, as indicated in Table 5.2.

      Table 5.2. Include Users Stored in Different Active Directory Containers

      • OUs. All Active Directory OUs that contain UNIX users who have Active Directory accounts now or will have accounts in one or more of these OUs when the solution is deployed.

      • Built-in groups. All Active Directory built-in groups, such as Domain Users, to which UNIX users now belong or will join when the solution is deployed.

      • Custom groups. All Active Directory groups defined by your organization to which UNIX users now belong or will join when the solution is deployed.

    • UNIX users located in different domains. Include UNIX users who now have or will have Active Directory accounts that belong to different Active Directory domains—both trusted domains and nontrusted domains.

    • UNIX users with different types of permissions. Include UNIX users who have or will have a wide variety of Active Directory permissions, such as users who belong to administrative groups with broad or narrow rights to perform various actions on the network, users who belong to groups that can access sensitive information such as payroll data, or users who belong to groups that can access engineering data.

    • UNIX users whose home directories are NFS-mounted. Include UNIX users who have home directories that are mounted by using the Network File System (NFS) protocol. UNIX uses NFS to enable a computer to access files over a network. In UNIX, to gain access to files on a device other than the local computer, you must first inform the operating system where in the directory tree you would like those files to appear. This process is called "mounting" a file system.

    • UNIX users whose home directories are stored locally. Include UNIX users whose home directories are local instead of NFS-mounted directories.

    • UNIX users who participate in shared user logons (if any). Include UNIX user accounts that multiple users use. Generally, this practice is not recommended, although it might be appropriate in some cases. For example, a retail business with one computer, many employees, and high employee turnover might create one account with minimal privileges for nonmanagement employees.

    • UNIX users who frequently forget passwords or lock out accounts. Include UNIX user accounts that you create to match those found in the production environment for users who tend to need frequent password resets or account unlocks.

    • UNIX accounts used for automated processes. Include user or service accounts that are used to create and run automated processes.

    • Novice UNIX users. Include UNIX user accounts that you create to match those found in the production environment for novice UNIX users.

    • Advanced UNIX users. Include UNIX user accounts that you create to simulate those used by advanced UNIX users in your organization. Take into account the likelihood that advanced UNIX users might have their own nonstandard scripts, tools, or procedures that will not otherwise be reflected in the test environment and which might be affected by the solution.

      Depending on the situation, you can use any of the following strategies:

      Adapt the solution. Adapt the solution to ensure that scripts, tools, or procedures used by advanced UNIX users in your organization function correctly.

      Exclude users. Exclude advanced UNIX users who use nonstandard scripts, tools, or procedures from the interoperability solution.

      Change methodology. Change the way that the advanced users use these scripts, tools, or procedures so that they work with the solution.

    • UNIX users for each authorization data store type used (End State 1 solutions only). If multiple authorization data stores will be used in addition to Active Directory, include UNIX users from each authorization data store type.

    • UNIX users who will continue to need to access types of computers that will not be incorporated into the solution. Include UNIX users who use computers that run operating systems that are not compatible with the solution, those that are limited to local access only, or those running pluggable authentication modules (PAM) applications that are not compatible with the solution and that would fail if PAM Kerberos were installed.

      Note   PAM is used to control authentication on UNIX-based computers.

    • UNIX users who are members of a large number of groups. Include UNIX users who belong to the maximum number of groups that occurs in your production environment. It is important to test basic logon for these users—look for expected or anomalous behavior when the Windows Privilege Attribute Certificate (PAC) is very large (a very large PAC typically indicates that the user belongs to a large number of groups).

      Note   The PAC is a Kerberos v5 authorization data field that contains the user rights assigned to a particular user as indicated by the user's security identifier (SID) and the SIDs of each group to which the user belongs. If the Kerberos Key Distribution Center (KDC) service running on Active Directory authenticates a user's identity, Kerberos returns the PAC to the user's workstation with the ticket-granting ticket (TGT). Although UNIX LDAP authorization does not make use of PAC data, the PAC is still returned with the ticket. Standard UNIX applications do not use the Windows PAC.

      CAUTION   Kerberos authentication using the User Datagram Protocol (UDP), which is the historical default, can fail when a user is a member of a large number of groups if this number of groups causes the PAC data to grow to exceed the limit of the UDP packet size. In this case, the Transmission Control Protocol (TCP) must be used for Kerberos authentication. Some older Kerberos implementations do not support TCP for Kerberos authentication. For more information about this issue, see "Test Basic Logon for UNIX Users Who Belong to Many Groups" later in this chapter.

  2. Create UNIX test users that model your production environment. Use one or more of the following methods to create a large number of UNIX users and groups in Active Directory:

    • Manually add users. Manually add the users and groups by using Active Directory Users and Computers.

    • Import users. Use one of the methods described in Table 5.3.

      Table 5.3. Choose the Appropriate Method for Your Test Lab

      • Import all users. Import a copy of all the users in your production environment by using a tool such as ldifde.

      • Import a representative subset of users. Import a representative sample of all types of users in your production environment by using a tool such as ldifde.

      Note   For information about ldifde, see Appendix E: "Relevant Windows and UNIX Tools."

    • Use your organization's standard method. Use the user-provisioning method standard to your organization to create a sample selection of users.

    • Programmatically create the users. If the user account names and passwords are stored in an accessible location or can be derived with an algorithm, it is easier to use these accounts with load-testing, stress-testing, and test-automation tools.
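
      If you take the programmatic approach, one option is to generate an LDIF file and import it with ldifde. The following sketch is illustrative only: the OU, domain components, naming pattern, and user count are assumptions that you would replace with values from your own test forest, and you might need additional attributes (such as UNIX attributes for End State 2 solutions) depending on your schema.

      #!/bin/sh
      # Sketch: write an LDIF file that defines a batch of hypothetical test
      # users (unixtest001 through unixtest200) in an example OU. Run it under
      # a POSIX-compatible shell (for example, bash or ksh). Adjust the DN
      # components, naming pattern, count, and attributes for your test forest.
      OUT=testusers.ldif
      OU_DN="OU=UnixTestUsers,DC=example,DC=com"

      : > "$OUT"                      # start with an empty output file
      i=1
      while [ "$i" -le 200 ]; do
        user=$(printf 'unixtest%03d' "$i")
        {
          echo "dn: CN=${user},${OU_DN}"
          echo "changetype: add"
          echo "objectClass: user"
          echo "sAMAccountName: ${user}"
          echo "userPrincipalName: ${user}@example.com"
          echo
        } >> "$OUT"
        i=$((i + 1))
      done
      echo "Wrote $(grep -c '^dn:' "$OUT") user entries to $OUT"

      On a Windows-based computer, you could then import the file with a command such as ldifde -i -f testusers.ldif; see Appendix E: "Relevant Windows and UNIX Tools" for more information about ldifde.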

Include a Large Number and Variety of UNIX or Linux Test Computers

You must include enough computers to adequately simulate your production environment. In some cases, this might require a few dozen computers for the tests described in this chapter. You should identify all types of computers in your environment and select a cross-section of types and configurations of computers for testing to accurately reflect the variety of configurations that must be supported in your production environment.

For example:

  • Solaris 9 SPARC computers used as graphical user interface (GUI) desktop workstations for UNIX developers.

  • Red Hat 9 computers used as command-line desktop workstations for multiple users in computer labs.

  • Red Hat 9 computers used as application servers for a given application. Only UNIX administrators log on directly to these computers. Use a representative sample of multiple applications in the environment.

  • Red Hat 9 computers on which multiple groups of staff members log on through command-line remote access, such as by using ssh or telnet.

    Note   For more information about ssh and telnet, see the subsection "Repeat Every Test with Each Authentication Interface" under the heading "Prepare for Base Functionality Testing" later in this chapter.

  • Solaris 9 SPARC computers used for secure transactions with special lockdown configurations.

  • Computers that will not be incorporated into the interoperability solution (because, for example, they have noncompatible operating systems) but which will continue to be accessed by users who will also be using computers incorporated into the solution.

  • Computers in multiple locations (for example, remote offices or home offices).

  • Computers running unrelated applications that might pose potential conflicts.

  • Red Hat 9 images used inside a desktop or server virtualization application, such as Microsoft Virtual PC, Microsoft Virtual Server, or VMware Workstation.

    Virtual computers are useful for testing functionality in a test lab. With the exception of load testing a server, you can use virtual computers for all x86-based operating systems in your test setup, including Linux and Active Directory servers (but not computers running Solaris because the solutions in this guide use the Sun SPARC Solaris operating system, not Solaris x86). Using virtual computers is especially useful when you need to test large numbers of client computers—for example, you can use a large number of virtual clients for load testing a real (non-virtual) server.

    CAUTION   Do not use a virtualization application to simulate types of computers that are not run in virtual images in your production environment. The solutions described in this guide assume that computers running Windows and Linux use x86-based operating systems, but that computers running Solaris use the Sun SPARC platform (not Solaris x86).

    For more information about virtual computers, see the product documentation for Microsoft Virtual PC, Microsoft Virtual Server, or VMware Workstation.

Identify All Access Methods Used in Your Production Environment

Evaluate your production environment to identify which logon methods are in use. You should be prepared to test each logon method used in your network.

Your users might log on:

  • Directly at the console command line of UNIX-based workstations.

  • Directly at the console graphical user interface (GUI) of UNIX-based workstations.

  • By using both wired and wireless connections.

  • Through remote access logon methods:

    • By using GUI remote access (for example, by using Hummingbird Exceed from www.hummingbird.com or Reflection X from https://www.wrq.com/products/reflection/pc_x_server).

    • By using command-line remote access, for example, using non-Kerberized or Kerberized versions of tools such as ssh or telnet.

    • By using a Virtual Private Network (VPN) connection or over a satellite link or other method that might produce different behavior from that expected for LAN or WAN remote logon.

  • When already logged on as one type of user, by changing to a different user through the use of tools such as su or sudo.

  • By using a custom PAM-enabled application.

For more information about each of these methods, see the following subsections later in this chapter:

  • Repeat Every Test with Each Authentication Interface

  • Use Non-Kerberized UNIX Tools for Base Functionality Testing

  • Switching to Kerberized Remote Access Tools for Extended Testing

Create a Domain Structure That Simulates Your Production Environment

Build a domain structure for your test environment that simulates the complexity of the production environment. For example:

  • If the production environment uses a multidomain forest, use a similar multidomain forest in the test environment.

  • If the production environment uses trusts between multiple domains, use trusts between multiple domains in the test environment.

One possible approach is to clone your production system. Alternatively, you might prefer to follow your well-established procedure for building your production environment. You might create one script that adds the test users and another that exercises those users, where appropriate, in some of the tests in this chapter.

Include Applications for Which Authentication or Authorization Functions Will Be Migrated to Active Directory

If your production environment contains applications that use Kerberos or LDAP and your plans call for migrating authentication (for End State 1 solutions) or both authentication and authorization (for End State 2 solutions) for these applications to Active Directory, you must test these applications against the new environment.

Even if you do not plan to migrate applications that use Kerberos or LDAP until well after the initial deployment, before starting the deployment in your production environment, you must confirm that using these applications in the new environment will be feasible:

  • Include both applications that incorporate Kerberos authentication and/or LDAP authorization natively (for example, by using the Generic Security Service Application Programming Interface [GSS-API]) and those that use PAM for authentication and/or LDAP for authorization, as appropriate.

    Note   GSS-API is a set of library functions that provides a standard authentication programming interface, allowing application developers to support specialized authentication without requiring knowledge of implementation specifics. Most applications that support GSS-API also support Kerberos. Applications that agree on using Kerberos can use GSS-API to implement the exchange of credentials.

  • Include any applications that use LDAP for authentication—even though the solution selected for deployment uses Kerberos for authentication for the bulk of the transactions.

  • Use the same versions of the applications that will be used in production when the solution is deployed.

  • Include applications that use stored Kerberos credentials.

  • Include applications that use Kerberos key tables.

  • Include applications that handle users who belong to a large number of groups.

Include Applications That Will Continue to Use Old Authentication or Old Authorization Data Stores

If any applications will continue to use data stores other than Active Directory for authentication or for both authentication and authorization, it is important to confirm that users and computers for which data has been migrated to Active Directory can still use and host these applications. Therefore, when you expand your test lab to model your production environment to test your interoperability solution, make sure that you:

  • Include both applications that incorporate Kerberos authentication and/or LDAP authorization natively (using the GSS-API, for instance) and those that use PAM for authentication and/or LDAP for authorization, as appropriate.

  • Include any applications that use LDAP for authentication—even though the solution selected uses Kerberos for authentication for the bulk of the transactions.

  • Use the same versions of the applications that will be used in production when the solution is deployed.

  • Include applications that use stored Kerberos credentials.

  • Include applications that use Kerberos key tables.

Configure Networking to Reflect Your Production Environment

Your test environment should include a network structure with a complexity similar to that found in your production environment. This is especially important for Kerberos authentication testing because Kerberos authentication is sensitive both to time and to IP addressing.

Factors to consider include:

  • If network address translation (NAT) devices or software are used in the production environment, incorporate a representative sample of these in the test environment.

  • If load balancers are used in the production environment with applications that are related to the UNIX-based computers, incorporate a representative sample of these in the test environment.

  • If multiple subnets are used in the production environment, reflect this in the test environment.

  • If WANs, wireless networks, or unusual access methods (such as satellite links) are used in the production environment, reflect this in the test environment.

Configure NTP and DNS to Reflect Your Production Environment

Because Kerberos is especially sensitive to time and name resolution problems, your test environment should reflect the Network Time Protocol (NTP) and Domain Name System (DNS) configuration of your production environment, even if that configuration is anomalous or not ideal. For instance:

  • If DNS is hosted in the production environment on Solaris 9 servers and will continue to be hosted on those servers after deployment (instead of migrating to Windows Server 2003), use similar DNS servers in the test environment.

  • If you use separate DNS systems for different platforms or environments, replicate this environment in your test environment.

Configure Any Additional NIS Functions

If any additional network information service (NIS) functions are used in your production environment, such as the automount daemon or netgroup files, you must configure and test these:

  • NIS is a UNIX identity store that stores information about users, groups, computers, and ipnodes (a local database that associates the names of network nodes with IP addresses).

  • The UNIX automount daemon automatically mounts a network file system (that is, it makes data stored in the UNIX file system accessible) when the file system is first accessed and later unmounts it after a period of inactivity.

  • The UNIX /etc/netgroup file defines network-wide groups and is used for checking permissions when doing remote mounts, remote logons, and remote shells. For remote mounts, the information in the netgroup file is used to classify computers. For remote logons and remote shells, the file is used to classify users.
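
If your environment uses netgroups, it helps testers to see the file format at a glance. The following fragment is a hypothetical example (the host and user names are placeholders); each triple has the form (host,user,domain), and an empty field acts as a wildcard:

  # Hypothetical /etc/netgroup fragment for the test lab.
  # Each triple is (host,user,domain); an empty field matches anything.
  trusted-hosts   (unix01,,) (unix02,,)
  test-users      (,test01,) (,test02,)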

Create a Sufficient Number of Users and Computers to Simulate Your Production Environment Load

Although it is not generally possible to completely simulate the load of a production environment in a test environment, the environment should use a sufficient number of computers and users to provide a level of confidence that the load can be handled in the production environment.

When simulating load, it is best to use multiple computers as clients instead of just one. In addition, the user account being authenticated should be randomly chosen from the entire set of test users for each authentication instead of using just one or two accounts repeatedly. The primary reason for this is to ensure that the tests are not just using cached values.
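As a rough sketch of this approach, the following loop requests a Kerberos ticket for a randomly selected account from a list of test user names; running copies of it from several client computers at once generates load. The file name, password handling, and iteration count are placeholders, and the script assumes an MIT-style kinit that accepts the password on standard input.

  #!/bin/sh
  # Sketch: request TGTs for randomly chosen test users so that each
  # authentication exercises the KDC rather than a cached credential.
  # Assumes testusers.txt lists one account name per line, that the test
  # accounts share a lab-only password, and that kinit reads the password
  # from standard input (true of MIT-style kinit). Run under a
  # POSIX-compatible shell (for example, bash or ksh) and adapt before use.
  USERLIST=testusers.txt
  PASSWORD='LabOnlyPassw0rd'      # test-lab value; never hard-code real credentials
  ITERATIONS=100

  count=$(wc -l < "$USERLIST" | tr -d ' ')
  i=1
  while [ "$i" -le "$ITERATIONS" ]; do
    # Pick a random line (account) from the user list.
    rand=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
    user=$(sed -n "$((rand % count + 1))p" "$USERLIST")

    # Request a ticket non-interactively; record failures for later review.
    if ! printf '%s\n' "$PASSWORD" | kinit "$user" > /dev/null 2>&1; then
      echo "$(date) kinit failed for $user" >> kinit-failures.log
    fi
    kdestroy > /dev/null 2>&1     # discard the ticket so the next pass starts cold
    i=$((i + 1))
  done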

Types of load to simulate include:

  • For End State 1 and 2 solutions, large numbers of users logging on simultaneously via a variety of access methods.

  • For End State 1 and 2 solutions, simultaneous logons to both Windows and UNIX environments.

  • For End State 1 and 2 solutions, whatever actions your users and applications tend to perform in your environment.

  • For End State 2 solutions, large numbers of users making requests for authorization data. For example, perform directory listings using the long form of the list command (ls -l) to list UNIX directories containing files owned by a large variety of Active Directory users and groups.

  • For End State 2 solutions, a very large number of groups should be created in the Active Directory database. All groups are retrieved during LDAP authorization for each user regardless of the user’s membership in the groups.

When conducting load testing, use multiple computers and users instead of attempting to simulate the load of a broad user and computer base with just a handful of test users and computers.

At peak load, your users should not see degraded logon times or logon failures at a rate greater than that which occurs during nonpeak load times.

Create Users, Groups, and Computers to Simulate Your Rationalization Requirements

It is likely that you will need to perform rationalization of user, group, and computer data in order to deploy your UNIX-to-Windows interoperability solution. Rationalization is the process of ensuring that a user has the same user ID (UID) and group ID (GID) on different computers. Start by identifying differences in UNIX and Active Directory user and group names and deciding whether those names should be the same.
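A simple way to start is to pull the account name, UID, and primary GID from each UNIX host and compare them; accounts that appear with different numeric IDs on different hosts are candidates for rationalization. The following sketch uses placeholder host names and assumes local /etc/passwd files; if your accounts come from NIS, you would read the NIS map instead.

  #!/bin/sh
  # Sketch: compare user name, UID, and primary GID between two UNIX hosts to
  # find accounts that need rationalization. The host names are examples; if
  # your accounts come from NIS, replace "cat /etc/passwd" with "ypcat passwd".
  for host in unix01 unix02; do
    ssh "$host" cat /etc/passwd |
      awk -F: '{ print $1, $3, $4 }' |
      sort > "passwd-$host.txt"
  done

  # Lines that appear in only one file indicate a name, UID, or GID mismatch.
  diff passwd-unix01.txt passwd-unix02.txt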

In addition to identifying all types of users and groups, it is also important to identify all computers, applications, access permissions, and home directories before you begin the process of rationalization.

Include data in the test environment that will need the same rationalization as in the production environment, and test rationalization of this data.

In the section "Rationalize UIDs and GIDs" in Volume 2: Chapter 6, "Deploying a Custom Solution," you will be directed to run through full tests of the entire deployment process, including rationalization. Now is a good time to think about how to test the rationalization process, including which tests you will need to perform and what facilities you will need in order to perform those tests.

Test Infrastructure Components

This section describes how to test infrastructure components that are vital to the successful operation of your interoperability solution.

This section includes the following test topics:

  • Test host name resolution

  • Test time synchronization

  • Test Active Directory server response

Test Host Name Resolution

[All Solutions]

UNIX clients and client applications use host name resolution to locate Kerberos and LDAP servers by converting the server host names into the corresponding IP addresses. Thus, it is essential that host name resolution function correctly for your authentication and authorization solution to operate correctly.

Most enterprise networks use the Domain Name System (DNS) to perform host name resolution. The instructions provided in this guide assume that you use DNS in your environment. Alternatives to DNS for resolving a host name to its corresponding IP address include hosts files and LDAP. If your production environment uses one or more non-DNS methods for name resolution, you must also test their functionality.

In the example lab configuration used during the Developing Phase, the Active Directory server acts as the DNS server and also acts as the Kerberos KDC and LDAP authorization data store. However, in the Stabilizing Phase, you expand your test lab to model your production environment, which means that your DNS service might now run on DNS servers that are separate from your Active Directory servers.

In order for Kerberos to function correctly, your DNS system must support resolution of both forward and reverse lookups. That is, a server host name must be resolvable to an IP address and an IP address must be resolvable to a server host name. Accurate forward lookup tables with absent or incorrect reverse lookup tables can introduce subtle problems that might not be immediately obvious but which could affect Kerberos functionality.

You should test DNS before performing any of the other tests described later in this chapter. Otherwise, any problems encountered while testing authentication or authorization functionality might be attributable to DNS issues instead of to Kerberos or LDAP errors.

Tools that you can use to test DNS include dnslint, nslookup, and dig; these tools are used in the tests in the following subsections. For more information about these tools, see Appendix E: "Relevant Windows and UNIX Tools."

This section includes three major tests to check the functionality of your DNS system:

  • Are the DNS servers responding to requests?

  • Are the DNS servers in sync?

  • Do correct forward and reverse lookup records exist for all computers in the environment?

Collectively, these subsections show you how to test a Windows-based DNS infrastructure. If, instead, you run a DNS infrastructure that is not Windows-based or if you run parallel Windows-based and other DNS infrastructures, you must design additional tests to ensure DNS consistency.

Are the DNS Servers Responding to Requests?

UNIX operating systems are especially sensitive to the absence of DNS servers. When a UNIX-based computer finds no DNS server to resolve host names, many functions—not just those described in this chapter—will fail. UNIX-based computers will also exhibit major performance degradation if the first DNS server specified in the computer’s /etc/resolv.conf file does not respond even if other DNS servers specified in the file do respond. It is therefore very important to ensure that all DNS servers configured for UNIX-based computers are responding to requests.
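Before running dnslint, it can be worth confirming from the UNIX clients themselves that every name server listed in /etc/resolv.conf answers. The following sketch simply queries each listed server in turn; the test host name is an example, and you should review the output because (as described later in this section) some nslookup versions silently fall back to another server.

  #!/bin/sh
  # Sketch: query each name server listed in /etc/resolv.conf for a record
  # that should exist in your zones. The test host name is an example; review
  # the output, because some nslookup versions silently fall back to another
  # server when the one you specify does not respond.
  TEST_NAME=unix01.example.com

  awk '/^nameserver/ { print $2 }' /etc/resolv.conf | while read ns; do
    echo "=== Testing name server $ns ==="
    nslookup "$TEST_NAME" "$ns"
  done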

For Windows-based DNS infrastructures, you can use the command-line Windows dnslint tool to verify the responsiveness of the DNS servers in your environment.

To check the responsiveness of a Windows DNS server from a Windows client

  1. Return data about DNS servers. On any Windows-based computer in the domain, open a command-prompt window and then type the following command to display information about the DNS structure of the domain:

    C:\>dnslint /d example.com /s 10.9.8.1

    where (in this example) example.com is the name of the Active Directory domain and 10.9.8.1 is the IP address of one of the DNS servers in the domain.

  2. Look for errors. This dnslint command generates a report, which it automatically displays in HTML format on the computer on which the command is run. Review this report to confirm that all DNS servers in the environment are listed and that all are responding to User Datagram Protocol (UDP) queries.

    Any errors found will be highlighted in the report.

    For an example of dnslint output that contains no errors and for another example that does contain errors, see "Dnslint Examples" at https://technet2.microsoft.com/WindowsServer/en/Library/23d7442b-cef7-49e6-95d4-c4a4c53d0a5e1033.mspx.

    Using dnslint in this way to verify DNS responsiveness for a Windows client does not verify the existence or correctness of the DNS records for your UNIX-based computers.

To check the responsiveness of any DNS server from a Windows or UNIX client

  1. Return data about DNS servers:

    1. On any UNIX-based or Windows-based computer in the domain, type the following command to display information about the DNS structure of the domain:

      # nslookup unix01 server01

      where (in this example) unix01 is the host name of a computer in the DNS domain and server01 is the host name of one of the domain's DNS servers.

    2. Repeat this command for each DNS server in the domain.

  2. Look for errors. Review the data returned from the nslookup commands that you ran in step 1. Confirm that the lookup was successful and that the DNS server specified in each command was the server that you specified in the nslookup command. For example:

    • The following command:

      # nslookup unix01 server01

      should return output similar to the following:

      Server:  server01.example.com
      Address:  10.9.8.1

      Name:    unix01.example.com
      Address:  10.9.8.16

    • The following command:

      # nslookup unix01 server02

      should return output similar to the following:

      Server:  server02.example.com
      Address:  10.9.8.2

      Name:    unix01.example.com
      Address:  10.9.8.16

    The output from these commands tells you both that the DNS server server01 (or server02 in the second example) is responding to requests and that a forward lookup record for the computer unix01 exists in the DNS tables on this server.

    If the specified DNS server cannot be contacted, you might receive an error indicating that the specified server cannot be found followed by output from a lookup request based on the DNS configuration of the computer on which the command is run. In some cases, only the output based on the local DNS configuration appears, without any error message. The response depends on the platform and on the version of nslookup.

    For example, if the DNS server server03 cannot be contacted, the following command:

      # nslookup unix01 server03

    will return a response similar to the following (with or without the leading error message):

      *** Can't find server address for 'server03':
      Server:  server01.example.com
      Address:  10.9.8.1

      Name:    unix01.example.com
      Address:  10.9.8.16

    In this response, you can see that the request was made to DNS server server03 but that the response was returned by DNS server server01.

    If neither the specified DNS server nor any DNS server specified in the configuration of the computer on which the command is run can be found, an error indicating that no DNS server can be found is returned.

Are the DNS Servers in Sync?

Synchronization problems among DNS servers can produce intermittent host name resolution failures. This type of failure can be very difficult to troubleshoot.

For Windows-based DNS infrastructures, you can review the event log on each DNS server for errors. Typically, DNS servers are unsynchronized only if replication is failing, and any replication failure should generate an event log error.

For either a UNIX or a Windows-based DNS infrastructure, you can use the UNIX dig tool to dump all DNS records from a particular DNS server to a file, and then compare that output to a file generated by the same request made against a different DNS server. You can do this comparison manually or by using the diff command; however, diff will return some differences between output files even if the servers are synchronized, which can make interpreting the data difficult.

To display all DNS forward and reverse lookup data from a specific DNS server

  1. Display forward-lookup data. Use the dig command to display all DNS forward-lookup data from the DNS server server01 for the domain example.com to an output file, output1:

    Note   For more information about the dig command beyond the following simple examples, see Appendix E: "Relevant Windows and UNIX Tools" and see the Red Hat man (manual) page.

    For Solaris, type:

    # dig @server01 example.com -t AXFR > output1 

    For Red Hat, type:

    # dig -b IPAddress @server01 example.com -t AXFR > output1

    Note   The optional -b switch is needed only if the client host has more than one active interface (on different networks). The AXFR option specifies a zone transfer request. In order to make a zone transfer request, your DNS server must be configured to allow the computer on which the command is run to receive zone transfer data. This is not enabled by default for Windows-based DNS infrastructures. On Windows DNS servers you can enable this feature on the Zone Transfers tab of the ZoneName Properties dialog box for each forward and reverse lookup zone.

  2. Display reverse-lookup data. Use the following command to display all DNS reverse-lookup data from the DNS server server01 for the reverse lookup address range 10.9.8.x to an output file, output2:

    For Solaris, type:

    # dig @server01 -x 10.9.8 -t AXFR > output2 

    For Red Hat, type:

    # dig -b IPAddress @server01 -x 10.9.8 -t AXFR > output2

  3. Repeat commands for each DNS server. Run these commands for each DNS server in the environment.

  4. Compare output for each DNS server. Compare the output of the forward and reverse lookup for each DNS server against each other DNS server to confirm that the returned host name and IP address across all DNS servers for each computer in the environment is correct.

Because comparing the output from dig commands can be tedious, especially if you have a large number of servers or a large number of DNS records, you might opt not to use this command unless you are attempting to troubleshoot problems that might be DNS-related.
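
If you do use dig for this comparison, stripping dig's comment lines and sorting the records before running diff removes most of the noise. A minimal sketch, assuming that you saved one forward-lookup output file per server (the file names are examples):

  #!/bin/sh
  # Sketch: compare zone data pulled from two DNS servers while ignoring dig's
  # comment lines and record ordering. The file names are examples; save one
  # forward-lookup (or reverse-lookup) output file per server first.
  for f in server01-forward.txt server02-forward.txt; do
    grep -v '^;' "$f" | sort > "$f.sorted"
  done

  diff server01-forward.txt.sorted server02-forward.txt.sorted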

Do Correct Forward and Reverse Lookup Records Exist for All Computers in the Environment?

Kerberos authentication does not work at all if both forward and reverse DNS records are missing for any of the computers involved in a Kerberos transaction. The three types of computer that participate in a Kerberos transaction are:

  • Client. The client computer that initiates the request.

  • KDC. The KDC that grants the credentials.

  • Application server. The computer that hosts the server side of a client/server application.

In the case of initial logon to a UNIX-based computer using the custom interoperability solutions described in this volume, Active Directory acts as the KDC, and the UNIX-based computer on which the logon attempt is made is both the client and the application server. When some of these forward and reverse DNS records exist and some are either missing or incorrect, some transactions with Kerberos might succeed while others fail. This can complicate troubleshooting.

To use dnslint to check DNS servers for the existence of forward and reverse lookup records for UNIX clients from Windows for any DNS server

  1. Generate a report for forward and reverse lookup records. Create a report about the forward and reverse lookup records for your UNIX-based computers as follows:

    1. Create a generic text file. On any Windows-based computer in the domain, type the following command to create a file that you can then modify and use to obtain a report about the forward and reverse lookup records for your UNIX-based computers:

      C:\>dnslint /ql autocreate 

    2. Modify the generic text file. The autocreate option in the preceding command creates a text file called in-dnslint.txt. To modify the in-dnslint.txt file to obtain information specifically about your UNIX-based computers, edit the file as follows:

      +This DNS server is called:  server01.example.com
      [dns~server] 10.9.8.1
      unix01.example.com,a,r     ;A record
      10.9.8.16,ptr,r            ;PTR record
      unix02.example.com,a,r     ;A record
      10.9.8.17,ptr,r            ;PTR record

      where (in this example) [dns~server] 10.9.8.1 is the DNS server to which you want to direct the requests; unix01.example.com is the fully qualified domain name (FQDN) of one UNIX-based computer that you want to test, and 10.9.8.16 is its IP address; and unix02.example.com is the FQDN of another UNIX-based computer that you want to test, and 10.9.8.17 is its IP address.

    3. Run the command with the modified text file. After you modify the in-dnslint.txt file as shown in the preceding step, type the following command in order to generate a report of the forward lookup records (A records) and reverse lookup records (PTR records) of the computers specified in the file:

      C:\>dnslint /ql in-dnslint.txt /v 

  2. Look for errors. Review the output to see whether any errors appear in the returned A and PTR records. Each host name should be resolved to an IP address, and each IP address should be resolved to a host name. The IP address returned for each host name specified in the file with the syntax unix01.example.com,a,r should match the IP address for the same computer specified with the syntax 10.9.8.16,ptr,r, and vice versa.

    For example, the following snippet of return data indicates that a forward lookup record for the host unix01 was found but that the reverse lookup record for this host is missing:

      Name queried: unix01.example.com
      Record type: A      Query type: recursive      Protocol used: UDP
      Query result: 10.9.8.16

      Name queried: 16.8.9.10.in-addr.arpa
      Record type: PTR      Query type: recursive      Protocol used: UDP
      Query result: record not found

To use nslookup to return forward lookup data for a specified computer from UNIX or Windows for any DNS server

  1. Start the nslookup tool in interactive mode. On any UNIX-based or Windows-based computer in the DNS domain, type the following command to open the nslookup tool in interactive mode:

    # nslookup

  2. Specify the DNS server. At the nslookup command prompt, type the following command to set the DNS server to which requests will be sent:

    > server adserver01 

    where (in this example) adserver01 is the name of the DNS server.

  3. Return forward lookup data for a specified computer. At the nslookup command prompt, type the following command to display the forward lookup DNS record for the specified computer:

    > unix01 

    where (in this example) unix01 is the host name of a computer in the DNS table. This can be any computer in the DNS table.

  4. Review the output. The output of this command should show the host name and IP address of the DNS server to which the request was directed and the host name and IP address of the specified computer. For example:

    Server:  adserver01.example.com
    Address:  10.9.8.1

    Name:    unix01.example.com
    Address:  10.9.8.16

  5. Exit nslookup. At the nslookup command prompt, type the following command to exit from the application:

    > exit 

To use nslookup to return reverse lookup data for a specified computer from UNIX or Windows for any DNS server

  1. Start the nslookup tool in interactive mode. On any UNIX-based or Windows-based computer in the DNS domain, type the following command to open the nslookup tool in interactive mode:

    # nslookup

  2. Specify the DNS server. At the nslookup command prompt, type the following command to set the DNS server to which requests will be sent, where (in this example) adserver01 is the name of the DNS server:

    > server adserver01 

  3. Set the query type to reverse lookup (PTR) records. At the nslookup command prompt, type the following command to set the query type to PTR:

    > set type=PTR 

  4. Return reverse lookup data for a specified computer. At the nslookup command prompt, type the following command to return reverse lookup data for a specified computer’s IP address:

    > 10.9.8.16 

    where (in this example) 10.9.8.16 is the IP address of a computer in the DNS table. This can be any computer in the DNS table.

  5. Review the output. The output of this command should show the host name and IP address of the DNS server to which the request was directed and the reverse lookup record, including IP address and host name, of the specified computer. For example:

    Server:  server01.example.com
    Address:  10.9.8.1

    16.8.9.10.in-addr.arpa   name = unix01.example.com

    Compare the output of the reverse lookup to the output of the forward lookup to confirm that the returned host name and IP address for a given server match.

    Note   You can also review the output files generated by the dig commands described in the section "Are the DNS Servers in Sync?" to confirm that forward and reverse lookup records for each UNIX-based computer exist.

  6. Exit nslookup. At the nslookup command prompt, type the following command to exit from the application:

    > exit 

You can create a script for nslookup requests to return data for more than one computer. For more information about scripting with the nslookup command, see "Running Programs From WSH Scripts" at https://www.microsoft.com/technet/community/columns/scripts/sg1002.mspx.
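On a UNIX-based computer, a simple loop gives you the same kind of batch query without WSH. The following sketch assumes a hypothetical hostlist.txt file with one host name per line and an example DNS server name:

  #!/bin/sh
  # Sketch: run a forward lookup for every host named in hostlist.txt against
  # one DNS server and save the results for review. The file name and DNS
  # server name are placeholders.
  DNS_SERVER=adserver01.example.com

  while read host; do
    echo "=== $host ==="
    nslookup "$host" "$DNS_SERVER"
  done < hostlist.txt > nslookup-results.txt 2>&1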

Because it can be time consuming to query and review data for a large number of computers, you might opt not to use these commands unless you are attempting to troubleshoot problems that might be DNS-related.

Test Time Synchronization

[All Solutions]

The time on UNIX clients must be synchronized with the clock that is maintained by Active Directory on your domain controllers. Synchronization ensures that the time stamp on a Kerberos ticket issued by the Windows KDC is within a valid range. In a Kerberos environment, if your clocks are not synchronized, authentication will fail. Because some functions rely on credentials acquired earlier, these failures might not appear immediately after the clocks become unsynchronized. It is possible that, for a time, some Kerberos functions might fail while others continue to work, which can make problem diagnosis difficult.

This section describes how to test time synchronization manually. If your environment uses automated time synchronization, such as that provided by NTP, an alert system might be in place that monitors time synchronization and notifies a designated group if time synchronization fails.

CAUTION   Before you perform the tests described later in this chapter, you should make sure that time is synchronized on your Active Directory servers and UNIX clients.
As explained in the section "Synchronize Time" in Volume 2: Chapter 4, "Developing a Custom Solution," the custom solutions included in this volume assume that you have configured time on your Active Directory servers and UNIX clients as described in Appendix H: "Configuring Time Services for a Heterogeneous UNIX and Windows Environment." The best practice is to implement NTP to maintain synchronization instead of attempting to manually synchronize computers.

The default time skew setting for Kerberos is 5 minutes, so requests from computers that are more than 5 minutes out of sync will likely fail. You can use the following procedure to manually determine whether the time configured on a UNIX client and the time configured on an Active Directory server are unsynchronized.

To determine whether a UNIX client and an Active Directory server are unsynchronized

  1. Compare Windows and UNIX time settings. On your Active Directory servers and UNIX clients, check that the date, time zone, and time are the same:

    1. Check the Active Directory server. On the Windows domain controller, double-click the time that displays on the taskbar to display the date, time, and time zone.

    2. Check a UNIX-based computer. On a UNIX client, use the date command to display the date, time, and time zone.

    These commands can be scripted to return data from more than one computer.

  2. If necessary, resynchronize time. Resynchronize the time on your Windows domain controllers and UNIX-based computers as described in Appendix H: "Configuring Time Services for a Heterogeneous UNIX and Windows Environment." If your environment uses an automated time synchronization method, such as NTP, the next automated update will overwrite manual changes.
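
If NTP is available on your UNIX clients, you can also check the offset against a domain controller from a script. The following is a minimal sketch, assuming that the ntpdate utility is installed and that the domain controller (adserver01.example.com is a placeholder) serves NTP; it only queries the server and does not change the local clock.

    #!/bin/sh
    # Sketch: report the clock offset between this UNIX host and a domain
    # controller without changing the local clock. Assumes ntpdate is
    # installed and the DC serves NTP; the DC name is an example.

    DC=adserver01.example.com
    MAX_SKEW=300   # seconds; match your Kerberos time skew setting

    OFFSET=`ntpdate -q "$DC" 2>/dev/null | sed -n 's/.*offset \([0-9.-][0-9.]*\).*/\1/p' | head -1`

    if [ -z "$OFFSET" ]; then
        echo "Unable to query time from $DC" >&2
        exit 1
    fi

    echo "Offset from $DC: $OFFSET seconds"

    # Warn if the absolute offset exceeds the allowed skew.
    awk -v o="$OFFSET" -v max="$MAX_SKEW" 'BEGIN { if (o < 0) o = -o; if (o > max) exit 1 }' ||
        echo "WARNING: clock skew exceeds $MAX_SKEW seconds; Kerberos authentication may fail."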

Test Active Directory Server Response

This section includes the following topics:

  • Test Kerberos authentication to Active Directory

  • Test LDAP authorization to Active Directory

Test Kerberos Authentication to Active Directory

[End States 1 and 2]

An Active Directory server acts as a Kerberos KDC to authenticate users through Kerberos. The primary test of functionality for any KDC server is that clients are capable of authenticating to the KDC and that a ticket granted to a client by the KDC is usable for authentication to network services.

In the subsection "Test Authentication for End States 1 and 2 Solutions" later in this chapter, you determine whether users can authenticate to the KDC and can access the UNIX logon service by logging on to a UNIX-based computer. By logging on, a user both requests initial credentials (a ticket-granting ticket or TGT) from the KDC and attempts to access a network service (through the UNIX logon function).

Before you perform those tests, you should use the following procedure to ensure that the Active Directory server is responding correctly to Kerberos authentication requests by a UNIX client.

Follow the steps in the following procedure once for each Active Directory server to be tested.

To test the responsiveness of each Active Directory server for Kerberos authentication

  1. Modify krb5.conf file. On a UNIX host, edit the realms section of the krb5.conf file to reference a KDC record only for the Active Directory server being tested. The following example shows the realms section of a krb5.conf that has not been modified:

    [realms]
     EXAMPLE.COM = {
      kdc = adserver01.example.com:88
      kdc = adserver02.example.com:88
      kdc = adserver03.example.com:88
      admin_server = adserver01.example.com:749
      kpasswd_server = adserver01.example.com:464
      kpasswd_protocol = SET_CHANGE
      default_domain = example.com
     }

Modify this section to identify only one KDC. For example:

    [realms]
     EXAMPLE.COM = {
      kdc = adserver02.example.com:88
      admin_server = adserver01.example.com:749
      kpasswd_server = adserver01.example.com:464
      kpasswd_protocol = SET_CHANGE
      default_domain = example.com
     }

**Note**    The krb5.conf file identifies the Kerberos REALM, the KDCs, and the DNS domain name for the environment. The krb5.conf file also includes Active Directory–specific settings: the encryption type (because Active Directory does not support all of the encryption types that the UNIX client supports) and password change settings.

  2. Get Kerberos ticket. Use the kinit command to acquire a Kerberos ticket for a test user. For example, to acquire a ticket for the user test01, type:

    $ kinit test01

    Note   If you are testing a native OS solution, you might not need to specify the path for UNIX commands, depending on where the commands are stored. If you experience any problems, be sure to specify the full path.
    If you are testing an open source solution, you might need to specify the path for UNIX commands or add their directories to the PATH variable because the commands are not stored by default in a specific directory. You can use either the native versions of these tools or the open source versions for testing. However, if you have configured an encryption type that is not supported by the native versions of the tools, the commands might fail.

    If the command completes successfully, no message appears and the user is returned to the command prompt. A message appears only if an error occurs.

  3. View Kerberos ticket. Use the klist command, which lists currently held Kerberos tickets, to view the ticket just acquired for test01. Type:

    $ klist

    This command returns output similar to the following:

    Note: Some of the lines in the following output have been displayed on multiple lines for better readability.

    Ticket cache: /tmp/krb5cc_500
    Default principal: test01@EXAMPLE.COM

    Valid starting                   Expires                          Service principal
    Tue 10 Aug 2004 10:36:15 AM PDT  Tue 10 Aug 2004 08:36:15 PM PDT  krbtgt/EXAMPLE.COM@EXAMPLE.COM
            renew until Tue 17 Aug 2004 10:36:15 AM PDT

**Note**   If these commands fail, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."
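
After you run kinit manually, you can also script a quick check of the credentials cache. This is a minimal sketch, assuming an MIT-style klist and the example principal test01@EXAMPLE.COM; substitute your own realm and test user.

    #!/bin/sh
    # Sketch: confirm that the credentials cache holds a TGT for the test
    # principal. The principal and realm are examples.

    PRINCIPAL="test01@EXAMPLE.COM"
    REALM="EXAMPLE.COM"

    if klist 2>/dev/null | grep "$PRINCIPAL" > /dev/null &&
       klist 2>/dev/null | grep "krbtgt/$REALM@$REALM" > /dev/null
    then
        echo "PASS: TGT present for $PRINCIPAL"
    else
        echo "FAIL: no TGT found for $PRINCIPAL; see Appendix D for troubleshooting."
        exit 1
    fi
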
Test LDAP Authorization to Active Directory

[End State 2]

Testing LDAP authorization to Active Directory includes the following tasks:

  • Test LDAP connectivity to the Active Directory server

  • Test LDAP authorization data retrieval from the Active Directory server

Test LDAP Connectivity to the Active Directory Server

You can use one of the following commands to confirm that the UNIX host has LDAP connectivity to the Active Directory server. These commands confirm that your UNIX host can communicate through LDAP with your Active Directory server.

Note   The commands shown in this subsection do not validate that the LDAP configuration (described in Chapter 4) is correct. See the next subsection, "Test LDAP Authorization Data Retrieval from the Active Directory Server," for a test to validate the LDAP configuration.

To test LDAP connectivity to Active Directory, run one of the following commands

  • You can use the ldapsearch command as follows to test your LDAP connection, where the command options are defined as described in the following table.

    Note   This test assumes that you have already created the test users test01 and test02 as described in "Preparing Your Environment" in Chapter 4.

    Table 5.4. Command Options for the ldapsearch Command

    | Variable  | Description |
    |-----------|-------------|
    | IPAddress | The IP address of your Active Directory server. |
    | ProxyDN   | The distinguished name (DN) of your LDAP proxy user. |
    | Password1 | The password of your LDAP proxy user. |
    | BaseDN    | The base DN where your UNIX users and computers are stored (for example, ou=unix,dc=example,dc=com). |

    You can use either the open source or the native version of **ldapsearch** to perform these tests. The open source version of **ldapsearch** is installed as part of the builds done for the open source solution for End State 2 described in Chapter 4. The open source version of **ldapsearch** for both Red Hat and Solaris and the native version of **ldapsearch** packaged with Red Hat 9 require the addition of a **-x** switch to the commands specified here. The generic command for these three versions of **ldapsearch** is as follows:

    **Note:** The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

    # ldapsearch -x -h IPAddress -D ProxyDN -w Password1 -b BaseDN -s sub
    '(cn=test*)'

    Here is the generic command that uses the native version of **ldapsearch** on Solaris:

    **Note:** The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

    # ldapsearch -h IPAddress -D ProxyDN -w Password1 -b BaseDN -s sub
    '(cn=test*)'

    For example:

    **Note:** The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

    # ldapsearch -h 10.9.8.1 -D cn=proxyuser,cn=users,dc=example,dc=com -w
    Password1 -b ou=unix,dc=example,dc=com -s sub '(cn=test*)'

    If this command completes successfully, you see output that includes several attributes associated with each user (test*) in Active Directory. The first line of returned data displays information similar to the following:

    CN=test01,OU=Users,OU=UNIX,DC=example,DC=com

    If the command fails, the system displays an error message.

    **Note**   If this command fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."
    
    • You can use the ldapsearch command as follows to test your LDAP connection, where IPAddress is the IP address of your Active Directory server:

      # ldapsearch -h IPAddress -s base -b "" "(objectclass=*)"

      For example:

      # ldapsearch -h 10.9.8.1 -s base -b "" "(objectclass=*)"

      The output from the command should be similar to the following:

      currentTime=20031217100255.0Z
      subschemaSubentry=CN=Aggregate,CN=Schema,CN=Configuration,DC=example,DC=com
      [...]

    **Note**   If this command fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."
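
    To check several domain controllers in one pass, you can wrap the rootDSE query in a loop. The following is a minimal sketch that assumes the open source (OpenLDAP) ldapsearch client; remove the -x switch if you use the native Solaris client. The server addresses are examples.

      #!/bin/sh
      # Sketch: check anonymous LDAP connectivity to each Active Directory
      # server by reading the rootDSE. Server addresses are examples.

      for SERVER in 10.9.8.1 10.9.8.2
      do
          if ldapsearch -x -h "$SERVER" -s base -b "" "(objectclass=*)" > /dev/null 2>&1
          then
              echo "PASS: LDAP connectivity to $SERVER"
          else
              echo "FAIL: LDAP connectivity to $SERVER; see Appendix D."
          fi
      done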
    

    Test LDAP Authorization Data Retrieval from the Active Directory Server

    Testing retrieval of authorization data from Active Directory requires demonstrating that the UNIX client can obtain account information from Active Directory. One way to accomplish this is to use the UNIX getent (get entries) command to test the name service switch (NSS) configuration on a UNIX client.

    Typically, the getent command retrieves a list of entries from a source specified in the /etc/nsswitch.conf file—in this case, from Active Directory. You can use the getent command to query Active Directory for various types of data. For this test, getent is used to query Active Directory to obtain authorization and identity information.

    This test also demonstrates that the LDAP configuration described in Chapter 4 is correct.

    In order to understand why the command in the following procedure specifies the passwd file, you need to understand the following:

    • Three sections of the /etc/nsswitch.conf file are defined specifically for the End State 2 solutions developed in this volume:

      • passwd. Identifies the list of data sources from which user environment configuration data (such as home directory and shell) is retrieved.

      • group. Identifies the list of data sources from which user group membership is retrieved.

      • hosts. Identifies the list of data sources from which name resolution data is retrieved.

    • For the passwd and group sections of the /etc/nsswitch.conf file, both files (the local /etc/passwd and /etc/group files) and ldap (the Active Directory database) are defined. For the hosts section of the /etc/nsswitch.conf file, both files (the /etc/hosts file) and dns (DNS servers) are defined.

    To test LDAP authorization data retrieval from Active Directory by testing the NSS configuration

    1. Query for Active Directory account information. At a command prompt on a UNIX client, type the following command:

      % getent passwd username 

      where username is the user name of an Active Directory user account that has UNIX attributes.

    2. Determine the result. The output of this command should be:

      username::10004:10000::/home/username:/bin/sh

      where username is the user name of an Active Directory user account that has UNIX attributes, and the remaining fields contain data for this user retrieved from Active Directory: the UID of the user (10004 in this example), the user’s GID (10000 in this example), the path of the user’s home directory (/home/username in this example), and the user’s shell (/bin/sh in this example).

      Note   If this command returns an error or returns to the prompt without displaying any data, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."
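
    You can extend this check to a list of test users with a short script. The following is a minimal sketch; the user names are the test accounts used in this chapter, and the script only verifies that an entry is returned, not that every field is correct.

      #!/bin/sh
      # Sketch: confirm that NSS (and therefore LDAP) returns account data
      # for each test user. User names are examples from this chapter.

      STATUS=0
      for USER in test01 test02
      do
          ENTRY=`getent passwd "$USER"`
          if [ -n "$ENTRY" ]; then
              echo "PASS: $ENTRY"
          else
              echo "FAIL: no passwd entry returned for $USER"
              STATUS=1
          fi
      done
      exit $STATUS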

    Test Base Functionality

    To confirm that the solution functions at a basic level, perform a series of tests to validate that basic authentication and authorization are functioning correctly. This section covers testing basic authentication and authorization as well as common failure scenarios.

    In this section of the test, you introduce errors, such as those that might occur in a production environment over time, to determine whether the solution performs as expected under these error conditions.

    This section includes the following topics:

    • Prepare for base functionality testing

    • Test provisioning and deprovisioning

    • Test authentication for End States 1 and 2 solutions

    • Test authorization for End State 2 solutions

    Prepare for Base Functionality Testing

    In order to perform base functionality tests effectively, you should log on as test users with each logon interface. For the base functionality testing described in this section, you should use non-Kerberized (non-Kerberos–aware) UNIX tools.

    This section explains how to:

    • Repeat every test with each authentication interface.

    • Use non-Kerberized UNIX tools for base functionality testing.

    Repeat Every Test with Each Authentication Interface

    It is important to repeat every test for base functionality described in this section with each authentication interface that is used in your production environment. For the most part, these will be logon interfaces, but some tools and applications might also use PAM for authentication and thus will be affected by the PAM configuration described for the custom solutions in this volume.

    Although the End States 1 and 2 custom solutions included in this volume describe deploying pam_krb5 for use with PAM-enabled logons, you should test all authentication interfaces whether or not they use PAM and pam_krb5. It is important to test all authentication interfaces in order to confirm that no authentication interfaces will be adversely affected after the solution is deployed.

    For the base tests described in this section for testing pam_krb5, you should use the non-Kerberized versions of the UNIX tools mentioned in the list following. Later, in the section "Extend Testing to Your Full Environment," which more closely simulates your production environment, you should switch to the Kerberized versions of these tools. For information comparing non-Kerberized versus Kerberized versions of these tools, see the next subsection, "Use Non-Kerberized UNIX Tools for Base Functionality Testing."

    Logon interfaces and connections might include any of the following:

    • Console command-line. The command-line shell on a UNIX-based computer accessed locally on the console.

    • Console graphical user interface (GUI). The graphical interface on a UNIX-based computer. For example, the default Solaris Common Desktop Environment (CDE) or the default Red Hat GNU Network Object Model Environment (GNOME).

    • Wired or wireless connections. Computers connected to the network through wired links and those connected through wireless connections. Logon tests should be conducted with both direct and wireless connections because authentication behavior through each might differ.

    • Remote GUI. The graphical interface on a UNIX-based computer accessed through a remote GUI application, such as:

    • Remote command-line tools. The command-line shell on a UNIX-based computer accessible through a variety of remote access tools, including:

      • Standard telnet. A standard Internet protocol that lets a user on a computer on a TCP/IP–based network (such as a Windows-based or UNIX-based network) log on to a remote computer. The telnet command is a text-based terminal emulation program.

      • Kerberized telnet. Kerberized versions of telnet are available as open source and natively in most UNIX operating systems. With some versions of Kerberized telnet, such as the native Solaris version, it is possible to configure the tool either to use PAM Kerberos for authentication or to skip PAM authentication.

        IMPORTANT   When you replicate Kerberized telnet logon in your test environment, be sure to use the same configuration found in your production environment. For example, Kerberized telnet can be configured either to use or not use PAM for authentication.

      • Standard ssh. The command used to invoke the Secure Shell (SSH) protocol that lets a user log on to a remote computer securely by tunneling communications between the two computers through an encrypted session. The ssh command is more secure than telnet.

        Note   The PAM configuration for ssh on Red Hat varies between installations. By default, PAM for ssh is configured to use the standard PAM system-auth file referenced in this volume. However, Red Hat can use separate PAM configuration files for multiple tools, including ssh. Your environment might be configured to use a separate PAM configuration file for ssh. If that is the case, you will need to configure pam_krb5 separately in this file.

      • Kerberized ssh. Kerberized versions of ssh are available as open source, such as OpenSSH at https://www.openssh.com, and as a packaged product, such as SSH Tectia from SSH Communications Security at https://www.ssh.com. Kerberized versions of ssh might or might not make use of PAM Kerberos for authentication, depending on the version used or the configuration enabled.

      • Standard rlogin. The remote logon, or rlogin, tool is a member of the group of tools collectively known as the remote utilities, or r’utils. Other tools in this group include rcp, or remote file copy; rsh, or remote shell; and rwho, or remote who. The rlogin tool has functionality similar to the telnet tool although it does not use the telnet protocol.

      • Kerberized rlogin. Kerberized versions of the r’utils, including rlogin, are available both as open source and natively in most UNIX and Linux operating systems. With some versions of Kerberized rlogin, such as the native Solaris version, it is possible to configure the tool either to use PAM Kerberos for authentication or to skip PAM authentication.

        IMPORTANT   When you replicate Kerberized rlogin for logging on in your test environment, be sure to use the same configuration found in your production environment.

      Note   The standard and Kerberized ssh tools, as well as Kerberized versions of telnet, ftp, and the r’utils commands are more secure than non-Kerberized versions of telnet, ftp, and the r’utils, all of which transmit data over the network in clear text. For more information about testing with these tools, see "Use Non-Kerberized UNIX Tools for Base Functionality Testing" and "Switching to Kerberized Remote Access Tools for Extended Testing" later in this chapter.

    • Any other network links, including remote VPN, wireless, or satellite technology. Accessing your network through technology such as a virtual private network (VPN) connection, a wireless connection, or a satellite link might introduce behavior different from the behavior that occurs when you access the network from within a standard wired LAN or WAN. This is especially true of Kerberos authentication, which is sensitive to time delays and host name resolution problems.

    Additional authentication interfaces might include:

    • Tools used to change to or act as a different user.

      • su. The native UNIX "substitute user" or su command is used, while logged on as one user, to assume the logon shell of another user. Typically, su is used to change to root user to obtain sufficient permissions to perform administrative functions without having to log off and then log back on as root.

      • sudo. The open-source "superuser do," or sudo, tool is used, while logged on as a nonroot user, to run commands as the root user. The sudo tool uses PAM (including pam_krb5) for authentication and can also be configured to use LDAP authorization. The sudo tool is packaged with the Red Hat operating system.

    • Third party or custom PAM-enabled applications. Third party or custom applications in your environment might have been configured to incorporate PAM for authentication.

    • Client/server applications. Some client/server applications might authenticate or log on the user on the server side.

    For more information about how PAM functions when establishing interoperability between UNIX clients and Windows Active Directory, see Chapter 4, "Developing a Custom Solution."
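
    For example, you can exercise the su interface with an Active Directory–enabled test user, assuming su is configured to use pam_krb5 as described in Chapter 4; the user name is an example.

      $ su - test01        # enter the user's Active Directory password when prompted
      $ id                 # confirm the expected UID, GID, and group names
      $ klist              # if pam_krb5 is configured for su, a TGT should be cached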

    Use Non-Kerberized UNIX Tools for Base Functionality Testing

    You can perform Kerberos authentication with UNIX tools such as telnet, ssh, ftp, and the r’utils, in two very different ways. Table 5.5 explains why you use non-Kerberized tools for the base functionality tests in this section and why you should switch to Kerberized tools later in this chapter when you extend testing to address all types of users, computers, access methods, and applications that are in use in your production environment.

    Table 5.5. Using Non-Kerberized Tools or Kerberized Tools for Different Tests

    Tools

    Which Tests?

    Explanation

    Non-Kerberized tools

    Tests here in the "Test Base Functionality" section

    You can use a standard, non-Kerberized version of the tool in combination with the custom PAM Kerberos solution described in this volume to access a UNIX-based computer remotely by using the same authentication mechanism (such as PAM-enabled login or dtlogin) that you would use to access the computer on the console. With this method, you use the standard tool to open a non-Kerberized channel from the client computer to the UNIX-based destination computer, and then the client initiates Kerberized logon directly on the destination computer.

    When a user remotely logs on to a UNIX-based client computer using one of these tools, the user's user name and password are sent from the user's initiating computer (that is, the local computer, which might be a Windows-based computer) to the UNIX-based client computer. Then, the UNIX-based client computer uses this user name and password to make a request (via PAM) to the KDC on the Active Directory server to request a Kerberos ticket-granting ticket (TGT) from the KDC. If the user name and password are correct, a TGT is returned to the UNIX client and is stored in its credentials cache. Establishing this connection is transparent to the user.

    Because the Kerberos authentication attempt is not initiated until the request reaches the UNIX-based client computer, the user name, password, and any data might be sent in clear text from the initiating computer to the UNIX-based client computer. For this reason, in a production environment, we recommend using ssh (which provides an encrypted channel for communications with or without the use of Kerberos) or Kerberized versions of these tools, as described in the next row.

    Kerberized tools

    Tests later in the section "Extend Testing to Your Full Environment"

    Kerberized versions of these tools require the use of a Kerberized client component (for example, telnet, ssh, ftp, or rlogin) on the source computer and a Kerberized server component (for example, telnetd, sshd, ftpd, or krlogind) on the destination computer.

    In order to use a Kerberized version of a tool, the user must first acquire a TGT on the source computer. The user then initiates a Kerberized session with the Kerberized server on the destination computer. Because the channel is initiated with Kerberos, the user name and password are not sent across the network at all.

    Optionally, the channel can be encrypted to provide further security for subsequent data transfer. When a Kerberized tool is used, the user will have a TGT on the destination computer only if the session has been initiated with a request to forward the TGT from the source computer to the destination computer.

    Kerberized versions of these tools are generally designed to bypass PAM authentication; but some versions, such as the native Solaris 9 versions, provide the option for configuration with PAM.

    Because it can be unclear whether a Kerberized version of one of these tools uses PAM, this chapter assumes that you will use non-Kerberized versions of these tools for the base functionality testing described in this section. When you extend testing to address your full environment, as described in the section "Extend Testing to Your Full Environment" later in this chapter, you should test with the versions and configurations of these tools as they are actually used in your production environment.
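
    As an illustration of the Kerberized approach, the following is a minimal sketch of a Kerberized ssh session, assuming an OpenSSH client and server built with GSSAPI/Kerberos support; the GSSAPI option names are specific to OpenSSH, and the user and host names are examples.

      $ kinit test01                      # acquire a TGT from the Active Directory KDC
      $ ssh -o GSSAPIAuthentication=yes \
            -o GSSAPIDelegateCredentials=yes \
            test01@unix01.example.com     # authenticate with the TGT and forward it
      $ klist                             # run on the destination computer to confirm the forwarded TGT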

    Test Provisioning and Deprovisioning

    Typically, in a large enterprise environment, you perform user provisioning and deprovisioning by using a tool specifically designed for this task instead of by using the Active Directory Users and Computers snap-in. Your enterprise might use a custom tool for user provisioning, a packaged product, or a combination of these methods.

    Many enterprise user-provisioning tools are commercially available, including:

    Testing your new solution for enabling interoperability between UNIX clients and Active Directory includes confirming that you can use the provisioning tool currently in use in your environment to manage the new UNIX users that you will add to Active Directory. This is especially important for End State 2 solutions, which require the management of additional UNIX attributes in Active Directory for each UNIX user.

    When planning how to test your user-provisioning tool with the new solution, keep the following issues in mind:

    • Users and groups. You need to test provisioning and deprovisioning for both individual users and for groups.

    • Active Directory fields adapted for UNIX users. Where appropriate, you need to test populating any standard Active Directory fields with data that is relevant only for UNIX users. You need to do this, for example, if you plan to use the Organization fields (that is, the fields displayed on the Organization tab of the UserName Properties page in Active Directory Users and Computers) to store different types of data for UNIX users than is typically used for Windows users.

    • UNIX-specific attributes. For End State 2 solutions, you need to test populating the UNIX-specific attributes used for the solution. For example, you need to test populating the attributes defined by Windows Services for UNIX that are used in your solution, such as msSFU30HomeDirectory and msSFU30LoginShell. Keep in mind that the solution that you plan to deploy in your production environment might include additional UNIX attributes in addition to those described in Volume 2: Chapter 4, "Developing a Custom Solution."

    • UNIX-specific group membership. For End State 2 solutions, you need to test the UNIX-specific group membership lists associated with each group. For example, when using Windows Services for UNIX, you must test that the msSFU30MemberUid attribute in each UNIX group is in fact populated with the names of users who are members of that group. This attribute must be updated when a new user account is created, when group membership for a user account is modified, when a user account is disabled (if applicable), and when a user account is removed.

    The following table summarizes the tasks that you need to perform to test that your user-provisioning tool functions correctly with your new solution.

    Table 5.6. Summary of Tasks to Perform to Test Your User-Provisioning Tool

    | For This Task: | Confirm This: |
    |----------------|---------------|
    | Provisioning   | When adding new users, ensure that all fields are correctly populated. |
    | Updates        | When modifying existing users, ensure that intended fields are modified and that no unintended side effects occur. |
    | Deprovisioning | When removing users, ensure that all fields, in both user and group records, are correctly depopulated. |
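
    After provisioning a test user, you can spot-check the UNIX attributes directly with ldapsearch. The following is a minimal sketch that assumes the open source ldapsearch client and the Windows Services for UNIX 3.x attribute names used in this volume; the server, proxy DN, password, base DN, and user name are examples.

      #!/bin/sh
      # Sketch: display the UNIX attributes stored in Active Directory for a
      # newly provisioned user. All values below are examples.

      SERVER=10.9.8.1
      PROXYDN="cn=proxyuser,cn=users,dc=example,dc=com"
      PASSWORD=Password1
      BASEDN="ou=unix,dc=example,dc=com"
      TESTUSER=test01

      ldapsearch -x -h "$SERVER" -D "$PROXYDN" -w "$PASSWORD" -b "$BASEDN" \
          "(cn=$TESTUSER)" msSFU30UidNumber msSFU30GidNumber \
          msSFU30HomeDirectory msSFU30LoginShell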

    Test Authentication for End States 1 and 2 Solutions

    Perform the procedures in this subsection to confirm that logon behavior occurs as expected for Active Directory–enabled UNIX users.

    This section includes the following tasks:

    • Test basic logon functionality for UNIX user test01

    • Test basic logon functionality for local users root and loctest01

    • Test basic logon for UNIX users who belong to many groups

    • Test password change for UNIX user test01

    • Confirm that authentication with supported encryption types works as expected

    • Confirm that expected behavior occurs when logon fails

    For changes to accounts and passwords, a delay might occur between the moment when the change is made and the moment when the change takes effect. Design your testing to validate that your timing requirements are met.

    Test Basic Logon Functionality for UNIX User test01

    [All Solutions]

    Confirm that a user can log on using Kerberos authentication against Active Directory. This test differs as follows, depending on whether you use Solaris 9 or Red Hat 9:

    • Solaris. The method that you use must be one that is currently configured for pam_krb5 authentication in the pam.conf file.

      For more information, see "Configure /etc/pam.conf for Kerberos Authentication" in Volume 2: Chapter 4, "Developing a Custom Solution."

    • Red Hat. The method that you use must be the one that is currently configured for pam_krb5 authentication in the correct file in the pam.d directory.

      For more information, see "Configure the /etc/pam.d/system-auth file for Kerberos Authentication" in Chapter 4, "Developing a Custom Solution."

    To verify logon for test01

    1. Log on. Log on as test user test01 either at the console command line of the UNIX-based computer or by using telnet (if telnetd is installed on the client computer). Use the password configured in Active Directory instead of the local password.

      Note   If you want to log on by using telnet, you might need to install and configure the telnetd service.

    2. Confirm logon. Confirm that you are successfully logged on.

      Note   If logon fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."

    3. Confirm TGT. Use the klist command, which lists currently held Kerberos tickets, to confirm that a Kerberos ticket-granting ticket (TGT) has been granted.

      At the console command line on the UNIX-based computer, type:

      $ klist

      Confirm that the ticket listed in the data returned by klist includes the correct user name and current date and time.

    Test Basic Logon Functionality for Local Users root and loctest01

    [All Solutions]

    Confirm that a user can log on using local authentication by using a method that has been configured for Active Directory authentication. This test differs as follows, depending on whether you use Solaris 9 or Red Hat 9:

    • Solaris. The method that you use must be one that is currently configured for pam_krb5 authentication in the pam.conf file.

      For more information, see "Configure /etc/pam.conf for Kerberos Authentication" in Chapter 4, "Developing a Custom Solution."

    • Red Hat. The method that you use must be the one that is currently configured for pam_krb5 authentication in the correct file in the pam.d directory.

      For more information, see "Configure the /etc/pam.d/system-auth file for Kerberos Authentication" in Chapter 4, "Developing a Custom Solution."

    Create local user loctest01

    1. Create loctest01 locally. Use the useradd command to create a local test user, loctest01:

      # useradd -d /home/loctest01 -m -s /bin/sh loctest01

      Note   The /home/loctest01 directory and /bin/sh shell are examples for the user's home directory location and shell. In your environment, you might use a directory other than /home and a different shell.

    2. Set password. Use the passwd command to set the local UNIX password for this user:

      # passwd loctest01

      When prompted to provide a password, type a strong password, not a dictionary-based password. Typing a dictionary-based password returns a BAD PASSWORD message.

    3. Confirm directory ownership. Use the UNIX list command, ls, to confirm that the /home/loctest01 directory has been created and is owned by user loctest01:

      # ls -l

      Note   You can run these tests by using a nonroot user account that already exists instead of creating a new test user.
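
    As a quick follow-up check, the following minimal sketch confirms that the local account and its home directory were created as expected; the user name and home directory match the examples in this procedure.

      #!/bin/sh
      # Sketch: confirm that the local test account exists and owns its home
      # directory. The user name and path are the examples used above.

      TESTUSER=loctest01
      HOMEDIR=/home/$TESTUSER

      grep "^$TESTUSER:" /etc/passwd > /dev/null || echo "FAIL: $TESTUSER not found in /etc/passwd"

      OWNER=`ls -ld "$HOMEDIR" 2>/dev/null | awk '{ print $3 }'`
      if [ "$OWNER" = "$TESTUSER" ]; then
          echo "PASS: $HOMEDIR is owned by $TESTUSER"
      else
          echo "FAIL: $HOMEDIR is missing or not owned by $TESTUSER"
      fi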

    To verify logon for local user loctest01

    1. Log on. Log on as test user loctest01 either at the console command line of the UNIX-based computer or by using telnet (if telnetd is installed on the client computer).

      Note   If you want to log on by using telnet, you might need to install and configure the telnetd service.

    2. Confirm logon. Confirm that you are successfully logged on.

    To verify logon for local root user (superuser)

    1. Log on. Log on as root user either at the console command line of the UNIX-based computer or by using telnet (if telnetd is installed on the client computer).

      Note   If you want to log on by using telnet, you might need to install and configure the telnetd service.

    2. Confirm logon. Confirm that you are successfully logged on.

    Test Basic Logon for UNIX Users Who Belong to Many Groups

    [All Solutions]

    Confirm that a user who belongs to a very large number of groups can log on using Kerberos authentication against Active Directory.

    Even though most UNIX implementations cannot make use of the Windows PAC, the Kerberos TGT obtained by UNIX clients (during logon for UNIX users who have Active Directory accounts) does include the PAC. Kerberos authentication for UNIX users using UDP (the historical default) can fail when a user is a member of a large number of groups if this number of groups causes the PAC data to grow to exceed the limit of the UDP packet size. In this case, TCP instead of UDP must be used for Kerberos authentication. However, some older Kerberos implementations, including native OS Solaris 9 and native OS Red Hat 9, do not support TCP for Kerberos authentication. If your organization uses the native version of either Solaris 9 or Red Hat 9, you must ensure that the number of groups that users belong to does not exceed the UDP packet size limit.

    The number of groups that can create a problem because of the size of the PAC varies depending on the specific groups of which the user is a member. (Groups defined in Active Directory do not have a set size.) Users who belong to more than 70 groups might experience problems logging on. For testing, users should be added to a number of groups equivalent to the largest number of groups that a production user might belong to, plus a margin for error. For example, if users in your environment could be members of as many as 75 groups, create test user accounts that are members of 90 or more groups.
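
    Before you run the following procedure, you can get a rough count of a test user's direct group memberships from the UNIX side. The following is a minimal sketch that counts memberOf values returned by the open source ldapsearch client; it does not include nested or primary groups, so the tokensz tool described below remains the authoritative check. The server, proxy DN, password, base DN, and user name are examples.

      #!/bin/sh
      # Sketch: count the direct group memberships recorded in Active
      # Directory for a test user. All values below are examples.

      SERVER=10.9.8.1
      PROXYDN="cn=proxyuser,cn=users,dc=example,dc=com"
      PASSWORD=Password1
      BASEDN="dc=example,dc=com"
      TESTUSER=test01

      COUNT=`ldapsearch -x -h "$SERVER" -D "$PROXYDN" -w "$PASSWORD" -b "$BASEDN" \
          "(sAMAccountName=$TESTUSER)" memberOf | grep -c '^memberOf:'`

      echo "$TESTUSER is a direct member of $COUNT Active Directory groups"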

    To verify logon for a user who belongs to a large number of groups

    1. Log on. Log on as a UNIX user who belongs to a large number of groups. Use the user's Active Directory password.

    2. Confirm logon: Perform the steps shown in Table 5.7, depending on whether the logon succeeds or fails.

      Table 5.7. Can a User Who Belongs to a Large Number of Groups Log On?

      For This Result

      Do This

      If logon succeeds

      Confirm TGT: 

      1. At the console command line on the UNIX-based computer, type klist to confirm that a Kerberos TGT has been granted.

      2. Confirm that the ticket listed in the data returned by klist includes the correct user name and current date and time.

      If logon fails

      Calculate the user's groups:

      1. On an Active Directory KDC running Windows Server 2003, type the following command at a command prompt to determine the number of groups that the user belongs to:

        tokensz /calc_groups ClientName

      In the syntax defined for the tokensz command (as described in "Authentication Fails Due to User PAC" at https://technet2.microsoft.com/WindowsServer/en/Library/3872f0d7-e4b3-49ed-9a4b-1fefbf0d45471033.mspx), ClientName refers to the name of the user whose groups you want to list.

      This command calculates the number of Active Directory groups that the user belongs to; it does not include Active Directory–enabled UNIX groups configured by using the ADSI Edit tool and the msSFU30MemberUid attribute.

      You can download the tokensz tool from the Microsoft Download Center at https://www.microsoft.com/downloads/details.aspx?FamilyID=4a303fa5-cf20-43fb-9483-0f0b0dae265c&DisplayLang=en.

    3. Depending on the result, do the following:

      If the tokensz /calc_groups command indicates that the user belongs to a large number of groups (typically, 70 or more groups), reduce the number of groups that the user belongs to. Alternatively, if you use TCP for Kerberos transactions between UNIX-based and Windows-based computers and the problem persists, install the hotfix "New resolution for problems that occur when users belong to many groups," available at https://go.microsoft.com/fwlink/?LinkId=23044.

      – or –

      If logon still fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."

    4. For more information, see:

      Test Password Change for UNIX User test01

      You can use the following procedures to test whether password change requests are processed as expected. These tests assume that the following Group Policy settings are configured for your Active Directory environment:

      • Maximum password age. The number of days following a password change before the user’s password will automatically expire.

      • Interactive logon: Prompt user to change password before expiration. The number of days before automatic password expiration that the user will receive prompts at logon warning that the user's password will expire in n days.

      • At least one of the following:

        • Passwords must meet complexity requirements. Complexity requirements include that the password contain at least three character classes.

        • Minimum password length. The minimum number of characters that a password must contain.

      For information about what results to expect for each solution, see Appendix J: "Custom Technology Solutions Capability Matrix." For example, for a native OS Red Hat solution, the test "Password change required for Active Directory–enabled UNIX user" will appear to work initially, but the password will not be valid for subsequent logons.

      For additional information about password issues besides the information in this chapter, see the entries for "Password policy," "Manual password change," "Password change on logon," and "Password warning" in Appendix J: "Custom Technology Solutions Capabilities Matrix."

      To confirm that password change requests are processed as expected

      • Test specific password change behavior. For each condition listed in Table 5.8, test whether the desired behavior occurs.

        Table 5.8. Test Each of These Conditions

        Condition

        Test

        Password change required for Active Directory–enabled UNIX user

        [All Solaris Solutions and Open Source Red Hat Solutions]

        1. In Active Directory Users and Computers, set the password for user test01 to require a password change.

        2. Log on to the UNIX-based computer as test01 and, when prompted, change the password to a password that meets the password policy requirements for your organization.

        3. Confirm that the logon is successful.

        4. Log off and log on again using the new password.

        5. Confirm that the logon is successful.

        [Native OS Red Hat Solutions]

        Password change might appear to work initially, but the password will not be valid for subsequent logons.

        Password change required for local nonroot user

        [All Solutions]

        1. On the UNIX-based computer, use the passwd command to set the password for local user loctest01 to require a password change:

          # passwd -f loctest01
        2. Log on to the UNIX-based computer as loctest01 and, when prompted, change the password to a password that meets the password policy requirements for the local UNIX-based computer.

        3. Confirm that the logon is successful.

        4. Log off and log on again using the new password.

        5. Confirm that the logon is successful.

        Password policy

        [All Solaris Solutions and Open Source Red Hat Solutions]

        1. In Active Directory Users and Computers, set the password for user test01 to require a password change.

        2. Log on to the UNIX-based computer as test01 and, when prompted, attempt to change the password to a password that does not meet password policy requirements for your organization.

        3. Confirm that password change and logon are denied.

        4. Attempt to log on to the UNIX-based computer as test01 using the old password.

        5. Confirm that the logon is successful.

        6. Log off and attempt to log on again with the password used during the unsuccessful password change.

        7. Confirm that the logon is denied.

        [Native OS Red Hat Solutions]

        Not applicable: Because the password change cannot be done for the native Red Hat solution, this test does not apply.

        Password warning

        [Open Source]

        1. Set the number of days specified in the Group Policy setting Interactive logon: Prompt user to change password before expiration equal to the number of days specified in the Group Policy setting Maximum password age (for example, 5 days for both). This ensures that the account of any user with a password set to expire will also be within the password warning period.

        2. Because changes to Group Policy settings do not take effect immediately, open a command prompt and run the following command to force a Group Policy update:

          gpupdate /force

        3. Log on to the UNIX-based computer as test01 and confirm (for open source solutions) that a password warning expiration message appears and that logon is successful.

        [Native OS Solaris Solutions]

        This test will fail for the native OS Solaris solutions—no password expiration warning will appear.

        [Native OS Red Hat Solutions]

        Not applicable: Because the password change cannot be done for the native Red Hat solution, this test does not apply.

        Manual password change for Active Directory–enabled UNIX user

        [All Solutions]

        1. Log on to the UNIX-based computer as a user with an Active Directory account (such as test01), open a command shell, if necessary, and then type:

          % kpasswd

        2. You should receive the following prompt for the current (old) password for user test01:

          kpasswd: Changing password for test01.
          Old password:

          Type the current password for test01.

        3. When you see the following prompts, type and confirm a new password that meets password policy requirements for your organization:

          New password:
          New password (again):
          Kerberos password changed.

        4. Log off, log on again, and then confirm that the logon is successful.

        Manual password change for local nonroot user

        [All Solutions]

        1. Log on to the UNIX-based computer as the local nonroot user, loctest01, open a command shell, if necessary, and then type:

          % passwd

        2. When prompted, type the current password for the local nonroot user, and then type and confirm a new password.

        3. Log off, log on again, and then confirm that the logon is successful.

        Manual password change for local root user

        [All Solutions]

        1. Log on to the UNIX-based computer as the local root user, open a command shell, if necessary, and then type:

          # passwd

        2. When prompted, type and confirm a new password.

        3. Log off, log on again, and then confirm that the logon is successful.

        Confirm That Authentication with Supported Encryption Types Works As Expected

        [All Solutions]

        UNIX-based computers support a set of encryption types that is different from the set of encryption types supported by Active Directory. The custom solutions presented in this volume will work correctly only with encryption types used in your production environment that are supported by both Active Directory and the UNIX solution selected.

        Use the following procedure to confirm that each encryption type supported by the selected solution that you plan to deploy in your production environment works as expected. You should test the strongest encryption type that is compatible with your systems and applications.

        To confirm that each encryption type works as expected

        1. Identify encryption types available for your solution. List the encryption types that you use that are supported by the custom solution that you plan to deploy. Possible encryption types are shown in Table 5.9.

          Table 5.9. Which Encryption Types in Your Environment Are Supported by Your Interoperability Solution?

          | Supported Encryption Types for Open Source Solutions | Available in My Environment? |
          |-------------------------------------------------------|------------------------------|
          | RC4-HMAC                                              | Yes / No                     |
          | DES-MD5                                               | Yes / No                     |
          | DES-CRC                                               | Yes / No                     |

          | Supported Encryption Types for Native OS Solutions | Available in My Environment? |
          |-----------------------------------------------------|------------------------------|
          | DES-MD5                                             | Yes / No                     |
          | DES-CRC                                             | Yes / No                     |

          The recommended practice is to use the most secure encryption type supported by the solution that you plan to deploy:

          • Highest security. If possible, use RC4-HMAC.

          • Medium security. If your selected solution does not support RC4-HMAC or if other Kerberized functions in your environment do not support RC4-HMAC, use DES-MD5.

          • Least security. Use DES-CRC only if no other option is available.

            Note   Microsoft does not recommend the use of DES-CRC.

        2. Configure krb5.conf for your strongest encryption type. Configure encryption on at least one UNIX client for the strongest encryption type used in your environment. On each host, edit the [libdefaults] section of the krb5.conf file to confirm that only the encryption type to be tested is configured. This example shows configuration for the DES-CRC (des-cbc-crc) encryption type:

          [libdefaults]
           default_realm = EXAMPLE.COM
           dns_lookup_realm = false
           dns_lookup_kdc = false
           default_tkt_enctypes = des-cbc-crc
           default_tgs_enctypes = des-cbc-crc

        Configuration entries for the DES-MD5 and RC4-HMAC encryption types are as follows:

           default_tkt_enctypes = des-cbc-md5
           default_tgs_enctypes = des-cbc-md5

        – or –

           default_tkt_enctypes = rc4-hmac
           default_tgs_enctypes = rc4-hmac

        **Note**   Other entries in the [libdefaults] section of the krb5.conf file will vary based on the solution selected.

        **Note**   For the purposes of this test, only one encryption type is configured at a time. In an operating environment, you might want to configure more than one encryption type. For example:

           default_tkt_enctypes = rc4-hmac des-cbc-md5
           default_tgs_enctypes = rc4-hmac des-cbc-md5

        The encryption type listed first is the one that is used first. The best practice is to list them in descending order by strength so that the strongest is used first.

        3. Create a key table for each encryption type. Create a key table containing a key with the appropriate encryption type on the host.

          IMPORTANT   This step assumes that the Certified Security Solutions css_adkadmin tool is installed on the UNIX host. For more information about installing css_adkadmin, see the subsection "Install Kerberos Tools" in "Use Solaris 9 with Open Source Components for End States 1 and 2" or the subsection "Install Kerberos Tools" in "Use Red Hat 9 with Open Source Components for End States 1 and 2" in Chapter 4, "Developing a Custom Solution."

          The ktpass command does not support the creation of an RC4 key table and gives an error if you attempt to create a CRC key table.

          Log on to the UNIX host, open a command shell if necessary, and use the css_adkadmin command to specify the desired encryption type to create a key table on the UNIX host, where fqdn is the fully qualified domain name of the UNIX host; Administrator is a principal (a user account) in the Active Directory database with sufficient permissions to add, modify, and delete users and computers from the database; +use_des specifies DES encryption; and BaseDN is the base DN for the Active Directory container holding the UNIX-based computer accounts:

          # css_adkadmin -p Administrator -q "ank +use_des -group BaseDN -k host/fqdn"

          For example:

          • The following command creates an RC4-HMAC key table:

            Note: The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

            # css_adkadmin -p Administrator1 -q "ank -group ou=computers,ou=unix,
            dc=example,dc=com -k host/unix01.example.com"

          • The following command creates a DES-MD5 key table:

            Note: The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

            # css_adkadmin -p Administrator1 -q "ank +use_des -group ou=computers,
            ou=unix,dc=example,dc=com -k host/unix01.example.com"

        **Note**   You do not need to create separate key tables for the DES-MD5 and DES-CRC encryption types. Both encryption types can use the same key table.
        
        4. Log on with each encryption type. Log on to at least one UNIX client for each encryption type in your environment and confirm that the logon is successful, as shown in the sketch that follows.
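
        The following minimal sketch shows one way to confirm which encryption type was actually used, assuming the MIT (open source) Kerberos tools; the klist -e switch displays the encryption types of cached tickets, and the test01 principal and sample output line are examples.

          $ kinit test01
          $ klist -e
          # Output for each ticket includes a line similar to (exact wording varies by version):
          #   Etype (skey, tkt): ArcFour with HMAC/md5, ArcFour with HMAC/md5
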
        Confirm That Expected Behavior Occurs When Logon Fails

        [All Solutions]

        Use the following procedure to confirm that when a user should be denied logon, logon is in fact denied. See also the tests in "Missing or Incorrect Key Table" later in this chapter.

        To confirm that incorrect logons fail

        • Test that incorrect configurations result in logon failure. For each condition listed in the following table, use Active Directory Users and Computers to set the UNIX user account to the specified start state. Then, on a UNIX host, perform the tests listed in the table.

          Table 5.10. Confirm that Logon Fails for Each of the Following Conditions

          Condition

          Test

          Incorrect password

          [All Solutions]

          1. Try to log on as user test01 with an incorrect password.

          2. Confirm that logon is denied.

          Expired account

          [All Solutions]

          1. In Active Directory Users and Computers, set the Account expires option for user test01 to End of, specifying a date in the past.

          2. Try to log on as user test01.

          3. Confirm that logon is denied.

          Repeated incorrect password

          [All Solutions, except Solaris Native OS Solutions through a GUI]

          1. Try to log on as user test01 with the wrong password a sufficient number of times to lock out the account based on your Group Policy setting.

          2. Confirm that the account is in fact locked out when you exceed the lockout threshold.

          Note   This test should work correctly for the following solutions:

          • All Red Hat solutions through all logon interfaces, including GUI interfaces.

          • Solaris open source End State 1 and End State 2 solutions through all logon interfaces, including GUI interfaces.

          • Solaris native OS End State 1 and End State 2 solutions through all logon interfaces except GUI interfaces. This test fails for these solutions for GUI logons because these solutions require that you set users to the Do not require Kerberos preauthentication option in Active Directory Users and Computers in order to use a GUI to log on. An account set to Do not require Kerberos preauthentication will never lock out no matter how many times you type the incorrect password. Using the Do not require Kerberos preauthentication option in a production environment is not recommended.

          For additional information about account lockout issues besides the information in this chapter, see the entries for "Set account lockout," "Account locked out," and "Account expiration" in Appendix J: "Custom Technology Solutions Capabilities Matrix."

          Locked account

          [All Solutions]

          1. While the account for user test01 is still locked out, as indicated in the preceding row, try to log on as user test01.

          2. Confirm that logon is denied.

        Test Authorization for End State 2 Solutions

        Perform the procedures in this subsection to confirm that authorization data for UNIX users is retrieved successfully from the LDAP authorization store in Active Directory and to confirm that the authorization data returned allows the user to access files and directories on the UNIX host.

        This section describes how to:

        • Test basic authorization functionality for UNIX user test02.

        • Test access control for UNIX users test02, test03, and test04.

        Test Basic Authorization Functionality for UNIX User test02

        [All End State 2 Solutions]

        Confirm that a user can log on using both Kerberos authentication and LDAP authorization against Active Directory. This test differs depending on whether you use Solaris 9 or Red Hat 9:

        • Solaris. The method that you use must be one that is currently configured for pam_krb5 authentication in the pam.conf file.

          For more information, see "Configure /etc/pam.conf for Kerberos Authentication" in Chapter 4, "Developing a Custom Solution."

        • Red Hat. The method that you use must be the one that is currently configured for pam_krb5 authentication in the correct file in the pam.d directory.

          For more information, see "Configure the /etc/pam.d/system-auth File for Kerberos Authentication" in Chapter 4, "Developing a Custom Solution."

        To verify logon for test02

        1. Log on. Log on either at the console command line of the UNIX-based computer or by using telnet (if telnetd is installed on the client computer) as test user test02.

          Note   If you want to log on by using telnet, you might need to install and configure the telnetd service.

        2. Confirm logon. Confirm that you are successfully logged on.

          Note   If logon fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."

        3. Confirm id and group output. Confirm that the output of the id and group commands shows that the user's ID (UID) and primary group ID (GID) are correctly mapped to the user's user name and primary group name and that a complete list of all groups of which the user is a member can be retrieved, as illustrated by the examples in Table 5.11.

          Table 5.11. Confirm That id and group Commands Produce the Correct Output

          Command

          Example Output

          $ id

          Solaris output:

          uid=10000(test02) gid=10000(tstgrp01)

          Red Hat output:

          Note: The line has been split into multiple lines for readability. However, when you run it on a system, you must enter it as one line without breaks.

          uid=10002(test02) gid=10001(tstgrp01) 
          groups=10001(tstgrp01),10002(tstgrp02)
          $ groups

          Output for both Solaris and Red Hat:

          tstgrp01 tstgrp02
          If the output of the **id** command includes only the UID and GID numbers but not the user and group names in parentheses, or if the **groups** command returns no data or only one group, the test has failed.

          4. Confirm home directory and shell. Confirm that the home directory and shell for the user have been correctly set by using the pwd and echo $SHELL commands, as illustrated in the examples in Table 5.12.

            Table 5.12. Confirm that Home Directory and UNIX Shell are Correct

            Command

            Output

            $ pwd
            /home/test02
            $ echo $SHELL
            /bin/sh

            Note   If these tests fail, refer to Appendix D: "Kerberos and LDAP Troubleshooting Tips."
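
            If you repeat this logon check for many test users, you might find it convenient to script the verification. The following is a minimal Bourne shell sketch, intended to be run immediately after logging on as the test user; the success and failure messages are illustrative only and are not part of the solution itself.

              #!/bin/sh
              # Minimal post-logon check: verify that the UID and GID resolve to names,
              # that more than one group is returned, and report home directory and shell.
              id | grep '(' > /dev/null || echo "FAIL: id returned numeric IDs without names"
              [ `groups | wc -w` -ge 2 ] || echo "FAIL: groups returned fewer than two groups"
              echo "Home directory: `pwd`"
              echo "Shell: $SHELL"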

            Test Access Control for UNIX Users test02, test03, and test04

            [All End State 2 Solutions]

            LDAP testing to verify successful authorization includes confirmation that, upon logon, a UNIX user with an Active Directory account has the correct authorization data and also that the identity and group membership associated with this authorization data can be used to access files and directories on the UNIX host.

            Note   The steps in the following procedure assume that your test environment is set up as described in the section "Preparing Your Environment" in Chapter 4, "Developing a Custom Solution."

            To create UNIX test groups tstgrp03 and tstgrp04

            1. Create UNIX test groups in Active Directory. Open Active Directory Users and Computers in Administrative Tools, and then configure the following:

              1. Right-click UNIX-Groups, point to New, and then click Group.

              2. In New Object – Group, in Group Name, type tstgrp03, and then click OK.

              3. Repeat steps 1 and 2, which you just used to create tstgrp03, to create another group called tstgrp04.

            2. Add UNIX attributes to groups tstgrp03 and tstgrp04. In Active Directory Users and Computers, add UNIX attributes to tstgrp03 as follows:

              1. In the console tree under example.com, expand UNIX-OU, and then click UNIX-Groups.

              2. In the details pane, right-click tstgrp03, and then click Properties.

              3. On the UNIX Attributes tab, set the following properties for tstgrp03:

                NIS Domain: example (If your test domain name is not called example.com, use your actual test domain name.)

                (GID) Group ID: 10003 (Use a GID that is not currently in use on your UNIX hosts.)

                Do not add any members at this time.

              4. Repeat steps 1 through 3, which you just used to assign UNIX attributes to tstgrp03, to assign UNIX attributes to group tstgrp04, using a unique GID.

            3. Add test users to tstgrp03 and tstgrp04. Click Start, click Run, type adsiedit.msc, click OK, and then configure the following:

              Note   Using ADSI Edit to perform this step is necessary because the UNIX Attributes tab for the group in Active Directory Users and Computers lets you populate the msSFU30PosixMember attribute for groups, but it does not let you populate the msSFU30MemberUid attribute. You cannot map the msSFU30PosixMember attribute to the memberUid attribute on the UNIX-based computers because msSFU30PosixMember is a distinguished name (DN), whereas the UNIX memberUid is an IA5String, as is msSFU30MemberUid.

              1. In the console tree, expand Domain [ComputerName.example.com], expand DC=example,DC=com, expand OU=UNIX-OU, and then expand OU=UNIX-Groups.

              2. Right-click CN=tstgrp03, and then click Properties.

              3. On the Attribute Editor tab, click the msSFU30MemberUid attribute, and then click Edit.

              4. In Multi-valued String Editor, in Value to add, type test02, and then click Add. Repeat to add test04.

              5. Click OK twice.

              6. Repeat steps 1 through 5, this time for CN=tstgrp04, to add user test03 to tstgrp04.

            To create test files and directories

            1. Create test files and directories. Log on to the UNIX host as root user and run the following commands to create test files and directories:

              # mkdir /tmp/testdir
              # mkdir /tmp/testdir/testdir1
              # mkdir /tmp/testdir/testdir2
              # mkdir /tmp/testdir/testdir3
              # echo "This is test file 1." > /tmp/testdir/testfile1
              # echo "This is test file 2." > /tmp/testdir/testfile2
              # echo "This is test file 3." > /tmp/testdir/testfile3

            2. Set user and group ownership. As root user, run the following commands:

              1. Set user ownership on the test files and directories:

                # chown test02 /tmp/testdir/testdir1
                # chown test03 /tmp/testdir/testdir2
                # chown test04 /tmp/testdir/testdir3
                # chown test02 /tmp/testdir/testfile1
                # chown test03 /tmp/testdir/testfile2
                # chown test04 /tmp/testdir/testfile3

              2. Set group ownership on the test files and directories:

                # chgrp tstgrp03 /tmp/testdir/testdir1
                # chgrp tstgrp03 /tmp/testdir/testdir2
                # chgrp tstgrp04 /tmp/testdir/testdir3
                # chgrp tstgrp03 /tmp/testdir/testfile1
                # chgrp tstgrp03 /tmp/testdir/testfile2
                # chgrp tstgrp04 /tmp/testdir/testfile3

            3. Set file permissions. As root user, run the following commands to set permissions on the test files and directories:

              # chmod 700 /tmp/testdir/testdir1
              # chmod 770 /tmp/testdir/testdir2
              # chmod 770 /tmp/testdir/testdir3
              # chmod 600 /tmp/testdir/testfile1
              # chmod 640 /tmp/testdir/testfile2
              # chmod 640 /tmp/testdir/testfile3

            4. Check ownership and permissions. After completing these steps, run the following command as root user to confirm that the ownership and permissions have been set correctly. (A consolidated script that performs steps 1 through 4 appears after this procedure.)

              # ls -l /tmp/testdir

              The output from this command should be similar to:

              total 24
              drwx------  2 test02  tstgrp03   4096 Feb 17 12:54 testdir1
              drwxrwx---  2 test03  tstgrp03   4096 Feb 17 13:03 testdir2
              drwxrwx---  2 test04  tstgrp04   4096 Feb 17 13:03 testdir3
              -rw-------  1 test02  tstgrp03     21 Feb 17 12:39 testfile1
              -rw-r-----  1 test03  tstgrp03     21 Feb 17 12:40 testfile2
              -rw-r-----  1 test04  tstgrp04     21 Feb 17 12:40 testfile3
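
            The setup in the preceding procedure can also be run as a single script. The following is a minimal sketch, run as root, that simply consolidates the mkdir, echo, chown, chgrp, and chmod commands shown above and then lists the results; it assumes that the test users and groups already resolve on the UNIX host.

              #!/bin/sh
              # Consolidated setup of the access-control test files and directories (run as root).
              mkdir -p /tmp/testdir/testdir1 /tmp/testdir/testdir2 /tmp/testdir/testdir3
              echo "This is test file 1." > /tmp/testdir/testfile1
              echo "This is test file 2." > /tmp/testdir/testfile2
              echo "This is test file 3." > /tmp/testdir/testfile3
              chown test02 /tmp/testdir/testdir1 /tmp/testdir/testfile1
              chown test03 /tmp/testdir/testdir2 /tmp/testdir/testfile2
              chown test04 /tmp/testdir/testdir3 /tmp/testdir/testfile3
              chgrp tstgrp03 /tmp/testdir/testdir1 /tmp/testdir/testdir2 /tmp/testdir/testfile1 /tmp/testdir/testfile2
              chgrp tstgrp04 /tmp/testdir/testdir3 /tmp/testdir/testfile3
              chmod 700 /tmp/testdir/testdir1
              chmod 770 /tmp/testdir/testdir2 /tmp/testdir/testdir3
              chmod 600 /tmp/testdir/testfile1
              chmod 640 /tmp/testdir/testfile2 /tmp/testdir/testfile3
              ls -l /tmp/testdir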

            To verify access control data for test users

            1. Log on as user test02. Log on either at the console command line of the UNIX-based computer or by using telnet (if telnetd is installed on the client computer) as test user test02.

            2. Check group membership. Run the following command to check the group membership for user test02:

              % groups

              The output from this command should be:

              tstgrp01 tstgrp02 tstgrp03

              CAUTION   If the output from the groups command does not match this output, the following tests will not be valid. Correct the group membership before proceeding.

            3. Test ability to access existing files. Run the following commands to test that user test02 can access the test files (the expected output is shown following each command):

              1. User test02 can access testfile1:

                % cat /tmp/testdir/testfile1
                This is test file 1.

              2. User test02 can access testfile2:

                % cat /tmp/testdir/testfile2
                This is test file 2.

              3. User test02 cannot access testfile3:

                % cat /tmp/testdir/testfile3
                cat: /tmp/testdir/testfile3: Permission denied

              Based on the group memberships of user test02 and the permissions set on the files, user test02 should be able to read testfile1 and testfile2 but not testfile3.
            
            4. Test ability to create a file in existing directories. Run the following commands to test that user test02 has access permissions to write to the test directories (the expected output is shown following each command—no output appears when test02 has permissions to write to the directory):

              1. User test02 can write to testdir1:

                $ touch /tmp/testdir/testdir1/test02

                [no output appears in response to the above command]

              2. User test02 can write to testdir2:

                $ touch /tmp/testdir/testdir2/test02

                [no output appears in response to the above command]

              3. User test02 is denied permission to write to testdir3:

                $ touch /tmp/testdir/testdir3/test02

                touch: creating '/tmp/testdir/testdir3/test02': Permission denied

              Based on the group memberships of user test02 and the permissions set on the files, user test02 should be able to create a new file in directories testdir1 and testdir2 but not in testdir3.
            
            5. Test ability to list contents of existing directories. Run the following commands to test that user test02 has access permissions to read the directories (the expected output is shown following each command—the actual list of files returned might vary depending on other tests performed):

              1. User test02 can read testdir1:

                % ls -l /tmp/testdir/testdir1

                total 0
                -rw-r--r--    1 test02   tstgrp01        0 Feb 17 12:54 test02

              2. User test02 can read testdir2:

                % ls -l /tmp/testdir/testdir2

                total 0
                -rw-r--r--    1 test02   tstgrp01        0 Feb 17 12:54 test02

              3. User test02 is denied permission to read testdir3:

                % ls -l /tmp/testdir/testdir3

                ls: /tmp/testdir/testdir3: Permission denied

              Based on the group memberships of user test02 and the permissions set on the files, user test02 should be able to list the contents of directories testdir1 and testdir2 but not testdir3.
            
            6. Repeat tests for user test03. Repeat the preceding tests for user test03. (A sketch that scripts these per-user checks appears after this procedure.) The output from the groups command for user test03 should be:

              tstgrp01 tstgrp02 tstgrp04

              Based on the group memberships of user test03 and the permissions set on the files, user test03 should be able to read the test files testfile2 and testfile3 but not testfile1, write a new test file to directories testdir2 and testdir3 but not testdir1, and list the contents of directories testdir2 and testdir3 but not testdir1.

            7. Repeat tests for user test04. Repeat the preceding tests for user test04. The output from the groups command for user test04 should be:

              tstgrp01 tstgrp02 tstgrp03

              Based on the group memberships of user test04 and the permissions set on the files, user test04 should be able to read the test files testfile2 and testfile3 but not testfile1, write a new test file to directories testdir2 and testdir3 but not testdir1, and list the contents of directories testdir2 and testdir3 but not testdir1.
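
            To make it easier to repeat these checks for each test user, you can script the read and write tests. The following Bourne shell sketch is illustrative only; run it after logging on as each test user (test02, test03, and test04) and compare the results with the expected access described above. It assumes that the LOGNAME environment variable is set to the logged-on user name, which is typical for both Solaris and Red Hat.

              #!/bin/sh
              # Exercise the access-control matrix for the currently logged-on test user.
              echo "User: $LOGNAME   Groups: `groups`"
              for n in 1 2 3
              do
                  if cat /tmp/testdir/testfile$n > /dev/null 2>&1
                  then echo "testfile$n: readable"
                  else echo "testfile$n: permission denied"
                  fi
                  if touch /tmp/testdir/testdir$n/$LOGNAME 2> /dev/null
                  then echo "testdir$n: writable"
                  else echo "testdir$n: write denied"
                  fi
              done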

            Test Potential Failure Cases

            Testing failure modes is important to ensure that the solution you plan to deploy operates securely and in the expected manner. You can use the test results in this section to develop and enhance troubleshooting guides for your help desk personnel. Examples of appropriate tests that should cause authentication or, in some cases, authorization to fail are described in the following procedures.

            This section includes the following test topics:

            • Clock skew

            • Missing or incorrect key table

            • Mismatched or unsupported encryption types

            • Active Directory failover

            • Restarting a UNIX-based computer

            • LDAP proxy authentication via Kerberos

            Clock Skew

            [All Solutions]

            On a network, clock skew refers to the difference in the time displayed by the clocks on the computers in your network environment. In a Kerberos environment, if the clocks on your computers are not synchronized, authentication fails.

            To introduce clock skew and verify that authentication fails

            Note   If NTP is in use in your environment, you should temporarily disable NTP on the UNIX client to be tested. If NTP is running, you might encounter inconsistent results when manually changing times.

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. Reset time so that it is unsynchronized. Reset the time on the UNIX client so that it is more than 5 minutes earlier or later than the time on the Active Directory server against which the client authenticates:

              1. Log on as or su to root. Log on to the UNIX-based computer as root, or use the su (substitute user) command to change to root, and then open a command shell.

              2. Ascertain current date and time. Type the following command to determine the current date and time on the computer:

                # date
              3. Introduce unsynchronized time. Check the time on the Active Directory server, and then reset the time on the UNIX-based computer so that it is no longer synchronized with the Active Directory server. The following command, for instance, sets the clock on the UNIX-based computer to 10 minutes faster than the Active Directory date and time of February 20 at 6:08 P.M.:

                # date 02201818

                You do not need to specify the year in this command unless you want to change the year. This date command should display the following output:

                Mon Feb 20 18:18:00 PST 2006
            3. Log off and then log on again. On the UNIX client, log off and then try to log on as test user test01 again. Confirm that the logon fails.

            4. Resynchronize time. Resynchronize the time between Active Directory and the UNIX host by resetting the clock on the UNIX host to match the time on the domain controller. (One way to do this is sketched after this procedure.)

            5. Log on and confirm credential acquisition. On the UNIX client, log on as test user test01 again and confirm that the logon succeeds. Type the following command to confirm that credentials have been acquired:

              % klist

              The output of this command should show a credential with the current date and time. For example:

              Note: Long lines in the following output have been wrapped for readability.

              Ticket cache: /tmp/krb5cc_10004
              Default principal: test01@EXAMPLE.COM

              Valid starting                   Expires                          Service principal
              Mon 20 Feb 2006 08:25:07 AM PST  Mon 20 Feb 2006 06:25:22 PM PST  krbtgt/EXAMPLE.COM@EXAMPLE.COM
                      renew until Mon 27 Feb 2006 08:25:07 AM PST
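
            If the ntpdate utility is installed on the UNIX host, you can resynchronize the clock in step 4 by querying the domain controller directly, because Active Directory domain controllers run the Windows Time service and respond to NTP requests. This is a sketch only; the server name is the example domain controller used elsewhere in this chapter, and the path to ntpdate varies by platform (for example, /usr/sbin/ntpdate on Solaris).

              # ntpdate adserver01.example.com
              # date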

            Missing or Incorrect Key Table

            In the Developing Phase, you created a Kerberos service key table file (the krb5.keytab file) for each UNIX-based computer. The krb5.keytab file contains a key for the host principal of the UNIX-based computer (host/fqdn@EXAMPLE.COM). The key table file stores the key—the password—for the UNIX host's computer account. With this key, the logon service on the UNIX-based computer decrypts the service ticket for this service, which the user acquired as part of the attempt to use Kerberos authentication to log on to the UNIX-based computer. If the key stored in this key table is invalid, or if this key table is missing or unreadable, the logon attempt should fail.

            Problems that can affect key tables include:

            • Permissions incorrect on key table. The key table needs to be readable by the service that will use the key to decrypt the service ticket. In this case, the service runs as root and the key table should be owned by root and readable only by root.

            • Corrupt key table file. A key table file can become corrupted if the file is transferred with ftp using ASCII mode instead of binary mode.

            • Incorrect key in key table. If the key stored in the key table becomes unsynchronized with the key stored in Active Directory, the key table is no longer valid. This can happen if a new key table is created but not swapped in place of the existing key table. Creating a new key table changes the password stored in Active Directory.

            • Incorrect principal in key table. The name of the principal stored in the key table must be of the form host/fqdn@REALM, where fqdn is the fully qualified domain name of the computer on which the key table resides and REALM is the Active Directory domain/realm. For example:

              host/unix01.example.com@EXAMPLE.COM
            • Incompatible encryption type. The encryption type of the key stored in the key table must be one that is supported by the solution and configured in the krb5.conf file. For example, a key table encrypted with the RC4-HMAC encryption type is not valid for the custom native OS solutions described in this volume, which do not support this encryption type. (The sketch after this list shows one way to inspect the principal names and encryption types stored in a key table.)

            • Missing key table. Some UNIX-based computers consider a missing key table to be a different problem from the case of an existing key table with any of the problems listed in the preceding bullets. Native OS solutions with this design might allow a user to log on when no key table is in place. This is not a good security practice, and most solutions will deny logon with either an incorrect key table or a missing key table. You should test how a missing key table affects the solution's behavior so that you can adequately plan how you want to handle this behavior in your production environment.
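
            Before running the key table tests that follow, you can inspect the key table to confirm the principal name and encryption type of each stored key and to check the file's ownership and permissions. The following is a sketch using the klist -k option; the -e option, which displays encryption types, is available in MIT-derived implementations (including Solaris 9 and the open source Kerberos used in this volume), and the exact output format varies.

              For Solaris:

              # ls -l /etc/krb5/krb5.keytab
              # klist -k -e /etc/krb5/krb5.keytab

              For Red Hat (open source only):

              # ls -l /etc/krb5.keytab
              # klist -k -e /etc/krb5.keytab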

            Missing Key Table

            [All Solaris Solutions and Open Source Red Hat Solutions]

            In the test in the following procedure, you remove the key table to simulate a missing key table, try to log on as user test01, and then confirm that the logon fails.

            Note   This test will fail for native OS Red Hat 9 solutions. A security failure in native OS Red Hat 9 allows a user to log on even when the key table is missing or incorrect because credential verification is not performed. This is one of the reasons why we do not recommend deploying a native OS Red Hat 9 solution in your production environment.

            To remove the key table and verify that authentication fails

            1. Log on as test01. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. As root, move the key table to make it unavailable. On the same UNIX client, log on as root or su to root, and then rename the Kerberos key table file to make it unavailable to test01 (renaming the key table saves a copy that you restore in a later step in this procedure):

              For Red Hat (open source only):

              # mv /etc/krb5.keytab /etc/krb5.keytab.save

              For Solaris:

              # mv /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.save
            3. Confirm logon failure. On the UNIX client, log off, try to log on as test user test01, and then confirm that the logon fails.

            4. Restore the key table file. Rename the saved Kerberos key table file to its original name:

              For Red Hat (open source only):

              # mv /etc/krb5.keytab.save /etc/krb5.keytab

              For Solaris:

              # mv /etc/krb5/krb5.keytab.save /etc/krb5/krb5.keytab
            5. Confirm logon. On the UNIX client, log off, log on as test user test01, and then confirm that the logon succeeds.

            Incorrect Key Table

            [All Solaris Solutions and Open Source Red Hat Solutions]

            In the test in the following procedure, you create a new key table file that contains an invalid key and confirm that authentication fails.

            Note   This test will fail for native OS Red Hat solutions. A security failure in native OS Red Hat 9 allows a user to log on even when the key table is missing or incorrect because credential verification is not performed. This is one of the reasons why we do not recommend deploying a native OS Red Hat 9 solution in your production environment.

            To create a key table with an invalid key and verify that authentication fails

            1. Log on as test01. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. As root, move (rename) the key table to make it unavailable. On the UNIX client, log on as root or su to root, and then rename the Kerberos key table file to make it unavailable to test01 (renaming the key table saves a copy to restore in a later step in this procedure):

              For Red Hat (open source only):

              # mv /etc/krb5.keytab /etc/krb5.keytab.old

              For Solaris:

              # mv /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.old
            3. Create a new key table. Create a new Kerberos key table file for this computer by using the ktpass tool or the css_adkadmin tool.

              Note   See the appropriate section in Chapter 4, "Developing a Custom Solution" for instructions about how to create a key table (be sure to use the correct encryption type for the key table, as indicated in Chapter 4):

              • Create Kerberos Service Key Table

                [Windows] [Native OS Only] [End States 1 and 2]

              • Create a Service Key Table

                [Open Source] [Solaris 9] [End States 1 and 2]

              • Create a Service Key Table

                [Open Source] [Red Hat] [End States 1 and 2]

            4. Copy new key table to client. Copy the key table onto the UNIX client.

              CAUTION   Because the key stored in the key table file must be kept confidential, in a production environment, be sure to securely copy the key table file to the UNIX host by using an encrypted channel or a physically secure method (such as a floppy disk).

            5. Confirm logon. On the UNIX client, log on as test user test01 and confirm that the logon that now uses the new key table succeeds.

            6. As root, move (rename) the new key table to make it unavailable. On the UNIX client, log on as root or su to root, and then rename the new key table file to make it unavailable to test01:

              For Red Hat (open source only):

              # mv /etc/krb5.keytab /etc/krb5.keytab.new

              For Solaris:

              # mv /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.new
            7. As root, make the old key table the current key table. On the UNIX client, log on as root or su to root, and then rename the old key table to make it the current key table:

              For Red Hat (open source only):

              # mv /etc/krb5.keytab.old /etc/krb5.keytab

              For Solaris:

              # mv /etc/krb5/krb5.keytab.old /etc/krb5/krb5.keytab
            8. Confirm logon failure. On the UNIX client, try to log on as test user test01 and confirm that the logon fails. The logon should fail because creating the new key table changed the key stored in Active Directory, so the key in the old key table no longer matches.

            9. Restore the new key table. Rename the saved Kerberos key table file (the new key table) to its original name:

              For Red Hat (open source only):

              # mv /etc/krb5.keytab /etc/krb5.keytab.old
              # mv /etc/krb5.keytab.new /etc/krb5.keytab

              For Solaris:

              # mv /etc/krb5/krb5.keytab /etc/krb5/krb5.keytab.old
              # mv /etc/krb5/krb5.keytab.new /etc/krb5/krb5.keytab

            10. Confirm logon. On the UNIX client, log off, log on again as test user test01, and then confirm that the logon succeeds.

            Mismatched or Unsupported Encryption Types

            [All Solutions]

            UNIX-based computers support a set of encryption types that is different from the set of encryption types supported by Active Directory. The custom solutions in this volume work correctly only with encryption types that are supported by both Active Directory and the UNIX solution selected.

            The possible encryption types for the custom open source and native OS solutions described in this volume are listed in Table 5.13 in order of security, with the most secure encryption type available for each solution at the top of each list.

            Table 5.13. Open Source and Native OS Encryption Types

            Supported Encryption Types for Open Source Solutions

            • RC4-HMAC

            • DES-MD5

            • DES-CRC (not recommended)

            Supported Encryption Types for Native OS Solutions

            • DES-MD5

            • DES-CRC (not recommended)

            The procedures in this section test that authentication failure occurs if you configure a mismatched or unsupported encryption type.
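
            Before changing the configuration, you might find it helpful to confirm which encryption type is actually being used for tickets on a correctly configured client. A minimal sketch, assuming an MIT-derived klist that supports the -e (display encryption types) option:

              % kinit test01
              % klist -e

            The output lists each ticket together with its session-key and ticket encryption types, which should match one of the supported types in Table 5.13.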

            To configure a native OS solution for RC4-HMAC encryption and verify that authentication fails

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. As root, specify RC4-HMAC. On the same UNIX client, log on as root or su to root, and then edit the [libdefaults] section of the krb5.conf file to add RC4-HMAC as an encryption type:

              [libdefaults]
                  default_realm = EXAMPLE.COM
                  default_tkt_enctypes = rc4-hmac des-cbc-md5 des-cbc-crc
                  default_tgs_enctypes = rc4-hmac des-cbc-md5 des-cbc-crc

            Note   The structure of the [libdefaults] section of the krb5.conf file varies by solution. This example shows only those entries that are common to all solutions. The section for your solution might contain additional settings, which should not be removed.
            
            3. Confirm logon failure. On the UNIX client, try to log on as test user test01 and confirm that the logon fails.

            4. Restore original configuration. On the UNIX-based computer, log on as root or su to root, and then edit the [libdefaults] section of the krb5.conf file to remove the RC4-HMAC encryption type added in step 2:

              [libdefaults]
                  default_realm = EXAMPLE.COM
                  default_tkt_enctypes = des-cbc-md5 des-cbc-crc
                  default_tgs_enctypes = des-cbc-md5 des-cbc-crc

            Note   The structure of the [libdefaults] section of the krb5.conf file varies by solution. This example shows only those entries that are common to all solutions. The section for your solution might contain additional settings, which should not be removed.
            
            5. Confirm logon. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            To configure an open source solution for DES3-CBC-SHA1 encryption and verify that authentication fails

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. As root, configure DES3-CBC-SHA1. On the UNIX client, log on as root or su to root, and then edit the [libdefaults] section of the krb5.conf file to add DES3-CBC-SHA1 as the only encryption type:

              Note   The DES3-CBC-SHA1 encryption type is commonly used for open source versions of Kerberos but is not supported by Active Directory. If you use an open source version of Kerberos and do not explicitly configure encryption types, this encryption type will be the first one that is tried for authentication.

              [libdefaults]
                  default_realm = EXAMPLE.COM
                  default_tkt_enctypes = des3-cbc-sha1
                  default_tgs_enctypes = des3-cbc-sha1

            Note   The structure of the [libdefaults] section of the krb5.conf file varies by solution. This example shows only those entries that are common to all solutions. The section for your solution might contain additional settings, which should not be removed.
            
            3. Confirm logon failure. On the UNIX client, try to log on as test user test01 and confirm that the logon fails.

            4. Restore original configuration. On the UNIX client, log on as root or su to root, and then edit the [libdefaults] section of the krb5.conf file to replace the DES3-CBC-SHA1 encryption type used for the test. Do one of the following, as appropriate for your environment:

              If you use the DES-CBC-MD5 encryption type with your solution, edit the krb5.conf file as follows:

              [libdefaults]
                  default_realm = EXAMPLE.COM
                  default_tkt_enctypes = des-cbc-md5 des-cbc-crc
                  default_tgs_enctypes = des-cbc-md5 des-cbc-crc

              – or –

              If you use the RC4-HMAC encryption type with your solution, edit the krb5.conf file as follows:

              [libdefaults]
                  default_realm = EXAMPLE.COM
                  default_tkt_enctypes = rc4-hmac des-cbc-md5 des-cbc-crc
                  default_tgs_enctypes = rc4-hmac des-cbc-md5 des-cbc-crc

            Note   The structure of the [libdefaults] section of the krb5.conf file varies by solution. This example shows only those entries that are common to all solutions. The section for your solution might contain additional settings, which should not be removed.
            
            5. Confirm logon. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            Active Directory Failover

            [All Solutions]

            In a robust authentication infrastructure, you must plan to provide authentication, or both authentication and authorization, when one or more of the servers that provide these services are unavailable. On a UNIX-based computer, you do this by listing multiple Active Directory servers in the Kerberos and (for End State 2 solutions) LDAP configuration files.

            Failover to an Available Active Directory Server Enables Logon

            In this test, you simulate the loss of the primary Active Directory server to which UNIX clients direct Kerberos and (for End State 2 solutions) LDAP requests and test whether logon can still be completed in this failover condition.

            CAUTION   If your Active Directory servers also act as DNS servers for your UNIX-based computers, disabling connectivity between the UNIX-based computer and one or more of the Active Directory servers will affect any functions on the UNIX-based computer that require host name resolution lookups—not just those related to authentication and authorization.
            If the first DNS server listed in the UNIX-based computer's /etc/resolv.conf file is removed from service, functionality of the UNIX-based computer will be severely affected. In this situation, authentication attempts might fail as a result of logon timeouts even if a second Active Directory server is available to handle Kerberos and LDAP requests. Therefore, when testing failover, you might find it helpful to manually configure the /etc/hosts file for name resolution or to manually configure the order of entries in the /etc/resolv.conf file to ensure that the first entry is a server that will remain available throughout the testing.

            If the first Active Directory server fails, a UNIX client transfers authentication and authorization requests to the second or subsequent Active Directory servers configured in the Kerberos configuration file (krb5.conf) and the LDAP configuration file (ldap.conf or, for a native OS Solaris solution, the ldap_client_file). You can simulate such a failure in any of several ways, including:

            • Unplug the Active Directory server from the network.

            • Demote the Active Directory server.

            • Configure a firewall between the Active Directory server and the UNIX client to block packet transmission for the Kerberos and/or LDAP ports required for functionality.

            The test in the following procedure uses the first method (disconnecting the server), which is the simplest method in a small test environment.

            Note   In a test environment that is shared with other test projects (or in a production environment), unplugging a server can have unintended consequences. In such an environment, using the firewall option is recommended.
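
            Before you remove the first Active Directory server from service, you can confirm that the second server responds to LDAP queries from the UNIX client. The following sketch uses the OpenLDAP command-line tools installed with the open source solution (the native Solaris ldapsearch uses different syntax) to read the rootDSE of adserver02 anonymously:

              $ ldapsearch -x -H ldap://adserver02.example.com -s base -b "" defaultNamingContext

            If the server does not respond, check network connectivity and the server entries in the Kerberos and LDAP configuration files before continuing with the failover test.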

            To confirm that logon succeeds for an Active Directory user when the first Active Directory server is unavailable

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. Identify the first Active Directory server. On the UNIX client, review the entries in the krb5.conf and ldap.conf or (for a native OS Solaris solution) ldap_client_file configuration files to identify the Active Directory server listed first in these configuration files.

              For example, the following [realms] section from the krb5.conf file shows adserver01 listed first for authentication and adserver02 listed second. With this configuration, Kerberos authentication requests will be sent first to adserver01 and then to adserver02 if adserver01 does not respond.

              [realms]
                  EXAMPLE.COM = {
                      kdc = adserver01.example.com:88
                      kdc = adserver02.example.com:88
                      admin_server = adserver01.example.com:749
                      kpasswd_server = adserver01.example.com:464
                      kpasswd_protocol = SET_CHANGE
                  }

            The following line from the ldap.conf file shows adserver01 configured first for LDAP authorization followed by adserver02. If adserver01 does not respond to LDAP authorization requests, the requests will be sent to adserver02.
            
            uri ldap://adserver01.example.com ldap://adserver02.example.com
            
            
            The following entry from the /var/ldap/ldap\_client\_file for the native OS Solaris End State 2 solution shows adserver01 (IP address 10.9.8.1) configured first for LDAP authorization followed by adserver02 (IP address 10.9.8.2):
            
            NS_LDAP_SERVERS= 10.9.8.1 10.9.8.2
            
            3. Ensure that the first DNS server remains available. Review the name server (DNS server) entries for host name resolution on the UNIX client in the /etc/resolv.conf file to confirm that the server to be removed from service, adserver01 in this example, is not the first DNS server listed in the /etc/resolv.conf file.

              For example, configure the /etc/resolv.conf file as follows, where (in this example) 10.9.8.2 is the IP address of a DNS server that will continue to respond to requests during the test:

              domain example.com
              nameserver 10.9.8.2
              nameserver 10.9.8.1

            Note   The configuration style of your /etc/resolv.conf file might differ from the style used here.
            
            4. Disconnect the first Active Directory server. Remove the first Active Directory server listed in the Kerberos and LDAP configuration files by disconnecting it from the network. (Alternatively, you can use one of the other methods noted previously.)

            5. Confirm that failover enables logon to the available Active Directory server. On the UNIX client, log on as test user test01 and confirm that the logon succeeds. You should see a small delay before the logon succeeds, during which the failover process enables the UNIX client to transfer its logon request to the available Active Directory server.

            6. Restore "failed" Active Directory server. Return the Active Directory server to service and confirm that logon succeeds with no delay.

            Local Logon When All Active Directory Servers Are Unavailable

            Unlike Windows-based computers, UNIX-based computers do not cache Active Directory credentials to allow an Active Directory-enabled UNIX user to log on when no Active Directory servers are available to respond to requests. When all Active Directory servers are unavailable, Active Directory users cannot log on to the Active Directory domain. Therefore, it is important that, if this situation occurs, local users, and especially the local root user, can still log on locally.

            Note   Although UNIX-based computers typically do not cache Active Directory credentials for UNIX users with Active Directory accounts, both commercial solutions described in this volume, DirectControl from Centrify and Vintela Authentication Services (VAS) from Quest Software, provide caching of Active Directory credentials to allow Active Directory users to log on even if all Active Directory servers are unavailable.

            IMPORTANT   For this test, you either need a DNS server that is not also an Active Directory server, or you need to configure the /etc/hosts file on the UNIX-based computer to contain name resolution data for your Active Directory servers.

            To confirm that logon succeeds for local users when no Active Directory server is available

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. Ensure that name resolution remains available: 

              1. On the UNIX client, configure host name resolution to a source other than Active Directory. For example, add entries such as the following to the local /etc/hosts file:

                10.9.8.1      adserver01.example.com adserver01
                10.9.8.2      adserver02.example.com adserver02

              2. If you use /etc/hosts for supplemental host name resolution, confirm that the /etc/nsswitch.conf file on the UNIX client is configured to read the local /etc/hosts file for host name resolution. For example:

                hosts:      files dns
            
            3. Disconnect all Active Directory servers. Remove all Active Directory servers listed in the Kerberos and LDAP configuration files by disconnecting them from the network. (Alternatively, you can use one of the other methods noted in the preceding subsection.)

            4. Log on locally as root. On the UNIX client, try to log on as the local root user and confirm that the logon succeeds. A significant delay might occur before logon succeeds.

              Note   If logon fails, attempt another access method (for example, console command-line, console GUI, or telnet) to confirm that the logon failure is because of a universal problem instead of a timeout with the specific logon method selected.

            5. Log on locally as a nonroot user. On the UNIX client, try to log on as a local nonroot user and confirm that the logon succeeds. This logon will be subject to the same delay that affects the root logon.

            Restarting a UNIX-based Computer

            [All Solutions]

            Depending on computer and environment configuration, it is possible that restarting a UNIX-based computer (client or server) could cause the solution to cease to function. This might be the result of an automated configuration change implemented only at startup, a dependency on a process that is not automatically started at startup, a clock reset, or any number of other changes.
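
            Before you restart, you can check that the components the solution depends on are configured to start at boot. The following is a sketch only; the nscd check applies to End State 2 solutions, and the cron check applies to open source End State 2 solutions that acquire proxy credentials through a cron job as described in Chapter 4.

              For Red Hat:

              # chkconfig --list nscd
              # crontab -l | grep kinit

              For Solaris:

              # ls /etc/rc2.d | grep -i nscd
              # crontab -l | grep kinit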

            To test logon after restarting a UNIX-based computer

            1. Log on. On the UNIX client, log on as test user test01 and confirm that the logon succeeds.

            2. Restart. Restart the UNIX client.

            3. Confirm logon after restart. On the UNIX client, log on as test user test01 and confirm that the logon succeeds after the process of restarting the UNIX client is complete.

            LDAP Proxy Authentication via Kerberos

            [Open Source] [Solaris 9 or Red Hat 9] [End State 2]

            For Solaris or Red Hat End State 2 solutions that use open source components, authentication for the LDAP proxy user is accomplished using Kerberos. The proxy user must be authenticated before authorization data can be retrieved.

            The LDAP proxy user authenticates by using the UNIX kinit command with a password stored in a key table to request a Kerberos TGT. In the following test, you confirm that, if valid credentials for the LDAP proxy user do not exist, logon fails.

            Note   Because a Kerberos key table is used, you can automate this process. See Chapter 4, "Developing a Custom Solution" for instructions about how to create a cron job to automatically acquire this credential periodically.

            To confirm that missing credentials for the LDAP proxy user cause authorization to fail

            IMPORTANT   This test assumes that you have created a cron job to acquire credentials by using the proxy user name (service/proxy), key table path and file name (/etc/proxy.keytab), and credentials cache path and file name (/var/tmp/proxycreds) as specified in Chapter 4, "Developing a Custom Solution."

            1. Log on. On the UNIX client, log on as test user test02 and confirm that the logon succeeds.

            2. As root, remove the LDAP proxy user's credentials cache file. On the UNIX client, log on as root or su to root, and then remove the credentials cache file for the LDAP proxy user:

              # rm /var/tmp/proxycreds
            3. As root, restart NSCD. On the UNIX client, log on as root or su to root, and then restart the Name Service Cache Daemon (NSCD):

              For Red Hat:

              # /etc/rc.d/init.d/nscd restart

              For Solaris:

              # /etc/init.d/nscd stop; /etc/init.d/nscd start
            4. Confirm logon failure. On the UNIX client, try to log on as test user test02 and confirm that the logon fails.

            5. As root, acquire new credentials for the LDAP proxy user. On the UNIX client, log on as root or su to root, and then run the following command to acquire new credentials for the LDAP proxy user:

              For Red Hat:

              Note: The line has been split into multiple lines for readability. However, when you try it on a system, you must enter it as one line without breaks.

              # /usr/kerberos/bin/kinit -k -t /etc/proxy.keytab -c /var/tmp/proxycreds
              service/proxy && chown nscd:sshd /var/tmp/proxycreds && chmod 640 /var/tmp/proxycreds

              For Solaris:

              Note: The line has been split into multiple lines for readability. However, when you try it on a system, you must enter it as one line without breaks.

              # /usr/bin/kinit -k -t /etc/proxy.keytab -c /var/tmp/proxycreds
              service/proxy && chmod 644 /var/tmp/proxycreds

              Note   These commands specify the path to the native OS version of kinit. Alternatively, you can use the open source version of kinit, which was compiled and installed as part of the developing process for End State 2 in Chapter 4. You can find the open source version of kinit in /usr/local/bin. The open source version of kinit is required for systems that are configured to use RC4-HMAC encryption.
            
            6. Restart NSCD. On the UNIX client, log on as root or su to root, and then restart the NSCD service:

              For Red Hat:

              # /etc/rc.d/init.d/nscd restart

              For Solaris:

              # /etc/init.d/nscd stop; /etc/init.d/nscd start
            7. Confirm logon. On the UNIX client, log on as test user test02 and confirm that the logon succeeds.

            LDAP proxy user authentication failure can be the result of a number of proxy-user credentials cache problems, including the following (the sketch after this list shows quick checks for several of them):

            • Missing credentials cache.

            • Incorrect encryption type for credentials in credentials cache.

            • Incorrect permissions on credentials cache.

            • Incorrect ownership of credentials cache.

            • Incorrect path or file name for credentials cache specified in /etc/ldap.conf file.
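
            The following sketch shows quick checks for several of these causes. It assumes the credentials cache path used in this chapter (/var/tmp/proxycreds) and an MIT-derived klist that supports the -e and -c options.

              # ls -l /var/tmp/proxycreds
              # klist -e -c /var/tmp/proxycreds
              # grep proxycreds /etc/ldap.conf

            Confirm that the cache file exists, that its ownership and permissions match the values set by the cron job, that the encryption type of the cached credential is supported by your solution, and that the path configured in /etc/ldap.conf matches the actual cache file.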

            Extend Testing to Your Full Environment

            After you complete testing to verify that the selected solution supports the base functionality required in your production environment, you must extend the testing to address all types of users, computers, access methods, processes, and applications in use in your production environment. Testing should include both:

            • Those users, computers, access methods, and applications that will change as a result of implementation of the UNIX-to-Windows interoperability solution.

            • Those users, computers, access methods, and applications that require authentication or authorization but which will not be directed to Active Directory for this data. You need to include these in testing to ensure that implementation of the solution will not disable any existing functionality in your environment.

              Even if you chose an End State 1 solution and thus have opted not to use Active Directory as your authorization data store, you must still confirm that all types of users can log on and retrieve authorization data from whatever data stores are in use in your environment.

            This means, for example, that you test user type A with all computer types, all access methods, all computers, and all applications; and then you test user type B with each of these for all combinations used in your environment. Your test plan should include a matrix of these combinations.

            Consider both the final configuration of computers and applications once the solution is fully deployed and any transitional configurations. For example, if your deployment plan calls for using existing systems in parallel with the new system for a period of time, it might be important to design test cases that can account for this situation.

            Before you begin the series of tests in this section, make sure that your test lab is set up to simulate your production environment as closely as possible. For more information about how to do so, see the section "Expand Test Lab to Model Your Production Environment" earlier in this chapter.

            This section includes tests for:

            • Multiple types of users.

            • Multiple computer types and configurations.

            • Less common access methods.

            • Switching to Kerberized remote access tools for extended testing.

            • Enterprise management tools.

            • Additional system tools and functions.

            • Two-factor authentication.

            • Additional PAM modules.

            • Custom, open source, and third-party applications.

            • Additional NIS mappings.

            • Capacity and stress tests.

            • Multi-domain forests and trusts.

            • Additional network features.

            Multiple Types of Users

            [All Solutions]

            It is important to test authentication and authorization functionality for each type of user that exists in your production environment. Even if you have opted not to use Active Directory as your authorization data store, you still must confirm that all types of users can log on and retrieve authorization data.

            You should test each user type with each access method in use in your production environment (see "Identify All Access Methods Used in Your Production Environment" earlier in this chapter).

            To check authentication and authorization for each user type

            1. Log on as one type of user. Log on as the first of several types of Active Directory–enabled UNIX user that exists in your production environment.

              For a list of all UNIX user types in your production environment, see the step labeled "Identify all UNIX user types in your production environment" in the procedure "To create a large number of UNIX users for the test environment" in the section "Expand Test Lab to Model Your Production Environment" earlier in this chapter.

            2. Confirm logon. Confirm that you are successfully logged on.

              Note   If logon fails, see Appendix D: "Kerberos and LDAP Troubleshooting Tips."

            3. Confirm TGT. Use the klist command, which lists currently held Kerberos tickets, to confirm that a Kerberos ticket-granting ticket (TGT) has been granted.

              At the console command-line on the UNIX-based computer, type:

              $ klist

              Confirm that the ticket listed in the data returned by klist includes the correct user name and current date and time.

            4. Confirm authorization data. Authorization data required for UNIX logon includes (at least) the following UNIX attributes: UID, primary GID, home directory, and logon shell (such as /bin/sh or /bin/csh).

              Use the series of commands shown in the following table to confirm that UNIX authorization data is retrieved successfully.

              Table 5.14. Use These Commands to Test Retrieval of UNIX Authorization Data

              Command

              Output

              $ id

              Solaris output:

              uid=10000(test02) gid=10000(tstgrp01)

              Red Hat output:

              Note: The line has been split into multiple lines for readability. However, when you try it on a system, you must enter it as one line without breaks.

              uid=10002(test02) gid=10001(tstgrp01) 
              groups=10001(tstgrp01),10002(tstgrp02)

              Note   If the output of the id command displays only the UID and GID numbers but not the user and group names in parentheses, the test has failed.

              $ groups

              Output for both Solaris and Red Hat:

              tstgrp01 tstgrp02

              Note   If the output of the groups command displays no data or only one group, the test has failed.

              $ pwd
              /home/test02
              $ echo $SHELL
              /bin/sh
              Note   If any of these tests fail, refer to Appendix D: "Kerberos and LDAP Troubleshooting Tips."

              5. Repeat logon for this user with each shell type. Repeat these steps with the current user for each shell type (such as /bin/sh or /bin/csh) that is in use in your production environment.

              6. Repeat logon with another user type. Repeat these steps with each type of user that exists in your production environment.

              Multiple Computer Types and Configurations

              [All Solutions]

              It is important to test authentication and authorization functionality for each type of computer configuration used in your environment:

              • Repeat the procedure "To check authentication and authorization for each user type" in the preceding section to test each type of UNIX-based or Linux-based computer in use in your organization. Each computer configuration should be tested with each user type, where applicable.

              Make sure that you test all types of computer configurations that exist in your production environment, including the following:

              • Computers that are used by a single user as a desktop.

              • Computers that have multiple network cards (multihomed computers).

              • Computers that are configured to allow logon by multiple users in a lab environment.

              • Computers that receive most of their logon requests remotely and computers that receive most of their logon requests at the console should be tested separately because they might have different configurations or different applications installed.

              • Computers that have a variety of different types of applications installed should be tested separately. For example, some computers might run automated jobs while others do not; different computers run different types of automated jobs.

              • Computers that have different levels of patching or have different patch requirements.

              • Computers that run operating system versions that are different from the ones specifically described for the custom solutions in this volume.

              Less Common Access Methods

              [All Solutions]

              Most users in a production environment log on using only a few access methods. However, it is important to identify and test all access methods used in your environment:

              • Repeat the procedure "To check authentication and authorization for each user type" in the section "Multiple Types of Users" to test each type of access method in use in your environment.

              CAUTION   Failure to test all possible access methods might lead to failures in the production environment after deployment. As you extend your testing environment to reflect your actual production environment, you must ensure that both commonly and infrequently used access methods are tested.

              Switching to Kerberized Remote Access Tools for Extended Testing

              [All Solutions]

              The tests provided earlier in this chapter in the section "Test Base Functionality" suggested that you use non-Kerberized versions of remote access tools such as telnet, ssh, and the r-utilities for those tests because these tools use the standard login function to provide authentication. When the standard login function is configured to use the pam_krb5 library that is being tested for the solution, these versions of the tools will always use PAM and pam_krb5 for authentication.

              However, in this section where you expand your testing environment to reflect your actual production environment, the best practice is to use Kerberized versions of these tools. Depending on the interoperability solution selected and its configuration, Kerberized versions of these tools might or might not make use of pam_krb5 authentication. Those that do not make use of pam_krb5 authentication will make direct Kerberos connections from the Kerberized client application (such as telnet or ssh) to the Kerberized server application (such as telnetd or sshd).

              You should test all uses of remote access tools that occur in your production environment, including both non-Kerberized versions and Kerberized versions that do and do not use PAM authentication.

              CAUTION   In a production environment, you should not use non-Kerberized versions of services such as telnet because they send passwords across the network in clear text. For optimal security in your production environment, we recommend that you configure computers to use the Kerberized versions of telnetd, ftpd, krlogind, and other remote tools.
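
              For example, if your environment includes an OpenSSH build with GSSAPI (Kerberos) support, which is an assumption (the relevant option names vary by OpenSSH version and patch), you can verify that a Kerberized logon to another UNIX host uses existing Kerberos credentials instead of prompting for a password. The host name shown is the example UNIX host used elsewhere in this chapter:

                $ kinit test01
                $ ssh -o GSSAPIAuthentication=yes test01@unix01.example.com

              A successful Kerberized logon completes without a password prompt; running klist in the resulting remote session shows whether credentials were also delegated, if delegation is enabled.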

              For a brief description that contrasts non-Kerberized versus Kerberized versions of these tools, see the subsection "Use Non-Kerberized UNIX Tools for Base Functionality Testing." You can find detailed information about how to configure and use Kerberized versions of these tools in the following sources:

              Enterprise Management Tools

              [All Solutions]

              The interoperability solution that you plan to deploy in your production environment will likely interact with any enterprise management tools in your organization. You should design tests that exercise the functionality of these tools and their interaction with the authentication and authorization methods chosen.

              The enterprise management tools might include:

              • User and computer provisioning systems.

              • Identity management systems.

              • Patch management systems.

              • System monitoring tools.

              • Enterprise single sign-on systems.

              • Federation systems.

              • Network management systems.

              Additional System Tools and Functions

              [All Solutions]

              In addition to the explicit tests included earlier in this chapter, you should test any other functionality or tool used in your environment that relates to authentication or authorization, especially those that already use Kerberos or LDAP. Even if you have opted not to use Active Directory as your authorization data store, you should still test all tools and functions that require either authentication or authorization.

              Tools and functions that you need to test include:

              • Superuser do (sudo) functionality.

              • Role-based access control (RBAC) functionality.

              • Any other tools.

              Your environment might have many additional functions that require authentication or authorization.

              Superuser Do (sudo)

              [All Solutions]

              In UNIX or Linux operating systems, the sudo tool lets an authorized user run programs with the privileges of another user, typically the superuser (root). By default, for security, users who run sudo must supply their own passwords before sudo runs the requested command or application.

              If you use sudo in your UNIX environment, test all uses of sudo, including the following (a minimal test sketch follows this list):

              • Use sudo to authenticate against Active Directory.

              • Use sudo and retrieve authorization data from Active Directory.

              • Use sudo with any data store that you use in your environment in addition to Active Directory to verify that implementation of the solution has not interfered with sudo functionality.
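              The following is a minimal sketch of such a test, assuming a client on which sudo uses the PAM stack that includes pam_krb5 and the test account has an appropriate sudoers entry; the user name aduser01 is a placeholder for an account that exists only in Active Directory:

                su - aduser01          # switch to a test user whose account is stored in Active Directory
                sudo -k                # discard any cached sudo credentials so that sudo must authenticate again
                sudo id                # sudo should prompt for the Active Directory password, then report root's identity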

              Role-based Access Control (RBAC)

              [All Solutions]

              Role-based access control (RBAC) is an approach to restricting system access to authorized users. RBAC differs from the access control lists (ACLs) used in traditional discretionary access control systems in that RBAC assigns permissions to specific operations with meaning in the organization instead of to low-level data objects. If your UNIX environment includes RBAC, you should test RBAC even if you do not plan to migrate authorization to Active Directory. You need to confirm that implementation of the interoperability solution will not interfere with the way in which you use RBAC.

              RBAC information is typically stored in the following databases, which are often implemented as NIS maps:

              • user_attr. Extended user attributes.

              • auth_attr. Authorization attributes.

              • prof_attr. Rights profile attributes.

              • exec_attr. Profile execution attributes.

              Any Other Tools

              [All Solutions]

              Review and identify any other tools, including software applications and scripts used in your environment, that might require authentication, authorization, or both. Make sure that you test the functionality of all of these tools with all application, user, and computer types.

              Two-Factor Authentication

              [All Solutions]

              Integrating two-factor authentication into your interoperability solution is outside the scope of this document. However, if you use a two-factor authentication method, such as tokens or smart cards, you should design test cases to exercise that functionality.

              Additional Pluggable Authentication Modules

              [All Solutions]

              If you currently use or plan to use additional pluggable authentication module (PAM) libraries, you should carefully evaluate whether they will be used in the same PAM stack as pam_krb5. For example, you might be replacing an existing PAM module with the pam_krb5 module, but there might be a period during deployment when both modules must be active.

              You should carefully analyze your use of PAM modules and design appropriate tests.
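              For example, the following hypothetical Linux-style PAM fragment sketches a transition period in which pam_krb5 is tried first and an existing module remains as a fallback; the file name, module order, control flags, and options are assumptions that must be adapted to your platform and solution:

                # Hypothetical fragment of /etc/pam.d/system-auth (Red Hat style) during a transition period
                auth    sufficient   pam_krb5.so try_first_pass
                auth    required     pam_unix.so try_first_pass
                account required     pam_unix.so

              Design tests that exercise each path through such a stack, for example a user who exists only in Active Directory and a user who exists only in the legacy data store.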

              Custom, Open Source, and Third-Party Applications

              [All Solutions]

              Testing of custom, open source, or third-party applications should include all applications used in your production environment. At minimum, you must test the following types of applications:

              • PAM-enabled applications.

              • Non-PAM-enabled applications that require authentication.

              • Applications that require authorization data.

              PAM-enabled Applications

              [All Solutions]

              PAM is a standard used not only by most UNIX operating systems but also by some applications. Your UNIX-to-Windows interoperability solution introduces an additional PAM library into the PAM configuration on each UNIX-based computer in the solution. Therefore, any application operating on these computers that uses PAM authentication will encounter the reference to the pam_krb5 library when reading the PAM configuration file. You must test and confirm that such applications work correctly with Active Directory for authentication, if applicable, or correctly authenticate to other data stores by using other PAM libraries (for example, pam_unix or pam_ldap) if authentication for these applications will not be migrated to Active Directory.

              Your environment might include open source, third-party, or custom PAM-enabled applications. Collect information on applications that perform authentication and review them to determine whether they use PAM for authentication.
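              One simple, hedged way to check whether a locally installed application is PAM-enabled is to look for a dependency on the PAM library; the binary path below is a placeholder:

                ldd /usr/local/bin/customapp | grep -i libpam    # output such as libpam.so suggests the application uses PAM

              Applications that load PAM dynamically or that are statically linked will not appear this way, so also review vendor documentation and PAM configuration files such as those under /etc/pam.d.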

              Non-PAM–enabled Applications That Require Authentication

              [All Solutions]

              You must test non-PAM–enabled applications, including:

              • Kerberized applications. Your environment might include many non-PAM–enabled applications that use Kerberos for authentication. Outside of base operating system tools such as login and telnet, PAM-enabled applications are uncommon; most Kerberized applications are built with the GSS-API, and because PAM is a relatively recent technology, most legacy Kerberos applications are not PAM-enabled. Modifying the Kerberos configuration files on each UNIX-based computer enabled for the interoperability solution can affect any other Kerberos functionality on those computers, so you must test non-PAM–enabled Kerberized applications (a smoke-test sketch follows this list).

              • Applications that use LDAP authentication. Although the End State 1 and End State 2 interoperability solutions do not make use of LDAP for authentication, other applications in your environment might use LDAP for authentication. For End State 2 solutions, you must configure LDAP on each UNIX-based computer to support LDAP authorization, which might affect applications that use LDAP for authentication.

              • Other applications. You should also test any other applications that run on UNIX-based computers that are included in your interoperability solution. Testing should focus on the authentication and authorization aspects of the application.
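              The following is a minimal smoke test that the modified Kerberos configuration still supports basic ticket operations, assuming MIT-style client tools; the principal and service names are placeholders, and the commands available depend on your Kerberos distribution:

                kinit appuser01@EXAMPLE.COM          # obtain a TGT using the modified krb5.conf
                kvno host/unixhost01.example.com     # request a service ticket for a Kerberized service
                klist                                # confirm that the TGT and service ticket were issued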

              Applications That Require Authorization Data

              [All Solutions]

              You must test applications that require authorization data, including:

              • Applications that use LDAP authorization. Your environment might include applications that use existing LDAP data stores for their users' authorization data. Before testing, determine whether any such applications exist in your environment and, if so, whether they will be migrated to Active Directory. You should test both those that will be migrated to Active Directory and those that will not. For this testing, you need a test version of any authorization data store in your production environment that these applications use.

              • Applications that use non-LDAP NSS authorization. You must test any application that uses the portions of NSS configured for an End State 2 solution (at a minimum, the passwd and group sections of the /etc/nsswitch.conf file) to confirm that the application still functions after the modifications to the /etc/nsswitch.conf file required to implement the interoperability solution (a hypothetical configuration fragment follows this list). If you plan to implement any additional LDAP configurations for NSS, you must also test applications that use those configurations. For more information, see the next section, "Additional NIS Mappings."

              • Other applications. You should also test any other applications that run on UNIX that are included in your interoperability solution. Testing should focus on the authentication and authorization aspects of the application.
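              As a hedged illustration, the following fragments show the kind of /etc/nsswitch.conf entries an End State 2 solution might use, followed by two quick checks that NSS lookups still resolve afterward; the exact entries, account name, and group name are placeholders that depend on your platform and configuration:

                # Hypothetical /etc/nsswitch.conf fragment after the End State 2 changes
                passwd: files ldap
                group:  files ldap

                # Quick checks that the passwd and group maps still resolve
                getent passwd aduser01       # a test user defined in Active Directory
                getent group  adgroup01      # a test group defined in Active Directory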

              Additional NIS Mappings

              [All Solutions]

              You must test the automount and netgroup NIS mappings and any additional NIS mappings that exist in your production environment.

              Automount

              [All Solutions]

              In UNIX or Linux operating systems, the automount tool mounts remote directories on demand. The most common use of automount is to mount centrally stored home directories.

              You can configure the automount tool to use LDAP authorization. If you use LDAP for automount, you must confirm that automount still operates correctly when you deploy an interoperability solution that uses Active Directory LDAP as the LDAP authorization data store.

              If you use local files or other data stores for automount, access control for Active Directory users does not make use of automount. However, you should still carry out basic testing with LDAP configured against Active Directory to verify that no functionality is lost.
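              A minimal check, assuming home directories are automounted and that aduser01 is a placeholder test account whose home directory is stored on a remote server, is to log on as that user so that automount must mount the home directory:

                su - aduser01 -c 'pwd && ls -la'    # logging in as the user forces automount to mount the home directory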

              Netgroups

              [All Solutions]

              In UNIX, a netgroup file defines a network-wide group of hosts and users. In some UNIX or Linux operating systems, netgroups are used to restrict access to NFS file systems and to restrict remote logon and shell access.

              You can configure netgroups to use LDAP authorization. If you use LDAP for netgroup, you must confirm that netgroup still operates when you deploy an interoperability solution that uses Active Directory LDAP as the LDAP authorization data store.

              If you use local files or other data stores for netgroup, access control for Active Directory users will not make use of netgroup. However, you should still carry out basic testing with LDAP configured against Active Directory to verify that no functionality is lost.
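              A minimal check, assuming your platform's getent command supports the netgroup database and that trusted-hosts is a placeholder netgroup name, is:

                getent netgroup trusted-hosts       # should list the members of the netgroup from the configured data store

              If the netgroup is also used to restrict NFS exports or remote logons, exercise those paths as well.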

              Other NIS Mappings

              [All Solutions]

              In addition to automount and netgroup, other NIS mappings might be used in some UNIX or Linux environments and can be configured in the nsswitch.conf file to use LDAP. If you already use these NIS maps with LDAP, you must confirm that these NIS maps continue to operate correctly when you deploy an interoperability solution that uses Active Directory LDAP as the LDAP authorization data store.

              If you use local files or other data stores for these NIS maps, you should still carry out basic testing with Active Directory configured for LDAP to verify that no functionality is lost.

              Capacity and Stress Tests

              [All Solutions]

              If you perform capacity and stress testing on one client, 30 clients, 300 clients, or 3,000 clients, you will obtain very different results. Adequately testing your interoperability solution requires that you include a realistic number of clients in these tests in order to obtain reliable data for how the solution you plan to deploy will perform in your production environment.

              Capacity and stress testing for this solution is similar to such testing for either a Windows-only or a UNIX-only environment. Therefore, any capacity and stress testing that you currently use for either environment is applicable to this solution with relatively minor adjustments. When you design capacity and stress testing for a mixed environment, the primary differences between Windows and UNIX environments are the following:

              • Manual load balancing is used on UNIX clients. Each UNIX client is manually configured for load balancing by entering specific Active Directory server names in the Kerberos (krb5.conf) and LDAP (ldap.conf or ldap_client_file) configuration files. You must take this UNIX-based load balancing into consideration when performing capacity and stress testing for your new solution (a hypothetical krb5.conf fragment follows this list). Windows administrators might not be aware that Active Directory server names are included in local UNIX files, because Windows performs load balancing automatically.

                Note   The instructions provided in Chapter 4, "Developing a Custom Solution" cover configuration of Kerberos authentication with manual load balancing. However, many Kerberos implementations also support load balancing through DNS lookups. You might want to consider this option for your production environment. For End State 2 solutions, LDAP authorization must still be handled with manual load balancing.

              • Solutions that use SSL/TLS create a heavier load. Typically, Windows-based computers joined to an Active Directory forest use only Kerberos authentication. A UNIX-based computer that runs Solaris and that uses the native OS End State 2 solution will use public key asymmetric encryption during LDAP channel authentication. The asymmetric key encryption used is much more resource-intensive than the symmetric key encryption used in Kerberos authentication. The open source End State 2 solutions use Kerberos for the LDAP channel authentication, the load for which should not be significantly different from that in a Windows environment.

                Note   The native OS Red Hat solution does not support the use of SSL/TLS.

              • UNIX NSCD decreases load on the Active Directory server. For End State 2 solutions, using the UNIX NSCD service can significantly decrease the impact of the solution on network and Active Directory load. The NSCD service caches LDAP data retrieved from Active Directory to reduce lookups for duplicate data. In situations where large loads are expected, you might need to tune the NSCD cache configuration on each UNIX client; see the man page for nscd.conf and the hypothetical fragment after this list.
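              For reference, a hypothetical krb5.conf fragment that manually lists two Active Directory domain controllers might look like the following; the realm and server names are placeholders:

                [realms]
                    EXAMPLE.COM = {
                        kdc = dc01.example.com
                        kdc = dc02.example.com
                        admin_server = dc01.example.com
                    }

              Similarly, a hypothetical nscd.conf fragment for tuning the passwd cache under heavy load might look like this; the values are illustrative assumptions, not recommendations:

                enable-cache            passwd      yes
                positive-time-to-live   passwd      600     # seconds to cache successful lookups
                negative-time-to-live   passwd      20      # seconds to cache failed lookups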

              As part of evaluating the potential impact of deploying your interoperability solution on server and network load, you might decide that you need to update your existing Active Directory infrastructure to handle the additional users.

              When you design capacity and stress tests, carefully examine the size and architecture of your environment to determine how complex this testing needs to be. You should include as much capacity and stress testing as possible in your pre-deployment testing. In addition, you should try to design your pilot deployment (described in "Conduct a Pilot Deployment" later in this chapter) to test capacity and stress further to simulate your full production environment as closely as possible.

              For a complex system, it is important that capacity and stress tests include those listed in Table 5.15.

              Table 5.15. Include These Tests When Performing Capacity and Stress Testing

              • Test more users than actually exist. Test a much larger number of test users than your expected real-world user base (after projected growth).

              • Test simultaneous user logons. Simulate a large number of simultaneous user logons for both peak and sustained throughput.

              • Test simultaneous password changes. Simulate a large number of simultaneous user password changes for both peak and sustained throughput.

              • Include random logons and password changes in your tests. Randomize the order in which test users perform logons and password changes.

              • Test all users, platforms, and software. Make sure that test users represent a wide variety of user types from a wide variety of locations and that users' computers run all hardware platforms and software types available in your production environment.

              • Test all access methods and applications. Test all access methods and applications that use authentication and authorization data. Some access methods and applications might create more stress than others.

              • Test multiple routine operations. Make sure that test users perform other operations in addition to logging on and changing passwords.

              It is important to understand that, without the NSCD service running, even a simple operation such as listing the files in a directory results in an LDAP query to the Active Directory server.

              Multi-Domain Forests and Trusts

              [All Solutions (if applicable)]

              The instructions provided in this volume for developing and testing a custom solution assume the use of a single-domain forest with no cross-realm trusts. If your production environment includes multidomain forests, cross-realm trusts, or other complicated domain structures, your test environment must reflect at least the same level of complexity.

              Additional Network Features

              [All Solutions (if applicable)]

              Testing your interoperability solution should include any additional network features, such as network address translation or slow network links, that might be affected by deploying the solution.

              Network Address Translation (NAT)

              [All Solutions (if applicable)]

              Many implementations of Kerberos include the IP address of the source computer in ticket requests sent to the Kerberos KDC. If the KDC compares the IP address in the request with the IP address of the sending computer and finds a mismatch, the ticket might be denied (depending on the configuration of the Kerberos KDC or application). For this reason, the use of network address translation in an enterprise can interfere with Kerberos functionality. If your environment uses NAT, especially on or between computers that will use Kerberos authentication, you should replicate that address translation in your test environment and test it.
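              Depending on your Kerberos client, one mitigation you might evaluate in the test environment is to request addressless tickets so that no IP address is embedded in the ticket. This is not part of the solution described in this guide; the fragment below assumes an MIT-style krb5.conf:

                [libdefaults]
                    noaddresses = true      # request tickets without embedded IP addresses (assumption: MIT-style library)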

              Slow Network Links

              [All Solutions (if applicable)]

              Kerberos uses time stamps in ticket requests as part of the method by which it verifies that you are who you say you are. A Kerberos KDC (in this case, Active Directory) grants a ticket only if the user's request is time-stamped no more than n minutes earlier or later than the time on the KDC, where n is the allowed time difference between computers in the Kerberos environment (clock skew) configured through Active Directory Group Policy; the default is 5 minutes.

              When using slow network links, such as satellite links, it is possible that interactions between clock skew, client retries, and replay protection can cause authentication failures. If your production environment includes slow network links, you should design tests to exercise these links, including introducing a small degree of clock skew.
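              A hedged way to introduce clock skew on a Linux test client that uses GNU date is shown below; the offsets and principal name are assumptions, and you should never adjust clocks this way on a production system:

                date -s "$(date -d '+4 minutes')"     # small skew: authentication should still succeed (default limit is 5 minutes)
                kinit testuser01@EXAMPLE.COM && klist
                date -s "$(date -d '+10 minutes')"    # roughly 14 minutes of total skew: authentication should now fail
                kinit testuser01@EXAMPLE.COM          # expect a clock skew error from the KDC
                # Resynchronize the clock with your NTP infrastructure before continuing.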

              Monitor Potential Long-Term Failures

              Testing how your solution functions over an extended period of time is important in order to catch any potential problems that might take time to develop or that might be intermittent. Identifying and solving these problems during the Stabilizing Phase will make your production deployment much easier. Factors to consider include those described in the tests in this section.

              Realistically, you probably need to maintain your test lab deployment for at least one month for these tests to be useful. How much time you need depends on your environment, but the longer the test, the more likely you are to find and resolve issues that would otherwise surface sooner or later in your production environment. At minimum, you should drive more authentications through the test environment than you expect to be performed in production during a 12-month period.

              Ideally, your test period should cover as many significant time cycles as possible, including:

              • End of week.

              • End of month.

              • End of quarter.

              • Any time period that has meaning in your IT environment.

              This section includes the following long-term tests:

              • Monitor time synchronization over time

              • Monitor host name resolution over time

              • Monitor restarts of UNIX-based computers over time

              • Monitor automated processes over time

              Monitor Time Synchronization Over Time

              [All Solutions]

              Kerberos is sensitive to time synchronization, so you need to monitor the various systems over a period of time to check whether they stay synchronized or gradually drift so far apart that eventually Kerberos functionality fails.

              For information about how to test synchronization between UNIX and Active Directory manually, how to integrate time synchronization and NTP monitoring into your enterprise monitoring system, and how to configure automatic alerts if synchronization drifts, see the section "Test Time Synchronization" earlier in this chapter.
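              As one hedged example, the following cron entry logs the clock offset between a UNIX host and a domain controller once an hour without changing the clock; the server name, command path, and log file are placeholders:

                0 * * * *  /usr/sbin/ntpdate -q dc01.example.com >> /var/log/ntp-offset.log 2>&1

              Reviewing this log periodically, or feeding it into your monitoring system, shows whether the offset stays well inside the 5-minute Kerberos tolerance.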

              Monitor Host Name Resolution Over Time

              [All Solutions]

              Kerberos is highly dependent on correct host name resolution, typically performed by the DNS service. Even subtle host name resolution problems, such as correct forward lookup records but a partially incorrect reverse lookup record, can cause Kerberos authentication to fail. UNIX-based computers are also highly dependent on correct host name resolution for general operation. You must monitor your environment over time to verify that ongoing host name resolution updates do not introduce problems that might cause Kerberos to fail.
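              A minimal periodic check is to verify both forward and reverse lookups for each UNIX host and domain controller; the names and address below are placeholders:

                nslookup unixhost01.example.com     # forward lookup: name to address
                nslookup 192.0.2.15                 # reverse lookup: the address should map back to the same name
                nslookup dc01.example.com           # repeat for each domain controller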

              Monitor Restarts of UNIX-based Computers Over Time

              [All Solutions]

              Typically, UNIX-based computers (clients or servers) are not restarted frequently. In large enterprises, policies often call for restarting shared UNIX-based computers only periodically, perhaps once a month or once every three months. It is therefore possible for a change made on a UNIX-based computer to interfere with the correct functionality of the interoperability solution without becoming apparent until the computer is restarted at a later time.

              Restarting a UNIX-based computer once as a test (as recommended in the subsection "Restarting a UNIX-based Computer" earlier in this chapter) confirms that nothing on the computer at the time it is restarted creates a problem. However, when you expand your test environment to more accurately model your production environment and monitor it over a period of time, you should also monitor UNIX-based client or server restarts over time to verify that subsequent restarts do not result in unexpected behavior.

              Monitor Automated Processes Over Time

              [All Solutions]

              Identify any software, such as applications, custom scripts, or cron jobs, that runs periodically and might interfere with functionality during or after its execution. This might include components of a large variety of enterprise management, monitoring, or backup applications.

              Because this topic is so broad and at the same time dependent on your environment, it is not possible to provide specific guidelines here for monitoring all of the automated processes in your organization. However, the following subsection provides one example of monitoring an automated process over time.

              Example: LDAP Proxy User Authentication

              [Open Source] [Solaris or Red Hat] [End State 2]

              For Solaris or Red Hat End State 2 solutions that use open source components, one example of monitoring an automated process over time is to verify that the cron job scheduled to periodically request a credential for the LDAP proxy user continues to run consistently and correctly as time passes. If the cron job fails, any authorization requests, including logon and simple commands such as directory listings (with the UNIX list or ls command), will begin to fail on the UNIX-based computer. However, this failure might not be obvious immediately, because the existing credentials acquired for the LDAP proxy user might remain valid for several hours after the cron job fails, or because authorization data for users might be cached locally by the NSCD service.

              It is possible that an incorrectly configured cron job might acquire credentials correctly for the proxy user but not frequently enough, leaving a gap of time between the expiration of the previous credentials and the acquisition of new credentials. It is during this gap that authorization requests will fail. Intermittent failures of this sort can be particularly difficult to diagnose.

              To determine whether the LDAP proxy user cron job malfunctions over time

              1. Log on intermittently. Log on at irregular intervals over the test period.

              2. Use a script to request authorization continually. Create a script or scripts to request authorization data repeatedly over time (a minimal sketch follows this procedure). Scriptable functions that request authorization data include:

                • Directory listing. Use the ls command with the -l switch in a directory where there are files and directories owned by Active Directory users and groups.

                • Get entries. Use the getent command with the passwd and group options to request listings of user and group data from Active Directory.
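              A minimal sketch of such a script follows; the log file, directory, and interval are assumptions, and you should adapt it to the shells and paths available on your platform:

                #!/bin/sh
                # Repeatedly exercise authorization lookups and record the results over time.
                LOG=/var/tmp/authz-monitor.log

                while true
                do
                    date >> "$LOG"
                    # Listing a directory that contains files owned by Active Directory users
                    # forces NSS (and therefore LDAP) lookups of those owners.
                    ls -l /export/home >> "$LOG" 2>&1
                    # getent exercises the passwd and group maps directly.
                    getent passwd >> "$LOG" 2>&1
                    getent group  >> "$LOG" 2>&1
                    sleep 900    # wait 15 minutes between samples
                done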

              Develop an Installation Script for the Pilot Deployment

              [All Solutions]

              Most organizations with a network infrastructure that includes both Windows and UNIX will want to deploy the new interoperability solution across a large number of computers. To facilitate installation and configuration of the solution on multiple, and possibly divergent, types of computers, you should develop an installation script during the Stabilizing Phase to run on your UNIX clients.

              When you conduct a pilot deployment (described later in this chapter), you use the prototype script that you develop now to join multiple computers to the domain. After the pilot deployment, you can use the experience gained there to rework the script to help ensure that you will be able to successfully join multiple computers to Active Directory during the production deployment. If you choose to deploy the solution into your production environment in stages, you can adapt and update the installation script for each phase.

              For information about which tasks the production installation process needs to accomplish so that you can design a prototype script now, see Chapter 6: "Deploying a Custom Solution."
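              The detailed steps depend entirely on the solution you selected, so the following is only a heavily hedged skeleton showing the general shape such a prototype script might take; every path, file name, principal, and command in it is a placeholder to be replaced with the tasks defined for your deployment:

                #!/bin/sh
                # Prototype installation script skeleton for configuring a UNIX client for the solution.
                set -e

                echo "Backing up existing configuration files..."
                cp -p /etc/krb5.conf      /etc/krb5.conf.pre-pilot
                cp -p /etc/nsswitch.conf  /etc/nsswitch.conf.pre-pilot

                echo "Installing solution components..."
                # (install the native OS or open source packages chosen for your solution here)

                echo "Applying prepared Kerberos, LDAP, NSS, and PAM configuration templates..."
                # (copy the krb5.conf, ldap.conf, nsswitch.conf, and PAM templates prepared during development)

                echo "Verifying authentication against Active Directory..."
                kinit testuser01@EXAMPLE.COM && klist

                echo "Verifying authorization lookups (End State 2 only)..."
                getent passwd testuser01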

              Resolving Issues

              [All Solutions]

              The goal of testing is to discover and track issues that need to be resolved and to learn from the testing experience to help ensure successful pilot and production deployments.

              Fix Bugs

              Members of the Test and Development teams work together in the iterative process of troubleshooting and resolving bugs discovered during testing.

              Integrate Experience into Deployment Planning

              After performing the tests described earlier in this chapter and resolving all bugs whose resolution is necessary for a successful deployment, you should adjust your planned deployment strategies based on what you have learned so far during the Developing and Stabilizing Phases.

              Conducting a Pilot Deployment

              [All Solutions]

              After the Test team, working with the Development team, has tested and stabilized the solution that you plan to deploy, the next step is for the Release Management team to conduct a pilot deployment by deploying the solution to one or more groups of typical users in your production environment. Performing a pilot deployment helps you develop your deployment, loading, and operational expertise before you roll out the solution to the entire organization.

              Before you start, review the pilot plan created by using the “Pilot Plan Template” job aid during the Planning Phase and modified, optionally, during the Developing Phase.

              One way to conduct a pilot is in two or more stages: first, with a subset of users who are familiar with IT issues; second, with a larger group of typical end users in your organization. Overall, the pilot users should be a representative cross-section of your community so that the pilot deployment includes all of the following:

              • Applications that are widely used or of critical importance in your organization.

              • Representative physical, organizational, and network locations.

              • Connection types typically used in your organization.

              • Operating systems, including versions and patches, used in your organization.

              • All skill levels present in your organization.

              • Multihomed hosts.

              • Load balancers, that is, computers (such as multiple Active Directory servers) or software (such as BEA WebLogic or F5 Networks) used to accomplish load balancing.

              • Active Directory data store location, if multiple containers are used for organization of users.

              You can use Table 5.16, which provides a synopsis of pilot tasks, as a checklist when you perform your pilot deployment.

              Table 5.16. Checklist of Tasks to Perform a Pilot Deployment


              1. Identify success criteria

              Review success criteria identified during the Planning Phase using the “Pilot Plan Template” job aid.

              After you identify the pilot users (in the next step), members of the Test and Release Management teams and pilot participants should agree on the success criteria for the pilot.

              If appropriate, update the success criteria.

              2. Identify pilot users

              Identify users who will participate in the pilot.

              Select:

              • A set of users who have both UNIX accounts and Active Directory accounts.

              • A set of users who have only UNIX accounts.

              • A set of users from an established data store, such as a local /etc/passwd user account database or a NIS identity store.

              • For the initial pilot, select knowledgeable users, such as help desk personnel.

              After you successfully complete an initial pilot deployment with these knowledgeable users, you will repeat the pilot with a subset of users whose expertise is in non-IT areas.

              3. Identify pilot computers

              Identify which computers you want to include in the pilot, and back up all system and data information.

              4. Install the solution

              Perform the same steps that you will take when you deploy the solution in your production environment, adjusting the steps where appropriate for the pilot users.

              For detailed information about these steps, see the following sections under "Deploying the Solution" in Chapter 6, "Deploying a Custom Solution."

              • Rationalize UIDs and GIDs

              • Prepare the Support Staff and User Community for the Deployment

              • Upgrade the Infrastructure (this includes "Upgrade the Active Directory Infrastructure" and "Upgrade the UNIX Infrastructure" subsections)

              • Preconfigure UNIX-based or Linux-based Computers

              • Create or Import User and Group Data into Active Directory

              • Enable the Deployed Solution

              5. Let pilot users perform daily tasks

              Ask the pilot users to perform their usual tasks so that their UNIX hosts interact with other infrastructure components and with a variety of applications on the network.

              For example, for all pilot users, check that any interaction between their UNIX hosts and other production systems and production infrastructure functions as expected.

              In particular, verify that the following continue to function correctly:

              • Any application running on the UNIX-based computer.

              • Any server application for which the UNIX-based computer is the client.

              • Enterprise monitoring systems. (As indicated earlier in the section "Test Time Synchronization," you should have already integrated time synchronization and NTP monitoring into your enterprise monitoring system).

              • Systems that might be sensitive to changes in the Active Directory schema, such as identity management or directory synchronization systems.

              6. Perform administrative tasks on servers

              Exercise every element of the production environment. For example:

              • Change the Kerberos key table by changing the passwords of computer and service accounts.

              • Perform a wide variety of typical administrative tasks on the Active Directory server and on other servers that host resources that the pilot users need to access.

              • Confirm that certificate expiration functions as expected. Consider using a very short lifetime for certificates.

              • If you log UNIX system log (syslog) data to a remote computer, test what happens if the remote syslog computer is unavailable. In UNIX, programs typically do not write log data directly; instead, they use the syslog service (syslogd) to send data to a syslog server that saves the data in a log file. If problems occur, syslogd can become a blocking process.

              7. Interview pilot users

              Interview or send a questionnaire to the pilot users to determine whether switching to the use of Active Directory authentication and authorization caused any problems or raised any issues for the users.

              8. Resolve issues

              Resolve any issues that you encounter or that the pilot users report.

              In addition to issues directly related to authentication and authorization, you should look for and resolve problems related to such issues as network and server traffic and security.

              9. Update pilot and deployment plans

              Based on your experience in deploying the pilot and on the feedback you receive from users who participate in the pilot:

              • Update your pilot plan.

              • Update your deployment plan (part of the master project plan).

              • Update (or write) training materials for operations personnel.

              • Update (or write) informational or training materials for end users.

              10. Confirm success criteria have been met

              Confirm that the pilot deployment has met the success criteria specified in "Identify success criteria" at the start of this checklist.

              11. Perform an expanded pilot with a set of typical end users

              For the initial pilot just completed, you used knowledgeable users, such as help desk personnel.

              Now, expand the pilot to include a larger number of computers and a set of users whose expertise is in an area other than IT.

              This time, deploy the solution just as you plan to do for the full production deployment. (See Chapter 6, "Deploying a Custom Solution," and the "Deployment Plan Template" job aid.)

              In some environments, multiple pilots might be useful.

              12. Looking forward toward the production deployment

              Depending on the size of your organization and the number of UNIX-based or Linux-based servers, workstations, and users that will become part of the Active Directory domain, it might be advisable to deploy your interoperability solution in stages.

              After one or two initial small-scale pilot deployments, you can perform a phased deployment to introduce one new set of users at a time into the newly heterogeneous network environment.

              Stabilizing Phase Major Milestone: Release Readiness Approved

              This completes the Stabilizing Phase for the UNIX-to-Windows interoperability solution that you plan to deploy. You have tested the solution, resolved issues found, and successfully completed one or more pilot deployments.

              At this major milestone, your team and customers should formally agree that all outstanding issues have been addressed and that the solution is ready to be released into the production environment.
