Chapter 1: Choosing the Right Technology and Planning Your Solution
Introduction and Goals
The most important decision that you will make during the Planning Phase of the project to establish interoperability between computers running UNIX and Microsoft® Windows® operating systems in your organization is which technology solution to implement to reach your desired end state. The five end states defined in this guide enable UNIX clients to authenticate to the Active Directory® directory service; use Active Directory to authorize as well as to authenticate UNIX clients; or establish a cross-realm trust between UNIX and Windows infrastructures that remain separate. You can choose any of several technology solutions to implement the end state that you want to achieve.
The information provided in this chapter is designed to help you make this decision. At the end of the Planning Phase, you will have:
Determined which technology solution is the best means for your organization to reach your chosen end state.
Developed a solution architecture that includes a conceptual, logical, and physical design.
Developed a detailed functional specification document that provides direction and guidance for the rest of the project's phases.
Validated the technology and set up the development and testing environments.
Created a master project plan and schedule.
Continued to effectively manage risks associated with the project.
Team leaders can consult the UNIX Migration Project Guide (UMPG) for guidelines about organizing the team and managing the processes required to plan your project successfully.
This chapter is intended for the individuals filling the team lead roles, including the Program Management, Development, Test, User Experience, and Release Management Roles. The Program Management and Development Roles are especially important during this phase because they are responsible for the completion and acceptance process for most of the deliverables. The other roles provide their input and content to the Program Management Role.
Because the Planning Phase involves all areas and aspects of the project, it is important that the aggregate knowledge prerequisites presented in "About This Volume" in this guide are met. If gaps were identified during the process of completing the "Project Team Skills Template" job aid, it is important that you have filled them or are actively working to fill them at this time. Reviewing the decisions made during the Envisioning Phase can help you understand the implications of the choices that you need to make during the Planning Phase.
IMPORTANT Before completing your planning preparations, you should also review the information in the chapters about the commercial or custom solutions that you might want to deploy. The commercial chapters both contain detailed overviews that introduce their products and explain what each has to offer. The custom chapters, especially Volume 2: Chapter 4, "Developing a Custom Solution" and Chapter 5, "Stabilizing a Custom Solution," contain important information that you need to review before making a final decision.
Major Tasks and Deliverables
Because the deliverables document complex decisions that encompass several technical factors and organizational requirements, the work completed during the Planning Phase may require several iterations of each deliverable. The following set of deliverables and activities will need to be completed during this phase:
The functional specification, which describes the functionality of the solution that you decide to implement and includes the following artifacts or documents:
Key design goals that development uses to make decisions.
Usage scenarios that describe how the solution helps users solve their business problems.
The conceptual, logical, and physical designs of the solution.
Validation of the technology.
The master project plan, which defines how the solution will be built by integrating the individual plans (such as the development, test, and deployment plans) into a single document, and includes:
The master schedule, which describes when each piece of the work will be completed and shows how the work will be coordinated, taking into account dependencies.
An updated risk assessment with management plans for top risks.
Completed setup of the development and test environments.
The job aids that are included with the Windows Security and Directory Services for UNIX Guide for the Planning Phase are listed in this section.
Security Plan Template. This job aid defines the necessary actions to ensure a secure solution in the production environment.
Budget Plan Template. This job aid defines the estimated costs of building and deploying the solution.
Development Plan Template. This job aid describes the solution development process for the project.
Test Plan Template. This job aid describes all aspects of testing the solution before deployment in the production environment.
Deployment Plan Template. This job aid defines the necessary actions to ensure the smooth deployment and transition of the solution to the production environment.
Pilot Plan Template. This job aid defines the necessary actions to create a pilot program that validates the solution in the production environment before a full deployment of the solution.
Operations Plan Template. This job aid defines the necessary actions to ensure that the solution will be operated appropriately in the production environment.
Note Not all job aids are discussed in detail in this chapter because their creation and content relate more closely to the "process and people" information provided in the UNIX Migration Project Guide.
Choosing the Right Technology Solution for End States 1 and 2
During the Envisioning Phase, you will have determined which end state is the best match for your business, user, and technical requirements. During the Planning Phase, you will decide upon the technology solution that is most suitable to achieve your chosen end state. This decision is based on the requirements you identified and on whether you want to employ a solution that uses only native operating system components (a native OS solution); a solution that uses the native operating system as a foundation but adds open source and freely available Kerberos and Lightweight Directory Access Protocol (LDAP) components and tools (an open source solution); or a commercial product. The following table depicts the technology solutions available to achieve either End State 1 or 2.
Table 1.1. Technology Solutions for Achieving End States 1 and 2
The first decision is the end state; the second decision is the solution technology.

End State 1:
Native OS. Using UNIX or Linux native OS components.
Open Source. Using UNIX or Linux native OS components and open source and freely available software.

End State 2:
Native OS. Using UNIX or Linux native OS components.
Open Source. Using UNIX or Linux native OS components and open source and freely available software.
Centrify. Using the DirectControl commercial product from Centrify.
VAS. Using the Vintela Authentication Services (VAS) commercial product from Quest Software.
CAUTION We do not recommend deploying the native OS Red Hat 9 solution in your production environment because of the security risks inherent in this solution. For more information, see the discussion in the section "Use Red Hat 9 with Native OS Components for End States 1 and 2" in Volume 2: Chapter 4, "Developing a Custom Solution."
Overview of Custom Solutions
The custom solutions described in this volume include both End State 1 solutions (UNIX clients use Active Directory only for authentication) and End State 2 solutions (UNIX clients use Active Directory Kerberos for authentication and use Active Directory LDAP for authorization). To achieve either End State 1 or End State 2, you can use either native OS components only or you can use open source and freely available software in addition to native OS components.
The following subsections briefly summarize native OS and open source solutions. For a comparison of their respective advantages and limitations, see "Native OS or Open Source" in the introduction of Volume 2: Chapter 4, "Developing a Custom Solution." For detailed steps for deploying each of several available custom solutions, see the appropriate section in Chapter 4.
Native OS Solutions
Using a native OS solution to achieve either End State 1 or 2 involves using only the native components that are distributed as part of the operating system by the operating system vendor. An example is using Sun Solaris 9 with the native Sun Kerberos. The components are installed by default when the operating system is installed in the manner recommended by the instructions provided in this guide. This also includes the installation of recommended updates or patches as distributed by the operating system vendor.
The advantages associated with choosing a native OS solution include:
Capability to achieve your chosen end state without additional technologies, potentially reducing the costs and risks associated with the solution.
An implementation that may be easier to conduct than other technology solutions.
Availability of technical support from the operating system vendor.
A single source for patches and updates.
However, a native OS solution may offer fewer user-friendly features than open source or commercial solutions.
Open Source Solutions
Open source solutions use the native OS, but also require additional Kerberos and LDAP components and tools that are available as open source and free downloads from third-party vendors. Examples include using either Sun Solaris 9 or Red Hat 9 with freely available versions of Kerberos such as MIT Kerberos or Heimdal Kerberos.
The advantages associated with choosing an open source solution include:
Availability of more user-friendly features and, in some cases, greater security than the native OS solution.
Availability for download at no extra cost.
However, if you choose to use an open source solution to achieve either End State 1 or 2, you will need to ensure that you have the resources and infrastructure to support the technology. If you do not currently have a preexisting development environment or developers with the tools and resources necessary to handle open source software, you need to be willing to take on the cost and effort of implementing one. Although open source solutions are freely available software, you might still be required to install and, in some cases, compile and build additional software. Because these components are not provided by an operating system vendor, technical support from a vendor will not be available. Your staff will need the skills and knowledge required to support the solution themselves, or you will need a contract with a third party for this support.
Overview of Commercial Solutions
The two commercial solutions that are included in this volume are DirectControl from Centrify and Vintela Authentication Services (VAS) from Quest Software. You can deploy either DirectControl or VAS to achieve an End State 2 solution in which UNIX clients use Active Directory Kerberos for authentication and Active Directory LDAP for authorization. These commercial solutions allow you to consolidate computer, user, and group accounts in Active Directory for authentication and authorization, as well as for directory services beyond authentication and authorization.
The advantages associated with a commercial solution might include:
A solution that is ready to implement "out-of-the-box," resulting in an easier implementation than other solutions.
User-friendly functions and features.
Technical support and professional services available from the product vendor.
However, the costs associated with these support options might not be included in the price of the product itself and will need to be calculated into the budget.
For in-depth information about DirectControl and VAS, see:
Volume 2: Chapter 2, "Using Quest Software VAS to Develop, Stabilize, Deploy, Operate, and Evolve End State 2."
Volume 2: Chapter 3, "Using Centrify DirectControl to Develop, Stabilize, Deploy, Operate, and Evolve End State 2."
The Web site for Centrify DirectControl at http://www.centrify.com.
The Web site for Quest Software VAS at http://www.vintela.com.
Selecting the Appropriate Technology Solution
The process of selecting an appropriate technology solution is closely tied to the process of selecting an appropriate end state. Your decision should be partially based on your validation of the results of the End State Selection Tool (ESST) during the Envisioning Phase and your specific user, technology, and business requirements. The questions in the ESST are intended to help you through the selection process by systematically narrowing the choices of recommended end states, and the appropriate technology solution to achieve that end state, according to your answers. Although the ESST suggests a "best fit" end state and technology solution based on your requirements, you will also need to consider factors such as your attitude toward open source technologies, available and required resources, the cost of the solution, the availability of technical support, and needed user features when making your final decision.
As with selecting an end state, you may already know which technology solution you would like to implement. By referring to the results of the ESST, you can verify the validity of that decision, identify any inherent risks, and check for the availability of specific features or functionality of a technology solution. Conversely, you can work from a list of desired features and functionality to discover which end state and technology is best suited to achieve them.
Regardless of your approach, it is important to remember that the ESST is designed to supplement and document, but not replace, your decision-making process. The selection of end state and technology solution should be considered an iterative process that allows you to validate the appropriateness of the recommended end state and technology solution as you refine your specific requirements.
Planning Your Solution
Planning how to implement the technology solution that you want to deploy requires that you develop a design for the solution appropriate to your organization and that you validate that the technology components of the solution you want to deploy are compatible with your network environment. In addition, the recommended practice is to create a functional specification, develop an appropriate set of project plans, and set up development, testing, and staging environments in order to maximize productivity when the Developing Phase starts. Finally, you should create a project schedule and gain approval from key stakeholders.
Each of these tasks is described in the following sections.
Develop the Solution Design
Developing the solution begins with a design process, the results of which are reflected as a component of the functional specification. The design process prepares teams for their responsibilities during the Developing Phase by building upon the vision the team developed and the business requirements gathered during the Envisioning Phase.
The design process gives the team a systematic way to work from abstract concepts down to specific technical details by developing conceptual, logical, and physical designs for the solution. These three designs include the features and services that define the functionality of the solution as well as the component specifications that define the products used to deliver the required features and services. The design process should not be considered three separate design processes, but as three overlapping stages of a single design continuum.
The following sections contain a brief overview of each design phase, an example of a conceptual, logical, and physical design for one business scenario, and considerations that may influence how you approach the creation of each design. These considerations are examples of the types of issues and situations that you may need to identify and take into account as you develop your designs. However, it is important to keep in mind that the content of these designs will differ from organization to organization based on individual needs and requirements.
Note As the team moves from gathering requirements to designing the solution to creating detailed functional specifications, it is important to maintain traceability between requirements and solution features. Traceability does not have to be on a one-to-one basis; in fact, it often is not. One feature may satisfy more than one requirement, and it may take many features to satisfy a single requirement. However, every feature must be traceable to a requirement counterpart. Maintaining traceability serves as one way to check the correctness of the design and to ensure that the design meets the goals and requirements of the solution.
Conceptual Design
The conceptual design is a high-level view of how the end state and technology solution you chose will address the business needs and user requirements you defined during the Envisioning Phase. It may be helpful to review the vision/scope document as you create the conceptual design to ensure that the high-level business requirements are captured.
The conceptual design should be considered the starting point of your design process. It can be as simple as a four- or five-sentence paragraph or a rough sketch that describes the desired results of your project. As you become more involved in creating the logical and physical designs, you will begin to identify how the different components will fit together within your infrastructure and which specific technologies you will be implementing.
Example of Conceptual Design
Note The examples presented in the solution design sections continue the Northwind Traders business scenario introduced in Volume 1: Chapter 2, “Envisioning Your Windows Security and Directory Services Solution” and extend to the creation of the conceptual, logical, and physical designs. These examples are intended to illustrate how one company approached the design process. It may be necessary to review the Northwind Traders business scenario in the Envisioning Phase chapter in order to provide context for the information presented in the example.
To successfully address the business requirements they established during the Envisioning Phase, Northwind Traders has chosen to use an open source solution to achieve End State 2.
In their conceptual design, they indicate that users on UNIX-based computers will be able to authenticate to and receive authorization data from Active Directory and will use Kerberos to authenticate to all major applications. This will allow Northwind Traders to achieve their vision of "One user name, one password, one place for user information." It will also allow them to address their business requirements of meeting security-related regulatory requirements and reducing the risk of service unavailability, while creating an infrastructure that is flexible enough to accommodate future changes.
To illustrate what they want to accomplish, Northwind Traders created the following simple diagram that depicts their design from a high-level conceptual perspective.
Figure 1.1. Conceptual design diagram
Logical Design
The logical design takes the conceptual design and defines the high-level interaction and integration of IT components, without detailing the specific technologies used. The logical design is intended to bridge the gap between the conceptual design and the physical design, creating traceability to specific design-related decisions that reflect the user, technology, and business requirements defined during the Envisioning Phase.
The purpose of the logical design is to create a foundation upon which you can build the physical design that explicitly defines which technologies will be used. The details of the logical design should also include a description of the solution in terms of how it fits into your organization and infrastructure. Your logical design for End States 1 and 2 should depict your network architecture in a series of diagrams showing networks, service components, and network connection elements, as well as including the following:
How legacy identity stores such as /etc/passwd files and Network Information Service (NIS) servers fit into your solution.
Whether you have multiple UNIX identity stores or a single identity store for all UNIX users.
The complexity of the relationship of UNIX accounts to the existing Active Directory infrastructure if you have a multiple-domain or multiple-forest Active Directory configuration.
How computers are being used, including which UNIX-based computers users log on to either locally or remotely and which UNIX-based computers are used as application servers that only require administrative logons.
Note You may find it helpful to draw or sketch a picture of the logical design that includes the major components of this solution and all major applications, servers, directories, and infrastructures within your organization.
Example of Logical Design
Northwind Traders created a logical design diagram of their chosen solution that depicts how Kerberos will be used to authenticate user logons and access to Application 1 and Application 2 from both Windows and UNIX clients.
Figure 1.2. Logical design diagram
The logical design shows Active Directory as the only authentication and authorization store once the old system is retired, addressing their need for "One user name, one password, and one place for user information." The removal of the old infrastructure also allows for a much simpler infrastructure to service the entire enterprise.
By implementing Kerberos, Northwind Traders is able to increase the security of their environment in two ways:
By preventing the user name and password from passing over the network unencrypted.
By establishing an encrypted connection for all application traffic.
Because all users are authenticating to Active Directory, a single password policy can be enforced in the environment. In this configuration, authentication auditing is also centralized, which makes monitoring and analysis easier. This centralization greatly simplifies implementation of security-related regulatory requirements.
The diagram also depicts two uses of authorization data:
Basic workstation authorization. In this case, when the user logs on to the UNIX client, the UNIX client communicates directly with the Active Directory server to retrieve user attributes from the Active Directory LDAP data store. The UNIX client uses this information to set up the workstation environment, such as the user's home directory and shell and access control information—UID, GID, and group memberships—that control what files and directories on the UNIX workstation the user can access.
Access control for Application 2. In this case, Application 2 makes queries directly to Active Directory to obtain authorization information beyond simple group membership. For example, the application might make authorization decisions based on the specific OU containing the user who is running the application or the OU containing the computer account for the UNIX client from which the application was accessed.
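On most UNIX and Linux clients, the basic workstation authorization lookup described above is wired through the name service switch. A minimal sketch, assuming an LDAP-capable client; the exact source names and file location vary by platform:

```
# Illustrative /etc/nsswitch.conf fragment: resolve user and group
# information from local files first, then from the LDAP (Active
# Directory) store.
passwd: files ldap
group:  files ldap
```

With this in place, standard library calls such as getpwnam() transparently consult Active Directory for the UID, GID, home directory, and shell attributes described above.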
The enterprise management system shown as part of the logical design was created by Northwind Traders developers and is responsible for user account life-cycle management (provisioning, changes, deprovisioning) and for computer account management, including key table management and configuration file management.
Kerberos, LDAP, and DNS Considerations for Logical Design
There are specific Kerberos, LDAP, and Domain Name System (DNS) related considerations that may influence how you approach creating your logical design.
Intranet and extranet use of Kerberos. Typically, Kerberos is used only in an intranet scenario. However, if you want to provide authentication and, optionally, authorization for Internet or extranet users, you may want to investigate alternative authentication approaches. This is because using Kerberos for those users requires opening additional ports on the firewall so that external clients can contact the Active Directory servers directly, but opening firewall ports might raise security issues. Issues with time synchronization and DNS requirements might also occur for client computers that are not controlled by your organization.
Kerberos names. Your existing Kerberos principal names should meet the required naming standards. The naming standards used for UNIX or Linux accounts should also meet the existing naming standards for integration into Active Directory or should be modified to be acceptable. For example, some historical applications and systems limited names to lowercase and fewer than eight characters. Although these limits generally no longer apply, if your organization has applications that need to adhere to these limits, then your Kerberos principal names should also be within these limits.
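If you need to audit proposed account names against such legacy limits, a small script can help. The following is a hypothetical sketch; the function name and rules are examples only, and you should adjust them to your own naming standards:

```shell
# Hypothetical helper illustrating the legacy constraint described above:
# all-lowercase letters, fewer than eight characters.
valid_legacy_name() {
  case "$1" in
    ''|*[!a-z]*) return 1 ;;   # empty, or contains a non-lowercase character
  esac
  [ "${#1}" -lt 8 ]            # fewer than eight characters
}

valid_legacy_name "jsmith"   && echo "jsmith: acceptable"
valid_legacy_name "JSmith25" || echo "JSmith25: violates legacy limits"
```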
Encryption types. Active Directory Kerberos supports the following commonly used encryption types: DES-CRC (also called des-cbc-crc), DES-MD5 (also called des-cbc-md5), and RC4-HMAC. Because the older DES encryption types are less secure than newer encryption types such as RC4-HMAC, the best practice is to use the most secure encryption type supported by the solution that you select. Of DES-MD5 and DES-CRC, DES-MD5 is the more secure choice. The custom native OS solutions described in this guide support DES-MD5 and DES-CRC. The custom open source solutions support RC4-HMAC in addition to DES-MD5 and DES-CRC.
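For the open source solutions, the preferred encryption types are typically declared in the Kerberos client configuration. A minimal sketch, assuming an MIT Kerberos-style krb5.conf and a placeholder realm name:

```ini
# Illustrative /etc/krb5.conf fragment: prefer RC4-HMAC, fall back to
# DES-MD5; DES-CRC is deliberately omitted per the caution below.
[libdefaults]
    default_realm = EXAMPLE.COM
    default_tkt_enctypes = rc4-hmac des-cbc-md5
    default_tgs_enctypes = rc4-hmac des-cbc-md5
```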
CAUTION Microsoft does not recommend the use of DES-CRC. Use DES-CRC only if no other option is available.
Note For information about Microsoft Windows Server™ 2003 support for these encryption types, see "How the Kerberos Version 5 Authentication Protocol Works" at http://technet2.microsoft.com/WindowsServer/en/Library/4a1daa3e-b45c-44ea-a0b6-fe8910f92f281033.mspx. For more information about Kerberos encryption types in general, see RFC 3961 at http://tools.ietf.org/wg/krb-wg/draft-ietf-krb-wg-crypto/rfc3961.txt.
Key tables. A key table is required on each UNIX client and UNIX application server using Kerberos. You can manually create a computer account on the Active Directory server and then use ktpass on the Active Directory server to map this computer account to the service principal name needed for the UNIX-based server and create a key table. You would then need to copy the key table file from the Active Directory server to the UNIX client using a secured method such as an encrypted channel or a physically secure method (such as a floppy disk).
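The ktpass step might look like the following, run on the Active Directory server. Host, account, and file names are placeholders, and you should verify the exact options against the ktpass documentation for your Windows version:

```
ktpass -princ host/unixhost.example.com@EXAMPLE.COM -mapuser EXAMPLE\unixhost
       -crypto DES-CBC-MD5 -ptype KRB5_NT_PRINCIPAL -pass * -out unixhost.keytab
```

The resulting unixhost.keytab file would then be copied to the UNIX client over an encrypted channel and installed in the location your Kerberos implementation expects, typically /etc/krb5.keytab.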
Key version number. Your environment should not include domain controllers running both Microsoft Windows 2000 and Windows Server 2003 because of potential key version number issues. A domain controller running Windows Server 2003 correctly increments the key version number, but a domain controller running Windows 2000 sets the key version number to either 0 or 1 and does not increment it correctly, so the value must be ignored.
IP ports. Specific IP ports are needed for the Kerberos service. The default Kerberos Transmission Control Protocol (TCP) ports are 88 for the KDC and 749 for the administration server. Unless there is a specific reason to modify these, it is recommended that they be left at the default settings to ease administration and troubleshooting. Port 3268, used for LDAP communication with the global catalog server, should also be open. For more information about required ports, see "Configuring an Intranet Firewall: Active Directory Communication" at http://www.microsoft.com/technet/prodtechnol/exchange/guides/E2k3FrontBack/f9733398-a21e-4b40-8601-cfb452da82ad.mspx.
Problems may occur with IP address checking when an address translation method is used between the KDC and the client. Proxy servers and other Network Address Translation (NAT) devices on the network can cause the client address included in the Kerberos ticket not to match the network address from which the client appears to communicate. If addresses are being checked, Kerberos authentication will fail. By default, Windows Server 2003 does not check the address in Kerberos tickets. For more information, see the KdcDontCheckAddresses registry setting in "Kerberos Authentication Tools and Settings" at http://technet2.microsoft.com/WindowsServer/en/Library/b36b8071-3cc5-46fa-be13-280aa43f2fd21033.mspx.
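For reference, the setting mentioned above lives in the registry of the domain controller. This is an illustrative sketch only; Windows Server 2003 already skips address checking by default, so consult the linked article before changing anything:

```
; Illustrative registry sketch (reference only; location and value
; should be verified against the linked article).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kdc]
"KdcDontCheckAddresses"=dword:00000001
```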
Time synchronization. A time synchronization infrastructure is necessary because Kerberos requires all system clocks to be roughly synchronized, within a tolerance of less than 5 minutes, to function correctly. For more information about time synchronization, see "Administering the Windows Time Service" at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/operations/wts.mspx.
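The arithmetic behind the tolerance check is simple. The following hypothetical sketch hardcodes both clock readings to illustrate it; in practice the KDC time would come from a query such as ntpdate -q against the domain controller:

```shell
# Hypothetical skew check: Kerberos tolerates clock skew of less than
# 5 minutes (300 seconds). Both values are hardcoded for illustration.
LOCAL_TIME=1000000200   # local clock, seconds since the epoch
KDC_TIME=1000000000     # KDC clock, seconds since the epoch
SKEW=$(( LOCAL_TIME - KDC_TIME ))
if [ "${SKEW#-}" -lt 300 ]; then
    echo "skew ${SKEW}s: within Kerberos tolerance"
else
    echo "skew ${SKEW}s: authentication will fail"
fi
```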
Kerberos ticket lifetime. The lifetime of a Kerberos ticket is usually between 8 and 10 hours, depending upon the Kerberos implementation and configuration. Your environment may require a shorter lifetime, forcing more frequent authentication to reduce the chances of a user's ticket being stolen and misused, or a longer lifetime to reduce the number of times that a given user has to reauthenticate. Unless there are specific reasons to change the ticket lifetime, you should use the default lifetime as supplied with your implementation.
The renewable lifetime of a ticket is the period of time that a Kerberos ticket can be automatically renewed without user intervention. This is usually set to a maximum of seven days. The exact length of the renewable ticket is specified by the user when it is requested, but it cannot exceed the maximum. Unless there is a specific reason to modify this setting, it is recommended that the default setting be used.
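In the MIT Kerberos-based custom solutions, both lifetimes are typically set in the client configuration. An illustrative krb5.conf fragment, with values shown as examples rather than recommendations:

```ini
# Illustrative /etc/krb5.conf fragment: a 10-hour ticket lifetime with
# a 7-day renewable lifetime, matching the defaults discussed above.
[libdefaults]
    ticket_lifetime = 10h
    renew_lifetime = 7d
```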
Organizational unit (OU). Some solutions do not support the use of multiple OUs branching from the root. Data for users and groups for these solutions can only be found in one branch. For example, these solutions can find users under ou=unix1,dc=domain,dc=com or ou=unix2,dc=domain,dc=com, but not both. However, these solutions can be configured to find users in both ou=users1,ou=unix,dc=domain,dc=com and ou=users2,ou=unix,dc=domain,dc=com. The difference is that these two OUs are under ou=unix and not under the root.
Kerberos credentials. If the SASL–GSS-API Kerberos method is used for authentication of the LDAP proxy user, Kerberos credentials for the proxy user must be acquired periodically. A cron job must be implemented to reacquire a valid credential for the proxy user into the ticket cache before the credential expires. By default, on most systems, credentials are valid for 10 hours. This validity period is configurable, but when configuring it you must weigh security against convenience. A longer validity period is more convenient, but it also creates a larger window during which the credentials can be stolen and misused.
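Such a refresh job might look like the following hypothetical crontab entry, which reacquires a ticket every 8 hours, comfortably inside the default 10-hour lifetime. The keytab path and principal name are placeholders:

```
# Illustrative crontab entry: refresh the LDAP proxy user's Kerberos
# credential from a keytab before the existing ticket expires.
0 */8 * * * /usr/bin/kinit -k -t /etc/ldap.proxy.keytab proxyuser@EXAMPLE.COM
```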
Secure Sockets Layer/Transport Layer Security (SSL/TLS). Using SSL/TLS may introduce more network traffic and use more resources on the server than GSS-API with Kerberos. Additionally, using SSL/TLS requires the acquisition of SSL certificates. These certificates can either be acquired from an external source or generated by an internal certification authority server in your own PKI.
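On clients built with the OpenLDAP libraries, enabling TLS validation typically involves pointing the client at your CA certificate. An illustrative fragment, with a placeholder certificate path:

```
# Illustrative OpenLDAP client settings (e.g. /etc/openldap/ldap.conf):
# trust the internal CA and require a valid server certificate.
TLS_CACERT  /etc/openldap/certs/internal-ca.pem
TLS_REQCERT demand
```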
Name Service Caching Daemon (NSCD). The NSCD is used for End State 2 native OS and End State 2 open source solutions to reduce the overhead of LDAP requests to Active Directory. This daemon handles caching of authorization data so that once requested, a specific piece of data does not need to be requested again until the cache expires. If you are trying to achieve very short intervals between disabling accounts and the resulting blocks from using resources, you need to keep in mind any interactions with NSCD and how this may affect your desired results.
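The cache lifetimes that drive this behavior are set in the NSCD configuration. An illustrative nscd.conf fragment, with values (in seconds) shown as examples only:

```
# Illustrative /etc/nscd.conf fragment: shorter positive time-to-live
# values narrow the window during which a disabled account can still
# be resolved from the cache.
positive-time-to-live passwd 600
negative-time-to-live passwd 20
positive-time-to-live group  3600
```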
Schemas. An End State 2 solution requires Active Directory support for UNIX authorization data (UID, primary GID, home directory, and UNIX logon shell). If your domain controllers run an operating system earlier than Windows Server 2003 Release 2 (R2), which includes UNIX attributes by default, you must extend the Active Directory schema so that it can store UNIX attributes. The RFC 2307, customer-defined, and Windows Services for UNIX schemas are viable options as schema extensions. The custom solutions in this guide provide instructions that use Windows Services for UNIX to handle UNIX attributes in Active Directory; the instructions used to develop these custom solutions are not compatible with the R2 schema. Both of the commercial solutions provide more than one approach for supporting UNIX attributes. None of the solutions developed in this guide were tested with Windows Server 2003 R2.
If you have an existing data store using a particular schema, it might be easier to use a similar schema with Active Directory to aid with migration or synchronization. This is particularly true if you will continue to use the existing store. For more information, see RFC 2307: "An Approach for Using LDAP as a Network Information Service."
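As a sketch of what RFC 2307-style authorization data looks like, the following fragment prints a posixAccount entry in LDIF form. The DN, names, and numeric IDs are hypothetical:

```shell
# Print an RFC 2307-style user entry (LDIF sketch); all values are examples.
cat <<'EOF'
dn: cn=jsmith,ou=users1,ou=unix,dc=domain,dc=com
objectClass: posixAccount
uid: jsmith
uidNumber: 10001
gidNumber: 10000
homeDirectory: /home/jsmith
loginShell: /bin/bash
EOF
```

The uidNumber, gidNumber, homeDirectory, and loginShell attributes correspond to the UNIX authorization data that an End State 2 solution stores in Active Directory.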
Operating systems and operating system versions. Some older operating system and operating system versions might not fully support Windows Services for UNIX. To use Windows Services for UNIX, the operating system might require either the use of an alternative schema that you add manually to your Active Directory servers, or a proxy server to translate the support schema on the UNIX client to the Windows Services for UNIX schema on the Active Directory server. The solutions described in this guide focus on the Red Hat 9 version of Linux and the Solaris 9 version of UNIX, which are both compatible with Windows Services for UNIX.
Dynamic auxiliary classes. Dynamic auxiliary classes are available only when all domain controllers are running Windows Server 2003 and the forest functional level is Windows Server 2003. They are an option you may want to consider when designing a schema extension. For more information, see “Dynamic Auxiliary Classes” at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ad/ad/dynamic_auxiliary_classes.asp.
Domain Name System (DNS)-specific Considerations
DNS host name resolution. Host name resolution must work correctly for Kerberos-based solutions. Although DNS must function for LDAP-based solutions as well, Kerberos is especially sensitive to host name resolution problems. Keep in mind the difference between short host names (NetBIOS names) and fully qualified domain names (FQDNs). You should generally use FQDNs with Kerberos; in a multidomain forest they are required. By default, Internet Explorer allows Windows integrated authentication (Kerberos) only to computers addressed by their short names. You may want to consider distributing Internet Explorer settings that identify all of your intranet computers by FQDN. This distribution can be accomplished by using Group Policy.
For more information on using Kerberos authentication with Internet Explorer and using the FQDN, see the following Knowledge Base articles:
“Unable to negotiate Kerberos authentication after upgrading to Internet Explorer 6” at http://support.microsoft.com/kb/q299838/.
“Intranet site is identified as an Internet site when you use an FQDN or an IP address” at http://support.microsoft.com/default.aspx?scid=kb;en-us;303650.
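The relationship between the two name forms is simple string truncation: the short name is the leftmost label of the FQDN. This can be sketched with POSIX parameter expansion; the host names here are examples:

```shell
# Derive the short (NetBIOS-style) name from an FQDN; names are examples.
fqdn="server1.corp.domain.com"
short="${fqdn%%.*}"   # strip everything from the first dot onward
echo "$short"         # -> server1
```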
DNS server configuration. You may experience performance problems on a UNIX client if the first DNS server listed in /etc/resolv.conf is unavailable, even when the second listed server is available. In general, the performance of network operations may suffer, and logon with these solutions might be unacceptably slow or may fail.
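One common mitigation is to tune the resolver options in /etc/resolv.conf. The addresses below are from the documentation address range and the option values are examples:

```shell
# /etc/resolv.conf -- sketch; server addresses and values are examples.
# 'rotate' spreads queries across the listed servers; 'timeout' and
# 'attempts' limit how long a dead first server can stall name resolution:
options timeout:2 attempts:2 rotate
nameserver 192.0.2.10
nameserver 192.0.2.11
```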
Note Information regarding the use of SRV records for Kerberos server resolution is beyond the scope of this guide.
Considerations for Logical Design Using Commercial Products
If you choose a commercial product to achieve your chosen end state, the logical design may differ from one based on native OS components or an open source solution. In addition to the considerations listed in the previous section, and because commercial products are deployed into an existing, and perhaps quite large, heterogeneous infrastructure, the following preexisting conditions need to be taken into account during planning:
Migration of users. Solutions for End States 1 or 2 require the migration of users from the existing UNIX environment to Active Directory. Migration of users has two main components: associating the UNIX user with the corresponding Windows account also owned by that user (which may or may not have the same account name); and ensuring that each user's User ID is unique within the scope of systems associated with that user in Active Directory.
Migration of user accounts is from a UNIX-based store (such as the local UNIX configuration files /etc/passwd and /etc/group, a NIS or NIS+ store, or an LDAP-based store) to Active Directory. You will need to consider this migration as part of your logical design and understand that the specifics of the process will depend on the commercial solution you choose and the requirements of your environment.
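Before migration, it is worth checking for UID collisions in the source data. The following sketch scans a passwd-format file (local /etc/passwd-style data is assumed; NIS or LDAP sources would first need to be exported to this format):

```shell
#!/bin/sh
# Report duplicate UIDs in a passwd-format file before migration.
# The file to scan is a parameter; /etc/passwd is the usual default.
PASSWD_FILE="${1:-/etc/passwd}"
awk -F: '{ count[$3]++; name[$3] = name[$3] " " $1 }
         END { for (uid in count)
                 if (count[uid] > 1)
                   print "duplicate UID " uid ":" name[uid] }' "$PASSWD_FILE"
```

Running the script against exports from each UNIX-based computer helps confirm that every user's UID is unique within the scope of systems that will share Active Directory data.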
Available features and functionality. Although the commercial products share some similarities, there are also differences in available features and functionality. One example is the use of LDAP schema extensions. Both Centrify DirectControl and Quest Software VAS support Windows Server 2003 R2 and Windows Services for UNIX. Each also offers proprietary solutions for enabling Active Directory support for UNIX attributes.
Kerberos and LDAP services. By implementing a commercial solution, some of the considerations that applied when using discrete Kerberos and LDAP services may no longer apply to your logical design. For example, the commercial products are capable of automating or simplifying the process of joining a domain and generating Kerberos key tables, configuring the PAM and NSS modules for authentication and authorization, and maintaining the time service and synchronization elements that are required for proper Kerberos functionality.
Regardless of which commercial product you choose, it will be necessary for you to fully investigate and understand the proprietary components of each. This will help you create a logical design that is appropriate and specific to your requirements and environment.
The physical design of the solution specifies which logical pieces fit into specific physical pieces of the architecture and creates a set of physical design models that explain the specific technology components required to successfully complete your project. As you create the design, you will need to decide what needs to be purchased and where it will be physically placed. By generating this detailed list of components, you can begin estimating all associated costs, the project schedule, and any additional resources required for your project. The physical design should also include information such as the components of your network topology and how you will address security and performance-related concerns.
Example of Physical Design
Northwind Traders created two diagrams that illustrate the physical design for their solution. The first diagram depicts the technical components of their solution and how these components interact in order to provide authentication and authorization.
Figure 1.3. Physical design: authentication and authorization components
The diagram shows that authentication is handled by PAM using the pam_krb5 module. The pam_krb5 module uses Kerberos libraries and configuration files—referred to here as the Kerberos client—to authenticate to Active Directory, and the krb5.keytab service key table is used to authenticate the computer to Active Directory.
Authorization is handled by NSS, which uses the NSCD to cache results locally if it is running. The nss_LDAP module then uses the LDAP client to contact Active Directory and retrieve the authorization data. The LDAP connection is authenticated and protected by the SASL layer using GSS-API Kerberos. This uses the user's Kerberos credentials as well as proxy.keytab and the Kerberos credential cache, previously created by a cron job, to authenticate to Active Directory.
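These components correspond to entries in the client configuration files. The following fragments are an illustrative sketch only; module names, file locations, and options vary by platform and solution:

```shell
# /etc/nsswitch.conf -- authorization lookups fall through to LDAP:
#   passwd: files ldap
#   group:  files ldap
#
# PAM authentication stanza (for example, in /etc/pam.d/system-auth);
# these module names and options are examples:
#   auth  sufficient  pam_krb5.so  use_first_pass
#   auth  required    pam_unix.so  try_first_pass
```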
The second diagram shows all of the major network and infrastructure computers as well as the different client networks within the Northwind Traders environment. It shows how many UNIX-based and Windows-based computers are within the organization and how they are grouped. Northwind Traders has a firewall that will allow most traffic to the Active Directory servers and application servers, but will prevent most traffic between the trader, staff, and acquisition network segments. Two new Active Directory servers will also need to be deployed as part of the solution: New Active Directory 3, which will be used for increased capacity, and New Active Directory 4, which will service remote locations. These new servers will provide a wider base of authentication servers for increased redundancy and stability.
Figure 1.4. Physical design: network and infrastructure components
The information in this diagram helps Northwind Traders identify the resources necessary for their solution, including how many client workstations are included in each network segment, where the Active Directory servers will be placed, and whether the firewall and router infrastructure will change.
Considerations for Physical Design
As you create your physical design, you should begin identifying the specific technologies and components necessary to successfully complete the project. For example, you should define the capability of your network topology to provide your computers running Windows, UNIX, or Linux operating systems with sufficient bandwidth and low latency on the network links to the servers that provide Kerberos and LDAP services.
Another factor to take into account is the location of the servers within your network. You will need to ensure that authentication and authorization data is kept as secure as possible, in an area inaccessible to the public or to unauthorized people. Sufficient network protection should block access to all TCP/IP ports that are not required for authentication or authorization services, and the computers should be positioned so that physical access is restricted to authorized personnel. Additionally, you should always place a subordinate server at a separate location from the master server, bearing in mind that the subordinate server has the same security requirements as the master server.
For large-scale infrastructures, you must ensure that the server can process the maximum number of user connections at peak times, such as the start of the business day. The computer should have sufficient memory, processor capacity, and bandwidth available to respond to all requests. If your organization is too large for a single server to handle easily, consider implementing one or more subordinate servers or splitting your organization into two or more realms.
Validate the Technology
Often in parallel to the design process, the team will validate the technologies being used in the solution. The purpose of this validation is to ensure that the technology components of the native OS, open source, or commercial product are compatible with your environment and will be capable of addressing your requirements.
It will be necessary to obtain relevant documentation and specifications regarding your chosen solution from your vendor. If a vendor is unavailable for your solution, you may need to seek out the appropriate resources independently or contract a third-party vendor for assistance. You will then need to validate the technology components by confirming that they will work as they are described in these specifications. The technology validation should verify that the solution you chose is capable of addressing the user, business, and technology requirements you established during the Envisioning Phase.
Technical Proof of Concept
After validating the technologies, the team then makes an initial pass at creating a model of the technology to be implemented to produce an initial proof of concept. This initial iteration of the proof of concept will produce both answers and additional questions regarding the solution prior to its actual development. This information helps in the risk management process and identifies any changes needed to the overall designs that must be incorporated into the specifications.
The proof of concept provides evidence of the applicability of the chosen solution to address your requirements in a production environment. Therefore, the proof of concept should be as specific as possible to the end state and technology solution you have chosen.
You should keep the following guidelines in mind as you develop the proof of concept:
Create a development lab that simulates your production environment to the extent possible given your time and resource constraints. A full replication of your production environment is not recommended at this stage, however, because the goal of the proof of concept is to investigate which solutions will work and which is appropriate for your organization.
Include at least two Active Directory servers to validate replication and failover functionalities.
If you want to investigate more than one solution as part of the decision-making process to determine which one is appropriate for your organization, perform a separate proof-of-concept deployment in your development lab for each technology path that you want to try.
If you do perform multiple proof-of-concept deployments in your development lab, use identical development environments for each solution so that comparisons between solutions remain valid.
Later, after a successful initial deployment, including some testing, of one or more solutions in your development lab to validate the proof of concept, you will perform more rigorous testing during the Stabilizing Phase. During the Stabilizing Phase testing, you should mirror the production environment as closely as possible. Include all types of computers, users, and applications in the testing as well as any special cases that are part of your production environment, such as remote offices with slow, unreliable, or high-latency connections, VPN connections, and load balancing equipment.
Baseline the Environment
During the Planning Phase, the team should conduct an audit or inventory of the current state of the production environment in which the solution will operate. The information collected should include server configurations, the physical network, desktop software, and all relevant hardware, software, and user configurations. This information will help create a baseline that allows the team to account for any changes that will be required or design issues that may cause the solution to be at risk.
The following list contains examples of the type of information you should collect when conducting a baseline of your environment:
Operating systems and operating system versions.
Current systems used for synchronization.
Current systems used for provisioning.
Current systems used for authentication and authorization.
DNS and time synchronization infrastructure.
32-bit and 64-bit versions of operating systems and all packages on those operating systems.
Network topology, including proxy, and NAT devices.
Current Kerberos configuration.
Current authentication and authorization configuration of the UNIX-based computers.
Numbers, membership, and consistency of users and groups, specifically whether the UIDs and GIDs are different on different computers and whether users are members of different groups on different computers.
Current network load or other bottlenecks that the solution must avoid making worse, or that must be improved before the solution can be deployed.
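Several of the items above can be gathered from each UNIX-based computer with a small script. This is a minimal sketch to extend with site-specific checks:

```shell
#!/bin/sh
# Baseline-collection sketch for one UNIX-based computer (extend as needed).
echo "OS:     $(uname -sr)"
echo "Arch:   $(uname -m)"    # helps distinguish 32-bit from 64-bit systems
echo "DNS:    $(grep '^nameserver' /etc/resolv.conf 2>/dev/null | tr '\n' ' ')"
echo "NSS:    $(grep -E '^(passwd|group):' /etc/nsswitch.conf 2>/dev/null | tr '\n' ' ')"
echo "Users:  $(getent passwd | wc -l) entries"
echo "Groups: $(getent group | wc -l) entries"
```

Collecting this output from every computer in scope gives the team a consistent starting point for the baseline documentation.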
Create the Functional Specification
The functional specification is the technical description of the solution and is the basis for building project plans and schedules. The Program Management Role takes the lead in creating it, with input from the role leads regarding their areas of responsibility. During the Planning Phase, the functional specification is baselined and placed under change control.
Because it includes all requirements, documents, and design specifications, the functional specification contains the information required to design the solution. It serves as formal documentation of the design process and incorporates the conceptual, logical, and physical designs. Accordingly, high-level design decisions such as server placement and network configuration should be included in the functional specification.
A basic functional specification should include the following:
A summary of the vision/scope document as agreed upon and refined, including background information to place the solution in a business context.
Any additional user and customer requirements beyond those already identified in the vision/scope document.
The solution design as developed in the "Develop the Solution Design" section of this chapter.
Specifications of the components that will be part of the solution.
The functional specification should describe, without ambiguity, the complete functionality of the solution under development. Quantitative measurements should be included in the functional specification whenever possible. Quantifying performance or business metrics in a functional specification is significant because the information can be used to justify the project (for example, to development and operations stakeholders). These metrics are as much a part of the specification as any other functional details.
The following items are examples of information that should be included in a functional specification when planning heterogeneous security and directory solutions:
Features. The functional specification should document the complete set of planned features for the solution. If possible, the features of the solution should be expressed using both words and diagrams. Quantitative specifications for the solution, such as the number of user accounts, concurrent user capacity, and authentication and directory performance metrics should be clearly stated.
Security requirements. A functional specification should specify the strength of security to be used for LDAP binds and Kerberos authentication, including a description of any encryption standards to be used and the type and location of security systems such as the KDCs and the LDAP servers. It should also include requirements for password policies and the accessibility of certain applications. For example, you should keep in mind that users with root access on the UNIX-based computer will have access to the service key tables and any LDAP configuration files that include LDAP proxy account user names and passwords. It may be necessary to consider adding a periodic audit of file permissions on critical files including service key tables and LDAP configuration files to the security requirements.
Legal requirements. Legal requirements must be clearly understood and stated in a functional specification, including what needs to be done to adhere to any customer, business, or governmental requirements.
Risk analysis. Risk analysis should include descriptions of potential impact to the project and mitigation strategies. For example, the risk analysis documents should state what the risk of failing to obtain necessary hardware would be and provide a mitigation strategy for dealing with this risk.
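The periodic file-permission audit suggested above for service key tables and LDAP configuration files can be sketched in shell. The file list is an example; extend it for your environment:

```shell
#!/bin/sh
# Audit sketch: warn about sensitive files readable beyond their owner.
# The file list is illustrative; add your own keytabs and config files.
for f in /etc/krb5.keytab /etc/proxy.keytab /etc/ldap.conf; do
    [ -e "$f" ] || continue
    perms=$(ls -l "$f" | cut -c5-10)   # group and other permission bits
    case "$perms" in
        ------) : ;;                   # owner-only access: OK
        *)      echo "WARNING: $f is accessible beyond its owner ($perms)" ;;
    esac
done
```

Run from cron, a script like this can feed alerts into whatever monitoring system the operations team uses.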
The following are examples of information that should not be included in a functional specification when planning heterogeneous security and directory solutions:
Details of software architecture. Too much detail in a functional specification can overburden a project team with extraneous facts.
Detailed directory schema. A high-level description of directory details is sufficient to include in a functional specification.
Develop the Project Plans
During the Planning Phase, it is necessary to create plans for future phases of the project including security, budget and resources, development, testing, and operational requirement plans. Various team roles are responsible for creating each of these individual plans, which are in turn used by Program Management to create a single master project plan. The following sections contain a brief overview and description of each project plan. For more information on how to create these plans, refer to the associated job aid. Table 1.2 lists the required project plans and the team roles responsible for producing each plan.
Table 1.2. Individual Project Plans and Roles
Note Schedules are not included as part of these plans; scheduling is a separate task that is discussed in greater detail in the UNIX Migration Project Guide.
The security plan describes how the solution will meet the required security levels in order to operate successfully. The plan also defines any actions that are necessary to help ensure a secure solution in the production environment, identifies potential security risks, and describes security standards that will help mitigate and reduce those risks to acceptable levels.
The following are some considerations you should take into account when developing your security plan:
Validating that the solution meets the organization's security policy.
Making necessary changes to your security policy as a result of implementing the solution.
Ensuring the continuity of security during solution deployment.
Providing security guidance to end users and operations.
For more information on how to create the security plan, refer to the “Security Plan Template” job aid. Additionally, you should take note of how the Kerberos-specific and LDAP-specific information listed in the “Considerations for Logical Design” section may affect your security plan.
The budget plan defines the estimated total cost of the solution—for example, hardware, software, training, and resource costs—and also includes the cost of each project or subproject required to deliver the solution. The purpose of this plan is to help provide a clear indication of all of the costs associated with investing in the solution.
For more information on how to create the budget plan, refer to the “Budget Plan Template” job aid.
The development plan describes the process of solution development and complements the functional specification. The development process as it is detailed in this plan should include all of the development stages relevant to your organization and chosen solution.
Documenting the development process indicates that the team has discussed, and agreed upon, the structure and direction to be used during the development of the project. The purpose of the development plan is to provide guidelines and processes to the teams that are creating the solution, allowing them to proactively focus on actually creating the solution. The guidelines and standards established in the development plan promote meaningful communication between different teams and can be used as a reference to re-emphasize the approach and processes that were approved.
For more information on how to create the development plan, refer to the “Development Plan Template” job aid.
The test plan describes the strategy and approach to organizing and managing the test-related activities of your project. The test plan should take into account all of the appropriate aspects of testing your chosen solution, such as identifying testing objectives, methodologies and tools, expected results, responsibilities, and resource requirements. The purpose of the test plan is to ensure that the testing process is conducted in a thorough and organized manner that enables the team to determine the stability and release-readiness of the solution.
The following are examples of testing activities you may include in your test plan:
Test security settings to ensure they conform to the organization's security policies and performance requirements.
Test the scalability of your solution according to the number of users that need to be supported in your organization.
Test at least one UNIX client of each operating system and operating system version. Note that you may want to add more clients for load testing.
Test the solution deployment tools and procedures.
Include remote offices and users and authentications over slow connections in the test environment if these types of users will be supported in production.
Create test-related guidance that includes instructions on verifying or updating the test plan and for the creation of test case documents.
For more information on how to create the test plan, refer to the “Test Plan Template” job aid.
The deployment plan describes the processes associated with preparing, installing, training, stabilizing, and deploying the solution to operations. It may include details regarding installation scenarios at remote or branch offices, monitoring for stability, and verifying the reliability of the new solution. The purpose of the deployment plan is to provide detailed guidelines for every aspect of the solution deployment.
The following are some considerations you should take into account when creating the deployment plan:
How to push configuration changes to UNIX-based computers.
How to securely generate and place Kerberos service key tables on the UNIX-based computers.
How the new system fits in with your existing systems, processes, and other major new efforts.
How user accounts are created, modified, and retired, and which systems are currently responsible for these tasks and how they will integrate with the new solution.
How the identity management, enterprise systems management, single sign-on, and federated identity systems are currently deployed or planned, and how these systems will be affected or integrated with the new solution.
How to handle multiple authorization data settings for the same user.
How to address load balancing, specifically the impact of adding users and systems to the Active Directory infrastructure. You will need to consider such details as the types of users, their location relative to the resources they are accessing, and how they are accessing these resources.
How you will maintain the existing authorization data store, either local files or some other data store, for End State 1. When defining the authentication process, you should be aware of any issues that may arise surrounding account and password synchronization, account provisioning and deprovisioning, and synchronization of account stores when using some other store for authorization.
How to address the need to remove accounts for migrated users from the old data store to reduce the possibility of incorrect permissions. For example, if the data source for authorization data is configured separately from the source for authentication data on UNIX, it is possible to remove user "bob" from the old authentication data source and still see "bob" log on using data from the old authorization data source. To minimize this possibility, it will be necessary to remove the account information from both data stores.
Whether TCP should be considered for your solution. For example, by default, communications between Active Directory and the UNIX client use the User Datagram Protocol (UDP). However, if a user is a member of a large number of groups, the PAC size can grow to exceed the size supported by UDP, which will cause logons for these users to fail. If this might be a problem, you should consider TCP, keeping in mind that TCP is not supported by all solutions.
Table 1.3. Support for UDP and TCP by Solution
Quest Software VAS
Native OS custom solutions
Open source custom solutions
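For clients built on the MIT Kerberos libraries, TCP can be forced through krb5.conf. This sketch assumes MIT Kerberos on the client; other implementations use different mechanisms:

```shell
# /etc/krb5.conf -- forcing Kerberos traffic over TCP (MIT libraries).
# With udp_preference_limit set to 1, any message larger than 1 byte
# is sent over TCP, effectively using TCP for all KDC traffic:
#   [libdefaults]
#       udp_preference_limit = 1
```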
For more information on how to create the deployment plan, refer to the “Deployment Plan Template” job aid.
Many organizations may consider undertaking a pilot deployment of their solution. A pilot deployment validates business requirements and technical specifications prior to an actual deployment of the solution into the production environment. The pilot also provides valuable insight into the development process, impact on end users, and resource usage in the production environment. The results of the pilot provide important feedback to stakeholders regarding the likely success of the solution after it is formally released and can help facilitate the decision to move the solution to the production environment.
The pilot plan describes what aspects of the solution will be delivered as a pilot and provides the details necessary to conduct and evaluate a successful pilot. The content of the pilot plan ensures that all contributing project teams understand their roles and responsibilities, in addition to all associated resource requirements specific to pilot development, testing, and deployment activities.
For more information on how to create a pilot plan, refer to the “Pilot Plan Template” job aid.
The operations plan contains guidance on the day-to-day operations, extended maintenance, and contingency operations of the solution after its deployment. The components of your operations plan may include backup recovery steps, managing configuration changes, and necessary training or transfer of skills for the staff. By planning for operations at the beginning of the project, you can ensure that there is ample time to institute all of the operational factors necessary for successful implementation.
The following are some considerations you may need to take into account when creating your operations plan:
How to monitor the acquisition of credentials by a cron job for the proxy user to authenticate and retrieve authorization data.
How to monitor the lifetime of certificates for a solution using SSL/TLS.
How to train the help desk team on the specific types of questions to ask of a UNIX user and the interpretation of their answers.
How often it is necessary to rotate service key tables.
Determining the appropriate monitoring tools, logging and auditing methods, levels to support regulatory issues, and uptime or business continuity requirements.
For more information on how to create an operations plan, refer to the “Operations Plan Template” job aid.
Set Up the Development and Test Environments
The development, testing, and staging environments must be properly set up before moving into the Developing Phase in order to maximize Development team productivity and to mitigate risks. To avoid delay, the development and testing environments should be set up even as plans are being finalized and reviewed.
Note Your development and test environment should accurately reflect the end state and the technology solution you have chosen.
Establishing a development environment requires setting up hardware and other infrastructure resources and any project structure or policy elements that will guide solution development activities. The team should install the hardware and infrastructure according to the designs that were created and the configurations established in the development plans.
The process of creating the testing environment should center on setting up the hardware architecture according to the configurations established in the test plan and obtaining the Development team's approval of these decisions. Like the development environment, the testing environment should emulate the production environment as closely as possible, but the team must balance the level of representation against the associated costs. Other necessary choices for the testing environment include, for example, deciding on a common bug-tracking utility, required tools, and software that should be installed.
The staging environment is where the team tests content and software being changed or deployed to ensure that they function as expected. Content and software are moved from the development environment to the staging environment before they are published to the production environment. Solution components that tested successfully in the development environment may not necessarily work after all elements are in place and integrated; the staging environment provides a place to verify that all solution elements work together before they are put into production.
Hardware and software installed in the staging environment should mimic the production environment as closely as possible. Depending on the availability of hardware resources, an individual server could perform several different server roles in the staging environment. However, every deviation from the production environment detracts from the representative value of testing and should therefore be documented, understood, agreed upon, and tracked. Ideally, the staging servers are built according to the documented server build procedures that the team develops; this ensures that testing accurately portrays what will be seen in production.
Create the Master Project Schedule
The master project schedule combines the estimated schedules from each of the individual project plans into a single schedule that, together with the functional specification draft, establishes the overall timeline for the project. The team may choose to modify portions of the functional specification or master project plan in order to meet a required release date. Although features, resources, and schedule may vary, a fixed release date will prompt the team to prioritize features, assess risks, and plan adequately.
Some important points to remember when creating the master schedule include:
The schedules of the individual project plans will need to be integrated into an overall migration project schedule.
An early milestone in the project schedule to establish the proof of concept will help provide input for overall project estimates.
The master schedule should be created by the Program Management Role in consultation with representatives of all roles in order to identify any possible dependencies or scheduling conflicts.
Note For more information on creating the master project schedule, refer to the UNIX Migration Project Guide.
Close the Planning Phase
Closing the Planning Phase requires completing a milestone-approval process, culminating in the Project Plans Approved Milestone. The team must document the results of all of the tasks it has performed during this phase before submitting the project to key stakeholders and customers for approval. The list of deliverables that should be completed during the Planning Phase includes:
Master project plan.
Master project schedule.
Updated risk management documents.
Planning Phase Major Milestone: Project Plans Approved
At the Project Plans Approved Milestone, the project team and key project stakeholders should have agreed that milestones have been met, the schedule is realistic, project roles and responsibilities are properly defined, and mechanisms are in place for addressing areas of project risk. The team approves the functional specification, the master project plan, and the master project schedule, which then become the project baseline and provide the basis for making future trade-off decisions.
Not all of the decisions that are reached during the Planning Phase are final. As work progresses in the Developing Phase, the team should formally review and approve any suggested changes to the baseline by using a change control process.
Project teams usually mark the completion of a milestone with a formal sign-off. Key stakeholders, typically including representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the milestone is complete. The sign-off document becomes a project deliverable and is archived for future reference.
When all appropriate deliverables have been properly created and approved and the Planning Phase is formally closed, development of the solution begins in the Developing Phase. The instructions provided in this guide for the Developing Phase differ depending on which end state and technology solution you selected:
If you are using VAS to achieve End State 2, proceed to Chapter 2, “Using Quest Software VAS to Develop, Stabilize, Deploy, Operate, and Evolve End State 2.”
If you are using Centrify DirectControl to achieve End State 2, proceed to Chapter 3, “Using Centrify DirectControl to Develop, Stabilize, Deploy, Operate, and Evolve End State 2.”
If you are using only native OS components or native OS components with an open source solution, proceed to Chapter 4, “Developing a Custom Solution.”