Chapter 11 — Stabilizing Phase

Published: June 21, 2004 | Updated: October 29, 2004

On This Page

Introduction and Goals
Testing to Stabilize the Solution
Resolving Performance Issues
Test Tracking and Reporting
User Acceptance Testing
Conducting a Pilot
Customer Acceptance Testing
Closing the Stabilizing Phase

Introduction and Goals

The purpose of the Stabilizing Phase is to reduce the risks of releasing the solution to production by ensuring that the migrated databases and applications meet the acceptance criteria for deployment. Stabilizing can be thought of as a time to focus on quality assurance. The mindset of the project team should change from completing the development (migration) steps to getting the solution to a known state of quality. This is the time to determine whether the migrated applications and databases operate in the manner that you predicted during the Envisioning Phase and take any necessary corrective measures if they do not.

In the Stabilizing Phase, the test team implements the test plan against the feature-complete solution and emphasizes use and operation in realistic production environment conditions. The team focuses on prioritizing and resolving bugs and preparing the solution for release. As bugs are identified, you may find that some issues were not covered by the existing test plan. It is possible that the Stabilizing Phase will reveal issues that should become new test cases, resulting in additional iterative testing.

Each database must fulfill at least the following criteria (there may be other project-specific criteria that must also be met):

  • The physical database structure must meet performance, security, and availability requirements.

  • All operational procedures for managing the database must be in place.

  • The database schema must be equivalent to the original Sybase database.

  • All stored procedures and triggers must operate correctly.

  • All supporting rules, defaults, and data types must be created properly.

  • All tables must be defined appropriately and have correct foreign-key/primary-key relationships to any associated tables.

  • All appropriate indexes must be present.

  • All data must be transferred without loss or corruption.
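The last criterion can be given a first-pass check with row counts and aggregate checksums run identically on the source and migrated systems. The sketch below uses a hypothetical Orders table; the CHECKSUM comparison assumes matching column order and data types on both sides.

```sql
-- First-pass data verification after migration (hypothetical table name).
-- Run the same queries against the Sybase source and compare the results.

-- 1. Row counts must match the source exactly.
SELECT COUNT(*) AS row_total
FROM Orders

-- 2. An aggregate checksum over all columns; a difference indicates
--    lost or altered rows. Assumes identical column order and types.
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS table_checksum
FROM Orders

-- 3. Spot-check domain totals that the business can verify independently.
SELECT SUM(OrderTotal) AS sum_order_total,
       MIN(OrderDate)  AS earliest_order,
       MAX(OrderDate)  AS latest_order
FROM Orders
```

Checksums are a screening tool, not a proof of equivalence; rows whose differences cancel out are theoretically possible, so critical tables still warrant column-by-column comparison.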

In addition, each migrated application that uses the migrated databases must:

  • Be capable of connecting to the correct database.

  • Exhibit the same functionality as the original application.

  • Meet performance and security requirements.

The stabilization process is iterative — testing and piloting will identify bugs that should be fixed, and then the system should be retested. The system will undergo testing in a lab environment and then piloting in a production environment. As errors are found and bugs are fixed, each release candidate will undergo testing and piloting until the system is deemed ready for deployment.

Note  For stabilization purposes, you may find it more convenient to treat each application and database as a separate project. When treated separately, each database or application can be tested, piloted, and released independently, although you will need to be sensitive to any dependencies. For example, it may not make sense to release an application before releasing any databases it uses.

Team Roles in the Stabilizing Phase

The following table is repeated from the UMPG to help project team members quickly identify which material in the Stabilizing Phase chapters they need to read. The primary team roles that drive the Stabilizing Phase are Test and Release Management.

Table 11.1 Responsibilities of Team Roles During Stabilization


Role

Stabilizing Phase Focus

Product management

Communications plan execution; production launch planning

Program management

Project tracking; bug prioritization

Development

Bug resolution; code optimization

User experience

Stabilization of user performance materials, training materials, user assistance materials

Test

Testing; bug reporting and status; configuration testing

Release management

Pilot setup and support; deployment planning; operations and support training

Deliverables of the Stabilizing Phase

Your deliverables for the Stabilizing Phase may include some or all of the following:

  • Pilot review

  • Deployment-ready versions of:

    • Source code and executables

    • Scripts and installation documentation

    • End-user help and training materials

    • Operations documentation

    • Release notes

  • Testing and bug reports

  • Project documents

The interim milestones of the Stabilizing Phase are:

  • Bug convergence reached

  • Zero bug bounce achieved

  • Release candidate ready

Testing to Stabilize the Solution

Testing performed during the Developing Phase should have established that:

  • The data from the Sybase database was moved intact to Microsoft® SQL Server™.

  • The syntax for the backend logic, such as stored procedures and triggers, was converted correctly and performs as expected.

  • The client application programming interfaces (APIs) have been effectively redirected and are capable of communicating with the new database.

The goal of this type of testing is to ensure that each individual element of the new environment works as intended.

During the Stabilizing Phase, testing should ensure that the capabilities of the solution function as intended in a dynamic context — with all the elements working together — and that performance is stable.

This testing culminates in the customer acceptance test. This is a final signoff from the project customer that the migrated solution is ready for deployment.

Performance, Scalability, and Stress Testing

When conducting performance, scalability, and stress tests, you compare the metrics of the new environment against the performance of the premigration technology. A fundamental objective of the migration is to achieve equal or improved performance. Because of variations in the structure of database and client logic and the tools in use, many specifics can only be determined by your particular organization.

Unlike piloting, the operations performed by these tests should be carefully controlled to ensure repeatability. It is common for the tasks that are used to test the system to be performed by automated scripts.

Performance Testing

Performance testing should be performed to validate the throughput of the databases and response times of the client applications. It is important that you have some baseline figures from the original system against which to compare the measurements you should take. It is also important to load the database with realistic production volumes of known data. Client workloads must be equally realistic.
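One lightweight way to make response-time measurements repeatable and comparable against the baseline is to bracket each test query with timestamps and log the elapsed time to a results table. The table, test name, and workload query below are illustrative only.

```sql
-- Illustrative timing harness: record elapsed time for a test workload
-- so runs against the migrated database can be compared with the
-- baseline figures captured on the original system.
CREATE TABLE PerfResults (
    TestName  varchar(64) NOT NULL,
    RunAt     datetime    NOT NULL DEFAULT GETDATE(),
    ElapsedMs int         NOT NULL
)
GO

DECLARE @start datetime
SET @start = GETDATE()

-- The workload under test (hypothetical query).
SELECT COUNT(*)
FROM Orders
WHERE OrderDate >= '20040101'

INSERT INTO PerfResults (TestName, ElapsedMs)
VALUES ('orders-by-date scan', DATEDIFF(ms, @start, GETDATE()))
GO
```

Running each test several times and discarding the first (cold-cache) result gives more stable numbers for comparison.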

Scalability Testing

Scalability testing assesses the scalability of the system as a whole. A scalable system has a longer lifetime than a nonscalable system. Even if you have specified an upper limit for the work volumes that the system is expected to handle, it is worthwhile to determine whether exceeding this limit requires the simple addition of hardware or a complete redesign of the system to achieve acceptable response times.

Instances of clients should be added and the response times measured. If the system scales well, the response time of the system will grow in a linear rather than exponential manner. Adding processors and disks should cause a predictable decrease in response time. If you have the hardware resources available, you should test this hypothesis.

Stress Testing

Stress tests, also known as load tests, are specifically designed to identify issues or bugs that might present themselves when the solution under development is highly stressed. By stressing the solution — which most commonly entails loading the solution beyond the level that it was designed to handle — new issues or bugs can emerge.

Note  Stress testing differs from scalability testing in that its purpose is to examine how the functionality of the system starts to break down when predicted loads are exceeded. Stress testing typically involves unusual workloads (extremely long-running transactions, operations that lock thousands of rows, and so on), whereas scalability testing performs common, everyday tasks.

Regression Testing

Regression testing is the process of repeating tests on a new build or version of the product. The purpose of regression testing is to:

  • Verify that the modifications made to the system as the result of a bug fix or functional change have not adversely affected other parts of the system. Any problems found during the course of the project that have been patched and fixed are rechecked with the final iteration of the build.

  • Ensure that any code changes (fixes) that were incorporated into your database or applications while they were in production in the Sybase environment are carried forward, and that the problems those fixes resolved have not reappeared in the migrated application or database.

  • Check that any patches to the code made during the migration to either the database or client applications are still present in the final, migrated iteration.

Regression testing involves tracking information about prior fixes. It is especially helpful to have available the procedures that demonstrated the original behavior or problem.

Resolving Performance Issues

During testing, you may discover problems that warrant a review of the application or database design to discover strategies for performance optimization.

Table 11.2 lists resources that provide more information about ways to resolve performance problems and improve the response of your environment.

Table 11.2 Resources for Designing Performance Improvements



"Designing Federated Database Servers" at

This article describes how to achieve high levels of performance, such as those required by large Web sites, by balancing the processing load across multiple servers.

"Query Tuning" at

This article describes how better design of the queries that are used by an application can significantly improve performance.

"Application Design" at

This article describes how design of the user application can significantly improve performance. Application design includes transaction boundaries, locking, and the use of batches.

"Optimizing Utility and Tool Performance" at

This article describes how some of the tools that are supplied with SQL Server can improve performance and explains the effect of running these tools and your application at the same time.

"Optimizing Server Performance" at

This article describes how you can change settings in the Microsoft Windows operating system and SQL Server to improve overall performance.

Query Tuning

Experience suggests that most performance problems must be addressed by analyzing the application, queries, and updates that the application is submitting to the database and how these queries and updates interact with the database schema. System-level server performance tuning does not usually provide an adequate solution.

During performance testing, you may find unexpected long-lasting queries and updates. These problems can be caused by:

  • Slow network communication.

  • Inadequate memory in the server computer or not enough memory available for SQL Server.

  • Lack of useful statistics.

  • Out-of-date statistics.

  • Lack of useful indexes.

  • Lack of useful data striping.

When a query or update takes longer than expected, use the following checklist to improve performance.

  • Is the performance problem related to a component other than queries? For example, is the problem slow network performance? Are there any other components that might be causing or contributing to performance degradation? Windows System Monitor can be used to monitor the performance of SQL Server and non-SQL Server-related components.

    For more information, see "Monitoring with System Monitor" at

  • If the performance issue is related to queries, which query or set of queries is involved? You can use SQL Profiler to help identify the slow query or queries.

    For more information, see "Monitoring with SQL Profiler" at

    The performance of a database query can be determined by using the SET statement to enable the following options:

    • SHOWPLAN_TEXT or SHOWPLAN_ALL, which display how the query optimizer will execute the query without actually running it.

    • STATISTICS IO, which reports the number of page reads the query performs.

    • STATISTICS TIME, which reports the parse, compile, and execution times of the query.

    The information gathered by these tools allows you to determine how a query is executed by the SQL Server query optimizer and which indexes are being used. Using this information, you can determine if performance improvements can be made by rewriting the query, changing the indexes on the tables, or perhaps modifying the database design.

    For more information, see "Analyzing a Query" at

  • Was the query optimized using useful statistics? Statistics on the distribution of values in a column are automatically created on indexed columns by SQL Server. They can also be created on nonindexed columns either manually, using SQL Query Analyzer or the CREATE STATISTICS statement, or automatically, if the auto create statistics database option is set to true. These statistics can be used by the query processor to determine the optimal strategy for evaluating a query.

    Maintaining additional statistics on nonindexed columns involved in join operations can improve query performance.

    For more information, see "Statistical Information" at

    You can monitor the query using SQL Profiler or the graphical execution plan in SQL Query Analyzer to determine if the query has enough statistics.

    For more information, see "Errors and Warning Events Category" at

  • Are the query statistics up-to-date? Are the statistics automatically updated? SQL Server automatically creates and updates query statistics on indexed columns (as long as automatic query statistic updating is not disabled). Additionally, statistics can be updated on nonindexed columns either manually, using SQL Query Analyzer or the UPDATE STATISTICS statement, or automatically, if the auto update statistics database option is set to true. Whether statistics are up-to-date does not depend on date or time data; if no UPDATE operations have taken place, the statistics are still up-to-date.

  • If statistics are not set to update automatically, you should set them to do so.

    For more information, see "Statistical Information" at

  • Are suitable indexes available? Would adding one or more indexes improve query performance?

    For more information, see "Index Tuning Recommendations" at

  • Are there any data or index hot spots? Consider using RAID and disk striping.

    For more information, see "Data Placement Using Filegroups" at

  • Is the query optimizer provided with the best opportunity to optimize a complex query?

    For more information, see "Query Tuning Recommendations" at
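The diagnostic steps in the checklist above can be worked through interactively in SQL Query Analyzer. The session below is a sketch; the query and table names are hypothetical, but the SET options and statements are standard Transact-SQL.

```sql
-- Show the execution plan without running the query.
SET SHOWPLAN_TEXT ON
GO
SELECT o.OrderID, c.CompanyName
FROM Orders o JOIN Customers c ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20040101'
GO
SET SHOWPLAN_TEXT OFF
GO

-- Measure actual I/O and CPU/elapsed time for the same query.
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT o.OrderID, c.CompanyName
FROM Orders o JOIN Customers c ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20040101'
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO

-- If the plan suggests missing or stale statistics, refresh or create them.
UPDATE STATISTICS Orders
CREATE STATISTICS stats_Orders_OrderDate ON Orders (OrderDate)
```

Comparing the page-read counts and plans before and after an index or statistics change gives a concrete measure of whether the change helped.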

Advanced Query Tuning Concepts

SQL Server performs sort, intersect, union, and difference operations using in-memory sorting and hash join technology. Using this type of query plan, SQL Server supports vertical table partitioning, which is sometimes called columnar storage.

SQL Server employs three types of join operations:

  • Nested loops joins

  • Merge joins

  • Hash joins

These types of join operations are discussed in the following subsections.

When to Use Nested Loops Joins

If one join input is quite small (such as fewer than 10 rows) and the other join input is fairly large and indexed on its join columns, an index nested loops join is the fastest join operation because it requires the least I/O and the fewest comparisons.

For more information about nested loops, see "Understanding Nested Loops Joins" at

When to Use Merge Joins

If the two join inputs are not small but are sorted on their join column (for example, if they were obtained by scanning sorted indexes), merge join is the fastest join operation. If both join inputs are large and the two inputs are of similar sizes, merge join with prior sorting and hash join offer similar performance. However, hash join operations are often much faster if the two input sizes differ significantly from each other.

For more information, see "Understanding Merge Joins" at

When to Use Hash Joins

Hash joins can process large, unsorted, nonindexed inputs efficiently. They are useful for intermediate results in complex queries because:

  • Intermediate results are not indexed (unless explicitly saved to disk and then indexed) and often are not suitably sorted for the next operation in the query plan when they are produced.

  • Query optimizers estimate only intermediate result sizes. Because estimates can be off by an order of magnitude in complex queries, algorithms to process intermediate results not only must be efficient, but also must degrade gracefully if an intermediate result turns out to be much larger than anticipated.

Hash joins reduce the need for denormalization. Denormalization is typically used to achieve better performance by reducing join operations, in spite of the dangers of redundancy, such as inconsistent updates. Hash joins also allow vertical partitioning (representing groups of columns from a single table in separate files or indexes) to become a viable option for physical database design.

For more information, see "Understanding Hash Joins" at
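When investigating which physical join the optimizer has chosen, Transact-SQL join hints can be used to force each strategy in turn and compare the resulting plans and timings. This is a diagnostic sketch with hypothetical tables; in production the optimizer's own choice is usually best, and hints should be a last resort.

```sql
-- Let the optimizer choose the join strategy (normally the right call).
SELECT o.OrderID, c.CompanyName
FROM Orders o
JOIN Customers c ON o.CustomerID = c.CustomerID

-- Force each physical join type to compare plans and response times.
SELECT o.OrderID, c.CompanyName
FROM Orders o
INNER LOOP JOIN Customers c ON o.CustomerID = c.CustomerID

SELECT o.OrderID, c.CompanyName
FROM Orders o
INNER MERGE JOIN Customers c ON o.CustomerID = c.CustomerID

SELECT o.OrderID, c.CompanyName
FROM Orders o
INNER HASH JOIN Customers c ON o.CustomerID = c.CustomerID
```

Running each variant with STATISTICS IO and STATISTICS TIME enabled shows concretely why the optimizer preferred one strategy over the others for the given input sizes.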

Network Traffic

The greatest barrier to achieving performance in any database client application will be the volume of network traffic that is generated. Network roundtrips describe the conversational traffic sent between the client application and SQL Server for each batch and result set. The following design strategies can help to minimize network traffic:

  • Use stored procedures.

    Stored procedures encapsulate programmatic logic in the database server. By using stored procedures in your applications, you can reduce the amount of conversational traffic between client and server, thus minimizing network roundtrips. For example, if your application takes different actions based on data values received from SQL Server, embed those decisions directly in a stored procedure whenever possible, eliminating the additional network traffic required to pass the data back to a client application that makes the same decision.

    Note, however, that — by default — when a stored procedure includes multiple statements, SQL Server sends a message to the client application at the completion of each statement that details the number of rows affected. Most applications do not need these messages. If you are confident that your applications do not need them, you can disable these messages to improve performance on a slow network. Use the SET NOCOUNT session setting to disable these messages for the application.

    For more information about SET NOCOUNT, see the Transact-SQL Reference in SQL Server Books Online at

    For more information about the benefits of using stored procedures, see "Effects of Stored Procedures on Application Performance" at

  • Retrieve small result sets.

    Retrieving needlessly large result sets (for example, thousands of rows) for browsing on the client adds CPU and network I/O load, makes the application less capable of remote use, and limits multiuser scalability. It is better to design the application to prompt the user for sufficient input so that submitted queries generate modest result sets.

    For more information, see "Optimizing Application Performance Using Efficient Data Retrieval" at

    Application design techniques that facilitate this approach include exercising control over wildcards when building queries, mandating certain input fields, disallowing ad hoc queries, and using the TOP and PERCENT clauses or the SET ROWCOUNT Transact-SQL statement to limit the number of rows returned by a query.

    For more information about using TOP and PERCENT, see "Limiting Result Sets Using TOP and PERCENT" at

    For more information about SET ROWCOUNT, see the Transact-SQL Reference in SQL Server Books Online at

  • Allow cancellation of a query in progress.

    Irresolvable performance problems can result when an application forces the user to restart the client computer to cancel a query. When an application cancels a query, such as by using the ODBC SQLCancel function, you must always either commit or roll back the transaction. Canceling the query alone does not commit or roll back a user-defined transaction; unless you commit or roll back, all locks acquired within the transaction are retained after the query is canceled.

    Note  This issue also applies to DB-Library and other APIs that you can use to cancel queries.

  • Always implement a query or lock time-out.

    You should design queries and locks to time-out. Make the appropriate API call to set a query time-out. For example, use the ODBC SQLSetStmtAttr function.

    For more information about setting a query time-out, see "SQLSetStmtAttr Function" at

    For more information about setting a lock time-out, see "Customizing the Lock Time-out" at:

  • Use application development tools that allow explicit control over the Transact-SQL statements sent to SQL Server.

    Some tools transparently generate Transact-SQL statements based on higher-level objects but do not provide crucial features such as query cancellation, query time-out, and complete transactional control. It is often not possible to maintain good performance or to resolve performance problems if the application generates transparent Transact-SQL statements because those statements do not allow explicit control over transactional and locking issues, which are critical to performance.

  • Do not mix decision support and online transaction processing (OLTP) queries.

    For more information, see "Online Transaction Processing versus Decision Support" at

  • Replace cursors with set-oriented Transact-SQL statements.

    When updating through a cursor, the client application requires the server to maintain row locks or version information for every row, just in case the client asks to update the row after it has been fetched. In set-oriented Transact-SQL statements, locks or version information are not required. The client application tells the server to update the set of records that meet specified criteria, and the server figures out how to accomplish the update as a single unit of work.

    Use of a cursor also implies that the server is maintaining client state information, such as the user's current rowset at the server. State information is usually maintained in temporary storage. Providing this type of temporary storage for a large number of clients is an expensive use of server resources. A better strategy with a relational database is for the client application to get in and out quickly, maintaining no client state at the server between calls. Set-oriented Transact-SQL statements support this strategy.

    However, if the query uses cursors, you should determine whether the cursor query could be written more efficiently by using either a more efficient cursor type, such as fast forward-only, or a single query.

    For more information, see "Optimizing Application Performance Using Efficient Data Retrieval" at

  • Keep transactions as short as possible. For more information, see "Effects of Transactions and Batches on Application Performance" at

  • Use prepared execution to execute a parameterized Transact-SQL statement.

    For more information, see "Prepared Execution" at

  • Always process all results to completion. Applications that stop processing result rows without canceling the query can lead to blocking and slow performance.

    For more information, see "Understanding and Avoiding Blocking" at

  • Ensure that your application is designed to avoid deadlocks.

    For more information about designing to minimize deadlocks, see "Minimizing Deadlocks" at

    For more information about deadlocks, see "Deadlocking" at

  • Ensure that all the appropriate options for optimizing the performance of distributed queries have been set.

    For more information about distributed queries in SQL Server 2000, see "Optimizing Distributed Queries" at
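Several of the recommendations above can be combined in a single stored procedure: fewer network roundtrips, SET NOCOUNT to suppress per-statement row-count messages, a set-oriented UPDATE instead of a cursor loop, a lock time-out, and a TOP clause to keep the result set modest. The procedure, table, and column names below are hypothetical.

```sql
-- Sketch combining several of the network-traffic recommendations.
-- All object names are illustrative.
CREATE PROCEDURE usp_RepriceCategory
    @CategoryID int,
    @Factor     decimal(5, 2)
AS
    SET NOCOUNT ON            -- suppress row-count messages per statement
    SET LOCK_TIMEOUT 5000     -- fail after 5 seconds rather than block forever

    BEGIN TRANSACTION

    -- One set-oriented statement replaces a row-by-row cursor loop;
    -- the server performs the update as a single unit of work.
    UPDATE Products
    SET UnitPrice = UnitPrice * @Factor
    WHERE CategoryID = @CategoryID

    COMMIT TRANSACTION

    -- Return a deliberately modest result set for the caller to display.
    SELECT TOP 50 ProductID, ProductName, UnitPrice
    FROM Products
    WHERE CategoryID = @CategoryID
    ORDER BY ProductName
GO
```

Because all the logic runs server-side, the client exchanges one call and one small result set with SQL Server instead of a conversational stream of statements.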


Test Tracking and Reporting

A key aspect of the Stabilizing Phase is the monitoring and recording of bugs and the rate at which they occur. You should use a tool for recording and tracking bugs. Microsoft Excel or an equivalent spreadsheet can provide a simple tracking method. Alternatively, there are many publicly available bug-tracking tools.

Bug Status and Life Cycle

Reporting, verifying, and fixing bugs follows a distinct workflow, and you should aim to use a tool that can track who found the bug, how to reproduce it, what state it is in now, and who is working on it. Although the names may change from organization to organization and from tool to tool, the life cycle of a bug is essentially:

  • New. This is a bug that has been identified by someone, with identifiable symptoms and, if possible, a set of reproduction steps that describe what led to the bug being noticed. A new bug has not yet been addressed by the development team.

  • Open. This bug has been accepted by the development team and verified as a bug. However, nobody is yet assigned to work on it.

  • Assigned. The bug has been assigned to a specific individual, who is working on a fix.

  • Fixed. The developer has released a new version of the code that is believed to address the problem. It is ready for retesting. If it still fails the test, the bug should be moved back to the assigned state with additional notes from the tester.

  • Closed. The retest has passed, and the bug is no longer present. Many organizations and systems will not allow a closed bug to be reopened, but require a fresh bug to be filed if the defect resurfaces. However, this is not a hard and fast rule.

At each change of status, team members should be notified of the change. For example, when a developer marks a bug as fixed, the test team should be aware that they need to rerun some tests to verify the bug fix and close it.

There are also certain state transitions that should not be permitted; for example, it does not make sense for a bug to go from Assigned to New (or Assigned to Closed). Only the test team should be permitted to close bugs when they are convinced that the problem has been addressed.
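One way to enforce the permitted transitions, if bugs are tracked in a database, is a lookup table of allowed moves checked by an update trigger. This is an illustrative sketch, not a feature of any particular tracking tool; the Bugs table and its columns are hypothetical.

```sql
-- Illustrative enforcement of the bug life cycle in a tracking database.
-- Assumes a hypothetical Bugs table with BugID and Status columns.
CREATE TABLE BugStatusTransition (
    FromStatus varchar(16) NOT NULL,
    ToStatus   varchar(16) NOT NULL,
    PRIMARY KEY (FromStatus, ToStatus)
)
INSERT INTO BugStatusTransition VALUES ('New',      'Open')
INSERT INTO BugStatusTransition VALUES ('Open',     'Assigned')
INSERT INTO BugStatusTransition VALUES ('Assigned', 'Fixed')
INSERT INTO BugStatusTransition VALUES ('Fixed',    'Assigned') -- failed retest
INSERT INTO BugStatusTransition VALUES ('Fixed',    'Closed')
GO

CREATE TRIGGER trg_Bug_StatusTransition ON Bugs FOR UPDATE
AS
    -- Reject any status change that is not in the transition table
    -- (for example, Assigned directly to Closed).
    IF EXISTS (SELECT *
               FROM inserted i
               JOIN deleted d ON i.BugID = d.BugID
               WHERE i.Status <> d.Status
                 AND NOT EXISTS (SELECT *
                                 FROM BugStatusTransition t
                                 WHERE t.FromStatus = d.Status
                                   AND t.ToStatus = i.Status))
    BEGIN
        RAISERROR ('Illegal bug status transition.', 16, 1)
        ROLLBACK TRANSACTION
    END
GO
```

Keeping the rules in a table rather than hard-coded logic lets the team adjust its workflow without changing the trigger.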

Bugs can have both a priority and a severity, though the two are often confused or merged. The severity of a bug describes how much of the system is affected by the bug, how easy it is to work around the issue, and how much of a problem it represents. For example:

  • Severity 1. This is a bug that can affect the performance, availability, or usability of the entire system for all users, with no simple work-around.

  • Severity 2. This is a bug that can lock out a single user; a bug that, while affecting all users, has a simple, effective workaround; or a bug that affects only one subsystem.

  • Severity 3. This is a bug that affects a small portion of one subsystem.

  • Severity 4. This is a minor bug, with a simple workaround. A typographical error on a screen of text that does not adversely affect the use of the application is an example of a bug of this severity.

There may be more (or fewer) severity levels in an organization, but four is fairly standard.

Priorities indicate how important it is to fix one bug before fixing others. Usually, severity 1 bugs will also be priority 1, but this is not always the case. For example, incorrect wording on a screen (normally a severity 4 bug) might lead testers toward incorrect actions, thereby triggering a severity 2 bug. It may be expedient to give the severity 4 bug priority 1 treatment, and then work on the severity 2 issue.

Bug Convergence

During initial testing, it is likely that testers will report bugs at a rate faster than the development team can address them. However, after this initial flurry, the rate at which new bugs are identified will decrease and the development team will be able to start catching up. Bug convergence is the point at which the team makes visible progress against the active bug count. It is the point at which the rate of bugs that are resolved exceeds the rate of bugs that are found. Because the bug rate will still vary — even after it starts its overall decline — bug convergence usually manifests itself as a trend rather than a fixed point in time.
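If bug records carry the dates on which they were reported and resolved, convergence can be watched with a simple weekly report. The Bugs table and its columns below are hypothetical.

```sql
-- Weekly found-versus-resolved counts from a hypothetical Bugs table.
-- Bug convergence is the point at which the resolved count
-- consistently exceeds the found count.
SELECT DATEPART(yy, ReportedOn) AS YearNo,
       DATEPART(wk, ReportedOn) AS WeekNo,
       COUNT(*)                 AS FoundCount
FROM Bugs
GROUP BY DATEPART(yy, ReportedOn), DATEPART(wk, ReportedOn)
ORDER BY YearNo, WeekNo

SELECT DATEPART(yy, ResolvedOn) AS YearNo,
       DATEPART(wk, ResolvedOn) AS WeekNo,
       COUNT(*)                 AS ResolvedCount
FROM Bugs
WHERE ResolvedOn IS NOT NULL
GROUP BY DATEPART(yy, ResolvedOn), DATEPART(wk, ResolvedOn)
ORDER BY YearNo, WeekNo
```

Charting the two series side by side makes the convergence trend visible even while the weekly counts fluctuate.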

Interim Milestone: Bug Convergence Reached

After bug convergence has been achieved, the number of bugs should continue to decrease until very few (if any) bugs are reported in a given time frame.

Zero Bug Bounce

In an ideal environment, you would not attempt to deploy the databases and applications until every possible bug has been identified and fixed. However, it is very difficult to assess when every bug has been located — testing can only show the presence of bugs rather than prove their absence.

Zero bug bounce is the point in the project when development resolves all the outstanding bugs raised by testing and there are no current active bugs. It is still probable that more bugs will be reported, and the rate at which they occur may again temporarily exceed the rate at which they can be fixed, but you should find that these occurrences happen less frequently as time goes by. Careful bug prioritization is vital because every bug that is fixed introduces the risk of creating a new bug or regression issue. Achieving zero bug bounce is a clear sign that the team is in the final stage as it progresses toward a stable system.

You can decide that a database or application is eligible for release when the frequency of bug reports drops below a certain level and the system is predominantly in a zero-bug state.

Interim Milestone: Zero Bug Bounce Achieved

At this point, there are no outstanding bugs identified, and the system is eligible for either an interim or full release.

Release Candidates

A release candidate is a fully-functional implementation of a migrated database or application. The first release candidate should be subjected to final rigorous functional testing by the test team before being released for user acceptance testing and then piloting. As bugs are found and fixed, further release candidates will be built, tested, and piloted.

Remember that there are fundamental differences between a release candidate client application and a release candidate database. Making a client application available for release is a matter of building a deployment package that can be installed onto a user's computer. The deployment package should automate the configuration process and ensure that the application is directed toward an appropriate database. Often, the application can be installed by the user and used immediately.

Releasing a database is a matter of building the new structures and importing the data. Although as much of this as possible should be automated through scripts, it will still require the specialist knowledge of SQL Server administration and support staff to perform the installation. The time required will also be significant, and the process should be scheduled for a period of time that minimizes the disruption to users running client applications. If possible, you should consider building the release database on an additional set of computers and then switching the network to direct client applications to the new configuration after the installation process has completed.

Interim Milestone: Release Candidate Built and Preproduction Testing Completed

The release candidate is ready for user acceptance testing and piloting.

User Acceptance Testing

User acceptance testing generally determines whether the application meets the customer acceptance criteria in relation to business requirements. In a migration project, you are testing an existing database design and application functionality that have already passed customer acceptance. In most cases, only the technological environment in which the functionality is delivered has changed.

For the purposes of a migration project, acceptance testing should focus on determining whether the implementation of the new technology components produces correct outcomes from the standpoint of the use scenarios on which the original applications are based. Acceptance testing should address such factors as whether actual output is as expected for a specific user scenario, whether transaction processing time meets preexisting standards, and whether the migrated environment supports established procedures as expected. If possible, you should use the acceptance tests and criteria for the original application as a base.

For database engine migration, a central aspect of acceptability is that the migrated database is accessible by the client applications without degrading performance.

You should seek to create automated scripts to perform acceptance testing because of the repeatability and accuracy required.
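The shape of such an automated acceptance run can be sketched simply: each scenario pairs an action with its expected outcome, and the harness records pass or fail per case. This is a minimal illustration under assumed names; in practice each run function would execute the migrated query or transaction against the SQL Server database, which is stubbed out here.

```python
def run_acceptance_tests(scenarios):
    """Run scripted acceptance scenarios and report pass/fail per case.

    Each scenario is a (name, run_fn, expected) triple. run_fn stands
    in for executing the real migrated query or transaction; here it
    is any callable returning the actual result.
    """
    results = {}
    for name, run_fn, expected in scenarios:
        try:
            actual = run_fn()
            results[name] = ("pass" if actual == expected
                             else f"fail: expected {expected!r}, got {actual!r}")
        except Exception as exc:
            # An error while running a scenario is also a failure.
            results[name] = f"fail: {exc}"
    return results
```

Because the scenarios are data rather than manual steps, the same suite can be rerun unchanged against every release candidate.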

Interim Milestone: User Acceptance Testing Complete

The databases and applications are functionally complete and their operation meets user expectations. The system is ready for an initial release.

Conducting a Pilot

Depending upon the complexity of the database and applications, you may have elected to conduct a pilot. Piloting involves deploying the databases and applications in an environment in which actual users perform real scenarios using a fully functional implementation of the solution. The duration of the pilot should be a predetermined, fixed period. You should also identify the success criteria of the pilot and agree on the criteria with the participants.

Note  Piloting differs from acceptance testing in that the pilot runs either in the production environment or something very similar. Acceptance testing is performed in a lab using the test environment.

Conducting a pilot also includes testing the accuracy of supporting documentation, training, and other noncode elements, such as the cutover and fallback procedures. The system must be fully supported during the pilot, and the procedures to be used by the support staff in the production environment must be adhered to.

A separate pilot test will check that the disaster recovery procedures for the database are adequate; the database should be "destroyed" in some manner, and then rebuilt using backups taken as part of the routine daily operations. The time taken to recover the system must be in line with the user acceptance criteria. If the database implements high availability, this test will verify the robustness of the solution (pilot users should be able to continue using the system while the broken parts are being repaired).
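The recovery-time requirement above is easy to check mechanically. The sketch below is an illustration under assumed names: the restore callable stands in for the real rebuild-from-backup procedure (for example, restoring the latest full backup plus transaction logs), and the criterion value would come from the user acceptance criteria.

```python
import time

def timed_recovery_test(restore_fn, max_minutes: float) -> dict:
    """Time a scripted restore and compare it to the acceptance criterion.

    restore_fn is a stand-in for the real disaster recovery procedure;
    max_minutes is the maximum recovery time the users have accepted.
    """
    start = time.monotonic()
    restore_fn()  # rebuild the "destroyed" database from routine backups
    elapsed_min = (time.monotonic() - start) / 60.0
    return {
        "elapsed_minutes": round(elapsed_min, 2),
        "within_criteria": elapsed_min <= max_minutes,
    }
```

Recording the measured time alongside the pass/fail result gives the review meeting evidence rather than just an assertion that recovery "worked".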

Creating Pilot Test Reports

Any issues that are raised and any bugs that are reported during the pilot must be recorded. You should also collect information on user feedback. Depending on the volume and type of bugs reported, it may be necessary to suspend the pilot or revert to a previous release. Alternatively, minor bugs can be addressed by applying a software update to the existing code base and continuing with the pilot.
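The suspend/update/continue decision can be expressed as a simple triage rule over the recorded severities. The thresholds and function below are illustrative assumptions; the actual limits would be set in the pilot plan agreed with the participants.

```python
def pilot_disposition(bug_reports, critical_limit=1, minor_limit=10):
    """Decide whether a pilot continues, takes an update, or is suspended.

    bug_reports is a list of recorded severities ('critical' or 'minor').
    The limits are placeholder values standing in for the thresholds
    the pilot plan would define.
    """
    critical = sum(1 for s in bug_reports if s == "critical")
    minor = sum(1 for s in bug_reports if s == "minor")
    if critical >= critical_limit:
        # Serious defects: stop and fall back to the previous release.
        return "suspend pilot / revert to previous release"
    if minor > minor_limit:
        # Many minor defects: patch the code base and keep piloting.
        return "apply software update and continue"
    return "continue pilot"
```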

Interim Milestone: Pilot Complete

If the pilot has been completed successfully, the system can be considered stable. The final release candidate can be upgraded to a full production release.

Customer Acceptance Testing

To indicate that the migration is complete and ready for deployment, carry out a deployment review during which you check the deployment against its requirements. Depending on the circumstances, this may take place as part of a user acceptance testing exercise or before acceptance by operations. Depending on who is accepting the deployment, the criteria for acceptance may vary, but at the very least it should encompass the following items:

  • SQL Server and databases

  • User accounts

  • User applications

  • Audit policy and security management

  • Network configuration and traffic

An agreed-upon set of acceptance tests should form the basis for obtaining project sign-off from the customer. You should plan, execute, review, and sign off the user acceptance tests in much the same way as the other prerelease tests described in this chapter. Indeed, there may be a subset of prerelease tests that also make up part of the acceptance test suite. You should compile a complete checklist of all tests that will make up the user acceptance. The checklist should include the following types of acceptance tests:

  • Installation. Initial verification of the database installation and configuration.

  • User database. Deployment of user database is complete.

  • User account. Deployment and setup of SQL Server and Windows accounts.

  • Application. User applications perform as expected with SQL Server 2000.

  • Application. UNIX applications interoperate as expected with SQL Server 2000, if applicable to your migration.

  • Operations. Routine operations and maintenance (for example, backup and restore, SQL Server jobs, and audit and activity logs).

  • User/business. To include ease of use, performance, and availability targets.

  • Support. To include verification that procedures are in place to resolve incidents and problems related to the deployed database systems.

A number of tools and facilities are available to support the acceptance testing activities. These include:

  • SQL Server activity/audit log

  • SQL Profiler

  • SQL Agent log

  • SQL Alert

  • Enterprise Manager, Current Activity panel

  • SQL Diagnostics (sqldiag.exe)

  • Stored procedures: sp_who, sp_lock, sp_monitor, sp_spaceused, sp_helpindex

  • Database Consistency Checker (DBCC) commands

  • Windows Performance Monitor

  • Windows event, security audit log files

  • Microsoft Operations Manager

Customer Acceptance Testing usually takes place on the Staging Hardware environment (see Chapter 6).

Closing the Stabilizing Phase

The Stabilizing Phase culminates with the Release Readiness Approved Milestone, which triggers the beginning of the deployment effort. From the migration project team's perspective, the project is about to enter its final phase. From the perspective of operations, the production environment, which they "own," is about to undergo a major change, and this may entail significant risk. Thus both groups are highly invested in the milestone meeting. (The operations group may refer to it as the Release Readiness Review.) This "go/no-go" meeting determines whether the new system is ready for deployment and whether all supporting migration activities have taken place. The purpose of this review is to ensure the quality of releases to the IT production environment.

The review meeting allows project stakeholders to indicate their final approval (or disapproval) and to raise any pertinent issues. This is the last chance to alter the deployment schedule except in the case of a crisis. All groups should be prepared for the deployment. It is important that operations staff members indicate that they are sufficiently knowledgeable and prepared to take on operational support of the new production system. Continuing beyond this point without a readiness review introduces large risks into the migration.

In preparing for the review, you should go over every deliverable produced during the project up to this point. Look for uncompleted tasks, missing deliverables, and unmet requirements. Ensure that the following circumstances are true:

  • The right people have been selected to fill the appropriate roles for the deployment.

  • A complete Deployment Site Survey has been conducted, including a survey of physical space, power resources, and network connectivity.

  • You have confirmed that the installed hardware, storage systems, network systems, and network provisioning and connectivity match the requirements for deployment.

  • You have validated network connectivity to UNIX systems if deploying into a heterogeneous environment.

  • You have validated that the systems integration and network connectivity provide security levels acceptable to the data center's standards.

  • You have validated that the database and software to be deployed have been properly tested.

  • The migration and operations teams share integration and acceptance plans for allowing the deployed database platforms into the production environment.

  • The teams have agreed on contingency plans for rolling back to the previous environment if deployment fails.

  • The deployment has been scheduled to ensure staffing availability during the required time period. This may include "dry run" testing, where the goal is to validate that the team is ready to do a real deployment. Normally, a dry run is conducted on the days before the deployment to help team members become familiar with the process.

  • The team is equipped to evaluate and document the deployment on a moment-by-moment basis, allowing for review, triage, and redefining of the deployment plan, if necessary.
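Because the review is a go/no-go gate, the checklist above rolls up naturally into a single recommendation: any unverified item blocks the release and should be listed for follow-up. The function below is a minimal sketch with an assumed name, not part of the guide's process documentation.

```python
def release_readiness(checklist: dict) -> str:
    """Roll a readiness checklist up into a go/no-go recommendation.

    checklist maps an item description to True (verified) or
    False (still open). Any open item blocks the release, and the
    open items are returned so they can be tracked to resolution.
    """
    open_items = [item for item, done in checklist.items() if not done]
    if not open_items:
        return "go"
    return "no-go: " + ", ".join(open_items)
```

Keeping the rollup mechanical ensures the meeting debates the open items themselves rather than whether anything was missed.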

Key Milestone: Release Readiness Approved

The Release Readiness Approved Milestone meeting must be conducted so as to encourage the identification and exploration of risks and warning signs. The agenda should include the following:

  • Evaluation of pilot test results:

    • Determine which tests passed and which failed, and examine the issues raised.

    • Determine whether any parts of the pilot process need to be changed or improved for production deployment.

    • Determine whether the migrated system is ready for production.

  • Evaluation of consolidation guidelines:

    • Evaluate the guidelines defined in Chapter 4, "Planning Phase: Database," and Chapter 5, "Planning Phase: Clients."

  • Review of management processes:

    • Change management

    • Availability management

    • Capacity management

  • Readiness of resources:

    • Personnel

    • Hardware and software

    • Skill and ability to handle change management process

  • Potential impact on other systems:

    • Application

    • Network

    • Operation and support

  • Customer Acceptance Complete

You should maintain a list of concerns, issues, and risks identified while preparing for and conducting the review, together with potential approaches for their mitigation or resolution.

During the Release Readiness Approved Milestone meeting, the stakeholders must decide whether to proceed with the deployment, perform additional work or mitigation followed by another review, or cancel the deployment entirely. Achieving this project milestone means that the project team, customer, and key stakeholders agree that the database and applications can be deployed in a full production environment. The documentation is complete, all operational procedures have been shown to be effective, and the database schema and applications are frozen. The project team, customer, and project stakeholders have agreed that the project is ready for deployment.

Note  For more information about the Release Readiness Review from the MOF perspective, see "Release Readiness Review — Microsoft Operations Framework" at

For more information about how MSF approaches risk management, see "Microsoft Solutions Framework — MSF Risk Management Discipline v.1.1" at

