Minimizing DB Migration Downtime Using Log Shipping

A common and often unpleasant task for DBAs involves moving databases between physically separate SQL Server instances. This could be due to a data center move, hardware refresh, SQL Server version upgrade, server consolidation, company merger, and more. Whether you have a small number of large databases or a large number of small databases, this task can be quite tedious and time consuming. To complicate matters, DBAs are being pressed more and more to reduce downtime for their applications. What options exist for DBAs to migrate continuously growing databases within the constraints of shrinking maintenance windows?

Log shipping can be an effective approach to this problem. Log shipping keeps a warm standby copy of your production database up to date by replaying transaction log backups on a secondary instance. This allows you to move large chunks of data long before your maintenance window and keep the data up to date until it is time to switch over to your new SQL instance. In essence, log shipping lets you nearly complete the data migration before the cutover event. With this approach, the primary downtime required is the time to copy and restore a final transaction log backup and bring the database online, which may well be less than the time required to repoint your applications to the new SQL instance anyway.


You may be asking yourself: why not use database mirroring instead?

Mirroring is another great option, but there are some limitations and risks with mirroring that may make log shipping more desirable. First of all, with SQL Server 2005 through 2008 R2, you can have only one mirror. If you are already using mirroring for HA/DR purposes, that takes mirroring off the table for data migration, unless you are willing to drop the HA/DR mirror for the duration of the migration. Log shipping, however, can coexist with mirroring, and you can log ship to more than one destination.

The other main reason I like log shipping over mirroring for migrating data is log retention. If mirroring gets disconnected or is unable to keep up due to network bandwidth, the primary database's log file can fill up disk space. With log shipping, only transaction log backup files are moved; the log backup job continues to run locally regardless of copy or restore failures, so nothing prevents the production transaction log from clearing. This should be a very low risk in most environments, but in large-scale environments it is often best to minimize every risk as much as possible.
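If you want to verify that nothing is holding up log truncation on the primary while log shipping runs, a quick check against a standard DMV looks like this (a sketch; the database name is a placeholder):

```sql
-- Show what, if anything, is preventing transaction log truncation.
-- LOG_BACKUP simply means the log is waiting on the next log backup (normal).
SELECT name,
       log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'YourDatabase';  -- hypothetical database name
```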


There are four main milestones to ensure success with this approach: planning, implementation, monitoring, and cutover. I will note a few tips and gotchas for each.


Planning
Determine how you will get the initial full backup to the destination instance, and how long it will take to back up, copy, and restore it there. With this information, you can determine how long you will need to retain transaction log backups while this is occurring. Ask yourself: do you have enough disk space to hold these transaction log backups on your existing instances?
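Your recent backup history is a good starting point for these estimates. A sketch of a query against msdb (database name is a placeholder) that shows the size and duration of recent full backups:

```sql
-- Review recent full backup sizes and durations from msdb history
-- to help estimate the initial backup/copy/restore window.
SELECT TOP (5)
       bs.database_name,
       bs.backup_start_date,
       DATEDIFF(MINUTE, bs.backup_start_date, bs.backup_finish_date) AS duration_minutes,
       bs.backup_size / 1048576.0 AS backup_size_mb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = N'YourDatabase'  -- hypothetical database name
  AND  bs.type = 'D'                       -- D = full database backup
ORDER BY bs.backup_start_date DESC;
```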

Also, give yourself plenty of time to implement log shipping. I personally like to allow enough time to perform the full backup, copy, and restore two or three times in case there is an unexpected error. There is little overhead on the primary instance when using log shipping, so having it running for a few weeks or even a month is typically not a problem. Any overhead would primarily be the network bandwidth used to copy transaction log backups.

Can your destination instance copy the transaction log backup files? The domain account that runs the SQL Server Agent service will need rights to the share on the primary instance in order to copy the files. Be sure to obtain the proper file system rights and firewall rules for this to happen (port 445, assuming no NetBT). Also make sure that the servers hosting each SQL instance can make a SQL connection to the other instance; this is required for the log shipping monitor.

Make sure that all server-level objects required by your database exist on the secondary instance. You will need to recreate any logins, jobs, linked servers, certificates, or other objects your database depends on that do not live in the database itself.
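One way to inventory the logins you will need to recreate is to map the database users back to their server logins on the primary. A sketch (database name is a placeholder; the principal_id filter simply skips the built-in principals such as dbo and guest):

```sql
-- Run on the primary: list database users whose server login
-- must also exist on the secondary instance.
USE YourDatabase;  -- hypothetical database name
SELECT dp.name AS database_user,
       sp.name AS server_login,
       sp.type_desc
FROM   sys.database_principals AS dp
JOIN   sys.server_principals  AS sp
       ON dp.sid = sp.sid
WHERE  dp.principal_id > 4;  -- skip dbo, guest, sys, INFORMATION_SCHEMA
```

Remember that SQL logins recreated on the secondary must keep the same SID, or the database users will be orphaned after the restore.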

The last thing I want to mention in planning is test, test, test, and test! Plan to test not only your procedures for migrating the database, but also make sure your applications have a test plan against the migrated target environment.


Implementation
Implementing log shipping is relatively straightforward. I am not going to go into too much detail here, as there is plenty of information in Books Online and in MSDN articles on setting up log shipping. In my next blog post, I will provide some sample scripts that can help set up log shipping for a large number of databases, a scenario where the log shipping wizard quite simply falls short, unless you really enjoy going through the wizard for each database.
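At its core, the mechanism the wizard automates is just a restore chain. A minimal sketch of initializing the secondary by hand (all paths, file names, and logical file names here are examples only):

```sql
-- On the secondary: restore the initial full backup without recovery
-- so transaction log backups can be applied afterward.
-- (Use WITH STANDBY instead of NORECOVERY for a read-only standby copy.)
RESTORE DATABASE YourDatabase
FROM DISK = N'\\fileshare\backups\YourDatabase_full.bak'
WITH NORECOVERY,
     MOVE N'YourDatabase_Data' TO N'D:\Data\YourDatabase.mdf',
     MOVE N'YourDatabase_Log'  TO N'L:\Log\YourDatabase.ldf';

-- Each shipped transaction log backup is then applied in sequence:
RESTORE LOG YourDatabase
FROM DISK = N'\\fileshare\backups\YourDatabase_0100.trn'
WITH NORECOVERY;
```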


Monitoring
For the most part, using the Log Shipping Monitor and keeping track of the SQL Agent jobs will keep an eye on log shipping for you. One "gotcha" here is that the SQL Agent restore jobs may complete successfully without actually restoring any transaction log backup files. This is by design, and the log shipping monitor should alert you if the database falls out of date past the predefined thresholds. I also like to keep an eye on the log_shipping_secondary_databases table in msdb, as it shows the last restore date and time. If you set up log shipping in standby mode, you can also run a few validation queries against the secondary database over time to check that the data is current.
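The check against that msdb table is a one-liner on the secondary; a sketch:

```sql
-- On the secondary: show the last restored log backup per log-shipped database.
-- A stale last_restored_date means the restore job is completing
-- without actually applying new files.
SELECT secondary_database,
       last_restored_date,
       last_restored_file
FROM   msdb.dbo.log_shipping_secondary_databases;
```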


Cutover
This is often the scariest time. Even though everything looks perfect, you never know until you bring the database online! Make sure you are confident the application is no longer changing data in your primary database. You can then take a final transaction log backup and restore it to your secondary instance WITH RECOVERY to bring the database online. Warning: after recovery has completed, you cannot apply any more logs, so this is the point of no return. Be certain you have applied all necessary transaction log backups before bringing the database online. I then suggest immediately taking a full backup followed by running DBCC CHECKDB. These can be run with users in the database, but the safest thing is to run them while the application is still offline. If the secondary instance is a higher version than the primary (for example, upgrading from SQL Server 2005 to SQL Server 2008 R2), you may need to raise the database's compatibility level and update statistics. Part of this process also includes removing the log shipping configuration, which can be performed at the primary instance.
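The cutover sequence above can be sketched as follows. All names and paths are placeholders, and the compatibility level shown assumes a migration to SQL Server 2008 R2 (level 100); adjust for your target version:

```sql
-- On the primary: final log backup once application activity has stopped.
BACKUP LOG YourDatabase
TO DISK = N'\\fileshare\backups\YourDatabase_final.trn';

-- On the secondary: apply the final log backup and bring the database online.
-- WARNING: after WITH RECOVERY, no further log backups can be applied.
RESTORE LOG YourDatabase
FROM DISK = N'\\fileshare\backups\YourDatabase_final.trn'
WITH RECOVERY;

-- If migrating to a newer version, raise the compatibility level
-- and refresh statistics (100 = SQL Server 2008 / 2008 R2).
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;
EXEC sp_updatestats;

-- Then take a full backup and verify integrity.
BACKUP DATABASE YourDatabase
TO DISK = N'\\fileshare\backups\YourDatabase_postmigration.bak';
DBCC CHECKDB (YourDatabase) WITH NO_INFOMSGS;
```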


Hopefully this gives you another option to consider when tasked with migrating large, highly available databases from one instance to another. In my next post I will outline a scripted implementation of log shipping for an instance with a large number of databases, including some sample scripts.