SQL Q&A: 64-Bit Installations, Allocating Cluster Memory, and More

Edited by Nancy Michell

BOL Performance

Q Whenever I open SQL Server™ Books Online (BOL) on my desktop system it seems really slow, even though I have a fast machine. This doesn’t happen on other systems I use. What could be causing the delay?

A The holdup probably has to do with how the option for loading help content is set. Books Online has three options for displaying such content: the first tries online help first and then falls back to local help, the second tries local help first and then online, and the third checks local help only. This selection is made the first time you use Books Online, but you can change it.

Open Books Online, click Tools on the menu bar, click Options, and then point to Help | Online. Select the second or third option so that help loads locally first rather than waiting on an online check.

While you’re in Books Online, it never hurts to check the version to ensure you’ve got the latest and greatest release. Look in the title bar at the top of the screen. If the date is earlier than February 2007 or there’s no date at all, then you’re out of date and you should get the latest copy at microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx. Books Online is updated several times a year at this site.

32-Bit and 64-Bit Installations

Q I’m confused about which types and versions of SQL Server I can install on x86, x64/EM64T, and IA64 servers. I’ve been told SQL Server 2005 has different types of installations than SQL Server 2000. Is this true?

A SQL Server 2000 comes in two types of binary installations (and various editions): 32-bit and 64-bit. You can install the 32-bit version on x86 and on x64/EM64T, where it runs as a 32-bit application. The SQL Server 2000 64-bit version runs on the IA64 platform.

SQL Server 2005 comes in three types of binary installations (and various editions). You can install it on x86 or x64/EM64T as a 32-bit application, on x64/EM64T as a 64-bit application (provided you’re running an x64/EM64T Windows operating system), or on IA64 as a 64-bit application.
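
If you’re not sure which flavor is already running on a given box, a quick query like the following (a minimal sketch that works on both SQL Server 2000 and SQL Server 2005) reports the version and edition of the instance you’re connected to; on SQL Server 2005, the 64-bit builds include (X64) or (IA64) in the @@VERSION string.

-- Report version, edition, and platform for the connected instance.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('Edition')        AS Edition,
    @@VERSION                        AS VersionString;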

Finding SQL Server Information

Q Is there an easy way to find SQL Server-related information? I often find myself searching in a number of places and would like a better option.

A The Microsoft Web sites definitely have a lot of information on SQL Server, and there’s a great new resource that the SQL Server User Education team put together to help you find it. It’s a customized Windows Live™ search that scopes results to SQL Server Books Online (see Figure 1). Check it out at search.live.com/macros/sql_server_user_education/booksonline, and don’t forget to send your feedback to the team from there.

Allocating Cluster Memory

Q For my active three-way cluster (a fourth node is idle), is it best to set SQL Server max server memory to one-third of total memory, or is it OK to max out the SQL Server memory and then, during failover, let the failing instances fight for memory?

It seems better to plan ahead for the failing instances since, for example, two of the four servers could go down. In such a case, my five SQL Server instances and one Analysis Services instance would still have two other servers to fail over to.

Figure 1 SQL Server Books Online search in Windows Live

But let’s say SQL Server has all the RAM on a server and a new instance of SQL Server starts. What happens to the original SQL Server instance’s memory? It seems like I should plan for the worst-case scenario: that only one node stays up and all instances of SQL Server end up running on that node.

A It is generally agreed that it’s best to set max server memory to something less than the total physical memory in the machine. However, one-third of total memory is unlikely to be the best configuration in your situation.

You’re concerned that three of the four servers in your cluster might go down at the same time and that all of your instances would end up living on the remaining node. However, this scenario is unlikely.

In the majority of cases, you’re better off configuring each instance to use most of the memory available on each node. Let’s say that each node has 32GB of RAM. Setting max server memory to something like 28GB for each instance would be reasonable.
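
As a rough sketch of how that cap could be applied, the following sp_configure calls set max server memory to 28GB (the setting is expressed in megabytes); the 28672 value is just the figure from the example above and should be tuned to your own hardware.

-- Cap the instance at roughly 28GB on a 32GB node (value is in MB).
-- 'show advanced options' must be enabled before the setting is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;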

Now, in the event of a node failure, the instance running on that node would fail over to the passive node and could potentially gain access to 28GB of memory. The other two instances would remain unaffected; they would keep running on the other two nodes within the cluster.

You indicated a four-node failover cluster with five SQL Server instances and one Analysis Services instance. If you’re running SQL Server 2000 with Address Windowing Extensions (AWE), you’ll have to set some carefully selected memory caps for your relational engine instances, as they won’t give up memory they’ve acquired. It’s a different story with SQL Server 2005 and 64-bit instances, which have access to a large address space with or without AWE; even when you do use AWE in that situation, memory management is dynamic and memory can be released when requested. So what happens to the original SQL Server instance and the memory it has acquired when a new instance starts up on the box during a failover depends on whether you’re running SQL Server 2000 or SQL Server 2005 and whether you’re using AWE.
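
To see which case applies to a particular instance, you can read back the AWE setting and the current memory cap with sp_configure, a quick check that uses the standard option names in both releases:

-- Display the current AWE and max server memory settings for this instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled';
EXEC sp_configure 'max server memory (MB)';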

Planning for the worst-case scenario is a good idea in principle, but in practical terms, unless you’ve really over-engineered the system, you’re unlikely to survive three of four nodes going down in a cluster running five SQL Server relational engine instances and one Analysis Services instance. The remaining node won’t have enough CPU resources to service the original instances at a comparable performance level. That being the case, you’ll generally be better off engineering the solution so that optimal performance is maintained with the loss of one node, accepting that the loss of more than one might require a bit of manual intervention. This approach would likely deliver a good combination of high-availability (HA) support while still letting you get decent utilization of expensive resources.

Finally, SQL Server 2005 lets you scale a failover cluster out to eight nodes; with SQL Server 2000, a four-node cluster was the maximum possible.

Mirroring Limits

Q In reading up on SQL Server 2005 database mirroring, I found a limitation that database mirroring can support a maximum of only 10 databases per server instance. My requirements are to have two separate SQL Server instances with separate storage, automatic failover between the two, and data redundancy.

My plan was to use peer-to-peer (P2P) SQL Server 2005 transactional replication combined with Microsoft® Cluster Server (MSCS). I would have a cluster IP address sourced to one of the two nodes, so all insert, update, delete, and read traffic would go to a single node in the cluster. I’d treat the secondary node as passive, but transactions would be replicated to it. In the event of a failure, MSCS would fail the IP address over to the other node, and since replication is configured to be P2P and bidirectional, no manual process would be needed to begin publishing transactions back to the original node once it’s back up. I would use the majority node set (MNS) configuration within MSCS to mitigate the need for a shared disk resource for the quorum.

The mirroring option is still on the table. We’ve dropped the number from 1,000 to 100 databases per SQL Server cluster. Still, should I put that type of resource load on the systems?

A For general information and guidance on SQL Server mirroring, check out Books Online, the SQL Server TechCenter at microsoft.com/technet/prodtechnol/sql, and msdn2.microsoft.com/sql.

Your question prompted a lot of discussion. After sifting through the banter, we’ve pulled out the best responses regarding your scenario:

Response number one is that there is no enforced limit on the number of databases that can be mirrored in a given instance. I assume you’re referring to an article that mentions a best practice of mirroring no more than 10 databases per instance, but that figure is simply an approximation and no limit is actually enforced; it all depends on the resources available on a given system.
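
Since the practical limit is a function of resources rather than an enforced cap, one easy way to keep an eye on how many databases an instance is actually mirroring is to query the sys.database_mirroring catalog view, along these lines:

-- List the databases on this instance that are configured for mirroring.
-- mirroring_guid is NULL for databases that are not mirrored.
SELECT  DB_NAME(database_id)        AS database_name,
        mirroring_role_desc,
        mirroring_state_desc,
        mirroring_safety_level_desc
FROM    sys.database_mirroring
WHERE   mirroring_guid IS NOT NULL;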

Mirroring could be an option, but 1,000 databases is most definitely going to be pushing it unless you have a really high-powered server. If you don’t want to use shared storage and want to use built-in SQL Server technologies, you can try mirroring, log shipping, or replication. The pros and cons of each vary.

Response number two is that if you have a large number of databases, don’t want to use shared storage, and want a disaster recovery solution provided by Microsoft, you should investigate log shipping together with a set of small custom helper programs to ease the administration overhead. A program can back up and restore each database in order, which keeps you from having to run multiple log shipping agents. Note that this approach carries some chance of data loss; there is no built-in solution that avoids data loss without requiring either shared storage or database mirroring.
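
As a very rough sketch of what such a helper might look like in T-SQL (the share and file names here are placeholders, and a real version would time-stamp each backup file and handle errors), the following backs up the log of every eligible user database in turn:

-- Back up the transaction log of each online, non-SIMPLE user database.
-- The destination share and file naming are purely illustrative.
DECLARE @name sysname, @sql nvarchar(4000);

DECLARE db_cur CURSOR FOR
    SELECT name
    FROM   sys.databases
    WHERE  database_id > 4                      -- skip system databases
      AND  state_desc = 'ONLINE'
      AND  recovery_model_desc <> 'SIMPLE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BACKUP LOG ' + QUOTENAME(@name) +
               N' TO DISK = N''\\standby\logship\' + @name + N'.trn''';
    EXEC (@sql);
    FETCH NEXT FROM db_cur INTO @name;
END
CLOSE db_cur;
DEALLOCATE db_cur;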

Of course, there’s the issue of how MSCS is going to detect a failure and fail over. MNS configurations in a cluster don’t typically remove the need for shared storage altogether; they simply remove the necessity of having a dedicated shared LUN for the quorum. The only way the plan could work would be if the cluster were configured to manage nothing but the IP addresses and network names. SQL Server can’t be clustered without shared storage, so you couldn’t cluster the SQL Server service, which means the cluster wouldn’t be able to detect and monitor the SQL Server services in a typical configuration.

You could cluster the SQL Server service by adding it as a generic service to be monitored, but then a problem arises on failover: because the service would be clustered, MSCS would try to bring it online on the passive node, where a SQL Server service would already be running. Of course, you could use MSCS simply to cluster the IP address and network names and then manually fail them over when needed. You could do the same thing using Network Load Balancing (NLB), an approach that has been used elsewhere. Take a look at the paper at microsoft.com/technet/prodtechnol/sql/2000/deploy/hasog04.mspx for more information.

Given the number of databases you’re trying to support, a log shipping solution might work out better for you. Of course, you would need to automate the process using a Data Transformation Services or SQL Server Integration Services (DTS/SSIS) package; otherwise it would probably take too long to manually back up, move, and apply the active logs during a failure.
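
For reference, the apply side of that process comes down to restoring each shipped log with NORECOVERY until you actually need to fail over; a minimal example (the SalesDB database name and the path are hypothetical) looks like this:

-- Apply a shipped log backup on the standby, leaving the database
-- able to accept further log restores.
RESTORE LOG SalesDB
FROM DISK = N'\\standby\logship\SalesDB.trn'
WITH NORECOVERY;

-- At failover time, bring the standby database online:
-- RESTORE DATABASE SalesDB WITH RECOVERY;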

Thanks to the following Microsoft IT pros for their technical expertise: Boris Baryshnikov, Chad Boyd, Alan Doby, Patrick Gallucci, Cindy Gross, Chuck Ladd, Pat Martin, Vipul Shah, Nelson Taggart, Kartik Tamhane, Ken Tanner, Chuck Timon, George Walters, Kevin Warren, and especially Buck Woody.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.