SQL Q&A: Finding Locks, Large Queries, I/O Statistics, and More

Edited by Nancy Michell

Download the code for this article: SQLQandA2007_08.exe (151KB)

Q I need to view the names of the objects in all databases that have locks on them. How can I do this?

A In SQL Server™ 2000, you can query the syslocks system table in Master or run sp_lock to get current lock information, like so:

SELECT * FROM master..syslocks
EXEC sp_lock

However, let's say you want to translate the ObjId values in the output of the sp_lock stored procedure (or the id values in master..syslocks) into actual object names.

In SQL Server 2005 SP1 and earlier, the OBJECT_NAME function accepted only one parameter: object_id. So, to get the correct object name, you had to be connected to the database that contained the object before calling OBJECT_NAME. This made monitoring current locks difficult, as you had to write custom code that looped through each database to resolve each name.
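Before SP2, the workaround amounted to switching into the database that owns the object before resolving the name. Here's a minimal sketch (the database name and object ID below are placeholders for values taken from your lock output):

EXEC ('USE AdventureWorks; SELECT OBJECT_NAME(1977058079) AS ObjectName')

Repeating this for each database (or driving it from a cursor over the list of databases) is exactly the kind of custom looping code that SP2 makes unnecessary.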


SQL Server 2005 SP2 improves on this by adding a second parameter, database_id. The new parameter lets you resolve object names in any database, regardless of which database you're currently connected to.

OBJECT_NAME ( object_id [, database_id ] )

Now you can query sys.dm_tran_locks and retrieve the object name for each database, like this:

SELECT OBJECT_NAME(resource_associated_entity_id, resource_database_id)
FROM sys.dm_tran_locks
WHERE resource_type='OBJECT'

Note, however, that this only works on SQL Server 2005 SP2 and later. If you run this in an earlier version of SQL Server 2005, you'll get this error:

Msg 174, Level 15, State 1, Line 1
The object_name function requires 1 argument(s).

Another example, shown in Figure 1, joins sys.dm_exec_sessions to get information about the server process IDs (SPIDs) involved in the locks. For more information, see SQL Server Books Online at technet.microsoft.com/library/ms130214(sql.90).aspx.

Figure 1 Finding server process IDs

SELECT
    DB_NAME(resource_database_id) AS DBName,
    OBJECT_NAME(resource_associated_entity_id, resource_database_id) AS ObjectName,
    es.session_id, es.login_name
FROM sys.dm_tran_locks tl
INNER JOIN sys.dm_exec_sessions es
    ON tl.request_session_id=es.session_id
WHERE resource_type='OBJECT'

Q I have a large, dynamic SQL query that seems to sometimes exceed the length of NVARCHAR(max). Is there any way around this that still permits me to execute one big string?

A If you're exceeding the length of NVARCHAR(max), you've got a 2GB query! The more likely culprit is the concatenation itself: if the pieces you concatenate are plain NVARCHAR rather than NVARCHAR(max), the intermediate result is silently truncated, so you probably need to convert the concatenated strings to NVARCHAR(max) as well. However, a more convenient workaround, which also has the advantage of working in versions of SQL Server before SQL Server 2005, is to concatenate a number of smaller strings inside the EXEC statement. Here's an example:

DECLARE @q1 NVARCHAR(4000), @q2 NVARCHAR(4000), @q3 NVARCHAR(4000)
SET @q1 = 'SELECT...'
SET @q2 = 'FROM...'
SET @q3 = 'WHERE...'
EXEC (@q1 + @q2 + @q3)
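If you'd rather build a single large string in SQL Server 2005, you can keep the whole concatenation in NVARCHAR(max) by converting the first operand; here's a minimal sketch:

DECLARE @q NVARCHAR(max)
SET @q = CONVERT(NVARCHAR(max), 'SELECT...') + 'FROM...' + 'WHERE...'
EXEC sp_executesql @q

Because one operand is NVARCHAR(max), the intermediate results are not truncated at 4,000 characters.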

TIP: Using the OUTPUT Clause

You can now audit changes you make with Data Manipulation Language (DML) statements without even using triggers. SQL Server 2005 introduced an OUTPUT clause as part of DML statements that helps you track the changes made during any DML operation. The OUTPUT clause can save the resultset in a table or table variable.

The functionality is similar to that of the INSERTED and DELETED virtual tables in triggers, which provide access to the rows modified during a DML operation. To see how this works, let's update the address in the Address table to the reverse of its original value, as shown in the figure below. You can also use the logic shown in this code to track any changes you make to the data and store them in a table.

The output from the query will look like this:

Original Value:'1234 One Microsoft Way, Redmond, Wa.' has been changed to: '.aW,dnomdeR 
,yaW tfosorciM enO 4321'

Changing an address

--Create the address table
Create Table Address (ProductID Int, SupplierID Int, Address Varchar(255))

--Insert data
Insert into Address Values (234,567,'1234 One Microsoft Way, Redmond, Wa.')

--Declare a table variable
Declare @RecordChanges table (change Varchar(255))

--Update the address
Update Address
Set Address=reverse(Address)

--Record the updates into the table variable
OUTPUT 'Original Value:' + DELETED.Address + ' has been changed to: ' + INSERTED.Address
into @RecordChanges

--Query the changes from the table variable
Select * from @RecordChanges
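The OUTPUT clause works the same way with INSERT and DELETE statements. For example, a delete against the same Address table could be audited into the same table variable; here's a minimal sketch:

--Delete the row, recording what was removed
Delete Address
OUTPUT 'Deleted value: ' + DELETED.Address
into @RecordChanges
Where ProductID = 234

Note that a DELETE exposes only the DELETED virtual table to the OUTPUT clause, just as an INSERT exposes only INSERTED.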

Q I have a mission-critical SQL Server 2005 cluster installation on Windows Server® 2003. The Microsoft® Distributed Transaction Coordinator (MS DTC) has been "clustered" on the same group with the Cluster Quorum, but is using a dedicated disk resource (MS DTC is sharing the same network name and IP address of the Quorum Cluster Group). I want to align this configuration with Microsoft best practices. Therefore, I need advice on moving the MS DTC to a dedicated cluster group. Is it enough to remove the MS DTC services using the Cluster Admin Tool and recreate it on a dedicated group?

A Since you already have a dedicated disk resource for MS DTC, you should be able to simply remove the resource and create it in a new group. You need to create a new network name and virtual IP address in the new group too.

Alternatively, you could create the new network name and IP address in the cluster group and change the dependencies to the new resources. You can then drag MS DTC to a new group and it will take the dedicated disk and new resources with it.

Q I need to be able to view I/O statistics on the physical database files in a database. What should I use?

A The system function fn_virtualfilestats (available in both SQL Server 2000 and SQL Server 2005) and the dynamic management function sys.dm_io_virtual_file_stats (SQL Server 2005 only) do exactly what you're looking for. They return statistical information gathered since the last time the instance of SQL Server was started. Sample results are shown in Figure 2.

Figure 2 Viewing I/O statistics on a database

DbId FileId TimeStamp NumberReads NumberWrites BytesRead BytesWritten IoStallMS
20 1 250765718 381 0 3350528 0 951
20 2 250765718 12 8 409600 491520 0
20 3 250765718 5 0 40960 0 16

Understanding the I/O impact on the underlying data files can help you with tasks such as planning the physical placement of files and file groups on data volumes, detecting possible I/O bottlenecks, and performing file-level database maintenance. This function is particularly useful for examining the I/O impact on large databases, where you may have multiple files and file groups.

The query for displaying file I/O info for SQL Server 2000 looks like this:

SELECT * FROM ::fn_virtualfilestats(default,default)

To view a specific database, pass its database ID, like so:

SELECT * FROM ::fn_virtualfilestats(7,default)

Here's the SQL Server 2005 code that shows file statistics for all databases on the server:

SELECT * FROM ::fn_virtualfilestats(NULL,NULL)

The following query returns file statistics for only the current database:

SELECT * FROM ::fn_virtualfilestats(NULL,NULL)
WHERE DbId=db_id()

For SQL Server 2005, there is also a new system function called sys.dm_io_virtual_file_stats, which is intended to eventually replace the legacy function fn_virtualfilestats:

sys.dm_io_virtual_file_stats (
    { database_id | NULL },
    { file_id | NULL }
)

Here's how to use it:

SELECT * FROM sys.dm_io_virtual_file_stats(NULL,NULL)
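To make the raw numbers easier to read in SQL Server 2005, you can join the function's output to sys.master_files to pick up the database and file names; here's a minimal sketch (the columns selected are just illustrative):

SELECT DB_NAME(vfs.database_id) AS DBName, mf.name AS FileName,
    vfs.num_of_reads, vfs.num_of_writes, vfs.io_stall
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id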

If you want to generate a more readable report that shows the actual database names and file names from the output, you can use the code in the accompanying download on either SQL Server 2000 or SQL Server 2005. The code is available from the TechNet Magazine Web site.

Q I need an easy way to see if a delete transaction has fired a trigger. Do you have any ideas for how I can get this information?

A When dealing with triggers that handle delete, insert, and update actions, people commonly use a number of different techniques for determining whether a trigger was fired by a delete transaction. A popular method is to compare the counts of the inserted and deleted virtual tables to see if they match. However, there is a simpler method: you can use the Columns_Updated function instead.

That's because when a trigger is fired by a delete transaction, Columns_Updated always returns the varbinary value 0x. The following check will verify that the trigger was fired by a delete:

IF Columns_Updated() = 0x
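In context, the check looks something like this minimal sketch (the table and trigger names here are hypothetical):

Create Trigger trg_MyTable_Audit On MyTable
After Insert, Update, Delete
As
If Columns_Updated() = 0x
    Print 'This trigger was fired by a delete'
Else
    Print 'This trigger was fired by an insert or update'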

TIP: Initializing Data and Log Files

Did you know that data and log files are initialized to overwrite any existing data left on the disk from previously deleted files? Data and log files are first initialized by filling the files with zeros when you create a database; add files, log or data, to an existing database; increase the size of an existing file (including autogrow operations); or restore a database or file group. File initialization causes these operations to take longer. However, when data is written to the files for the first time, the operating system does not have to fill the files with zeros.

In SQL Server 2005, data files can be initialized instantaneously. This capability allows for fast execution of the previously mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros. Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously. Instant file initialization is available only on Windows XP Professional and Windows Server 2003 or later versions.

Because the deleted disk content is overwritten only as new data is written to the files, the deleted content might be accessed by an unauthorized principal. While the database file is attached to the instance of SQL Server, this information disclosure threat is reduced by the discretionary access control list (DACL) on the file. This DACL allows file access only to the SQL Server service account and the local administrator. However, when the file is detached, it may be accessed by a user or service that does not have SE_MANAGE_VOLUME_NAME. A similar threat exists when the database is backed up. The deleted content can become available to an unauthorized user or service if the backup file is not protected with an appropriate DACL.

If the potential for disclosing deleted content is a concern, you should always make sure that any detached data files and backup files have restrictive DACLs. Also, disable instant file initialization for the instance of SQL Server by revoking SE_MANAGE_VOLUME_NAME from the SQL Server service account.

Thanks to the following Microsoft IT pros for their technical expertise: Christian Bolton, Dan Carollo, Robert Davis, Jorge Guzman, Saleem Hakani, Ward Pond, Kalyan Yella, and Paolo Zavatarelli.

© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.