A faster CHECKDB – Part IV (SQL CLR UDTs)

I have been working on various aspects of DBCC performance with SQL CLR based User Defined Data Types and encountered a few issues, outlined below.

1. Memory Grant Bug

Prior to SQL Server 2014, a bug causes the memory grant for DBCC operations (checktable, or checkdb per table) to be improperly estimated. Placing the session running the DBCC command in a specific Resource Governor workload group allows you to limit the memory grant size and increase DBCC performance. Reference the following link for more details: https://blogs.msdn.com/b/psssql/archive/2014/11/10/a-faster-checkdb-part-iii.aspx. On a 128 CPU, 512GB system I used a grant cap of 20GB with success, rather than accepting the 90GB default grant request.

Note: This issue has been corrected in SQL Server 2012 SP2 CU4 (https://support.microsoft.com/en-us/kb/3007556).
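
Workload Group Sketch:

The workaround can be sketched with Resource Governor as shown below. The names (dbccPool, dbccGroup, DbccMaintLogin) are hypothetical, and the grant percentage must be sized for your pool; this is an illustration, not a definitive configuration.

-- Cap the per-request memory grant for sessions running DBCC (hypothetical names)
CREATE RESOURCE POOL dbccPool;
CREATE WORKLOAD GROUP dbccGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 4)  -- size for your system
    USING dbccPool;
GO
-- Classifier (created in master) routes the dedicated DBCC login into the capped group
CREATE FUNCTION dbo.fnDbccClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'DbccMaintLogin'
        SET @grp = N'dbccGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnDbccClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;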

2. Blob Handle Factory

A fix has been released (Microsoft Bug Id: 3939015). Before the fix, the DBCC command(s) created an internal structure (a blob handle for each SQL CLR UDT based column as rows were processed) and failed to mark it for reuse when done processing the row. For each spatial row/column, DBCC walked the blob handle list, created a new entry, and added it to the list. This wasted memory and CPU as the list grew larger and larger and the entries could not be reused. The fix allows proper reuse of the blob handle factory (BHF).

The customer indicated DBCC checkdb took 22 days to complete on their current SQL Server 2012 build. With the QFE it now completes in 15.5 hours (DL980 G7 with 512GB RAM).

Download: https://support.microsoft.com/kb/3007556/en-us (SQL Server 2012 SP2 – CU4)
Article: https://support.microsoft.com/kb/3029825

3. Parallelism

DBCC does use parallel, internal queries when possible. In studying this closer I found that running individual checktable commands from multiple sessions can decrease the overall DBCC maintenance window.

DBCC checkdb loops over each table and executes the fact queries. The SQL engine may elect to execute the fact queries in parallel; however, checkdb only processes a single table at a time. On a larger system you may be able to take advantage of multiple checktable invocations on different schedulers.

The fact queries used by the DBCC command are prevented from using parallelism when a large UDT column is present. Spatial is somewhat analogous to varbinary(max) and falls under this limitation, which means the internal DBCC fact query against your largest table runs with a serial plan.

If you manually shard the table and create a covering view, you may be able to execute concurrent DBCC, index rebuild, and other operations faster.

Parallel Check Table Example (pseudocode):

DBCC CHECKALLOC
DBCC CHECKCATALOG
Parallel.ForEach(table in sys.tables)  -- Keep the number of parallel executions appropriate for resource consumption on the SQL Server
{
    DBCC CHECKTABLE(…) with …
}
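
One way to drive the pseudocode above from T-SQL is to generate the per-table commands and bucket them across worker sessions. This is only a sketch; the bucket count of 4 and the WITH options are assumptions to tune for your server.

-- Generate DBCC CHECKTABLE commands and assign each table to one of 4 worker sessions
SELECT N'DBCC CHECKTABLE (N''' + s.name + N'.' + t.name + N''') WITH NO_INFOMSGS;' AS cmd,
       ROW_NUMBER() OVER (ORDER BY t.object_id) % 4 AS session_bucket
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY session_bucket;

Run the commands for each bucket from its own connection, after DBCC CHECKALLOC and DBCC CHECKCATALOG have completed.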

Shard Example:

create view vwMyTable
as
select <<column list>> from MyTable_1
union all
select <<column list>> from MyTable_2
…

4. Trace Flag 2566 (Ignore Data Purity Checks)

Once data purity checks have completed successfully on an upgraded database, or the database was created on a newer version, the DATA_PURITY check becomes 'on by default.' DBCC DBINFO(<<DBNAME>>) shows the dbi_dbccFlags value; the 0x2 bit indicates data purity checks will run by default when checkdb or checktable is executed. You can avoid these checks using -T2566 or dbcc traceon(2566) in the same batch as the checkdb or checktable execution.

The trace flag skips the purity checks as long as the checkdb or checktable command does not specify WITH DATA_PURITY, which overrides the trace flag behavior.
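
As a sketch, checking the flag and skipping purity checks for a single run might look like the following (MyDatabase is a placeholder):

-- View dbi_dbccFlags (the 0x2 bit indicates DATA_PURITY is on by default)
DBCC DBINFO('MyDatabase') WITH TABLERESULTS;

-- Skip the purity checks for this execution only
DBCC TRACEON(2566);
DBCC CHECKDB('MyDatabase') WITH NO_INFOMSGS;
DBCC TRACEOFF(2566);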

My testing shows only a nominal performance gain.

Additional DBCC References

Part 1: https://blogs.msdn.com/b/psssql/archive/2011/12/20/a-faster-checkdb-part-i.aspx 
Part 2: https://blogs.msdn.com/b/psssql/archive/2012/02/23/a-faster-checkdb-part-ii.aspx 
Part 3: https://blogs.msdn.com/b/psssql/archive/2014/11/10/a-faster-checkdb-part-iii.aspx  

Spatial

Over the last 18 months Microsoft addressed various spatial performance issues. The QFE build mentioned above contains the following corrections; consider applying them to improve overall spatial performance on your servers.

In a nutshell: apply the latest Service Pack and latest CU for SQL Server and, in addition, enable the startup trace flags -T8048 and -T6531.
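
As a sketch, the flags are added as startup parameters (SQL Server Configuration Manager > Startup Parameters) and can be verified after the restart:

-- Startup parameters to add: -T8048 and -T6531
-- Verify after restart:
DBCC TRACESTATUS(8048, 6531);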

Details (Warning: mileage will vary based on data pattern):

https://support.microsoft.com/kb/2887888
https://support.microsoft.com/kb/2887899
https://support.microsoft.com/kb/2896720

https://blogs.msdn.com/b/psssql/archive/2013/11/19/spatial-indexing-from-4-days-to-4-hours.aspx
FIX: Slow performance in SQL Server when you build an index on a spatial data type of a large table in a SQL Server 2012 or SQL Server 2014 instance
Took an index build from 72 hours to 4 hours; requires trace flag -T8048.

https://support.microsoft.com/kb/2786212
FIX: Access violation occurs when you run a spatial query over a linked server in SQL Server 2008 R2 or in SQL Server 2012

https://support.microsoft.com/kb/3005300
FIX: High CPU consumption when you use spatial data type and associated methods in SQL Server 2012 or SQL Server 2014
May improve performance by 10% or more for spatial methods; requires trace flag -T6531.

https://support.microsoft.com/kb/2977271
FIX: Performance improvement for SQL Server Spatial data access in SQL Server 2012
Query that used to take 20+ hours now completes in under 2 hours.

Blob Handle QFE KB: 3029825

https://social.technet.microsoft.com/wiki/contents/articles/9694.tuning-spatial-point-data-queries-in-sql-server-2012.aspx
Tuning Spatial Point Data Queries in SQL Server 2012

Bob Dorr - Principal SQL Server Escalation Engineer