Dynamics NAV Performance Tooling, Part 1

This is the first in a series of posts to help you monitor and understand your system's performance indicators.
There are several areas to look into when monitoring your system for performance: hardware, setup and configuration, database maintenance, indexing and queries, and business logic. We will leave out the traditional topic of monitoring hardware performance and will not go into great detail on general SQL performance monitoring, as many tools and posts about these topics are already available.
Rather, we will focus on tools we use daily in the Dynamics NAV world, and in this first post we will start with the longest-running queries.

'Long running queries' is a script that gives you a snapshot of how well tuned your database is for your current usage and volume.
Along with other statistical data, query statistics can be retrieved from the SQL Server dynamic management views (DMVs). There are a number of ways and tools to retrieve them, and which tooling you use is a matter of personal preference. The query below was created and improved over time by engineers in the Dynamics NAV Support Team; although it is a standard SQL tool, it is one the team uses frequently. To get the longest-running queries, simply run the script from SQL Server Management Studio and direct the output to grid.

use master
go
SELECT TOP 200
    st.text,
    SUBSTRING(st.text, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS statement_text,
    qs.execution_count,
    CASE WHEN qs.execution_count = 0 THEN NULL
         ELSE qs.total_logical_reads/qs.execution_count END AS avg_logical_reads,
    qs.last_logical_reads,
    qs.min_logical_reads,
    qs.max_logical_reads,
    qs.max_elapsed_time,
    CASE WHEN qs.min_logical_reads = 0 THEN NULL
         ELSE qs.max_logical_reads / qs.min_logical_reads END AS diff_quota,
    ph.query_plan,
    CASE WHEN ph.query_plan.exist('declare namespace ns="http://schemas.microsoft.com/sqlserver/2004/07/showplan";(/ns:ShowPlanXML/ns:BatchSequence/ns:Batch/ns:Statements/ns:StmtCursor/ns:CursorPlan/@CursorRequestedType)[1]') = 0
         THEN ''
         ELSE ph.query_plan.value('declare namespace ns="http://schemas.microsoft.com/sqlserver/2004/07/showplan";(/ns:ShowPlanXML/ns:BatchSequence/ns:Batch/ns:Statements/ns:StmtCursor/ns:CursorPlan/@CursorRequestedType)[1]','nvarchar(max)')
    END AS cursor_type
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS ph
ORDER BY qs.max_logical_reads DESC

What the output shows is a list of the top 200 (a configurable number) most expensive queries run on your SQL Server, along with their execution plans. The statistics apply within SQL Server uptime, meaning they show the 200 worst queries since SQL Server was last restarted. If you want to 'reset' the cache you can run
DBCC FREEPROCCACHE at the end, but keep in mind the short-term performance penalty that clearing the cache has, until the cache is built up again.
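For reference, the reset itself is a one-liner, best reserved for establishing a clean baseline before a controlled measurement window:

```sql
-- Clears the entire plan cache; the statistics in sys.dm_exec_query_stats
-- start over from this point. Expect a short-term performance hit while
-- plans are recompiled.
DBCC FREEPROCCACHE;

-- Alternatively (SQL Server 2008 and later), a single plan can be evicted
-- by passing its plan_handle, avoiding the server-wide penalty, e.g.:
-- DBCC FREEPROCCACHE (0x06000500A1B2C3...);  -- plan_handle from sys.dm_exec_query_stats
```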

So once you get the output, what does it tell us? To continue, we can sort it by execution count, duration and reads, all in descending order (the original query is sorted by reads only). This is useful because there is little point in starting with queries that are rarely executed on your system; you want to focus on frequently run queries and their execution time and reads. Normally, many of the top queries will turn out to be our regular list- (and occasionally card-) page queries.
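If you prefer the script to come back pre-sorted this way, one option is to change its final ORDER BY clause. A minimal sketch against the same DMVs (the sort order shown is just one reasonable weighting; adjust to taste):

```sql
-- Re-sort the DMV output: frequently executed statements first, then by
-- cumulative elapsed time and cumulative logical reads, all descending.
SELECT TOP 200
    qs.execution_count,
    qs.total_elapsed_time,
    qs.total_logical_reads,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC,
         qs.total_elapsed_time DESC,
         qs.total_logical_reads DESC;
```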

As an example, a (list-) page query would typically look like this:

SELECT TOP (50) ISNULL("Sales Header"."timestamp",@1) AS "timestamp",ISNULL("Sales Header"."Document Type",@2) AS "Document Type",.........
..........OUTER APPLY ( SELECT TOP (1) ISNULL(MIN("Completely Shipped$Sales Line"."Completely Shipped"),@7) AS "Completely Shipped$Sales Line$MIN$Completely Shipped" FROM "Demo Database NAV (10-0)".dbo."test$Sales Line" AS "Completely Shipped$Sales Line" WITH(READUNCOMMITTED) WHERE ("Completely Shipped$Sales Line"."Document Type"="Sales Header"."Document Type" AND "Completely Shipped$Sales Line"."Document No_"="Sales Header"."No_" AND ISNULL("Completely Shipped$Sales Line"."Type",@2)<>@11)) AS "SUB$Completely Shipped" WHERE ("Sales Header"."Document Type"=@0) AND (ISNULL("Sales Header"."Document Type",@2)=@12 AND ISNULL("Sales Header"."No_",@3)>@13 OR ISNULL("Sales Header"."Document Type",@2)>@12) ORDER BY "Document Type" ASC,"No_" ASC OPTION(OPTIMIZE FOR UNKNOWN, FAST 50)

It is easily recognisable by SELECT TOP (50) .... and the 'joins' (OUTER APPLY ...) added for each flowfield on the page, provided the page has any.
These queries retrieve the entire record selection and all the included flowfields in one roundtrip (or 3, to be precise), and for that reason they are expected to return a relatively high number of reads (proportional to your data volume). So finding these on the long-running queries list is to be expected, and they are only worth looking into if accompanied by a long execution time. Their average execution time will be somewhat higher than that of other queries (due to the data volume they need to retrieve), but overly high values could deserve attention. If you run into one of these, we will show an example of how to deal with it in one of the coming posts.

Moving along, you will also notice typical server-side queries in this list. For example:

SELECT "timestamp","Type","Code","Sales Type","Sales Code" ......FROM "Demo Database NAV (10-0)".dbo."test$Sales Line Discount" WITH(READUNCOMMITTED) WHERE ("Type"=@0 ......

Here, high reads would warrant our attention. A high execution count with a high execution time alone would not necessarily imply a poor query; after all, the long execution time could be due to a wait on a lock release, for one. So it is when we also observe high reads that we are likely looking at a poorly performing query. This can be examined further by looking at the query plan (the rightmost column of the resulting output).
So if you have isolated such a query, with high execution time, high reads, and a high execution count, you will want to optimize it, typically by adding an appropriate index.
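As a sketch only: the index name and key columns below are hypothetical and must be derived from the actual WHERE clause and the query plan's missing-index suggestions. Also note that in Dynamics NAV you would normally add the index as a key on the table object, so the platform maintains it, rather than creating it directly in SQL.

```sql
-- Hypothetical index for the Sales Line Discount query shown earlier;
-- choose key columns to match the fields filtered in its WHERE clause.
CREATE NONCLUSTERED INDEX "SalesLineDiscount_Lookup"
ON dbo."test$Sales Line Discount" ("Type", "Code", "Sales Type", "Sales Code");
```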
If you run this at periodic intervals, you can make sure your application is sufficiently indexed for optimal daily usage.