More Showplan enhancements – Row Goal
Cross post with http://aka.ms/sqlserverteam
As I shared before, we have been working for over a year to make showplan the one-stop-shop for query performance analysis and troubleshooting (as much as possible).
In this article I’ll talk about one of these showplan improvements: a new operator property, EstimateRowsWithoutRowGoal, which assists in the discoverability of Optimizer row goal use and its impact on query execution. This will also be available in the upcoming SQL Server 2016 SP2.
So what is row goal?
When the Query Optimizer estimates the cost of an execution plan, it usually assumes that all qualifying rows from all tables have to be processed. However, some query patterns cause the Optimizer to search for a plan that will return a smaller number of rows, with the purpose of doing it faster. So row goal is a very useful optimization strategy for certain query patterns.
How is row goal used?
This can occur if the query specifies a target number of rows (a.k.a. row goal) that may be expected at runtime. When a query uses a TOP, IN or EXISTS clause, the FAST query hint, or a SET ROWCOUNT statement, that row goal is used as part of the query optimization process. If the row goal plan is applied, the estimated number of rows in the query plan is reduced, because the Optimizer assumes that a smaller number of rows will have to be processed, in order to reach the row goal.
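To make these patterns concrete, here are minimal sketches of each construct that can introduce a row goal (illustrative only, using the standard AdventureWorks sample tables):

```sql
-- TOP: the row goal is the TOP target
SELECT TOP (10) * FROM Sales.SalesOrderHeader;

-- EXISTS (and similarly IN): the subquery only needs one matching row
SELECT *
FROM Sales.SalesOrderHeader AS s
WHERE EXISTS (SELECT 1 FROM Sales.SalesOrderDetail AS d
              WHERE d.SalesOrderID = s.SalesOrderID);

-- FAST n query hint: optimize for returning the first n rows quickly
SELECT * FROM Sales.SalesOrderHeader
OPTION (FAST 10);

-- SET ROWCOUNT: limits rows returned by subsequent statements
SET ROWCOUNT 10;
SELECT * FROM Sales.SalesOrderHeader;
SET ROWCOUNT 0;  -- reset
```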
When the row goal is very low and a JOIN is required, the Optimizer prefers a nested loops join, because its initial cost (the cost to produce the first row) is relatively low. But other types of JOIN may also be used if the row goal is larger:
- A hash join is usually a good choice for joining larger inputs. Although it has a higher initial cost, because it has to build a hash table before any rows can be returned, once built, the hash join is generally cheaper.
- But if the two join inputs are sorted on their join column (for example, if they were obtained by scanning sorted indexes), a merge join is the fastest join operation and can be chosen instead.
Example of row goal benefits
Let’s use the following query in AdventureWorks2016, and look at the resulting query execution plan and execution metrics:
SELECT TOP (100) *
FROM Sales.SalesOrderHeaderBulk AS s
INNER JOIN Sales.SalesOrderDetailBulk AS d ON s.SalesOrderID = d.SalesOrderID
WHERE s.TotalDue > 1000
OPTION (RECOMPILE);
GO
Notice the nested loops plan driven by the low row goal (100):
This query executed in just over 1.6s with 40ms of CPU time. Let’s keep these in mind for later.
Looking at the properties of the outer input for the nested loops, notice this scan returned 1550 rows (Actual Number of Rows), and only read the 1550 rows that satisfy the pushed down predicate (Number of Rows Read). Good!
And we see row goal optimization was used to benefit performance, because without row goal, the estimated rows would be about 6M.
We can see this information in the new operator property EstimateRowsWithoutRowGoal. In this plan, the new property can be seen in this Clustered Index Scan, and also in other operators up the tree, like the Compute Scalars.
This new property is only added to a plan operator if row goal was evaluated and used – if not, this property is absent.
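Because the property only appears when a row goal was actually used, it can also serve as a search target. As a minimal sketch (assuming VIEW SERVER STATE permission), you could scan the plan cache for plans where the Optimizer applied a row goal by looking for the new attribute in the showplan XML:

```sql
-- Find up to 10 cached plans whose showplan XML contains the
-- EstimateRowsWithoutRowGoal attribute (i.e., a row goal was used).
SELECT TOP (10)
    st.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE N'%EstimateRowsWithoutRowGoal%';
```

The string search over the XML is deliberately simple; a production version might use XQuery with the showplan namespace instead, but a LIKE filter is enough to surface candidates for closer inspection.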
In fact, we can use a USE HINT to disable row goal optimization, and see how this really helped our query plan shape and execution metrics:
SELECT TOP (100) *
FROM Sales.SalesOrderHeaderBulk AS s
INNER JOIN Sales.SalesOrderDetailBulk AS d ON s.SalesOrderID = d.SalesOrderID
WHERE s.TotalDue > 1000
OPTION (RECOMPILE, USE HINT('DISABLE_OPTIMIZER_ROWGOAL'));
GO
Clearly, row goal was a benefit, as the execution time increased to just over 2.4s (about 50% worse). Row goal is a benefit most of the time it is used.
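As a side note, the USE HINT syntax was introduced in SQL Server 2016 SP1. On versions where it isn’t available, my understanding is that the same behavior can be obtained with the documented trace flag 4138, for example at the query level (QUERYTRACEON requires sysadmin membership):

```sql
-- Equivalent to USE HINT('DISABLE_OPTIMIZER_ROWGOAL') on older builds
SELECT TOP (100) *
FROM Sales.SalesOrderHeaderBulk AS s
INNER JOIN Sales.SalesOrderDetailBulk AS d ON s.SalesOrderID = d.SalesOrderID
WHERE s.TotalDue > 1000
OPTION (RECOMPILE, QUERYTRACEON 4138);
```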
Example of row goal that we can improve upon
Most of the time it’s a benefit, but not always, and it’s not always clear which, which is why the new information can be useful. Let’s see an example using another query:
SELECT TOP 250 *
FROM Production.TransactionHistory H
INNER JOIN Production.Product P ON H.ProductID = P.ProductID
OPTION (RECOMPILE)
GO
The resulting query execution plan is using a nested loops, as expected for a low row goal. For reference, also notice the execution metrics:
This singleton query executes very fast, no question. But let’s imagine you were seeing these numbers x100, and this query executed many times per minute.
I can see in the Clustered Index Scan below how the estimated rows without row goal (EstimateRowsWithoutRowGoal) is much larger than the number of rows read (Number of Rows Read): 113K vs. 250. Row goal drove some decisions in the Optimizer that seem beneficial, and produced a nested loops plan with a specific order of outer and inner tables – TransactionHistory and Product, respectively. On the inner table Product, SQL Server does just a seek for the required 250 lookups, as expected.
Keeping in mind the metrics above, and because we see row goal was used, I can try to disable row goal, and see if I get better performance.
SELECT TOP 250 *
FROM Production.TransactionHistory H
INNER JOIN Production.Product P ON H.ProductID = P.ProductID
OPTION (RECOMPILE, USE HINT('DISABLE_OPTIMIZER_ROWGOAL'))
GO
The resulting query execution plan now uses a hash join, and notice how the join inputs have changed order: Product is the build table, TransactionHistory the probe.
What about execution metrics? Better, no question.
The build table Product only has 504 rows (see below), and once the hash table is built, probing it is much cheaper than a nested loops join, which is what I see in this example. Even though the probe table TransactionHistory is much larger, we now only need to probe the 250 rows required by our TOP clause.
So disabling row goal is better in this case, yielding an 89% improvement in execution time (165ms to 18ms).
Before this showplan improvement, if you were tasked with analyzing a query using a pattern that may be using row goal, you could only guess if it was present. But with the EstimateRowsWithoutRowGoal property, it becomes possible to see if row goal was in fact used, and then engage in these tuning exercises.
Pedro Lopes (@sqlpto) – Senior Program Manager