
When To Help The Cardinality Estimator


So, the usual story: the query works well for small data sets but does not perform optimally for large ones. What are my options?

  • Recompile the view/function/procedure
  • Update statistics
  • Add an OPTIMIZE FOR UNKNOWN hint for the @Variable
  • Look for missing indexes
  • Rewrite the query
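For reference, the first three options translate to T-SQL roughly like this (a sketch only; the procedure name and values are placeholders, not the real objects):

-- Option 1: throw away the cached plan so the next run recompiles
EXEC sp_recompile N'dbo.GetCustomerTransactions';

-- Option 2: refresh the statistics the optimizer plans against
UPDATE STATISTICS DetailTransactionLineItem WITH FULLSCAN;

-- Option 3: ask the optimizer to plan for an "average" value
-- rather than the specific one it sees at compile time
DECLARE @CustomerID INT = 42;
SELECT COUNT(*)
FROM DetailTransactionLineItem
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));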

The low-hanging fruit
Generally, the top two items will get you over the hump, but without an OPTIMIZE FOR hint you can't get past the parameter sniffing problem, and without the index you would keep running into the same problem over and over again.
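Parameter sniffing, in short: the plan compiled for the first parameter value gets cached and reused for every later value. A hypothetical illustration (the procedure name and values are made up):

-- Compiled on first execution with whatever value is passed in
CREATE PROCEDURE dbo.GetCustomerTransactions
    @CustomerID INT
AS
BEGIN
    SELECT T.TranID
    FROM [Transaction] T
    WHERE T.CustomerID = @CustomerID;
END
GO

-- The optimizer "sniffs" @CustomerID = 1 (a tiny customer) and
-- caches a plan shaped for a handful of rows
EXEC dbo.GetCustomerTransactions @CustomerID = 1;

-- A huge customer then reuses that tiny-customer plan
EXEC dbo.GetCustomerTransactions @CustomerID = 42;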

Long term
Rewrite the query so the index is no longer needed, and if you get really lucky it may be possible to avoid the OPTIMIZE FOR hint as well.

How I went about solving the problem
The fat line in the execution plan indicated where the problem existed, but what would it take to resolve the issue? The table (let's call it DetailTransactionLineItem, because it held detail rows for a detail table) had only a fraction of the data for the @Parameter used.

At that point the plan was pumping through 195 million records (3.3 GB of data) :), where in reality it only had 99K records.
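To see that kind of mismatch yourself, one option (my sketch, not from the original post) is SET STATISTICS PROFILE, which runs the query and reports actual rows next to the estimate:

DECLARE @CustomerID INT = 42;  -- made-up value

SET STATISTICS PROFILE ON;

SELECT COUNT(*)
FROM DetailTransactionLineItem
WHERE CustomerID = @CustomerID;

SET STATISTICS PROFILE OFF;
-- The extra result set has Rows (actual) and EstimateRows columns;
-- a gap like 195 million actual vs 68 estimated is the cardinality problem.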

Sample of the query (this is not the real query, and it is a lot less complex):

SELECT T2.TranT2ID
     , SUM(T2.Amount) tranAmount
     , COUNT(T3R.TranT3RetrunID) ReturnAmountCount
FROM Customer C
JOIN [Transaction] T ON C.CustomerID = T.CustomerID
JOIN DetailTransaction T2 ON T2.TranID = T.TranID
LEFT JOIN DetailTransactionLineItem T3 ON T3.TranT2ID = T2.TranT2ID
    AND C.CustomerID = T3.CustomerID
LEFT JOIN DetailTransactionLineItemReturn T3R ON T3R.TranT3RetrunID = T3.TranT3RetrunID
WHERE C.CustomerID = @CustomerID
GROUP BY T2.TranT2ID
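For the missing-index option, the join and filter columns in the sample point at something along these lines (hypothetical; the post does not show the index that was actually added):

-- Covers the @CustomerID filter plus the join back to DetailTransaction
CREATE NONCLUSTERED INDEX IX_DetailTransactionLineItem_CustomerID
ON DetailTransactionLineItem (CustomerID, TranT2ID)
INCLUDE (TranT3RetrunID);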


Just to get the estimates right, I tried all five options and nothing helped.

Steps followed
  1. Did a record count of DetailTransactionLineItem and found the table had 99,000 records.
That suggested there were too many moving parts causing the optimizer to miscalculate the statistics. This could be observed in the plan, where the estimator thought it had only 68 records.
  2. Rewrote the query to loop through T3.TranT2ID to get the ReturnAmountCount (and added the required index), but no luck; it still continued to pull the 195 million records.
It now looked like the table needed a redesign, with little hope.
  3. Then it occurred to me that the nested loops operator was definitely not the iterator for the job. Out of desperation the left join was changed to a "LEFT HASH JOIN" (sketched below). Wohooooo, it worked in less than half a second.
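Applied to the sample query, the hint would look something like this (a sketch; the post does not say which left join got the hint, so I have assumed the DetailTransactionLineItem join where the fat line was):

SELECT T2.TranT2ID
     , SUM(T2.Amount) tranAmount
     , COUNT(T3R.TranT3RetrunID) ReturnAmountCount
FROM Customer C
JOIN [Transaction] T ON C.CustomerID = T.CustomerID
JOIN DetailTransaction T2 ON T2.TranID = T.TranID
-- HASH forces a hash match instead of the nested loops the optimizer picked
LEFT HASH JOIN DetailTransactionLineItem T3 ON T3.TranT2ID = T2.TranT2ID
    AND C.CustomerID = T3.CustomerID
LEFT JOIN DetailTransactionLineItemReturn T3R ON T3R.TranT3RetrunID = T3.TranT3RetrunID
WHERE C.CustomerID = @CustomerID
GROUP BY T2.TranT2ID

Worth noting that a join hint also forces the join order as written, which is part of why it is such a sledgehammer.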
 




4. Now it was a matter of attempting to get a good plan for data sets sitting at different percentiles.
Obviously the hash hint won't help smaller data sets, as it is a sledgehammer approach.

5. Rewrote the query:

SELECT T2.TranT2ID
     , SUM(T2.Amount) tranAmount
     , COUNT(T3R.TranT3RetrunID) ReturnAmountCount
FROM Customer C
JOIN [Transaction] T ON C.CustomerID = T.CustomerID
JOIN DetailTransaction T2 ON T2.TranID = T.TranID
LEFT JOIN
(
    SELECT TOP 100 PERCENT TranT3RetrunID, TranT2ID
    FROM DetailTransactionLineItem T3
    WHERE T3.CustomerID = @CustomerID
) T3N ON T3N.TranT2ID = T2.TranT2ID
LEFT JOIN DetailTransactionLineItemReturn T3R ON T3R.TranT3RetrunID = T3N.TranT3RetrunID
WHERE C.CustomerID = @CustomerID
GROUP BY T2.TranT2ID

So it was about forcing the cardinality estimator to get the correct information when it fails to work it out by itself: pushing the @CustomerID predicate into the derived table hands the estimator the small row count it could not derive on its own. This was tested on SQL Server 2012; I'm not sure whether 2014, with its new cardinality estimator, has the same issue.

 







