
Monitoring page splits with fn_dblog()




There was a question posted a few weeks back where the performance of the application gradually deteriorated towards the end of the week, but at the beginning of the week it was working optimally.

What does this mean from a technical perspective?
Towards the end of the week the indexes gradually get fragmented, and during the weekend an optimization job reorganizes or rebuilds them. Even though we have saved ourselves from a database maintenance perspective, there seems to be room for improvement.

If we read between the lines, the indexes get fragmented fairly quickly, and that's a clear indication that:
1.) There is a notable amount of data written to the indexes, and
2.) The index design is not optimal, or
3.) The fill factor settings may need to be changed after monitoring (a sketch follows this list).
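For illustration, changing the fill factor is a single ALTER INDEX statement. The index name below is the example used later in this post, and 90 is an arbitrary starting value that should be tuned from monitoring, not taken as a recommendation:

ALTER INDEX UC_Sales_SalesID_PK ON dbo.sales
REBUILD WITH (FILLFACTOR = 90); -- leaves ~10% free space per leaf page to absorb inserts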

As we were not in a position to recommend any indexing strategy to the client (due to vendor ownership), we needed to identify the best fragmentation level for the indexes.
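For measuring the fragmentation level itself, the standard DMV sys.dm_db_index_physical_stats does the job; a minimal query over the current database looks like this (LIMITED mode keeps the scan cheap):

SELECT
    OBJECT_NAME(ips.object_id)        AS TableName,
    i.name                            AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON  i.object_id = ips.object_id
    AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;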
Currently there is no native SQL Server tool offered by Microsoft to assist with this matter; however:
1.) With SQL Server 2012, there are Extended Events to identify page splits (see the sketch after this list)
2.) The second-best option is the undocumented function fn_dblog(), which has been available since SQL Server 2000
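For option 1, a minimal sketch of such an Extended Events session is below; the session and file names are illustrative. Note that the page_split event does not distinguish cheap end-of-page splits from the expensive mid-page ones, so filtering on LOP_DELETE_SPLIT (as the fn_dblog query below does) is often the more precise approach:

CREATE EVENT SESSION PageSplits_Monitor ON SERVER
ADD EVENT sqlserver.page_split
(
    ACTION (sqlserver.database_name)
)
ADD TARGET package0.event_file
(
    SET filename = N'PageSplits_Monitor' -- written to the instance's default log folder
)
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION PageSplits_Monitor ON SERVER STATE = START;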


As part of the solution:
1.) A SQL job was scheduled to collect the data for each database using the following query (a sketch of a logging wrapper for the job follows this list)
SELECT
    DB_NAME()       AS DatabaseName
   ,COUNT(1)        AS NumberOfSplits
   ,AllocUnitName
   ,Context
   ,GETDATE()       AS CollectedAt
FROM fn_dblog(NULL, NULL)                -- undocumented: reads the active transaction log
WHERE Operation = 'LOP_DELETE_SPLIT'     -- log operation recorded when a page split occurs
GROUP BY AllocUnitName, Context
ORDER BY NumberOfSplits DESC
 
2.) Collated the data to represent the page splits in the following format.
                FYI: the AllocUnitName column has the formulation owner.tableName.IndexName,
                e.g. dbo.sales.UC_Sales_SalesID_PK.
                The rest of the result set is fairly self-explanatory.
3.) Monitored the page splits for the individual indexes by comparing the before and after images of the above result set (a comparison sketch follows this list), after which further recommendations were suggested.
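For step 1, the job step can persist each collection into a logging table so the snapshots survive between runs. The table dbo.PageSplitLog and its columns are assumptions for this sketch; the job would run the INSERT in the context of each monitored database, and bear in mind that fn_dblog only sees the active portion of the log, so counts reset as the log truncates:

CREATE TABLE dbo.PageSplitLog
(
    DatabaseName   sysname        NOT NULL,
    NumberOfSplits int            NOT NULL,
    AllocUnitName  nvarchar(512)  NULL,
    Context        nvarchar(60)   NULL,
    CollectedAt    datetime       NOT NULL
);
GO
INSERT INTO dbo.PageSplitLog (DatabaseName, NumberOfSplits, AllocUnitName, Context, CollectedAt)
SELECT
    DB_NAME(),
    COUNT(1),
    AllocUnitName,
    Context,
    GETDATE()
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_SPLIT'
GROUP BY AllocUnitName, Context;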

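For step 3, the before and after images can then be compared straight off that table. This sketch assumes the dbo.PageSplitLog table above, uses PARSENAME to split AllocUnitName into its owner.tableName.IndexName parts, and the two snapshot times are hypothetical placeholders:

DECLARE @before datetime = '20120102',  -- hypothetical "before" snapshot
        @after  datetime = '20120106';  -- hypothetical "after" snapshot

SELECT
    PARSENAME(a.AllocUnitName, 3) AS OwnerName,
    PARSENAME(a.AllocUnitName, 2) AS TableName,
    PARSENAME(a.AllocUnitName, 1) AS IndexName,
    b.NumberOfSplits              AS SplitsBefore,
    a.NumberOfSplits              AS SplitsAfter
FROM dbo.PageSplitLog AS b
JOIN dbo.PageSplitLog AS a
    ON  a.DatabaseName  = b.DatabaseName
    AND a.AllocUnitName = b.AllocUnitName
WHERE b.CollectedAt = @before
  AND a.CollectedAt = @after
ORDER BY a.NumberOfSplits DESC;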

Hope these tips help those in need.

Cheers
