
Tuesday, January 5, 2010

All about locking, blocking and deadlocking ... Part 5

This is the last one in this series. Let me know if you have any comments: good or bad (",)

How to reduce blocking and deadlocks:
I can give you one answer from my own experience, and then we will see more: do not commit the stupidity that I did in the deadlock example in the previous post.

As a DBA
-> Make sure that there are proper indexes on the tables and that they are actually being used. Sometimes we feel relaxed after creating the indexes, but the query might be written in such a way that a particular index cannot be used and we end up doing a scan instead of a seek. One such example is using non-SARGable arguments (now don't ask what that is ... we will discuss it some other day, or refer to BOL).
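As a quick illustration of the SARGable point (the table, column and index names here are hypothetical; assume an index exists on OrderDate):

```sql
-- Non-SARGable: wrapping the column in a function hides it from the
-- index, so the optimizer scans instead of seeking.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2009;

-- SARGable rewrite: a range predicate on the bare column lets the
-- optimizer seek on the OrderDate index.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20090101' AND OrderDate < '20100101';
```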

-> Make sure you rebuild the indexes regularly based on the fragmentation level (you probably have plenty of scripts to do that). Do not bother to rebuild if the number of pages is less than 1000. Rebuilding an index also recomputes its statistics.
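A minimal sketch of such a script on SQL Server 2005+ (the index name in the rebuild is hypothetical):

```sql
-- List indexes worth rebuilding: skip anything under 1000 pages.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000
  AND ips.avg_fragmentation_in_percent > 30;

-- Rebuild a heavily fragmented index (also refreshes its statistics):
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;
```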

-> Check if the query is going parallel. If yes, then try everything else before running the query with a MAXDOP hint. Many times, under CPU pressure, the ECIDs (parallel worker threads) have to wait for a CPU because it has been taken by other threads, thus causing blocking and/or deadlocking.
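If, after everything else, you do need the hint, it goes at the end of the statement (the query itself is just a placeholder):

```sql
-- Cap this one query at a single scheduler instead of changing the
-- server-wide 'max degree of parallelism' setting.
SELECT CustomerID, COUNT(*) AS order_count
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 1);
```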

-> Although auto-update statistics is ON by default, it does not mean that you should never update the stats manually. I will not cover this in detail here; check auto-update statistics in BOL and you will learn a lot more.
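The manual refresh itself is a one-liner (the table name is hypothetical):

```sql
-- Refresh statistics on one table; FULLSCAN reads every row for the
-- most accurate histogram, at the cost of more IO.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh every out-of-date statistic in the current database:
EXEC sp_updatestats;
```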

-> Any query that does its sorts in tempdb will surely take a lot more time than one that sorts within the database. For more on this, check the facts in my earlier blog entry.

-> Make sure that you are not facing disk IO contention even after following these steps. Perfmon will show a high disk queue length; normally a DQL of 2 per spindle is OK.

-> Check which resource the query is waiting on using sysprocesses. It might be network IO as well.
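Something along these lines will show the current waits (on SQL Server 2005+ the sys.dm_exec_requests DMV gives the same information with more detail):

```sql
-- Who is waiting, on what, and for how long. ASYNC_NETWORK_IO as the
-- last wait type points at network IO / a slow client.
SELECT spid, blocked, waittime, lastwaittype, waitresource
FROM master..sysprocesses
WHERE waittime > 0;
```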

-> When you do a bulk insert, disable the indexes first and re-enable them during the time of least activity, so that they can catch up with the new rows and the stats are updated accordingly. Or, better, rebuild them after the bulk insert or delete.
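On SQL Server 2005+ this can be done per index (the index name is hypothetical; never disable the clustered index, or the whole table becomes inaccessible):

```sql
-- Disable a nonclustered index before the bulk load:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders DISABLE;

-- ... run the bulk insert here ...

-- Rebuilding re-enables the index and refreshes its statistics:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```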

-> Do not play with the locks option in sp_configure. I would suggest keeping the default setting of 0 (dynamic lock allocation).

-> Deadlocks can occur due to memory pressure. When concurrent requests are waiting for memory grants that cannot be satisfied with the available memory, a deadlock can occur. For example, two concurrent queries, Q1 and Q2, have acquired 10 MB and 20 MB of memory respectively. If each query needs 30 MB and the total available memory is 20 MB, then Q1 and Q2 must wait for each other to release memory. This results in a deadlock.

As a Developer

-> Use temp tables instead of table variables. Or test both and see which one works better; there is a great debate out there on this topic.

-> Use as few ad hoc queries as possible. There might be situations where too many ad hoc queries kick the cached plans of SPs out of the cache, and in turn those SPs will have to be recompiled. If you must use ad hoc queries, use EXEC or (better) sp_executesql, and the plans will be cached. E.g. exec ('select * from t2'): the first time it took 10 seconds, the second time 5 seconds.
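Of the two, sp_executesql is usually the better choice because it takes parameters, so one cached plan is reused for every value (the table and column here are hypothetical):

```sql
-- Parameterized ad hoc SQL: the plan is cached once and reused
-- no matter which @id value is passed in.
EXEC sp_executesql
     N'SELECT * FROM dbo.t2 WHERE id = @id',
     N'@id int',
     @id = 42;
```

With EXEC('...'), by contrast, every distinct literal produces its own plan-cache entry.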

-> Keep batches and transactions small. Do not run a lot of transactions from one connection or SP and try to achieve everything in one go.

-> Try to avoid sorts in tempdb by creating and using the temp table from the same SP, not from another SP. Google or Bing SORT IN TEMPDB for more.

-> It is not always true that more tempdb files will improve performance; your SAN should be capable enough to support them.

-> Choose the isolation level as per your requirement. If you can live with read uncommitted, go for it.

-> It is not bad to go for the snapshot isolation level, but it will stress your tempdb.
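For reference, this is how the two options above look in T-SQL (the database name is hypothetical):

```sql
-- Read uncommitted: the session takes no shared locks, so its reads
-- never block writers, at the cost of possible dirty reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;

-- Snapshot isolation must first be enabled at the database level;
-- the row versions it relies on are kept in tempdb, hence the stress.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```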

-> Keep your tables normalized so that you use the DISTINCT clause as little as possible. But make sure you do not over-normalize the tables, or you will end up making your query more complex for the optimizer to understand, as it will have more joins.

-> One of the most important reasons for blocking and deadlocking could be lock escalation. If you have the slightest doubt that your query is going for lock escalation, use the Lock:Escalation event in Profiler and see if it is actually happening. If you find it happening, check whether the same SPID is the blocker or is being blocked as well. You can disable lock escalation by enabling trace flag 1211, but I would not suggest that; to avoid lock escalation, modify your query to take fewer locks.
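Besides Profiler, on SQL Server 2005+ you can spot escalation candidates by counting how many locks each session is holding (escalation is attempted once a statement holds roughly 5,000 locks on a single object):

```sql
-- Sessions holding the most locks are the likeliest to escalate
-- to a table lock and start blocking everyone else.
SELECT request_session_id, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY lock_count DESC;
```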

Tip:
Mostly I have seen that if a deadlocking issue comes right after the application goes live, then it is the application's fault (the way it synchronizes access to the data). But if the application owner says that everything was fine and he is facing the deadlocks all of a sudden, then some major data change might have happened in the tables, and that might be causing the blocking and deadlocking. One such cause is "parameter sniffing".
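One common mitigation for parameter sniffing is to ask for a fresh plan on every execution (the table, column and value here are hypothetical):

```sql
-- OPTION (RECOMPILE) makes the optimizer build a plan for the actual
-- parameter value each time, instead of reusing a plan that was
-- sniffed for a very different value.
DECLARE @CustomerID int;
SET @CustomerID = 42;

SELECT * FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);
```

The trade-off is recompilation cost on every call, so reserve it for queries whose data distribution is genuinely skewed.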

Sometimes we do a select in such a way that we cross the threshold of an in-database sort and the sort spills to tempdb. In this case we need to keep testing the query until we find the threshold at which it spills the sort to tempdb, and then narrow the WHERE clause so the query stays under it.

Both the developer and the DBA have to work together to resolve such issues.


Sometimes you might wish that a particular transaction should never be treated as the victim in the event of a deadlock. In that case you can set the deadlock priority of that transaction higher than the others. An example could be a case of an update and a select: you might be ready to roll back the select, but you want the update to be successful.
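For that update-versus-select example, the two sessions might look like this (the table names are hypothetical; HIGH as a keyword requires SQL Server 2008 or later, while LOW and NORMAL work on 2005 too):

```sql
-- Session A: the SELECT we are willing to sacrifice in a deadlock.
SET DEADLOCK_PRIORITY LOW;
SELECT * FROM dbo.Orders WHERE OrderID = 1;

-- Session B: the UPDATE we want to survive.
SET DEADLOCK_PRIORITY HIGH;
UPDATE dbo.Orders SET Amount = Amount + 1 WHERE OrderID = 1;
```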

Which session is chosen as the deadlock victim depends on each session's deadlock priority:
- If both sessions have the same deadlock priority, the instance of SQL Server chooses the session that is less expensive to roll back as the deadlock victim. For example, if both sessions have set their deadlock priority to HIGH, the instance will choose as a victim the session it estimates is less costly to roll back.
- If the sessions have different deadlock priorities, the session with the lowest deadlock priority is chosen as the deadlock victim.

Unresolved Deadlocks
You need not bother much about these. If one happens, open a case with Microsoft, as it is a product bug; either they already have a hotfix or they don't. In the error log you will find the deadlock graph and the dump information:

2007-09-12 11:43:53.51 spid4s *
2007-09-12 11:43:53.51 spid4s * BEGIN STACK DUMP:
2007-09-12 11:43:53.51 spid4s * 09/12/07 11:43:53 spid 4
2007-09-12 11:43:53.51 spid4s *
2007-09-12 11:43:53.51 spid4s * Unresolved deadlock
2007-09-12 11:43:53.51 spid4s *
2007-09-12 11:43:53.51 spid4s *
2007-09-12 11:43:53.51 spid4s *
You will also see Cost:(N/A) and deadlock victim=process0 in the deadlock graph.

A few references:
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005

How to resolve blocking problems that are caused by lock escalation in SQL Server
