

A very quick guide to deadlock diagnosis in SQL Server

Recently I was asked about diagnosing deadlocks in SQL Server – I did a lot of work in this area back in 2008, so I figured it was time for a refresher.  If there’s a lot of interest in exploring SQL Server and deadlocks further, I’m happy to write an extended article going into far more detail.  Just let me know.

Before we get into diagnosis and investigation, it’s worth posing the question: what is a deadlock?

From TechNet:

A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock. The following graph presents a high level view of a deadlock state where:

  • Task T1 has a lock on resource R1 (indicated by the arrow from R1 to T1) and has requested a lock on resource R2 (indicated by the arrow from T1 to R2).

  • Task T2 has a lock on resource R2 (indicated by the arrow from R2 to T2) and has requested a lock on resource R1 (indicated by the arrow from T2 to R1).

  • Because neither task can continue until a resource is available and neither resource can be released until a task continues, a deadlock state exists.

[Diagram: two tasks in a deadlock state, each holding a lock the other has requested]

The SQL Server Database Engine automatically detects deadlock cycles within SQL Server. The Database Engine chooses one of the sessions as a deadlock victim and the current transaction is terminated with an error to break the deadlock.

Basically, a deadlock is a resource contention issue which blocks one process or transaction from performing actions on resources within SQL Server.  This can be a serious condition – not just for SQL Server, as processes become suspended, but for the applications which rely on SQL Server as well.
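If you want to see a genuine deadlock in action, the classic recipe is two sessions taking locks on two resources in opposite orders. Here’s a minimal sketch using hypothetical throwaway tables (the names are mine, purely for demonstration):

-- One-off setup (hypothetical tables)
CREATE TABLE dbo.DeadlockA (Id int PRIMARY KEY, Val int NOT NULL);
CREATE TABLE dbo.DeadlockB (Id int PRIMARY KEY, Val int NOT NULL);
INSERT INTO dbo.DeadlockA (Id, Val) VALUES (1, 0);
INSERT INTO dbo.DeadlockB (Id, Val) VALUES (1, 0);

-- Session 1: lock the row in A...
BEGIN TRANSACTION;
UPDATE dbo.DeadlockA SET Val = 1 WHERE Id = 1;
-- ...pause here, run session 2's first UPDATE, then continue:
UPDATE dbo.DeadlockB SET Val = 1 WHERE Id = 1;
COMMIT;

-- Session 2: lock the row in B, then request the row in A
BEGIN TRANSACTION;
UPDATE dbo.DeadlockB SET Val = 2 WHERE Id = 1;
UPDATE dbo.DeadlockA SET Val = 2 WHERE Id = 1; -- completes the cycle
COMMIT;

Each session ends up waiting on a lock the other holds; within a few seconds the Database Engine picks one as the victim and terminates it with error 1205.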


The T-SQL Approach

A fast way to respond is to execute a little T-SQL against SQL Server, making use of the system views.  The following T-SQL will show you the blocked processes, much like Activity Monitor does:

select * from sys.sysprocesses where blocked > 0

On its own that’s not particularly useful (but good to know, so you can see how many sessions are blocked).  To get to the heart of the blocking, this is what you want (courtesy of this SO question/answer):

SELECT Blocker.text --, Blocker.*, *
FROM sys.dm_exec_connections AS Conns
INNER JOIN sys.dm_exec_requests AS BlockedReqs
    ON Conns.session_id = BlockedReqs.blocking_session_id
INNER JOIN sys.dm_os_waiting_tasks AS w
    ON BlockedReqs.session_id = w.session_id
CROSS APPLY sys.dm_exec_sql_text(Conns.most_recent_sql_handle) AS Blocker

This will show you chapter and verse (the actual statement causing the resource block) – see the screenshot below for an example.

[Screenshot: diagnostic query results, showing the text of the blocking statement]

However, the generally accepted way to determine and diagnose deadlocks is through the use of SQL Server trace flags. 

SQL Trace Flags

Trace flags are (usually) set temporarily, and they cause deadlock information to be written to the SQL Server error log.  The flags that are useful here are 1204 and 1222.  From TechNet:  https://technet.microsoft.com/en-us/library/ms178104%28v=sql.105%29.aspx

Trace flags are set on or off by using either of the following methods:

  • Using the DBCC TRACEON and DBCC TRACEOFF commands.

For example, DBCC TRACEON 2528. To enable the trace flag globally, use DBCC TRACEON with the -1 argument: DBCC TRACEON (2528, -1). To turn off a global trace flag, use DBCC TRACEOFF with the -1 argument.

  • Using the -T startup option to specify that the trace flag be set on during startup.

The -T startup option enables a trace flag globally. You cannot enable a session-level trace flag by using a startup option.

So to enable or disable the deadlock trace flags globally, you’d use the following T-SQL:

DBCC TRACEON (1204, -1)
DBCC TRACEON (1222, -1)

DBCC TRACEOFF (1204, -1)
DBCC TRACEOFF (1222, -1)

Because of the overhead, it’s best to enable the flags at runtime rather than at startup.  Note that the scope of a non-startup trace flag can be global or session-level.
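To confirm which flags are actually enabled, and to pull any captured deadlock output back out, something like the following should do the trick (DBCC TRACESTATUS and sp_readerrorlog are standard; log 0 is the current error log):

-- Check the global status of the deadlock trace flags
DBCC TRACESTATUS (1204, 1222, -1);

-- Search the current SQL Server error log for deadlock entries
EXEC sp_readerrorlog 0, 1, 'deadlock';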

Basic Deadlock Simulation

By way of a very simple scenario, you can use SQL Server Management Studio (and the T-SQL debugger’s breakpoints) to roughly simulate the conditions behind a deadlock.  Strictly speaking, what follows produces a block rather than a full deadlock cycle, but it’s enough to exercise the diagnostic queries above.

Given the following basic table schema:

CREATE TABLE [dbo].[UploadedFile](
    [Id] [int] NOT NULL,
    [Filename] [nvarchar](50) NOT NULL,
    [DateCreated] [datetime] NOT NULL,
    [DateModified] [datetime] NULL,
CONSTRAINT [PK_UploadedFile] PRIMARY KEY CLUSTERED
(
    [Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF)
)

With some basic test data in it:

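For example (hypothetical rows; the important thing is that some of them have a DateCreated later than 2015-01-01, so the queries below match them):

INSERT INTO [dbo].[UploadedFile] ([Id], [Filename], [DateCreated], [DateModified])
VALUES (1, 'report.pdf',   '2015-02-10', NULL),
       (2, 'invoice.docx', '2015-03-05', NULL),
       (3, 'archive.zip',  '2014-06-20', NULL);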

Create two separate query sessions in SQL Server Management Studio, and use the following transaction (Query #1) to lock rows in the table:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT [Id],[Filename],[DateCreated],[DateModified]
FROM [dbo].[UploadedFile]
WHERE DateCreated > '2015-01-01'
ROLLBACK TRANSACTION

Now add a “victim” script (Query #2) in a separate query session:

UPDATE [dbo].[UploadedFile]
SET [DateModified] = '2014-12-31'
WHERE DateCreated > '2015-01-01'

As long as you set a breakpoint on the ROLLBACK TRANSACTION statement and run Query #1, Query #2 will be blocked: the SERIALIZABLE isolation level of the transaction wrapping Query #1 holds its locks until the transaction ends.


Now you can use the diagnostic T-SQL to examine the victim and the blocking transaction. Enjoy!


Data Modelling and the Entity Framework

Introduction

Last week I gave an internal presentation to my fellow consultants at CGI on the principles of data modelling and data architecture, modelling within Visual Studio 2013, and a history of the (ADO.NET) Entity Framework.

I’ve attached the slide deck to this article, and also in my presentations page.

Data Modelling – Concepts

Once we got past the initial introductions, I dove into some of the fundamental principles of data access design.  These are the key design considerations which every mature solution should take into account – particularly when building a design which aligns with the ACID principles.

This part of the presentation wasn’t directly related to either data modelling in Visual Studio or implementing the Entity Framework ORM, but I don’t think it is a bad idea to restate some core data access design goals where possible.

Once we were past the concepts, we went straight into…

Visual Studio 2013 Database Projects

To be honest with you, I forced myself to use Visual Studio’s out-of-the-box database project as a design tool instead of jumping into SQL Server Management Studio as I normally would.  Partly this was to give the tools some fair use – the designer support is still a bit sluggish – but there are still some niceties to be had here.

The latest incarnation has some decent and attractive features: the schema compare functionality is simply superb for harmonizing T-SQL between instances or other code repositories, and the T-SQL import wizard helps with getting projects up and running quickly.

Possibly the best feature is the publishing wizard, which you can use to deploy easily to SQL Azure or to on-premises instances, or to run as part of an automated build.

The Entity Framework

The second half of the presentation introduced the Entity Framework and covered a bit of its history.  I’ve used the EF since the first version, so I had some experiences to share here.

Besides showing how the entity model is generated from the database schema, I wanted to impress upon the audience the costs and benefits of adopting an ORM solution – particularly the quick wins weighed against the limitations and potential performance problems.

Ultimately this led into a review of a generic interface pattern which I’ve been working on for the past few weeks, and some of the power of consolidating common data access methods (Create, Read, Update and Delete) into a common implementation using generics.

The Surprise

At the end, I was planning to surprise the audience by “live switching” from a local SQL instance to SQL Azure simply by changing a connection string.  Unfortunately, because we had to move rooms at the last minute, the 4G connection I was using hadn’t been authorised on the SQL Azure database, and the surprise fell flat.

The awesome takeaway (blown surprise aside) was that, using the Entity Framework, no recompilation was needed – the model worked seamlessly with both local and Azure-based data stores.  I guess I’ll have to save that surprise for another audience at another time.

Summary

To be honest, I should have split this into two presentations.  There’s so much to discuss when it comes to decent data design principles that those, together with data modelling, could have filled a session on their own.  The Entity Framework represents a large body of work in its own right; I could speak for hours about how it can be adapted and extended.

We didn’t even scratch the surface, so this may well lead to a follow-up presentation.  Here’s the slide deck from the day.