

A very quick guide to deadlock diagnosis in SQL Server

Recently I was asked about diagnosing deadlocks in SQL Server – I did a lot of work in this area back in 2008, so I figure it's time for a refresher. If there's a lot of interest in exploring SQL Server and deadlocks further, I'm happy to write an extended article going into far more detail. Just let me know.

Before we get into diagnosis and investigation, it's a good time to pose the question: what is a deadlock?

From TechNet:

A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock. The following graph presents a high level view of a deadlock state where:

  • Task T1 has a lock on resource R1 (indicated by the arrow from R1 to T1) and has requested a lock on resource R2 (indicated by the arrow from T1 to R2).

  • Task T2 has a lock on resource R2 (indicated by the arrow from R2 to T2) and has requested a lock on resource R1 (indicated by the arrow from T2 to R1).

  • Because neither task can continue until a resource is available and neither resource can be released until a task continues, a deadlock state exists.

Diagram showing tasks in a deadlock state

The SQL Server Database Engine automatically detects deadlock cycles within SQL Server. The Database Engine chooses one of the sessions as a deadlock victim and the current transaction is terminated with an error to break the deadlock.

Basically, a deadlock is a resource contention issue which blocks a process or transaction from performing actions on resources within SQL Server. This can be a serious condition – not just for SQL Server, as processes become suspended, but for the applications which rely on SQL Server as well.


The T-SQL Approach

A fast way to respond is to execute a bit of T-SQL on SQL Server, making use of the system views. The following T-SQL will show you the "victim" processes, much like Activity Monitor does:

select * from sys.sysprocesses where blocked > 0

Which is not particularly useful on its own (but good to know, so you can see which sessions are blocked). To get to the heart of the deadlock, this is what you want (courtesy of this SO question/answer):

SELECT Blocker.text --, Blocker.*, *
FROM sys.dm_exec_connections AS Conns
INNER JOIN sys.dm_exec_requests AS BlockedReqs
    ON Conns.session_id = BlockedReqs.blocking_session_id
INNER JOIN sys.dm_os_waiting_tasks AS w
    ON BlockedReqs.session_id = w.session_id
CROSS APPLY sys.dm_exec_sql_text(Conns.most_recent_sql_handle) AS Blocker

This will show you line and verse (the actual statement causing the resource block) – see the attached screenshot for an example.

[Screenshot: output of the deadlock diagnosis query]

However, the generally accepted way to determine and diagnose deadlocks is through the use of SQL Server trace flags. 

SQL Trace Flags

Trace flags are (usually) set temporarily, and they cause deadlock information to be written to the SQL Server error log. The flags that are useful for deadlock diagnosis are 1204 and 1222. From TechNet: https://technet.microsoft.com/en-us/library/ms178104%28v=sql.105%29.aspx

Trace flags are set on or off by using either of the following methods:

· Using the DBCC TRACEON and DBCC TRACEOFF commands.

For example, to enable trace flag 2528 globally, use DBCC TRACEON with the -1 argument: DBCC TRACEON (2528, -1). To turn off a global trace flag, use DBCC TRACEOFF with the -1 argument.

· Using the -T startup option to specify that the trace flag be set on during startup.

The -T startup option enables a trace flag globally. You cannot enable a session-level trace flag by using a startup option.  So to enable or disable deadlock trace flags globally, you’d use the following T-SQL:

DBCC TRACEON (1204, -1)
DBCC TRACEON (1222, -1)

DBCC TRACEOFF (1204, -1)
DBCC TRACEOFF (1222, -1)

Due to the overhead, it's best to enable the flags at runtime rather than at startup. Note that the scope of a non-startup trace flag can be global or session-level.
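
To confirm which flags are actually in effect (a quick sanity check after running the commands above), DBCC TRACESTATUS does the job – a minimal example:

DBCC TRACESTATUS (1204, 1222, -1)  -- status of the deadlock flags at global scope
DBCC TRACESTATUS (-1)              -- every trace flag currently enabled globally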

Basic Deadlock Simulation

By way of a very simple example, you can make use of SQL Server Management Studio (and breakpoints) to roughly simulate a deadlock scenario.

Given the following basic table schema:

CREATE TABLE [dbo].[UploadedFile](
    [Id] [int] NOT NULL,
    [Filename] [nvarchar](50) NOT NULL,
    [DateCreated] [datetime] NOT NULL,
    [DateModified] [datetime] NULL,
CONSTRAINT [PK_UploadedFile] PRIMARY KEY CLUSTERED
(
    [Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF)
)

With some basic test data in it:

[Image: sample test data in the UploadedFile table]
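
If you'd like to reproduce this yourself, a few rows along the following lines will do – note these are my own sample values, not the rows from the screenshot:

-- Sample rows only; any data with DateCreated values spanning 2014/2015 will work
INSERT INTO [dbo].[UploadedFile] ([Id], [Filename], [DateCreated], [DateModified])
VALUES (1, N'report.pdf',   '2015-01-05', NULL),
       (2, N'invoice.xlsx', '2015-02-10', NULL),
       (3, N'readme.txt',   '2014-11-20', NULL)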

Create two separate query windows in SQL Server Management Studio. Use the following transaction (Query #1) to lock rows in the table:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT [Id],[Filename],[DateCreated],[DateModified]
FROM [dbo].[UploadedFile]
WHERE DateCreated > '2015-01-01'
ROLLBACK TRANSACTION

Now add a “victim” script (Query #2) in a separate query session:

UPDATE [dbo].[UploadedFile]
SET [DateModified] = '2014-12-31'
WHERE DateCreated > '2015-01-01'

As long as you set a breakpoint on the ROLLBACK TRANSACTION statement, you’ll block the second query due to the isolation level of the transaction which wraps query #1.
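
If you want to dig a little deeper while the victim is blocked, the sys.dm_tran_locks DMV shows which lock requests have been granted and which are still waiting – a rough query along these lines (not from the original post) will do:

-- Granted vs. waiting lock requests in the current database
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id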


Now you can use the diagnostic T-SQL to examine the victim and the blocking transaction. Enjoy!


Programmatically Reading Event Logs

Welcome, 2015 – may you be an improvement on your predecessor.

Today’s article focuses on the deceptively non-trivial task of reading from the Windows Event Logs using the Microsoft .NET Framework.  For those who haven’t looked there in a while, here’s a quick look at the Event Viewer:

[Image: The Windows Event Viewer]

Now there are the usual suspects like the Application, Security and System logs of course, but on more recent editions of Windows, you might make note of the category beneath the standard “Windows Logs”, namely “Applications and Services Logs”.  We can read from those as well as the standard logs.

[Image: Windows Event Log Tree]

My Scenario – Viewing Log File Content

At the moment, we don’t have a logging approach which consolidates system logs and rolling log files into a single location.  While we are waiting for that capability, I decided to quickly roll an ASP.NET MVC site which would effectively publish the content of local log files for remote users to view without the hassle of having to log on or RDP to the machine.

The “Log View” web application needed to do the following:

  • Through configuration, read log files contained in (one to many) specified local directories
  • Through configuration, read (one or more) log files based on a specific path/filename
  • Read the Security, System and Application system logs, displaying the most recent 100 entries
  • Through configuration, read the AD FS admin log when installed on an AD FS server
  • Allow anonymous authentication

This web application is meant for development/test environments, hence the anonymous authentication requirement.
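
As a rough sketch of how the configuration-driven requirements above could be read (the appSettings key names "LogDirectories" and "LogFiles" and the semicolon delimiter are my assumptions, not the actual application's configuration):

using System;
using System.Configuration;

// Assumed keys, each holding a semicolon-delimited list of paths
var directories = (ConfigurationManager.AppSettings["LogDirectories"] ?? string.Empty)
    .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
var files = (ConfigurationManager.AppSettings["LogFiles"] ?? string.Empty)
    .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);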

[Image: Index page]

Different Approaches – Reading Logs

The standard Windows Logs – a well beaten path – have special support in the .NET Framework.  Provided you have the appropriate permissions, reading log entries is relatively straightforward:

var eventLogItem = new System.Diagnostics.EventLog("Application");
var numberOfItems = eventLogItem.Entries.Count;

Of course, reading from the log is just as simple:

foreach (EventLogEntry entry in eventLogItem.Entries)
{
    // read the individual event
}

[Image: Reading the system log]
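
One way to satisfy the "most recent 100 entries" requirement is to walk the Entries collection backwards – a minimal sketch (not the exact code from my application):

// Read the newest 100 entries from the Application log, most recent first
var log = new System.Diagnostics.EventLog("Application");
var entries = log.Entries;
int take = Math.Min(100, entries.Count);

for (int i = entries.Count - 1; i >= entries.Count - take; i--)
{
    var entry = entries[i];
    Console.WriteLine("{0} [{1}] {2}", entry.TimeGenerated, entry.EntryType, entry.Source);
}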

You don't seem to require any special permissions as a local user to read from the Application and System logs – a standard user account appears to have read access by default, even on Windows Server 2012 R2. This does not apply to the Security log, which seems to require special permissions or policy – see more on this below.

However, things change when you want to read from a non-standard system log.  In my case, I wished to read from the AD FS/Admin log on a Windows Server 2012 R2 machine which had the Active Directory Federation Services (AD FS 3.0) role installed.

Reading Non-system Logs

Once we veer away from the standard 'System' and 'Application' logs, the implementation gets a tad trickier – and more brittle. You have to abandon the friendly EventLog class and instead use the EventLogQuery class, as below:

string logName = "AD FS/Admin";
var query = new EventLogQuery(logName, PathType.LogName, "*[System/Level=2]");
query.ReverseDirection = true;

Note that the log name seems to need to match the "path" of the log if it resides under subfolders in the "Applications and Services Logs" section. I've also set the ReverseDirection property so the most recent log entries come back first. To actually read entries from the log, you invoke the tastefully named EventLogReader class, like so:

using (var reader = new EventLogReader(query))
{
    // implementation here
}

You might be wondering how one would consume the EventLogReader. Happily, I can provide you with the implementation I've put together for my AD FS log reader:

// 'reader' is the EventLogReader created in the using block above
var sb = new StringBuilder();
for (EventRecord eventInstance = reader.ReadEvent();
     null != eventInstance; eventInstance = reader.ReadEvent())
{
    sb.AppendLine("<hr/>");
    sb.AppendFormat("Event ID: {0}<br/>", eventInstance.Id);
    sb.AppendFormat("Publisher: {0}<br/>", eventInstance.ProviderName);
    sb.AppendFormat("Time Created: {0}<br/>", eventInstance.TimeCreated.Value);

    try
    {
        sb.AppendFormat("Description: {0}<br/>", eventInstance.FormatDescription());
    }
    catch (EventLogException e)
    {
        // The event description contains parameters, and no parameters were
        // passed to the FormatDescription method, so an exception is thrown.
        sb.AppendFormat("EventLogException: {0}", e.Message);
    }
}

There are obviously more properties available – don't limit yourself to what I've included above. Also note that FormatDescription() can throw an exception – it's worth catching, since the failure can be reasonably anticipated and shouldn't kill your logic.

Errors at Runtime

The first few times I deployed and ran my web application, I encountered some nasty exceptions. I was running with the default ApplicationPool identity, which I decided I needed to replace with a dedicated local user. I created a local user called 'svc_adfs_logs', made it a member of the local IIS_IUSRS group, and made it the identity of my web application's application pool.

[Images: adfs-error-release, adfs-error-debug]

The errors occurred when accessing the Security and the AD FS logs.  I had to dig deeper.

Permissions and Settings

This is where things get interesting – aside from the standard System and Application logs, for pretty much every other log I tried to read from, I'd encounter a permissions (or registry) issue.

File Permissions (ACLs)

One place to check is the file permissions themselves. The logs are files residing under the Windows directory, by default at this path: C:\Windows\System32\winevt\Logs

If you're unsure, just right-click on the log name in Event Viewer and select Properties:

[Image: Log Properties in Event Viewer]

In my case, I assigned basic read access to the app pool identity of my web application:

[Image: Assigning read access to the web application identity]
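
If you'd rather script this than click through the Security tab, icacls can grant the same read access. The account name below is the one used in this article, and the file name assumes the AD FS admin channel (channels containing a '/' use '%4' in the on-disk file name) – confirm the exact file name via the log's Properties dialog first:

icacls "C:\Windows\System32\winevt\Logs\AD FS%4Admin.evtx" /grant "svc_adfs_logs:(R)"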

Group Membership

The next obvious step is to ensure that your process’s identity (the account which the application is running under) is a member of a local, built-in security group called (aptly) ‘Event Log Readers’.  You administer this membership via the local Groups in Computer Management:

[Image: Ensure your application's identity/account is a member of the 'Event Log Readers' group]
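
If you prefer to script the group membership (handy when setting up more than one environment), the built-in net command does the job from an elevated prompt – 'svc_adfs_logs' being the account used in this article:

net localgroup "Event Log Readers" svc_adfs_logs /add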

Which should resolve the following exception (if you encounter it):

Attempted to perform an unauthorized operation.

at System.Diagnostics.Eventing.Reader.EventLogException.Throw(Int32 errorCode)
at System.Diagnostics.Eventing.Reader.NativeWrapper.EvtQuery(EventLogHandle session, String path, String query, Int32 flags)
at System.Diagnostics.Eventing.Reader.EventLogReader..ctor(EventLogQuery eventQuery, EventBookmark bookmark)
at LogView.Controllers.LogsController.ReadNonSystemLog(LogModel item)

Summary

Well, aside from writing some pretty simple boilerplate code, it was really quite easy to put together a useful log file viewing web application. I may consider publishing the source for this web application at a later time, once I've cleaned up the implementation a little (it's a bit messy).

You should never assign local Administrator rights just to read or write system logs – it's worth the time investigating permissions and policies before going to that kind of extreme.

There was one last avenue which I was exploring which involved setting SDDLs in the registry, but it turns out this was not necessary.  I’ve included the links though in case you’d like to find out more.

Further Reading/References

Basic “how to” query event messages – http://msdn.microsoft.com/en-us/library/bb671200(v=vs.110).aspx
Permissions to read event logs – http://geekswithblogs.net/TimH/archive/2005/10/05/56029.aspx 
How to set event log security – http://support.microsoft.com/kb/323076

Which leads us to….

Introduction to SDDL – http://msdn.microsoft.com/en-us/library/aa379570.aspx
The file ACL trick to get an SDDL: http://stackoverflow.com/questions/9907510/is-there-a-tool-to-generate-sddl-security-descriptor-definition-language-strin


Using alternative AD attributes to authenticate to AD FS 3.0

Introduction

We have a requirement at the moment to modify AD FS 3.0 (which is a role in Windows Server 2012 R2) to allow users to authenticate without having to specify the domain name. 

This is for two reasons – the current external system doesn’t have a requirement to prefix a domain (or to authenticate with UPN format), and the organisation would prefer users to not have to worry about knowing the domain name (which is a DMZ domain).

AD FS 3.0 supports this scenario – sort of – but the landing page which handles authentication has some hard-coded forms validation logic which won't let users authenticate with a username that doesn't have the DOMAIN\ prefix and isn't in UPN format. Awkward.

The Problem

AD FS 3.0 is the first release which doesn't run under IIS. As a result, this self-hosted solution doesn't have web content directly available for customization. However, using PowerShell commands, it is possible to customize the user interface – although only client-side elements, like scripts (.js).

Since our issue is with client-side validation, we have a way forward. This article will demonstrate how to remove the domain prefix (or UPN format) requirement without modifying the AD FS binaries directly (which, as a general rule, you should never do).

Assigning an alternate AD attribute to use for identifying a user's credential (i.e. the 'username') is simplicity itself. In a PowerShell console (with elevated permissions), execute the following, where [AD ATTRIBUTE] is the schema attribute you want to use to identify users:

Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID [AD ATTRIBUTE] -LookupForests <your forest>

You can use pretty much any AD schema attribute which makes sense – e.g. CN, which is what I'm using in this scenario.

[Image: the CN attribute shown in the AD management console]

You don't get this view by default – if you want to see AD schema attributes, you need to switch the AD management console to Advanced View:

[Image: enabling Advanced View in the AD management console]
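
To double-check that the alternate login ID has been applied, you should be able to read the settings back with Get-AdfsClaimsProviderTrust – I'm assuming here that the property names mirror the cmdlet parameters, so verify against the output on your own server:

Get-AdfsClaimsProviderTrust | Select-Object Name, AlternateLoginID, LookupForests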

Fixing the client-side validation

This part is trickier. You're going to have to modify the out-of-the-box AD FS behaviour in order to change the way AD FS validates the username field.

You’re going to need to create a new AD FS theme (based on the default) and then dump the default web theme to the file system using PowerShell commands:

New-AdfsWebTheme -name Custom -SourceName default
Export-AdfsWebTheme -Name Custom -DirectoryPath c:\AdfsTheme

You’ll need to configure this on each ADFS host if you have multiple. For more information on customizing AD FS 3.0 have a look at this TechNet article.

Once you've completed this step, have a look at the contents of the folder you specified for the -DirectoryPath parameter (e.g. c:\AdfsTheme). There should be a subfolder called script, and it will contain a file called onload.js.

We’re going to edit that file.

The Concept

The out-of-the-box implementation adds some client-side JavaScript which checks the username field when the user clicks the submit button, or on a keypress (e.g. the Enter key). We need to hijack that script and replace it with a cut-down implementation, removing the domain format checking.

We unfortunately can't do this with a script injected only on logon pages (using the SignInPageDescriptionText location), because that approach injects custom script above the out-of-the-box script, which means we can't override the form validation behaviour. We instead have to change onload.js, which is run on every AD FS web page (the downside).

Here's the out-of-the-box validation script, which you can see by viewing the page source of the AD FS logon page. Note that the Sign In page description text field is located above this (id="introduction").

[Image: the out-of-the-box validation script]

What we will do is add an implementation to the onload.js file which replaces the OOB implementation – we do this by appending the following to the end of the onload.js file’s content:

// rewire form validation
function override_form_validation() {
    Login.submitLoginRequest = function () {
        var u = new InputUtil();
        var e = new LoginErrors();

        var userName = document.getElementById(Login.userNameInput);
        var password = document.getElementById(Login.passwordInput);

        if (!userName.value) {
            u.setError(userName, "Username must be supplied");
            return false;
        }

        if (!password.value) {
            u.setError(password, e.passwordEmpty);
            return false;
        }

        document.forms['loginForm'].submit();
        return false;
    };
}

if (location.href.indexOf('SignOn') > 0) {
    override_form_validation();
}

The last part executes the overridden form validation only if the page’s URL contains the text “SignOn”.

Publishing the Changes

Once we’re done modifying the JavaScript, we use a PowerShell console to publish the updated file back to AD FS.  Note that you need to do this on each AD FS server if you have multiple.

Set-AdfsWebTheme -TargetName Custom -AdditionalFileResource @{Uri="/adfs/portal/script/onload.js"; Path="c:\AdfsTheme\script\onload.js"}

[Image: publishing the updated theme via PowerShell]

This script will run on each ADFS page.

Adding additional script files and referencing them

If you would like to add separate script files to the custom theme, you can do this too.  Simply use PowerShell and the following command:

Set-AdfsWebTheme -TargetName Custom -AdditionalFileResource @{Uri='/adfs/portal/script/yourfile.js'; Path='c:\AdfsTheme\script\yourfile.js'}

To reference the script, use another PowerShell command to inject a reference to load the script where appropriate:

Set-AdfsGlobalWebContent -SignInPageDescriptionText "<script type=""text/javascript"" src=""/adfs/portal/script/yourfile.js""></script>"

There's also -SignOutPageDescriptionText as an option. Check out the cmdlet help documentation for more places to inject your own custom scripts.