Category Archives : Design Concepts and Code

This category is intended for code snippets, design or architecture entries or general programming thoughts.

Securing a Web API with ADFS 3.0 and JWT tokens


As APIs and web services become more and more prevalent, particularly in the Enterprise, there is an increasing need to look at ways to secure the more important interfaces, particularly if they enable access to sensitive data.

Recently, I’ve been investigating ways to secure ASP.NET Web APIs using Active Directory Federation Services (AD FS) version 3.0 (rather than Azure Active Directory) – which ships as a standard role in Windows Server 2012 R2.  Those who read this site regularly will not be surprised to find yet another ADFS article!

The Problem Defined

There are heaps of articles which explain how to secure a web application and a web API using Windows Identity Foundation (WIF), and with WS-Federation.  This suits scenarios where the user would authenticate in an interactive fashion with web applications and/or ADFS.

However, what if we want to cater for scenarios where interactive authentication (i.e. responding to a redirect to a web form or Windows Integrated Authentication) isn’t preferable – or even possible?  I’m talking about software-to-server, or software-to-API scenarios, where the software is manually (statically) configured – for example, setting a username and password à la Basic Authentication.

What we want is for the API consumer to obtain a Json Web Token (JWT) using a SOAP request (over secure transport) and then pass that JWT in the header of subsequent REST calls to the target Web API. The Web API should validate the JWT and obtain the user’s credentials, and then complete the request as the authenticated user.

Although this would work over (unencrypted) HTTP, HTTPS is strongly recommended for all communications between a client and services you host.

Can we do it?  Yes we can. 

Here’s what the solution looks like in diagram form:


In order to properly understand how this all fits together, it would help immensely if you have some prior knowledge and experience in the following:

  • Configuring ADFS Relying Parties (and working with ADFS),
  • PowerShell, or equivalent,
  • SOAP and REST,
  • ASP.NET Web APIs/.NET Framework 4.6 (or later),
  • Visual Studio 2013 or 2015

For simplicity, we’ll authenticate identities in Active Directory (as illustrated above).

The Solution – Part 1: Obtain a JWT Token

So I’m going to take serious liberties with an approach which is reasonably well documented in the following article:

The original article focuses on using a JWT with Azure AD, but the same approach works just fine as it turns out for on-premise ADFS Relying Parties (RPs).

You set up a relying party (RP) as per normal, although it doesn’t require WS-Fed or SAML configuration – because we’re not going to use either.  We’ll request a JWT token courtesy of ADFS 3.0’s lightweight OAuth2 implementation.  The script below accomplishes this by crafting a SOAP message and sending it to the appropriate ADFS endpoint, requesting a JWT using the supplied username and password.  Note that the endpoint specified in the $sendTo variable must be enabled.

# Originally found at
$ADFShost = "https://<your-adfs-server>"
$sendTo = "$ADFShost/adfs/services/trust/13/usernamemixed"
$username = "<domain>\<username>"
$password = "<password>"
$applyTo = "https://<rp-identifier>"
$tokenType = "urn:ietf:params:oauth:token-type:jwt"
$xml = @"
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
    <a:To s:mustUnderstand="1">$sendTo</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken u:Id="uuid-00000000-0000-0000-0000-000000000000-0">
        <o:Username>$username</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">$password</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>$applyTo</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
      <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
      <trust:TokenType>$tokenType</trust:TokenType>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>
"@
$tokenresponse = [xml] ($xml | Invoke-WebRequest -Uri $sendTo -Method Post -ContentType "application/soap+xml" -TimeoutSec 30).Content
$tokenString = $tokenresponse.Envelope.Body.RequestSecurityTokenResponseCollection.RequestSecurityTokenResponse.RequestedSecurityToken.InnerText
$token = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($tokenString))
$resource = "https://<your-rp-identifier>/<controller api>/<value>"
Invoke-RestMethod -Method Get -Uri $resource -Headers @{ "Authorization" = 'Bearer ' + $token }
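
Before wiring up the Web API, it can be handy to peek at what ADFS actually put inside the token.  This isn’t part of the original script – just a small sketch that base64url-decodes the JWT payload so you can inspect the issued claims:

# Not part of the original script - a quick way to eyeball the claims inside the JWT.
# A JWT is three base64url-encoded segments separated by dots; the second segment is the payload.
$payload = $token.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json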

Assuming you configure the variable assignments at the start of this script, have configured the target RP, and supply valid user credentials, you ought to be able to run the script and obtain a valid JWT.  You may need to enable the relevant endpoints in ADFS if they are disabled – both the OAuth2 endpoint and the username/password (credentials) endpoint used above:
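
If you prefer PowerShell to the management console, something along these lines should do it (a sketch assuming the ADFS PowerShell module on the federation server; endpoint changes typically require an AD FS service restart):

# List endpoint paths and whether they're enabled (the OAuth2 endpoint appears in this list too)
Get-AdfsEndpoint | Select-Object AddressPath, Enabled, Proxy

# Enable the WS-Trust 1.3 username/password endpoint used by the script above
Enable-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/usernamemixed"

# Endpoint changes take effect after restarting the AD FS service
Restart-Service adfssrv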


You’ll need to configure the Web API at the end to handle the ADFS issued JWT, which we’ll look into shortly.

The Solution – Part 2: Accept and validate a JWT Token

The next part took me a while, and then I somehow stumbled upon a dated official Microsoft sample which demonstrates exactly how to validate an ADFS-issued JWT token!  Here’s the address:

I strongly recommend you download and extract the sample linked above.  For one thing, it’ll save me from having to list the various NuGet packages you’ll need to get this solution working.

Note that although this sample relates to Azure Active Directory, it works just fine with on-premise ADFS.  The key is in implementing functionality which strips the Authorization: Bearer <JWT> value out of the request header.

If we look at the sample’s Web API implementation (TelemetryServiceWebAPI), all we really need to get working is the Global.asax.cs implementation of a global request handler.

Note that the samples are distributed under the following license:

//    Copyright 2013 Microsoft Corporation
//    Licensed under the Apache License, Version 2.0 (the "License");

The Application_Start configures a token handler:

GlobalConfiguration.Configuration.MessageHandlers.Add(new TokenValidationHandler());

This registers a token validation class, shown below.  Note that I don’t believe you have to register the Relying Party with ADFS as a client, i.e. you don’t require a client id or a client secret – this has since been confirmed.

// See the downloaded sample for the required using directives and NuGet packages
internal class TokenValidationHandler : DelegatingHandler
{
    const string Audience = "https://<rp-identifier>";
    // Domain name or Tenant name
    const string DomainName = "https://<rp-identifier>";

    static string _issuer = string.Empty;
    static List<X509SecurityToken> _signingTokens = null;
    static DateTime _stsMetadataRetrievalTime = DateTime.MinValue;

    // SendAsync is used to validate incoming requests contain a valid access token, and sets the current user identity
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string jwtToken;
        string issuer;
        string stsMetadataAddress = string.Format(CultureInfo.InvariantCulture, "https://<your-adfs-server>/federationmetadata/2007-06/federationmetadata.xml", DomainName);

        List<X509SecurityToken> signingTokens;
        using (HttpResponseMessage responseMessage = new HttpResponseMessage())
        {
            if (!TryRetrieveToken(request, out jwtToken))
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
            }

            try
            {
                // Get tenant information that's used to validate incoming jwt tokens
                GetTenantInformation(stsMetadataAddress, out issuer, out signingTokens);
            }
            catch (WebException)
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
            }
            catch (InvalidOperationException)
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
            }

            JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler()
            {
                // for demo purposes certificate validation is turned off. Please note that this shouldn't be done in production code.
                CertificateValidator = X509CertificateValidator.None
            };

            TokenValidationParameters validationParameters = new TokenValidationParameters
            {
                AllowedAudience = Audience,
                ValidIssuer = issuer,
                SigningTokens = signingTokens
            };

            try
            {
                // Validate token
                ClaimsPrincipal claimsPrincipal = tokenHandler.ValidateToken(jwtToken, validationParameters);

                // set the ClaimsPrincipal on the current thread.
                Thread.CurrentPrincipal = claimsPrincipal;

                // set the ClaimsPrincipal on HttpContext.Current if the app is running in web hosted environment.
                if (HttpContext.Current != null)
                {
                    HttpContext.Current.User = claimsPrincipal;
                }

                return base.SendAsync(request, cancellationToken);
            }
            catch (SecurityTokenValidationException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (SecurityTokenException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (ArgumentException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (FormatException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
        }
    }

        // Reads the token from the authorization header on the incoming request
        static bool TryRetrieveToken(HttpRequestMessage request, out string token)
        {
            token = null;

            if (!request.Headers.Contains("Authorization"))
            {
                return false;
            }

            string authzHeader = request.Headers.GetValues("Authorization").First<string>();

            // Verify Authorization header contains 'Bearer' scheme
            token = authzHeader.StartsWith("Bearer ", StringComparison.Ordinal) ? authzHeader.Split(' ')[1] : null;

            if (null == token)
            {
                return false;
            }

            return true;
        }

        /// <summary>
        /// Parses the federation metadata document and gets issuer Name and Signing Certificates
        /// </summary>
        /// <param name="metadataAddress">URL of the Federation Metadata document</param>
        /// <param name="issuer">Issuer Name</param>
        /// <param name="signingTokens">Signing Certificates in the form of X509SecurityToken</param>
        static void GetTenantInformation(string metadataAddress, out string issuer, out List<X509SecurityToken> signingTokens)
        {
            signingTokens = new List<X509SecurityToken>();

            // The issuer and signingTokens are cached for 24 hours. They are updated if any of the conditions in the if condition is true.
            if (DateTime.UtcNow.Subtract(_stsMetadataRetrievalTime).TotalHours > 24
                || string.IsNullOrEmpty(_issuer)
                || _signingTokens == null)
            {
                MetadataSerializer serializer = new MetadataSerializer()
                {
                    // turning off certificate validation for demo. Don't use this in production code.
                    CertificateValidationMode = X509CertificateValidationMode.None
                };
                MetadataBase metadata = serializer.ReadMetadata(XmlReader.Create(metadataAddress));

                EntityDescriptor entityDescriptor = (EntityDescriptor)metadata;

                // get the issuer name
                if (!string.IsNullOrWhiteSpace(entityDescriptor.EntityId.Id))
                {
                    _issuer = entityDescriptor.EntityId.Id;
                }

                // get the signing certs
                _signingTokens = ReadSigningCertsFromMetadata(entityDescriptor);

                _stsMetadataRetrievalTime = DateTime.UtcNow;
            }

            issuer = _issuer;
            signingTokens = _signingTokens;
        }

        static List<X509SecurityToken> ReadSigningCertsFromMetadata(EntityDescriptor entityDescriptor)
        {
            List<X509SecurityToken> stsSigningTokens = new List<X509SecurityToken>();

            SecurityTokenServiceDescriptor stsd = entityDescriptor.RoleDescriptors.OfType<SecurityTokenServiceDescriptor>().First();

            if (stsd != null && stsd.Keys != null)
            {
                IEnumerable<X509RawDataKeyIdentifierClause> x509DataClauses = stsd.Keys.Where(key => key.KeyInfo != null && (key.Use == KeyType.Signing || key.Use == KeyType.Unspecified)).
                                                             Select(key => key.KeyInfo.OfType<X509RawDataKeyIdentifierClause>().First());

                stsSigningTokens.AddRange(x509DataClauses.Select(clause => new X509SecurityToken(new X509Certificate2(clause.GetX509RawData()))));
            }
            else
            {
                throw new InvalidOperationException("There is no RoleDescriptor of type SecurityTokenServiceType in the metadata");
            }

            return stsSigningTokens;
        }
    }

Once this code is in place, you can decorate ApiControllers and methods as per normal with the [Authorize] attribute to force the authentication requirement.

You can access the identity information from the User object at runtime.  For example, here’s a standard out-of-the-box values controller thrown into the sample (I added this to a local copy; it is not part of the official sample):

namespace Microsoft.Samples.Adal.TelemetryServiceWebApi.Controllers
{
    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return User.Identity.Name;
        }
    }
}

Assuming that you have mapped the user’s UPN (user principal name) to “name” in the Relying Party claim rules, you’d see the user’s fully qualified username in the response when invoking this GET request and passing a valid ADFS-issued JWT.
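
For completeness, here’s one way to express that UPN-to-Name mapping as an issuance transform rule via PowerShell – the rule text is illustrative, and note that -IssuanceTransformRules replaces the RP’s existing rule set, so merge it with whatever rules you already have:

$rules = @'
@RuleName = "Map UPN to Name"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", Value = c.Value);
'@
# Caution: this overwrites the RP's existing issuance transform rules
Set-AdfsRelyingPartyTrust -TargetIdentifier "https://<rp-identifier>" -IssuanceTransformRules $rules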


Here’s a PowerShell script successfully making a GET request with an ADFS issued JWT:


Note that in this particular case, I successfully tested the approach against ADFS vNext (4.0)!

The future approach may well lie in the following sample, which is listed as the future direction:

Tips on Troubleshooting

Always check the ADFS configuration, and ensure that your endpoints are correct.  Keep an eye on the ADFS event logs, as RP misconfigurations usually end up as failed requests there.


Even something as simple as a trailing forward slash in the RP identifier can ruin the token validation (above).

Ensure the appropriate ADFS endpoints are enabled, and if you can, use HTTPS for your identifiers for best results.  In our test lab we use internal DNS and an Enterprise CA (rather than self-signed certificates) to handle site bindings.

Lastly, don’t be afraid to debug the Web API (locally or remotely), remembering that you can configure an RP in ADFS to be localhost.


Well, I hope this has been an informative article for you.  I’m aiming to reproduce this solution in its entirety later this week in a Greenfields environment, so this article may be subject to further edits.

I was quite happy to see a complete end-to-end scenario working perfectly in our Development environment.  In theory, this approach should work without too much configuration overhead, but the usual disclaimers apply: it works on my machine.

Feel free to post comments if you have questions or would like to know more.

Further Reading

It took a lot of reading to get this far.  Here are some very helpful articles/links which provided much needed hints and pointers.


JWT in ADFS overview:

Helpful Links:

High Availability: MassTransit 2.x with Clustered MSMQ – Part 1

So this article isn’t going to be for everyone; however, I suspect it will be somewhat appealing for anyone who is looking at Windows Server 2012 R2’s Failover Clustering capability.


I’m going to write this in a series of posts, as I think there’s also some merit in looking at diagnostic approaches to finding out what the heck is going wrong with a Failover Cluster, rather than focusing on an ideal end state in isolation.

If you’re not interested in the MassTransit parts, skip this introduction and check out Part 2 (coming soon!) which will focus on Clustered MSMQ roles and diagnosis.


MassTransit: In a Nutshell

We’re taking a view of Failover Clustering from the point of view of MassTransit, which is an open source implementation of a lightweight message queue-backed Service Bus (of sorts).  Here’s the official blurb from the GitHub page:

MassTransit is a free, open-source distributed application framework for .NET. MassTransit makes it easy to create applications and services that leverage message-based, loosely-coupled asynchronous communication for higher availability, reliability, and scalability.

Some documentation is also available here.

What are we focusing on?

Well, MassTransit, from version 3.x onwards, only supports RabbitMQ and the Azure Service Bus.  As we had initiated implementation in late 2014 with version 2, we flouted the abandonment of MSMQ and boldly decided to use it anyway; mainly because it is an out-of-the-box, first class service as standard with Windows Server 2012 R2 (and earlier versions).  So this article won’t feature RabbitMQ or Azure Service Bus, but I might tackle that topic at a later time.

To eliminate a ton of extremely complex code, support for MSMQ was completely ripped out [1]

Therefore, this series of articles pertains only to MassTransit version 2.x and MSMQ.

We’re also using subscriptions, which means we are using the MassTransit subscriptions queue, and message consumers & subscribers register with a runtime service before interacting with the message bus.  This is important to note, because the objective of using a clustered queue is to mitigate service outages by shifting the active queue.

How does MassTransit work, in basic terms?

We’re working with two categories of (.NET) application: message publishers and message consumers.  An application can do one or both – in other words, it can publish, subscribe, or publish and subscribe.


There’s one caveat: both need to use a local MSMQ, which essentially serves two purposes – it acts as a local staging location for retrying the publishing of messages, and it holds unprocessed (and processed) messages for consumers (in case a consumer is offline but has a persistent subscription).  There’s also an error queue which messages will land in if they cannot be processed successfully.
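
To make the publisher/consumer relationship a little more concrete, here’s a rough sketch of what a MassTransit 2.x endpoint looks like against MSMQ.  The queue addresses and message type are illustrative, and the subscription service address is where the clustered MSMQ role will come into play later:

using System;
using MassTransit;

public class OrderSubmitted
{
    public string OrderId { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Each process has its own local MSMQ queue, and registers with the
        // subscription (runtime) service before interacting with the bus.
        Bus.Initialize(sbc =>
        {
            sbc.UseMsmq();
            sbc.VerifyMsmqConfiguration();   // create local queues if they don't already exist
            sbc.ReceiveFrom("msmq://localhost/my_application_queue");
            sbc.UseSubscriptionService("msmq://<clustered-msmq-name>/mt_subscriptions");

            // Consumer side: handle OrderSubmitted messages published anywhere on the bus
            sbc.Subscribe(subs => subs.Handler<OrderSubmitted>(msg =>
                Console.WriteLine("Received order {0}", msg.OrderId)));
        });

        // Publisher side: any process configured against the same subscription service can publish
        Bus.Instance.Publish(new OrderSubmitted { OrderId = "12345" });

        Console.ReadLine();
        Bus.Instance.Dispose();
    }
}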

What is your High Availability goal?

Essentially, we want to have the subscription queue highly available.  In the event of an unplanned outage, the queue will move to the next available node in the cluster.  The clustered MSMQ role also has a dedicated network name & IP address which means that it acts as a central address point – no matter which machine acts as the active host – i.e. no need for publishers or subscribers to be aware of the cluster itself.

Since the majority of message consumers in the design also reside on the same server, the act of failing over the HA queue would also fail over the message consumers.

This approach doesn’t rule out scaling horizontally at a later stage if we need to, there’s a plethora of options which can be made available, including some home brew options from MassTransit itself in the form of something called the Distributor.

Solution Outline

So if you look at my approach, the intention is to cluster two or more Windows Server hosts, and then stick a bunch of Windows Services on each node in the cluster, making them Cluster Resources. The simplistic model is to have the services started only on the active node.

Here’s an illustration of the target solution, with a two node Failover Cluster:

Conceptual Design

The MassTransit Runtime Service manages message subscriptions, served from the clustered MSMQ role.  None of this will really make sense until you see the deployment in context.  The following is a sample diagram of a typical DMZ/LAN architecture:



So that’s the essential scope of my MassTransit HA deployment.  What will come in the next article is a closer inspection of how High Availability failover will function, and the mechanics behind it.

If you were looking for Clustered MSMQ guidance, Part 2 is for you!

Programmatically Reading Event Logs

Welcome, 2015 – may you be an improvement on your predecessor.

Today’s article focuses on the deceptively non-trivial task of reading from the Windows Event Logs using the Microsoft .NET Framework.  For those who haven’t looked there in a while, here’s a quick look at the Event Viewer:

The Windows Event Viewer

Now there are the usual suspects like the Application, Security and System logs of course, but on more recent editions of Windows, you might make note of the category beneath the standard “Windows Logs”, namely “Applications and Services Logs”.  We can read from those as well as the standard logs.


Windows Event Log Tree

My Scenario – Viewing Log File Content

At the moment, we don’t have a logging approach which consolidates system logs and rolling log files into a single location.  While we are waiting for that capability, I decided to quickly roll an ASP.NET MVC site which would effectively publish the content of local log files for remote users to view without the hassle of having to log on or RDP to the machine.

The “Log View” web application needed to do the following:

  • Through configuration, read log files contained in (one to many) specified local directories
  • Through configuration, read (one or more) log files based on a specific path/filename
  • Read the Security, System and Application system logs, displaying the most recent 100 entries
  • Through configuration, read the AD FS admin log when installed on an AD FS server
  • Allow anonymous authentication

This web application is meant for development/test environments, hence the anonymous authentication requirement.

Index page

Different Approaches – Reading Logs

The standard Windows Logs – a well beaten path – have special support in the .NET Framework.  Provided you have the appropriate permissions, reading log entries is relatively straightforward:

var eventLogItem = new System.Diagnostics.EventLog("Application");
var numberOfItems = eventLogItem.Entries.Count;

Of course, reading from the log is just as simple:

foreach (EventLogEntry entry in eventLogItem.Entries)
{
    // read the individual event
}

Reading the system log

You don’t seem to require any special permissions as a local user to read from the Application and System logs – a standard user account has read permission by default, even on Windows Server 2012 R2.  This does not apply to the Security log, which would seem to require special permissions or policy – see more on this below.

However, things change when you want to read from a non-standard system log.  In my case, I wished to read from the AD FS/Admin log on a Windows Server 2012 R2 machine which had the Active Directory Federation Services (AD FS 3.0) role installed.

Reading Non-system Logs

Once we veer away from the standard ‘System’ and ‘Application’ logs, the implementation gets a tad trickier – and more brittle in terms of functionality.  You have to abandon the friendly EventLog class and instead have to use the EventLogQuery class, as below:

string LogName = "AD FS/Admin";
var query = new EventLogQuery(LogName, PathType.LogName, "*[System/Level=2]");
query.ReverseDirection = true;

Note that the “log name” seems to need to match the “path” of the log if it resides under subfolders in the “Applications and Services Logs” section.  I’ve also used the ReverseDirection property to show the most recent entries first.  To actually read entries from the log, you invoke the tastefully named EventLogReader class, like so:

using (var logReader = new EventLogReader(query))
{
    // implementation here
}

You might be wondering how one would consume the EventLogReader.  Happily, I can provide you with the implementation I’ve put together for my AD FS log reader:

var sb = new StringBuilder();
for (EventRecord eventInstance = logReader.ReadEvent();
       null != eventInstance; eventInstance = logReader.ReadEvent())
{
    sb.AppendFormat("Event ID: {0}<br/>", eventInstance.Id);
    sb.AppendFormat("Publisher: {0}<br/>", eventInstance.ProviderName);
    sb.AppendFormat("Time Created: {0}<br/>", eventInstance.TimeCreated.Value);

    try
    {
        sb.AppendFormat("Description: {0}<br/>", eventInstance.FormatDescription());
    }
    catch (EventLogException e)
    {
        // The event description contains parameters, and no parameters were
        // passed to the FormatDescription method, so an exception is thrown.
        sb.AppendFormat("EventLogException: {0}", e.Message);
    }
}
There are obviously more properties available, don’t limit yourself to what I’ve included above.  Also note that it’s possible to have an exception thrown when invoking the FormatDescription() function – it’s worth catching unless you want your logic to die when it can be reasonably anticipated.

Errors at Runtime

The first few times I deployed and ran my web application, I encountered some nasty exceptions being thrown.  I was running with the default ApplicationPool identity, which I decided I needed to replace with a dedicated local user.  I created a local user called ‘svc_adfs_logs’ and made it a member of the local IIS_IUSRS group, as well as making it the identity of my web application’s application pool.



The errors occurred when accessing the Security and the AD FS logs.  I had to dig deeper.

Permissions and Settings

This is where things get interesting – aside from the standard System and Application logs, pretty much every other log I tried to read from produced a permissions – or registry – issue.

File Permissions (ACLs)

One place to check is the file permissions themselves.  The logs are files residing under the Windows directory by default, usually at this path: C:\Windows\System32\winevt\Logs

If you’re unsure, in Event Viewer just right click on the log name and select Properties:

Log Properties in Event Viewer

In my case, I assigned basic read access to the app pool identity of my web application:

Assigning read access to the web application identity
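
If you’d rather script the ACL change than click through the dialog, something like the following should work from an elevated prompt.  Note the .evtx file name is my assumption of how the “AD FS/Admin” channel maps to a file on disk (the forward slash becomes %4) – confirm the actual path via the log’s Properties dialog shown above:

# Grant the app pool identity read access to the AD FS admin log file
# (file name assumed from the channel name; verify it in the log's properties)
icacls "C:\Windows\System32\winevt\Logs\AD FS%4Admin.evtx" /grant "svc_adfs_logs:(R)"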

Group Membership

The next obvious step is to ensure that your process’s identity (the account which the application is running under) is a member of a local, built-in security group called (aptly) ‘Event Log Readers’.  You administer this membership via the local Groups in Computer Management:

Ensure your application’s identity/account is a member of the ‘Event Log Readers’ group
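
The same membership can be granted from an elevated prompt rather than through Computer Management – the account name below is the example identity created earlier:

# Add the web application's identity to the built-in Event Log Readers group
net localgroup "Event Log Readers" svc_adfs_logs /add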

Which should resolve the following exception (if you encounter it):

Attempted to perform an unauthorized operation.

at System.Diagnostics.Eventing.Reader.EventLogException.Throw(Int32 errorCode) at System.Diagnostics.Eventing.Reader.NativeWrapper.EvtQuery(EventLogHandle session, String path, String query, Int32 flags) at System.Diagnostics.Eventing.Reader.EventLogReader..ctor(EventLogQuery eventQuery, EventBookmark bookmark) at LogView.Controllers.LogsController.ReadNonSystemLog(LogModel item)


Well, aside from writing some pretty simple boilerplate code, it was really quite easy to put together a well-articulated log file viewing web application.  I may consider publishing the source for this web application at a later time, once I’ve cleaned up the implementation a little bit (it’s a bit messy).

You should never be assigning local Administrator rights just to read from or write to system logs – it’s worth the time investigating permissions and policies before going to those kinds of extremes.

There was one last avenue which I was exploring which involved setting SDDLs in the registry, but it turns out this was not necessary.  I’ve included the links though in case you’d like to find out more.

Further Reading/References

Basic “how to” query event messages –
Permissions to read event logs – 
How to set event log security –

Which leads us to….

Introduction to SDDL –
The file ACL trick to get an SDDL: