Category Archives : Programming

This category is designed for entries relating to software development.

Securing a Web API with ADFS 3.0 and JWT tokens


As APIs and web services become more and more prevalent, particularly in the enterprise, there is an increasing need to secure the more important interfaces, especially those which provide access to sensitive data.

Recently, I’ve been investigating ways to secure ASP.NET Web APIs using Active Directory Federation Services (AD FS) version 3.0 (rather than Azure Active Directory) – which ships as a standard role in Windows Server 2012 R2.  Those who read this site regularly will not be surprised to find yet another ADFS article!

The Problem Defined

There are heaps of articles explaining how to secure a web application and a Web API using Windows Identity Foundation (WIF) and WS-Federation.  This suits scenarios where the user authenticates interactively with web applications and/or ADFS.

However, what if we want to cater for scenarios where interactive authentication (i.e. responding to a redirect to a web form, or Windows Integrated Authentication) isn’t preferable, or even possible?  I’m talking about software-to-server or software-to-API scenarios, where the consuming software is configured manually (statically), e.g. by setting a username and password a-la Basic Authentication.

What we want is for the API consumer to obtain a JSON Web Token (JWT) using a SOAP request (over secure transport) and then pass that JWT in the header of subsequent REST calls to the target Web API. The Web API should validate the JWT, establish the user’s identity from it, and then complete the request as the authenticated user.

Although this would work over (unencrypted) HTTP, HTTPS is strongly recommended for all communications between a client and services you host.

Can we do it?  Yes we can. 

Here’s what the solution looks like in diagram form:


In order to properly understand how this all fits together, it would help immensely if you have some prior knowledge and experience in the following:

  • Configuring ADFS Relying Parties (and working with ADFS),
  • PowerShell, or equivalent,
  • SOAP and REST,
  • ASP.NET Web APIs/.NET Framework 4.6 (or later),
  • Visual Studio 2013 or 2015

For simplicity, we’ll authenticate identities in Active Directory (as illustrated above).

The Solution – Part 1: Obtain a JWT Token

So I’m  going to take serious liberties with an approach which is reasonably well documented in the following article:

The original article focuses on using a JWT with Azure AD but, as it turns out, the same approach works just fine for on-premises ADFS Relying Parties (RPs).

You set up a relying party (RP) as per normal, although it doesn’t require WS-Fed or SAML configuration – because we’re not going to use either.  We’ll request a JWT token, courtesy of ADFS 3.0’s lightweight OAuth2/JWT support.  The script below accomplishes this by crafting a WS-Trust SOAP message and sending it to the ADFS username/password endpoint, requesting a JWT for the specified RP using the credentials supplied.  Note that the endpoint specified in the $sendTo variable must be enabled in ADFS.

# Originally found at
$ADFShost = "https://<your-adfs-server>"
$sendTo = "$ADFShost/adfs/services/trust/13/usernamemixed"
$username = "<domain>\<username>"
$password = "<password>"
$applyTo = "https://<rp-identifier>"
$tokenType = "urn:ietf:params:oauth:token-type:jwt"
$xml = @"
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
    <a:To s:mustUnderstand="1">$sendTo</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken u:Id="uuid-00000000-0000-0000-0000-000000000000-0">
        <o:Username>$username</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">$password</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>$applyTo</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
      <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
      <trust:TokenType>$tokenType</trust:TokenType>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>
"@
$tokenresponse = [xml]($xml | Invoke-WebRequest -Uri $sendTo -Method Post -ContentType "application/soap+xml" -TimeoutSec 30 -UseBasicParsing).Content
$tokenString = $tokenresponse.Envelope.Body.RequestSecurityTokenResponseCollection.RequestSecurityTokenResponse.RequestedSecurityToken.InnerText
$token = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($tokenString))
$resource = "https://<your-rp-identifier>/<controller api>/<value>"
Invoke-RestMethod -Method Get -Uri $resource -Header @{ "Authorization" = "Bearer " + $token }

Assuming you properly configure the variable assignments at the start of this script, have configured the target RP, and provide valid user credentials, you ought to be able to run this script and obtain a valid JWT.  You may need to enable the OAuth2 endpoint and the username/password (credentials) endpoint in ADFS if they are disabled:
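As a quick sanity check, you can decode the token’s payload segment without validating the signature. Here’s a small Python sketch of the mechanics; the token below is a hand-crafted stand-in with placeholder claim values and URLs, not a real ADFS-issued token:

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Base64url-decode one JWT segment, restoring any stripped '=' padding."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def encode_jwt_segment(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# A hand-crafted stand-in token (header.payload.signature) with placeholder
# claim values -- not a real ADFS-issued token.
token = ".".join([
    encode_jwt_segment({"alg": "RS256", "typ": "JWT"}),
    encode_jwt_segment({"aud": "https://<rp-identifier>",
                        "iss": "http://<your-adfs-server>/adfs/services/trust",
                        "upn": "user@example.com"}),
    "fake-signature",
])

claims = decode_jwt_segment(token.split(".")[1])
print(claims["aud"])  # -> https://<rp-identifier>
```

If the aud and iss claims don’t line up with your RP identifier and ADFS issuer, validation on the Web API side will fail no matter what else you do.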


You’ll need to configure the Web API at the end to handle the ADFS issued JWT, which we’ll look into shortly.

The Solution – Part 2: Accept and validate a JWT Token

The next part took me a while, and then I somehow stumbled upon a dated official Microsoft sample which demonstrates exactly how to validate an ADFS-issued JWT token!  Here’s the address:

I strongly recommend you download and extract the sample linked above.  For one thing, it’ll save me from having to list the various NuGet packages you’ll need to get this solution working.

Note that although this sample relates to Azure Active Directory, it works just fine with on-premises ADFS.  The key is implementing functionality which strips the Authorization: Bearer <JWT> value out of the request header.

If we look at the sample’s Web API implementation (TelemetryServiceWebAPI), all we really need to get working is the Global.asax.cs implementation of a global request handler.

Note that the samples are distributed under the following license:

//    Copyright 2013 Microsoft Corporation
//    Licensed under the Apache License, Version 2.0 (the "License");

The Application_Start method configures a token handler:

GlobalConfiguration.Configuration.MessageHandlers.Add(new TokenValidationHandler());

This invokes the token validation class shown below.  Note that you don’t have to register the Web API as a client with ADFS, i.e. you don’t require a client id or a client secret (now confirmed).

internal class TokenValidationHandler : DelegatingHandler
{
    const string Audience = "https://<rp-identifier>";
    // Domain name or Tenant name
    const string DomainName = "https://<rp-identifier>";

    static string _issuer = string.Empty;
    static List<X509SecurityToken> _signingTokens = null;
    static DateTime _stsMetadataRetrievalTime = DateTime.MinValue;

    // SendAsync is used to validate incoming requests contain a valid access token, and sets the current user identity
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string jwtToken;
        string issuer;
        string stsMetadataAddress = "https://<your-adfs-server>/federationmetadata/2007-06/federationmetadata.xml";

        List<X509SecurityToken> signingTokens;

        if (!TryRetrieveToken(request, out jwtToken))
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
        }

        try
        {
            // Get tenant information that's used to validate incoming jwt tokens
            GetTenantInformation(stsMetadataAddress, out issuer, out signingTokens);
        }
        catch (WebException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
        }
        catch (InvalidOperationException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
        }

        JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler()
        {
            // For demo purposes certificate validation is turned off. Please note that this shouldn't be done in production code.
            CertificateValidator = X509CertificateValidator.None
        };

        TokenValidationParameters validationParameters = new TokenValidationParameters
        {
            AllowedAudience = Audience,
            ValidIssuer = issuer,
            SigningTokens = signingTokens
        };

        try
        {
            // Validate token
            ClaimsPrincipal claimsPrincipal = tokenHandler.ValidateToken(jwtToken, validationParameters);

            // Set the ClaimsPrincipal on the current thread.
            Thread.CurrentPrincipal = claimsPrincipal;

            // Set the ClaimsPrincipal on HttpContext.Current if the app is running in a web hosted environment.
            if (HttpContext.Current != null)
            {
                HttpContext.Current.User = claimsPrincipal;
            }

            return base.SendAsync(request, cancellationToken);
        }
        catch (SecurityTokenValidationException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
        }
        catch (SecurityTokenException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
        }
        catch (ArgumentException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
        }
        catch (FormatException)
        {
            return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
        }
    }

    // Reads the token from the authorization header on the incoming request
    static bool TryRetrieveToken(HttpRequestMessage request, out string token)
    {
        token = null;

        if (!request.Headers.Contains("Authorization"))
        {
            return false;
        }

        string authzHeader = request.Headers.GetValues("Authorization").First<string>();

        // Verify Authorization header contains 'Bearer' scheme
        token = authzHeader.StartsWith("Bearer ", StringComparison.Ordinal) ? authzHeader.Split(' ')[1] : null;

        return token != null;
    }

    /// <summary>
    /// Parses the federation metadata document and gets issuer Name and Signing Certificates
    /// </summary>
    /// <param name="metadataAddress">URL of the Federation Metadata document</param>
    /// <param name="issuer">Issuer Name</param>
    /// <param name="signingTokens">Signing Certificates in the form of X509SecurityToken</param>
    static void GetTenantInformation(string metadataAddress, out string issuer, out List<X509SecurityToken> signingTokens)
    {
        signingTokens = new List<X509SecurityToken>();

        // The issuer and signingTokens are cached for 24 hours. They are refreshed if any of the conditions below is true.
        if (DateTime.UtcNow.Subtract(_stsMetadataRetrievalTime).TotalHours > 24
            || string.IsNullOrEmpty(_issuer)
            || _signingTokens == null)
        {
            MetadataSerializer serializer = new MetadataSerializer()
            {
                // Turning off certificate validation for the demo. Don't use this in production code.
                CertificateValidationMode = X509CertificateValidationMode.None
            };
            MetadataBase metadata = serializer.ReadMetadata(XmlReader.Create(metadataAddress));

            EntityDescriptor entityDescriptor = (EntityDescriptor)metadata;

            // Get the issuer name
            if (!string.IsNullOrWhiteSpace(entityDescriptor.EntityId.Id))
            {
                _issuer = entityDescriptor.EntityId.Id;
            }

            // Get the signing certs
            _signingTokens = ReadSigningCertsFromMetadata(entityDescriptor);

            _stsMetadataRetrievalTime = DateTime.UtcNow;
        }

        issuer = _issuer;
        signingTokens = _signingTokens;
    }

    static List<X509SecurityToken> ReadSigningCertsFromMetadata(EntityDescriptor entityDescriptor)
    {
        List<X509SecurityToken> stsSigningTokens = new List<X509SecurityToken>();

        SecurityTokenServiceDescriptor stsd = entityDescriptor.RoleDescriptors.OfType<SecurityTokenServiceDescriptor>().FirstOrDefault();

        if (stsd != null && stsd.Keys != null)
        {
            IEnumerable<X509RawDataKeyIdentifierClause> x509DataClauses =
                stsd.Keys.Where(key => key.KeyInfo != null && (key.Use == KeyType.Signing || key.Use == KeyType.Unspecified))
                         .Select(key => key.KeyInfo.OfType<X509RawDataKeyIdentifierClause>().First());

            stsSigningTokens.AddRange(x509DataClauses.Select(clause => new X509SecurityToken(new X509Certificate2(clause.GetX509RawData()))));
        }
        else
        {
            throw new InvalidOperationException("There is no RoleDescriptor of type SecurityTokenServiceType in the metadata");
        }

        return stsSigningTokens;
    }
}

Once this code is in place, you can decorate ApiControllers and methods as per normal with the [Authorize] attribute to force the authentication requirement.

You can access the identity information from the User object at runtime, e.g. if you throw a standard out-of-the-box values controller into the sample (I added this to a local copy; it is not part of the official sample):

namespace Microsoft.Samples.Adal.TelemetryServiceWebApi.Controllers
{
    [Authorize]
    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return User.Identity.Name;
        }
    }
}

Assuming that you have mapped the user’s UPN (user principal name) to the Name claim in the Relying Party claim rules, you’d see the user’s UPN in the response when invoking this GET request and passing a valid ADFS-issued JWT.
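For reference, a transform rule in the ADFS claim rule language along these lines performs that mapping. This is a sketch only; the claim type URIs shown are the standard UPN and Name types, but adjust to suit your own rules:

```
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", Value = c.Value);
```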


Here’s a PowerShell script successfully making a GET request with an ADFS issued JWT:


Note that in this particular case, I successfully tested the approach against ADFS vNext (4.0)!

The future approach may well lie in the following sample, which is listed as the future direction:

Tips on Troubleshooting

Always check the ADFS configuration, and ensure that your endpoints are correct.  Keep an eye on the ADFS event logs, as RP misconfigurations usually end up as failed requests there.


Even something as simple as a trailing forward slash in the RP identifier can break token validation.
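The reason is that audience validation is an exact string comparison. A toy illustration (not the validator’s actual code):

```python
# Audience validation is an exact string comparison, so "https://rp" and
# "https://rp/" are two different audiences as far as validation is concerned.
def audience_matches(token_audience: str, configured_audience: str) -> bool:
    return token_audience == configured_audience

assert audience_matches("https://rp.example.com", "https://rp.example.com")
assert not audience_matches("https://rp.example.com/", "https://rp.example.com")
```

So if your Web API is configured with a slash and the RP identifier in ADFS has none (or vice versa), every otherwise-valid token fails with an audience error.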

Ensure the appropriate ADFS endpoints are enabled, and if you can, secure your identifiers using HTTPS for best results.  In our test lab we use internal DNS and an Enterprise CA (rather than self-signed certificates) to handle site bindings.

Lastly, don’t be afraid to debug the Web API (locally or remotely), remembering that you can configure an RP in ADFS to be localhost.


Well, I hope this has been an informative article for you.  I’m aiming to reproduce this solution in its entirety later this week in a Greenfields environment, so this article may be subject to further edits.

I was quite happy to see a complete end-to-end scenario working perfectly in our Development environment.  In theory, this approach should work without too much configuration overhead, but the usual disclaimers apply: it works on my machine.

Feel free to post comments if you have questions or would like to know more.

Further Reading

It took a lot of reading to get this far.  Here are some very helpful articles/links which provided much needed hints and pointers.


JWT in ADFS overview:

Helpful Links:

Preparing for ASP.NET vNext and Visual Studio 2015

Happy Thanksgiving to folks in the USA.

I’ve finally taken the plunge and decided to get stuck into the recently released Release Candidate (RC) of ASP.NET 5.  Prior to today, I’d stuck with the RTM version of Visual Studio 2015 which insulated me from some of the changes which are on the horizon.

A few months ago, I’d managed to put together a working (live) solution using VS 2015 and the new Web Projects, and you can see it here at

Whilst this was handy experience, it barely prepared me for the massive changes to the development environment which ASP.NET 5 RC requires.  This article contains my experiences in getting a Web API project compiled and run when consuming ASP.NET 5 RC packages.

Git Support

Whether you use GitHub, Team Foundation Server source control or no source control at all, you’ll want Git support in your dev environment anyway.  A lot of PowerShell scripts and commands pull and clone from Git repositories, and command-line integration, IMHO, is essential.  If you haven’t installed Git support with Visual Studio 2015, now’s the time to do so.

Install Git/GitHub support when you install Visual Studio 2015 (or modify your install)


You can also download the Git tools for Windows, and support for Git in PowerShell, here:

Speaking of PowerShell….

Preparing PowerShell

Enable PowerShell script execution.  You’ll probably be working with PowerShell more than you have in the past, even if you aren’t writing the script.  You’ll certainly be using PowerShell commands, at a minimum inside the Package Manager Console inside VS 2015.

Open a PowerShell console as Administrator, then: Set-ExecutionPolicy Unrestricted

If you get the following error when loading the Package Manager Console inside Visual Studio 2015:

“Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a more specific scope. Due to the override, your shell will retain its current effective execution policy of Unrestricted. Type "Get-ExecutionPolicy -List" to view your execution policy settings. For more information please see "Get-Help Set-ExecutionPolicy".”

Here’s my PowerShell Execution Policy on a Workgroup-based computer:


..and on a Domain-joined machine with a Group Policy Object applied:


It’s likely caused by a Group Policy Object (GPO) applying a domain-level PowerShell execution policy.  Even if you modify and update Group Policy, this error condition may persist.  Based on an article here:

A registry hack will get you past this annoying issue:

Windows Registry Editor Version 5.00
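The values involved live under the machine policy key. On my understanding, the following fragment is what the hack sets; treat the exact key path and value names as assumptions to verify against the article above before importing:

```
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\PowerShell]
"EnableScripts"=dword:00000001
"ExecutionPolicy"="Unrestricted"
```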



Working with DNX, DNU and DNVM

To manage different versions of the .NET Runtime environments, you’ll need to get familiar with dnx (Microsoft .NET Execution environment), dnu (.NET Development Utility) and dnvm (.NET Version Manager).  Screenshots below.  You should be able to execute them from the Visual Studio 2015 Command Line Tool:








If you get the message:

“’dnx’ is not recognized as an internal or external command, operable program or batch file.”

You can fix this issue by running the following command:

dnvm use default -p

which will persist the change to the PATH environment variable for the current user.


On another machine, I was warned about a deprecated environment variable:


Which raises the question….

What are KRE, KVM, KPM?

In short, KRE, KVM and KPM are the management bits for ASP.NET 5.  The K bits were renamed to DNX/DNVM/DNU.  I’m including this info in case it leads you to this article.

From the link above:

K has three components:

  1. KRE – K Runtime Environment is the code required to bootstrap and run an ASP.NET vNext application. This includes things like the compilation system, SDK tools, and the native CLR hosts.
  2. KVM – K Version Manager is for updating and installing different versions of the KRE. KVM is also used to set the default KRE version.
  3. KPM – K Package Manager manages packages needed by applications to run. Packages in this context are NuGet packages.

Microsoft ASP.NET and Web Tools 2015 (RC) – Visual Studio 2015

Lastly, before you get too excited, there are a couple of hundred megabytes of updates you’ll need for the supporting tooling for the RC (the RTM tooling differs too much; some important things have been renamed since then).

The latest version, naturally, requires updated tooling.  If you only have Visual Studio 2015 RTM, then prepare for some fun.  You can download the RC bits here:

This led me to installing all of the following on my development machine:



The net result is that when I now open Visual Studio 2015 and create a new project, I can select .NET Framework 4.6, and when I create a new ASP.NET web project the options include:




Here are some infuriating error messages you might stumble across in trying to compile a simple Web API:

“DNX 4.5.1 error NU1002: The dependency <Assembly> in project <Project> does not support framework DNX,Version=v4.5.1.” e.g.

“DNX 4.5.1 error NU1002: The dependency System.Runtime 4.0.0 in project Asp5Api does not support framework DNX,Version=v4.5.1.”


System.IO.FileNotFoundException: Could not load file or assembly ‘Microsoft.DNX.PackageManager’ or one of its dependencies. The system cannot find the file specified.

This means you probably haven’t installed the latest Web Tools.  The PackageManager assembly has apparently been renamed, which is reflected in the later (post-RTM) versions.

High Availability: MassTransit 2.x with Clustered MSMQ – Part 1

So this article isn’t going to be for everyone; however, I suspect it will appeal to anyone who is looking at Windows Server 2012 R2’s Failover Clustering capability.


I’m going to write this in a series of posts, as I think there’s also some merit in looking at diagnostic approaches to finding out what the heck is going wrong with a Failover Cluster, rather than focusing on an ideal end state in isolation.

If you’re not interested in the MassTransit parts, skip this introduction and check out Part 2 (coming soon!) which will focus on Clustered MSMQ roles and diagnosis.


MassTransit: In a Nutshell

We’re taking a view of Failover Clustering from the point of view of MassTransit, which is an open source implementation of a lightweight message queue-backed Service Bus (of sorts).  Here’s the official blurb from the GitHub page:

MassTransit is a free, open-source distributed application framework for .NET. MassTransit makes it easy to create applications and services that leverage message-based, loosely-coupled asynchronous communication for higher availability, reliability, and scalability.

Some documentation is also available here.

What are we focusing on?

Well, MassTransit, from version 3.x onwards, only supports RabbitMQ and Azure Service Bus.  As we had initiated our implementation in late 2014 with version 2, we disregarded the abandonment of MSMQ and boldly decided to use it anyway; mainly because it is a first-class, out-of-the-box service in Windows Server 2012 R2 (and earlier versions).  So this article won’t feature RabbitMQ or Azure Service Bus, but I might tackle that topic at a later time.

To eliminate a ton of extremely complex code, support for MSMQ was completely ripped out [1]

Therefore, this series of articles pertains only to MassTransit version 2.x and MSMQ.

We’re also using subscriptions, which means we are using the MassTransit subscriptions queue, and message consumers and subscribers register with a runtime service before interacting with the message bus.  This is important to note, because the objective of using a clustered queue is to mitigate service outages by shifting the active queue.

How does MassTransit work, in basic terms?

We’re working with two categories of (.NET) application: message publishers and message consumers.  An application can do one or both; in other words you can publish, subscribe, or publish and subscribe.


There’s one caveat: both need a local MSMQ, which essentially serves two purposes – a local staging location for retrying the publishing of messages, and a store for unprocessed and processed messages for consumers (in case a consumer is offline but holds a persistent subscription).  There’s also an error queue, which messages land in if they cannot be successfully processed.
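To make the moving parts concrete, here’s a toy in-memory sketch of publish/subscribe with an error queue. This is NOT MassTransit code and has no queues or transport; the bus and message names are all invented for illustration:

```python
from collections import defaultdict, deque

class ToyBus:
    def __init__(self):
        self.subscriptions = defaultdict(list)  # message type -> consumer callbacks
        self.error_queue = deque()              # messages that failed processing

    def subscribe(self, message_type, consumer):
        self.subscriptions[message_type].append(consumer)

    def publish(self, message_type, body):
        # Deliver to every registered consumer; failures land in the error queue.
        for consumer in self.subscriptions[message_type]:
            try:
                consumer(body)
            except Exception:
                self.error_queue.append((message_type, body))

def failing_consumer(message):
    raise RuntimeError("simulated processing failure")

bus = ToyBus()
received = []
bus.subscribe("order-placed", received.append)
bus.subscribe("order-placed", failing_consumer)
bus.publish("order-placed", {"order": 42})

print(received)              # [{'order': 42}]
print(len(bus.error_queue))  # 1
```

In the real deployment, the “bus” is backed by durable MSMQ queues, which is precisely why their availability matters and why we want the subscription queue clustered.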

What is your High Availability goal?

Essentially, we want to have the subscription queue highly available.  In the event of an unplanned outage, the queue will move to the next available node in the cluster.  The clustered MSMQ role also has a dedicated network name & IP address which means that it acts as a central address point – no matter which machine acts as the active host – i.e. no need for publishers or subscribers to be aware of the cluster itself.

Since the majority of message consumers in the design also reside on the same server, failing over the HA queue would also fail over the message consumers.

This approach doesn’t rule out scaling horizontally at a later stage if we need to; there is a plethora of options available, including a home-brew option from MassTransit itself called the Distributor.

Solution Outline

So if you look at my approach, the intention is to cluster two or more Windows Server hosts, and then stick a bunch of Windows Services on each node in the cluster, making them Cluster Resources. The simplistic model is to have the services started only on the active node.

Here’s an illustration of the target solution, with a two node Failover Cluster:

Conceptual Design

The MassTransit Runtime Service manages message subscriptions, served from the clustered MSMQ role.  None of this will really make sense until you see the deployment in context.  The following is a sample diagram of a typical DMZ/LAN architecture:



So that’s the essential scope of my MassTransit HA deployment.  What will come in the next article is a closer inspection of how High Availability failover will function, and the mechanics behind it.

If you were looking for Clustered MSMQ guidance, Part 2 is for you!