
Featured Articles

Securing a Web API with ADFS 3.0 and JWT tokens


As APIs and web services become more and more prevalent, particularly in the enterprise, there is an increasing need to look at ways of securing the more important interfaces, especially those which enable access to sensitive data.

Recently, I’ve been investigating ways to secure ASP.NET Web APIs using Active Directory Federation Services (AD FS) version 3.0 (rather than Azure Active Directory) – which ships as a standard role in Windows Server 2012 R2.  Those who read this site regularly will not be surprised to find yet another ADFS article!

The Problem Defined

There are heaps of articles which explain how to secure a web application and a Web API using Windows Identity Foundation (WIF) and WS-Federation.  This suits scenarios where the user authenticates interactively with web applications and/or ADFS.

However, what if we want to cater for scenarios where interactive authentication (i.e. responding to a redirect to a web form, or Windows Integrated Authentication) isn’t preferable – or even possible?  I’m talking about software-to-server, or software-to-API scenarios, where the client is configured manually (statically), like setting a username and password à la Basic Authentication.

What we want is for the API consumer to obtain a JSON Web Token (JWT) using a SOAP request (over secure transport), and then pass that JWT in the header of subsequent REST calls to the target Web API. The Web API should validate the JWT, establish the user’s identity from it, and then complete the request as the authenticated user.

Although this would work over (unencrypted) HTTP, HTTPS is strongly recommended for all communications between a client and services you host.

Can we do it?  Yes we can. 

Here’s what the solution looks like in diagram form:


In order to properly understand how this all fits together, it would help immensely if you have some prior knowledge and experience in the following:

  • Configuring ADFS Relying Parties (and working with ADFS)
  • PowerShell, or equivalent
  • SOAP and REST
  • ASP.NET Web API/.NET Framework 4.6 (or later)
  • Visual Studio 2013 or 2015

For simplicity, we’ll authenticate identities in Active Directory (as illustrated above).

The Solution – Part 1: Obtain a JWT Token

So I’m going to take serious liberties with an approach which is reasonably well documented in the following article:

The original article focuses on using a JWT with Azure AD but, as it turns out, the same approach works just fine for on-premise ADFS Relying Parties (RPs).

You set up a relying party (RP) as per normal, although it doesn’t require WS-Fed or SAML configuration – because we’re not going to use either.  We’ll request a JWT token courtesy of ADFS 3.0’s lightweight OAuth2 implementation.  The script below accomplishes this by crafting a SOAP message and sending it to the appropriate ADFS endpoint, requesting a JWT for the specified username and password.  Note that the endpoint specified in the $sendTo variable must be enabled.

# Originally found at
# Requests a JWT from the ADFS WS-Trust 1.3 username/password endpoint
$ADFShost = "https://<your-adfs-server>"
$sendTo = "$ADFShost/adfs/services/trust/13/usernamemixed"
$username = "<domain>\<username>"
$password = "<password>"
$applyTo = "https://<rp-identifier>"
$tokenType = "urn:ietf:params:oauth:token-type:jwt"
$xml = @"
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
    <a:To s:mustUnderstand="1">$sendTo</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken u:Id="uuid-00000000-0000-0000-0000-000000000000-0">
        <o:Username>$username</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">$password</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>$applyTo</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
      <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
      <trust:TokenType>$tokenType</trust:TokenType>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>
"@
$response = Invoke-WebRequest -Uri $sendTo -Method Post -Body $xml -ContentType "application/soap+xml" -TimeoutSec 30
$tokenresponse = [xml]$response.Content
$tokenString = $tokenresponse.Envelope.Body.RequestSecurityTokenResponseCollection.RequestSecurityTokenResponse.RequestedSecurityToken.InnerText
$token = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($tokenString))
$resource = "https://<your-rp-identifier>/<controller api>/<value>"
Invoke-RestMethod -Method Get -Uri $resource -Header @{ "Authorization" = 'Bearer ' + $token }

Assuming you properly configure the variable assignments at the start of this script, have configured the target RP, and provide valid user credentials, you ought to be able to run this script and obtain a valid JWT.  You may need to enable the OAuth2 endpoint in ADFS if it is disabled, as well as the usernamemixed credentials endpoint (Get-AdfsEndpoint will show their state, and Enable-AdfsEndpoint will switch them on).


You’ll need to configure the Web API at the end to handle the ADFS issued JWT, which we’ll look into shortly.
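As an aside, if your API consumer is .NET rather than PowerShell, passing the token looks like this – a minimal sketch using HttpClient, where the resource URL and the <JWT> value are placeholders for your own:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class ApiClient
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Attach the ADFS-issued JWT as a bearer token on every request
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "<JWT>");

            // Call the protected Web API (placeholder resource URL)
            string body = client.GetStringAsync("https://<rp-identifier>/api/values").Result;
            Console.WriteLine(body);
        }
    }
}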

The Solution – Part 2: Accept and validate a JWT Token

The next part took me a while, and then I somehow stumbled upon a dated official Microsoft sample which demonstrates exactly how to validate an ADFS-issued JWT token!  Here’s the address:

I strongly recommend you download and extract the sample linked above.  For one thing, it’ll save me from having to list the various NuGet packages you’ll need to get this solution working.

Note that although this sample relates to Azure Active Directory, it works just fine with on-premise ADFS.  The key is in implementing functionality which strips the Authorization: Bearer <JWT> value out of the request header.

If we look at the sample’s Web API implementation (TelemetryServiceWebAPI), all we really need to get working is the Global.asax.cs implementation of a global request handler.

Note that the samples are distributed under the following license:

Copyright 2013 Microsoft Corporation
Licensed under the Apache License, Version 2.0 (the "License");

The Application_Start configures a token handler:

GlobalConfiguration.Configuration.MessageHandlers.Add(new TokenValidationHandler());
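For context, in a stock Web API project that registration lives in Global.asax.cs alongside the usual route configuration – a minimal sketch, assuming the standard Visual Studio Web API 2 template (WebApiConfig is the template’s route configuration class):

using System.Web;
using System.Web.Http;

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Standard Web API route configuration from the project template
        GlobalConfiguration.Configure(WebApiConfig.Register);

        // Every incoming request now passes through the JWT validation
        // handler before it reaches any controller
        GlobalConfiguration.Configuration.MessageHandlers.Add(new TokenValidationHandler());
    }
}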

The registered handler invokes a token validation class, shown below.  Note that you don’t have to register the Relying Party as an OAuth client with ADFS, i.e. you don’t require a client id or a client secret – now confirmed.

using System;
using System.Collections.Generic;
using System.IdentityModel.Metadata;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel.Security;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Xml;

internal class TokenValidationHandler : DelegatingHandler
{
    const string Audience = "https://<rp-identifier>";
    // Domain name or Tenant name (not used for on-premise ADFS; the metadata address is fixed below)
    const string DomainName = "https://<rp-identifier>";

    static string _issuer = string.Empty;
    static List<X509SecurityToken> _signingTokens = null;
    static DateTime _stsMetadataRetrievalTime = DateTime.MinValue;

    // SendAsync is used to validate incoming requests contain a valid access token, and sets the current user identity
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string jwtToken;
        string issuer;
        string stsMetadataAddress = "https://<your-adfs-server>/federationmetadata/2007-06/federationmetadata.xml";

        List<X509SecurityToken> signingTokens;
        using (HttpResponseMessage responseMessage = new HttpResponseMessage())
        {
            if (!TryRetrieveToken(request, out jwtToken))
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.Unauthorized));
            }

            try
            {
                // Get tenant information that's used to validate incoming jwt tokens
                GetTenantInformation(stsMetadataAddress, out issuer, out signingTokens);
            }
            catch (WebException)
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
            }
            catch (InvalidOperationException)
            {
                return Task.FromResult(new HttpResponseMessage(HttpStatusCode.InternalServerError));
            }

            JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler()
            {
                // for demo purposes certificate validation is turned off. Please note that this shouldn't be done in production code.
                CertificateValidator = X509CertificateValidator.None
            };

            TokenValidationParameters validationParameters = new TokenValidationParameters
            {
                AllowedAudience = Audience,
                ValidIssuer = issuer,
                SigningTokens = signingTokens
            };

            try
            {
                // Validate token
                ClaimsPrincipal claimsPrincipal = tokenHandler.ValidateToken(jwtToken, validationParameters);

                // set the ClaimsPrincipal on the current thread.
                Thread.CurrentPrincipal = claimsPrincipal;

                // set the ClaimsPrincipal on HttpContext.Current if the app is running in web hosted environment.
                if (HttpContext.Current != null)
                {
                    HttpContext.Current.User = claimsPrincipal;
                }

                return base.SendAsync(request, cancellationToken);
            }
            catch (SecurityTokenValidationException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (SecurityTokenException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (ArgumentException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
            catch (FormatException)
            {
                responseMessage.StatusCode = HttpStatusCode.Unauthorized;
                return Task.FromResult(responseMessage);
            }
        }
    }

    // Reads the token from the authorization header on the incoming request
    static bool TryRetrieveToken(HttpRequestMessage request, out string token)
    {
        token = null;

        if (!request.Headers.Contains("Authorization"))
        {
            return false;
        }

        string authzHeader = request.Headers.GetValues("Authorization").First<string>();

        // Verify Authorization header contains 'Bearer' scheme
        token = authzHeader.StartsWith("Bearer ", StringComparison.Ordinal) ? authzHeader.Split(' ')[1] : null;

        if (null == token)
        {
            return false;
        }

        return true;
    }

    /// <summary>
    /// Parses the federation metadata document and gets issuer Name and Signing Certificates
    /// </summary>
    /// <param name="metadataAddress">URL of the Federation Metadata document</param>
    /// <param name="issuer">Issuer Name</param>
    /// <param name="signingTokens">Signing Certificates in the form of X509SecurityToken</param>
    static void GetTenantInformation(string metadataAddress, out string issuer, out List<X509SecurityToken> signingTokens)
    {
        signingTokens = new List<X509SecurityToken>();

        // The issuer and signingTokens are cached for 24 hours. They are updated if any of the conditions in the if condition is true.
        if (DateTime.UtcNow.Subtract(_stsMetadataRetrievalTime).TotalHours > 24
            || string.IsNullOrEmpty(_issuer)
            || _signingTokens == null)
        {
            MetadataSerializer serializer = new MetadataSerializer()
            {
                // turning off certificate validation for demo. Don't use this in production code.
                CertificateValidationMode = X509CertificateValidationMode.None
            };
            MetadataBase metadata = serializer.ReadMetadata(XmlReader.Create(metadataAddress));

            EntityDescriptor entityDescriptor = (EntityDescriptor)metadata;

            // get the issuer name
            if (!string.IsNullOrWhiteSpace(entityDescriptor.EntityId.Id))
            {
                _issuer = entityDescriptor.EntityId.Id;
            }

            // get the signing certs
            _signingTokens = ReadSigningCertsFromMetadata(entityDescriptor);

            _stsMetadataRetrievalTime = DateTime.UtcNow;
        }

        issuer = _issuer;
        signingTokens = _signingTokens;
    }

    static List<X509SecurityToken> ReadSigningCertsFromMetadata(EntityDescriptor entityDescriptor)
    {
        List<X509SecurityToken> stsSigningTokens = new List<X509SecurityToken>();

        SecurityTokenServiceDescriptor stsd = entityDescriptor.RoleDescriptors.OfType<SecurityTokenServiceDescriptor>().First();

        if (stsd != null && stsd.Keys != null)
        {
            IEnumerable<X509RawDataKeyIdentifierClause> x509DataClauses =
                stsd.Keys.Where(key => key.KeyInfo != null && (key.Use == KeyType.Signing || key.Use == KeyType.Unspecified))
                         .Select(key => key.KeyInfo.OfType<X509RawDataKeyIdentifierClause>().First());

            stsSigningTokens.AddRange(x509DataClauses.Select(clause => new X509SecurityToken(new X509Certificate2(clause.GetX509RawData()))));
        }
        else
        {
            throw new InvalidOperationException("There is no RoleDescriptor of type SecurityTokenServiceType in the metadata");
        }

        return stsSigningTokens;
    }
}
Once this code is in place, you can decorate ApiControllers and methods as per normal with the [Authorize] attribute to force the authentication requirement.

You can access the identity information from the User object at runtime, e.g. if you threw a standard out-of-the-box values controller into the sample (I added this to a local copy; it is not part of the official sample):

using System.Collections.Generic;
using System.Web.Http;

namespace Microsoft.Samples.Adal.TelemetryServiceWebApi.Controllers
{
    [Authorize] // force the authentication requirement, per the note above
    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return User.Identity.Name;
        }
    }
}

Assuming that you have mapped the user’s UPN (user principal name) to the “name” claim in the Relying Party claim rules, you’d see the user’s fully qualified name (their UPN) in the response when invoking this GET request and passing a valid ADFS-issued JWT.
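If you need more than just the name, every claim issued by your RP claim rules is available via the ClaimsPrincipal.  A quick illustrative controller (ClaimsController is my own addition, not part of the sample):

using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Web.Http;

[Authorize]
public class ClaimsController : ApiController
{
    // GET api/claims - lists every claim ADFS issued for the calling identity
    public IEnumerable<string> Get()
    {
        ClaimsPrincipal principal = (ClaimsPrincipal)User;
        return principal.Claims.Select(c => c.Type + ": " + c.Value);
    }
}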


Here’s a PowerShell script successfully making a GET request with an ADFS issued JWT:


Note that in this particular case, I successfully tested the approach against ADFS vNext (4.0)!

The future approach may well lie in a newer sample, which is listed as the future direction for this scenario.

Tips on Troubleshooting

Always check the ADFS configuration, and ensure that your endpoints are correct.  Keep an eye on the ADFS event logs, as RP misconfigurations usually end up as failed requests there.


Even something as simple as a trailing forward slash in the RP identifier can ruin token validation.

Ensure the appropriate ADFS endpoints are enabled and, if you can, try to secure your identifiers using HTTPS for best results.  In our test lab we use internal DNS and an Enterprise CA, rather than self-signed certificates, to handle site bindings.

Lastly, don’t be afraid to debug the Web API (locally or remotely), remembering that you can configure an RP in ADFS to be localhost. :)


Well, I hope this has been an informative article for you.  I’m aiming to reproduce this solution in its entirety later this week in a Greenfields environment, so this article may be subject to further edits.

I was quite happy to see a complete end-to-end scenario working perfectly in our Development environment.  In theory, this approach should work without too much configuration overhead, but the usual disclaimers apply: it works on my machine.

Feel free to post comments if you have questions or would like to know more.


Getting to know Cross-Origin Resource Sharing (CORS)

Hello there.  I’ve been spending a lot of time of late trying to develop a solution to a very obscure problem scenario.  The entire problem itself is outside the scope of this article – and to be honest, probably wouldn’t be terribly relevant to many – however, I felt there was value in articulating my recent experiences with Cross-Origin Resource Sharing, or CORS for short.

So what is CORS? 


To quote from Wikipedia:

“Cross-origin resource sharing (CORS) is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated.[1] In particular, JavaScript’s AJAX calls can use the XMLHttpRequest mechanism.

Such ‘cross-domain’ requests would otherwise be forbidden by web browsers, per the same-origin security policy. CORS defines a way in which the browser and the server can interact to determine whether or not to allow the cross-origin request.[2] It is more useful than only allowing same-origin requests, but it is more secure than simply allowing all such cross-origin requests.”

Now that we’ve cleared that up…  my take: a web site makes use of resources which are hosted on another site, outside of its domain.  This is an important distinction, owing primarily to cross-site scripting (XSS) and cross-site request forgery (CSRF) vulnerabilities.


What does something like CORS address exactly?

Well, the concept is fairly straightforward.  What CORS aims to do is have two websites (sites, pages, APIs etc.) agree on what kind of resources and types of requests one website will provide to another.  Both must agree exactly on what is being shared and how.


How is this accomplished?

There are a few parties who need to participate to enable CORS – the two sites involved, of course, and the user’s browser.  Both sites need to request and respond to each other in an expected manner, and browsers need to be aware of CORS and, in some cases, make special requests to ensure it works correctly.

In essence, what happens is that both websites agree on how resources will be shared.  The requesting site must be known as an “allowed origin” by the site providing the resources.  The response also must contain headers which define the scope of acceptable resource sharing, e.g. naming allowable methods (e.g. GET, PUT) and whether credentials are supported.  Browsers themselves are the last key – they must respect the restrictions established by the requesting site and the resource site.


What is a “pre-flight request”?

In some cases, a browser might make a special type of request known as an OPTIONS request, which is sort of like an initial handshake before performing the actual request specified (e.g. a GET request). 

In essence, an OPTIONS request attempts to determine what supported methods and other information is available from a resource sharing server.  In browser terms, this is known as a “pre-flight” request and is often attempted automatically by the browser.

When a cross-site request fails pre-flight (on the first and subsequent attempts), the browser’s JavaScript console might log something similar to the following error:

XMLHttpRequest cannot load https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json. The request was redirected to 'https://<ANOTHERSERVER>', which is disallowed for cross-origin requests that require preflight.

Here’s an example of a browser (Chrome) attempting an OPTIONS pre-flight request and failing:


Let’s take a look at a pre-flight HTTP(S) request example.

Remote Address: <IPADDRESS>:443
Request URL: https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json
Request Method: OPTIONS
Status Code: 200 OK

Request Headers:

OPTIONS /p?ReadViewEntries&outputformat=json
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Access-Control-Request-Headers: accept, content-type
Access-Control-Request-Method: GET
Connection: keep-alive
Origin: https://<ORIGINSERVER>
Referer: https://<ORIGINSERVER>/Home/Test
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36

Under normal circumstances, a target server which honours this request would respond with something similar to this:

HTTP/1.1 200 OK
Date: Tue, 11 Nov 2014 03:35:05 GMT
Access-Control-Allow-Origin: https://<ORIGINSERVER>
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Access-Control-Request-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin, Access-Control-Allow-Credentials
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT, HEAD
Access-Control-Allow-Credentials: true
Content-Length: 495
Keep-Alive: timeout=10, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1


The most important thing in this response probably isn’t what you’d expect.  It’s actually the HTTP status code (i.e. 200).  For CORS pre-flight to work, the resource target must respond to an HTTP OPTIONS request with a status code of 200 – and it must do so for unauthenticated requests!


Why must the pre-flight/OPTIONS requests be unauthenticated?

It’s actually a requirement direct from the respective W3C specification regarding pre-flight requests:

Otherwise, make a preflight request. Fetch the request URL from origin source origin with the manual redirect flag and the block cookies flag set, using the method OPTIONS, and with the following additional constraints:


Therefore, a response status code of 302 (Found – a redirect, usually to authenticate) or 401 (Unauthorized) will clearly fail pre-flight.  Note that the resource (target) server doesn’t have to honour all OPTIONS requests; you could lock down the server’s security to respond (with a 200) only to requests on certain paths, for example.

In the example screenshot earlier, the pre-flight was failing because the resource server had authentication enabled for the OPTIONS request, and it was redirecting to a security token service (STS) for authentication.
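Before we move on to client code: if the resource server happens to be an ASP.NET Web API, one way to guarantee an unauthenticated 200 for pre-flights is to answer OPTIONS requests in a message handler, before authentication or any controller gets involved.  A rough sketch – the header values are illustrative, and this only helps where authentication is enforced at the Web API layer rather than by IIS in front of it:

using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class PreflightHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Method == HttpMethod.Options)
        {
            // Answer the pre-flight with 200 OK and the CORS scope we support,
            // without touching authentication or any controller
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Headers.Add("Access-Control-Allow-Origin", "https://<ORIGINSERVER>");
            response.Headers.Add("Access-Control-Allow-Methods", "GET, OPTIONS");
            response.Headers.Add("Access-Control-Allow-Credentials", "true");
            return Task.FromResult(response);
        }

        return base.SendAsync(request, cancellationToken);
    }
}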


Now that that’s all cleared up, how about some sample code? 

Here’s a pretty straightforward request which sets the appropriate header values for a cross-domain GET request with credentials, expecting a JSON response:

var aQuery = function () {
    var url = 'https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json';

    $.ajax(url, {
        type: "GET",
        contentType: "application/json; charset=utf-8",
        success: function (data, status, xhr) {
            // handle the JSON response here
        },
        xhrFields: {
            withCredentials: true
        },
        crossDomain: true
    });
};
In many cases though, this is not enough.  Some browsers, particularly older versions such as Internet Explorer 8 and 9, simply don’t support CORS well enough.  A more version-sensitive approach might be something like this:

var basicQuery = function () {
    var url = 'https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json';
    var method = 'GET';
    var xhr = new XMLHttpRequest();

    if ("withCredentials" in xhr) {
        // Most browsers.
        xhr.open(method, url, true);
        xhr.withCredentials = true; // send credentials (cookies) with the request
        xhr.setRequestHeader("Access-Control-Allow-Origin", "https://<ORIGINSERVER>");
        xhr.setRequestHeader("Access-Control-Allow-Credentials", "true");
        xhr.setRequestHeader("Access-Control-Allow-Methods", "GET");
    } else if (typeof XDomainRequest != "undefined") {
        // IE8 & IE9
        xhr = new XDomainRequest();
        xhr.open(method, url);
    } else {
        // CORS not supported.
        xhr = null;
    }

    if (xhr != null) {
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {
                // handle the JSON response here
            }
        };
        xhr.send();
    }
};

Finally, let’s see that gratifying successful CORS request:



Some notes about setting headers

  • CORS header values in a request must be supported by the resource server for a given request to succeed, i.e. for a request which specifies Access-Control-Allow-Credentials, the target server must respond with Access-Control-Allow-Credentials listed in its Access-Control-Allow-Headers.
  • The same applies to methods, e.g. a GET method must be listed in the response’s Access-Control-Allow-Methods.
  • When you set the Access-Control-Allow-Origin value, it should be the name of the origin server (the origin of the request); it really shouldn’t be the asterisk (*) which is often used in sample code.  The target resource server also must recognise/honour the origin.  A server-side sketch of emitting these headers follows below.
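Alternatively, rather than hand-rolling the headers as in the PreflightHandler sketch earlier, an ASP.NET Web API (as in the JWT article above) can emit these response headers via the Microsoft.AspNet.WebApi.Cors package.  A sketch along those lines – the origin, headers and methods shown are placeholders to adapt:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Name the single allowed origin explicitly rather than "*",
        // and enumerate only the methods you intend to share
        var cors = new EnableCorsAttribute(
            origins: "https://<ORIGINSERVER>",
            headers: "*",
            methods: "GET,OPTIONS")
        {
            // emits Access-Control-Allow-Credentials: true
            SupportsCredentials = true
        };
        config.EnableCors(cors);

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}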

A quick and dirty Rules Engine using Windows Workflow (Part 2)

Hi there and happy new year.  2011 promises to be quite an interesting year, and I hope that I can continue to contribute here at Sanders Technology.  To kick off the new year, I decided to revisit the Windows Workflow article I started late last year.

A little while ago I wrote a post entitled “A quick and dirty Rules Engine using Windows Workflow (Part 1)”, which has evidently been fairly popular.  Unfortunately, it seems that I forgot to follow it up with a part 2!  Now, welcoming in the new year, I’m putting together the second part.

Honestly though, folks, this could easily be a multi part mini project, because the uses of this Windows Workflow Foundation (WF) rules engine are immense!

I’ve managed to extend the scope of the code displayed in part 1 to include some dummy data items and I’ve crafted some more reusable and general purpose code (for example purposes), but you really ought to be able to see for yourselves how powerful and multi-purpose this really is.

I was going to write a quick and dirty WinForms UI, but I ended up ditching it in favour of a bunch of unit tests instead.  You really should be able to see the potential here, I don’t want to spoil the magic by adding an inept user interface.

Let’s take a look at the sample solution.  I’ve added some terribly (and perhaps insultingly) simple “objects” which, of course, you would substitute for your own DTOs/Entities/BusinessObjects.  It’s a basic class with some public properties, nothing terribly complex (it’s a demo after all).  You can see it uses an Enum just for fun on one of the properties.  I’ve also included a screenshot of the Solution structure – nothing too scary here.


The Class View for the “BusinessObjects” | The Solution Structure
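Since the class view doesn’t reproduce well here, this is roughly what the Employee object looks like, inferred from the unit test further down (the property types, and the enum members beyond ACT, are assumptions):

using System;

namespace BusinessObjects
{
    public enum StateEnum { ACT, NSW, NT, QLD, SA, TAS, VIC, WA }

    public class Employee
    {
        public string FirstName { get; set; }
        public string Surname { get; set; }
        public StateEnum Location { get; set; }
        public Employee Manager { get; set; }
        public DateTime DateHired { get; set; }
        public int EmployeeNumber { get; set; }
    }
}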

Basically the entire solution consists of two class libraries and a Unit Test project.  I’m trying to keep this very simple.  You could plug a WinForms UI or a website or a WCF Web Service Application underneath this very easily!

The main fun is in the “RulesManager” class, which is basically just a wrapper for the main WF rules engine parts.  I’ve put in an extremely vanilla implementation which allows a few interesting pieces of functionality.  I think if you use your imagination, you’ll be able to come up with some much more interesting ways to play with the options.

So why don’t we have a look at the RulesManager class?  It is defined to take a generic type, so you can work upon different source object types.

For the purpose of this post, we have just the one main object defined, “Employee”.  The class also doesn’t do anything to ensure the rules loaded are explicit to the data type – I’ll do an expanded implementation later to show how we can account for this.

My sincere apologies for the crappiness of the format of the posted code!  I’m having a bit of a fight with my copy of Live Writer and the plugins for inserting code snippets are not working very well with the site layout theme.  I’ll try and get it looking right.. soon.

#region Using Directives
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Workflow.Activities.Rules.Design;
using System.Workflow.Activities.Rules;
using System.Windows.Forms;
using System.Workflow.ComponentModel.Serialization;
using System.Xml;
using System.IO;
using BusinessObjects;
using System.Collections.ObjectModel;
#endregion

namespace WorkFlowProvider
{
    /// <summary>
    /// Implements a wrapper around the Windows Workflow Foundation Rules Engine
    /// </summary>
    /// <typeparam name="T">A Data Object type to process</typeparam>
    public static class RulesManager<T> where T : new()
    {
        #region Rules Editor Support

        /// <summary>
        /// Launch the Rules Form to create a new rule
        /// </summary>
        public static RuleSet LaunchNewRulesDialog(string ruleName, string outputPath)
        {
            return LaunchRulesDialog(null, ruleName, outputPath);
        }

        /// <summary>
        /// Launch the Rules Editor with an existing rule (for editing),
        /// or to create a new rule (pass NULL to create a new rule)
        /// </summary>
        /// <param name="ruleSet">An existing rule set to edit, or null to create a new one</param>
        /// <param name="ruleName">The rule name (for the file name)</param>
        /// <param name="outputPath">The path to save rules to</param>
        /// <returns>A rule (if one is saved/edited)</returns>
        public static RuleSet LaunchRulesDialog(RuleSet ruleSet, string ruleName, string outputPath)
        {
            // You could pass in an existing ruleset object for editing if you
            // wanted to; we're creating a new rule, so it's set to null
            RuleSetDialog ruleSetDialog = new RuleSetDialog(typeof(T), null, ruleSet);

            if (ruleSetDialog.ShowDialog() == DialogResult.OK)
            {
                // grab the ruleset
                ruleSet = ruleSetDialog.RuleSet;

                // We're going to serialize it to disk so it can be reloaded
                WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
                string fileName = String.Format("{0}.rules", ruleName);
                string fullName = Path.Combine(outputPath, fileName);

                if (File.Exists(fullName))
                {
                    File.Delete(fullName); // delete existing rule
                }

                using (XmlWriter rulesWriter = XmlWriter.Create(fullName))
                {
                    serializer.Serialize(rulesWriter, ruleSet);
                    rulesWriter.Close();
                }
            }

            return ruleSet;
        }

        #endregion

        #region Rule Processing

        /// <summary>
        /// Applies a set of rules to a specified data object
        /// </summary>
        public static T ProcessRules(T objectToProcess, ReadOnlyCollection<RuleSet> rules)
        {
            RuleValidation validation = new RuleValidation(typeof(T), null);
            RuleExecution execution = new RuleExecution(validation, objectToProcess);

            foreach (RuleSet rule in rules)
            {
                rule.Execute(execution);
            }

            return objectToProcess;
        }

        /// <summary>
        /// Execute a single rule on a single data object
        /// </summary>
        public static T ProcessRule(T objectToProcess, RuleSet rule)
        {
            RuleValidation validation = new RuleValidation(typeof(T), null);
            RuleExecution execution = new RuleExecution(validation, objectToProcess);
            rule.Execute(execution);
            return objectToProcess;
        }

        #endregion

        #region Rules Management

        /// <summary>
        /// Loads a single rule given a path and file name
        /// </summary>
        public static RuleSet LoadRule(string rulesLocation, string fileName)
        {
            RuleSet ruleSet = null;

            // Deserialize from a .rules file.
            using (XmlTextReader rulesReader = new XmlTextReader(Path.Combine(rulesLocation, fileName)))
            {
                WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
                ruleSet = (RuleSet)serializer.Deserialize(rulesReader);
            }

            return ruleSet;
        }

        /// <summary>
        /// Loads a set of rules from disk
        /// </summary>
        public static ReadOnlyCollection<RuleSet> LoadRules(string rulesLocation)
        {
            RuleSet ruleSet = null;
            List<RuleSet> rules = new List<RuleSet>();

            foreach (string fileName in Directory.GetFiles(rulesLocation, "*.rules"))
            {
                // Deserialize from a .rules file.
                using (XmlTextReader rulesReader = new XmlTextReader(fileName))
                {
                    WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
                    ruleSet = (RuleSet)serializer.Deserialize(rulesReader);
                    rules.Add(ruleSet);
                    rulesReader.Close();
                }
            }

            return rules.AsReadOnly();
        }

        #endregion
    }
}

This one class pretty much gives you all you need to create, load and save rules.  It’s a bit basic at this point in time, I will try to create a more robust and tolerant class in subsequent posts on this topic.  For now though, I think it adequately demonstrates the sort of functionality which can be gleaned from the Rules Engine.

You can create or edit a rule by using the LaunchNewRulesDialog or LaunchRulesDialog methods with minimal user input.  I’ve written a very basic Unit Test which proves how efficient this can be, but I’m sure you’ll be able to have some fun with it.

Next up, there are some functions to load existing rule files from disk: the aptly named LoadRule and LoadRules methods.  They are pretty self-explanatory; I don’t think we need to go into too much detail about the loading of rules files.

Finally, there are some functions which can be called to execute rules against data objects.  At this stage I’m supporting the execution of a single rule against a single data object, or a collection of rules against a single data object.  Obviously you could easily expand upon this.  You may wish to consider a multi-threaded approach, I may be persuaded to implement a more robust solution which allows for concurrent multiple item/multiple rule processing if you leave a comment for me.
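To make that concrete, here’s what batch execution might look like from calling code – a small sketch, where the rules folder path is illustrative:

using System.Collections.ObjectModel;
using System.Workflow.Activities.Rules;
using BusinessObjects;
using WorkFlowProvider;

public static class RulesDemo
{
    public static void ApplyAllRules()
    {
        // Load every .rules file from a folder (path is illustrative)
        ReadOnlyCollection<RuleSet> rules = RulesManager<Employee>.LoadRules(@"C:\Rules");

        // Run the whole set over a single employee instance
        Employee employee = new Employee { FirstName = "Joe", Surname = "Smith" };
        employee = RulesManager<Employee>.ProcessRules(employee, rules);
    }
}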

Finally, here’s the Unit Test which allows you to create a new rule and apply it to the test data defined in the test:

[TestMethod]
public void CreateNewRule()
{
    Employee testEmployee = new Employee();
    testEmployee.FirstName = "Joe";
    testEmployee.Surname = "Smith";
    testEmployee.Location = StateEnum.ACT;
    testEmployee.Manager = null;
    testEmployee.DateHired = DateTime.Now.AddYears(-1);
    testEmployee.EmployeeNumber = 99;

    string ruleName = String.Format("{0}UnitTestRule", DateTime.Now.Millisecond);
    string path = Assembly.GetExecutingAssembly().Location.Replace(Assembly.GetExecutingAssembly().ManifestModule.Name, String.Empty);

    // launch the rules editor against the Employee type, then apply the saved rule
    RuleSet newRule = RulesManager<Employee>.LaunchNewRulesDialog(ruleName, path);
    testEmployee = RulesManager<Employee>.ProcessRule(testEmployee, newRule);
}

So, in this post we’ve had a look at a very basic solution structure which demonstrates a reusable rules design.  At the moment it’s about as bare-bones as demos usually start off, but I’m only getting started – once you are familiar with the “RulesManager” wrapper concept, we’ll be ready to expand upon it significantly.

This is part 2 of a multi-part series.  I’ll be expanding upon the concepts shown here in subsequent posts.

Check back soon!

Solution Files