Category Archives : Programming

This category is for entries which relate to software development.


Getting to know Cross-Origin Resource Sharing (CORS)

Hello there.  I’ve been spending a lot of time of late trying to develop a solution to a very obscure problem scenario.  The entire problem itself is outside the scope of this article – and to be honest, probably wouldn’t be terribly relevant to many – however, I felt there was value in articulating my recent experiences with Cross-Origin Resource Sharing, or CORS for short.

So what is CORS? 

 

To quote from Wikipedia:

Cross-origin resource sharing (CORS) is a mechanism that allows many resources (e.g., fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated.[1] In particular, JavaScript’s AJAX calls can use the XMLHttpRequest mechanism.

Such "cross-domain" requests would otherwise be forbidden by web browsers, per the same-origin security policy. CORS defines a way in which the browser and the server can interact to determine whether or not to allow the cross-origin request.[2] It is more useful than only allowing same-origin requests, but it is more secure than simply allowing all such cross-origin requests.

Now that we've cleared that up… my take – a web site makes use of resources which are hosted on another site, outside of its own domain. This is an important distinction, owing primarily to cross-site scripting (XSS) and cross-site request forgery (CSRF) vulnerabilities.

 

What does something like CORS address exactly?

Well, the concept is fairly straightforward.  What CORS aims to do is have two websites (sites, pages, APIs etc.) agree on what kind of resources and types of requests one website will provide to another.  Both must agree exactly on what is being shared and how.

 

How is this accomplished?

There are a few parties who need to participate to enable CORS – the two sites involved, of course, and the user's browser.  Both sites need to request and respond to each other in an expected manner, and browsers need to be aware of CORS and, in some cases, make special requests to ensure it works correctly.

In essence, what happens is that both websites agree on how resources will be shared.  The requesting site must be known as an "allowed origin" by the site providing the resources.  The response must also contain headers which define the scope of acceptable resource sharing, e.g. naming the allowable methods (e.g. GET, PUT) and stating whether credentials are supported.  Browsers themselves are the last key – they must respect the restrictions established by the requesting site and the resource site.

 

What is a “pre-flight request”?

In some cases, a browser might make a special type of request known as an OPTIONS request, which is sort of like an initial handshake before performing the actual request specified (e.g. a GET request). 

In essence, an OPTIONS request attempts to determine what methods and other capabilities the resource-sharing server supports.  In browser terms, this is known as a "pre-flight" request and is often attempted automatically by the browser.

The first time a cross-site request fails (and on subsequent attempts), the browser's JavaScript console might log something similar to the following error:

XMLHttpRequest cannot load https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json. The request was redirected to 'https://<ANOTHERSERVER>', which is disallowed for cross-origin requests that require preflight.

Here’s an example of a browser (Chrome) attempting an OPTIONS pre-flight request and failing:

[Screenshot: the OPTIONS pre-flight request failing in Chrome's developer tools]

Let's take a look at a pre-flight HTTP(S) request example.

Remote Address: <IPADDRESS>:443
Request URL: https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json
Request Method: OPTIONS
Status Code: 200 OK

Request Headers:

OPTIONS /p?ReadViewEntries&outputformat=json HTTP/1.1
Host: <TARGETSERVER>
Connection: keep-alive
Access-Control-Request-Method: GET
Origin: https://<ORIGINSERVER>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36
Access-Control-Request-Headers: accept, content-type
Accept: */*
Referer: https://<ORIGINSERVER>/Home/Test
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8

Query String Parameters:

ReadViewEntries:
outputformat: json
count: 4000
r: 0.88333622
charset: utf-8

Under normal circumstances, a target server which honours this request would respond with something similar to this:

HTTP/1.1 200 OK
Date: Tue, 11 Nov 2014 03:35:05 GMT
Access-Control-Allow-Origin: https://<ORIGINSERVER>
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Access-Control-Request-Headers, Access-Control-Allow-Methods, Access-Control-Allow-Origin, Access-Control-Allow-Credentials
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT, HEAD
Access-Control-Allow-Credentials: true
Content-Length: 495
Keep-Alive: timeout=10, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1

The most important thing in this response probably isn't what you'd expect.  It's actually the HTTP status code (i.e. 200).  For CORS pre-flight to work, the resource target must respond to an HTTP OPTIONS request with a status code of 200 – and it must do so for unauthenticated requests!

Why must the pre-flight/OPTIONS requests be unauthenticated?

It’s actually a requirement direct from the respective W3C specification regarding pre-flight requests:

Otherwise, make a preflight request. Fetch the request URL from origin source origin with the manual redirect flag and the block cookies flag set, using the method OPTIONS, and with the following additional constraints:

[1] https://dvcs.w3.org/hg/cors/raw-file/tip/Overview.html#preflight-request

Therefore, a response status code of 302 (Found – a redirect, usually to authenticate) or 401 (Unauthorised) will clearly fail pre-flight.  Note that the resource (target) server doesn't have to honour all OPTIONS requests; you could lock down the server's security to only respond (status 200) to requests on certain paths, for example.
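To make that concrete, here's a minimal sketch (mine, not part of the original scenario) of a resource server answering the pre-flight OPTIONS request with a 200 before any authentication logic runs, while still being able to protect the real GET.  It's written against Node's built-in http module purely for illustration; the origin and port are made-up values.

// Illustrative only: answer the CORS pre-flight (OPTIONS) with a 200 and the
// relevant CORS headers *before* authentication, so the browser's pre-flight
// never sees a 302 or 401.
var http = require('http');

var ALLOWED_ORIGIN = 'https://originserver.example.com'; // assumed value

http.createServer(function (req, res) {
    if (req.method === 'OPTIONS') {
        // Pre-flight: unauthenticated, status 200, plus the headers the
        // browser needs to decide whether the real request is allowed.
        res.writeHead(200, {
            'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
            'Access-Control-Allow-Methods': 'GET, OPTIONS',
            'Access-Control-Allow-Headers': 'Content-Type, Accept',
            'Access-Control-Allow-Credentials': 'true'
        });
        res.end();
        return;
    }

    // The real request: authentication/authorisation checks would go here.
    res.writeHead(200, {
        'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
        'Access-Control-Allow-Credentials': 'true',
        'Content-Type': 'application/json'
    });
    res.end('{"ok": true}');
}).listen(8080);

The key point is simply that the OPTIONS branch returns 200 unconditionally; if it redirected to a login page (302) or challenged for credentials (401), the pre-flight would fail exactly as described above.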

In the example screenshot earlier, the pre-flight was failing because the resource server had authentication enabled for the OPTIONS request, and it was redirecting to a security token service (STS) for authentication.

 

Now that that’s all cleared up, how about some sample code? 

Here's a pretty straightforward request which sets the appropriate values for a cross-domain GET request with credentials, expecting a JSON response:

var aQuery = function()
{
    var url = 'https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json';

    $.ajax(url, {
        type: "GET",
        contentType: "application/json; charset=utf-8",
        success: function(data, status, xhr) {
            alert(data);
        },
        xhrFields: {
            withCredentials: true
        },
        crossDomain: true
    });
}

In many cases, though, this is not enough.  Some browsers, particularly older versions such as Internet Explorer 8 and 9, simply don't support CORS well enough.  A more version-sensitive approach might be something like this:

var basicQuery = function ()
{
    var url = 'https://<TARGETSERVER>/p?ReadViewEntries&outputformat=json';
    var method = 'GET';
    var xhr = new XMLHttpRequest();

    if ("withCredentials" in xhr) {
        // Most browsers.
        xhr.open(method, url, true);
        xhr.setRequestHeader("Access-Control-Allow-Origin", "https://<ORIGINSERVER>");
        xhr.setRequestHeader("Access-Control-Allow-Credentials", "true");
        xhr.setRequestHeader("Access-Control-Allow-Methods", "GET");
    } else if (typeof XDomainRequest != "undefined") {
        // IE8 & IE9
        xhr = new XDomainRequest();
        xhr.open(method, url);
    } else {
        // CORS not supported.
        xhr = null;
    }

    if (xhr != null)
    {
        xhr.followsRedirect = false;
        xhr.withCredentials = true;

        xhr.onreadystatechange = function() {
            if (xhr.readyState == 4)
            {
                if (xhr.status == 200)
                {
                    alert(xhr.responseText);
                }
                else
                {
                    alert("error");
                }
            }
        };
        xhr.send();
    }
}

 

Finally, let’s see that gratifying successful CORS request:

[Screenshot: the cross-origin request succeeding in the browser's developer tools]

 

Some notes about setting headers

  • CORS header values in a request must be supported by the resource server for a given request to succeed, i.e. for a request which specifies Access-Control-Allow-Credentials, the target server must respond with Access-Control-Allow-Credentials listed in Access-Control-Allow-Headers.
  • The same applies to methods, e.g. the GET method must be listed in the response's Access-Control-Allow-Methods.
  • When you set the Access-Control-Allow-Origin value, it should be the name of the origin server (the origin of the request); it really shouldn't be the asterisk (*) which is often used in sample code.  The target resource server must also recognise/honour the origin (a rough sketch of this check follows this list).
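As a rough illustration of that last point (again, a sketch of mine with made-up origins rather than code from the scenario above), a resource server would typically echo back the requesting origin only when it appears on an explicit allow-list, rather than replying with a wildcard:

// Illustrative only: echo the request's Origin header into
// Access-Control-Allow-Origin only when it is on an explicit allow-list.
var ALLOWED_ORIGINS = [
    'https://originserver.example.com',
    'https://intranet.example.com'
];

function corsOriginFor(requestOrigin) {
    // Returns the value to use for Access-Control-Allow-Origin, or null to
    // omit the header entirely (the browser will then block the response).
    return ALLOWED_ORIGINS.indexOf(requestOrigin) !== -1 ? requestOrigin : null;
}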

Using alternative AD attributes to authenticate to AD FS 3.0

Introduction

We have a requirement at the moment to modify AD FS 3.0 (which is a role in Windows Server 2012 R2) to allow users to authenticate without having to specify the domain name. 

This is for two reasons – the current external system doesn’t have a requirement to prefix a domain (or to authenticate with UPN format), and the organisation would prefer users to not have to worry about knowing the domain name (which is a DMZ domain).

AD FS 3.0 supports this scenario – sort of – but the landing page which handles authentication has some hard-coded forms validation logic which won't let users authenticate with a username that doesn't have a DOMAIN\ prefix or isn't in UPN format.  Awkward.

The Problem

AD FS 3.0 is the first release which doesn't run under IIS.  As a result, this self-hosted solution doesn't have web content directly available for customization.  However, using PowerShell commands, it is possible to customize the user interface, although only client-side elements, like scripts (.js).

Since our issue is with client-side validation, we have a way forward.  This article will demonstrate how to remove the domain prefix (or UPN format) requirement without having to modify AD FS binaries directly (which, as a general rule, you should never do).

Assigning an alternate AD attribute to use for identifying a user's credential (i.e. 'username') is simplicity itself.  In a PowerShell console (with elevated permissions), execute the following, where [AD ATTRIBUTE] is the schema attribute you want to use to identify users.

Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID [AD ATTRIBUTE] -LookupForest <your forest>

You can use pretty much any AD schema attribute which makes sense, e.g. CN, which is what I'm using in this scenario.

[Screenshot: the CN attribute in Active Directory]

You don't get this view by default; if you want to view AD schema attributes, you need to switch the AD management console to Advanced View:

[Screenshot: switching the AD management console to Advanced View]

Fixing the client-side validation

This part is trickier.  You're going to have to override the out-of-the-box AD FS behaviour in order to change the way AD FS validates the username field.

You’re going to need to create a new AD FS theme (based on the default) and then dump the default web theme to the file system using PowerShell commands:

New-AdfsWebTheme -name Custom -SourceName default
Export-AdfsWebTheme -Name Custom -DirectoryPath c:\AdfsTheme

You’ll need to configure this on each ADFS host if you have multiple. For more information on customizing AD FS 3.0 have a look at this TechNet article.

Once you've completed this step, have a look at the contents of the folder you specified in the -DirectoryPath parameter (e.g. c:\AdfsTheme).  There should be a subfolder called script, and it will contain a file called onload.js.

We’re going to edit that file.

The Concept

The out-of-the-box implementation adds some client-side JavaScript which checks the username field when the user clicks the submit button, or on a keypress (e.g. the Enter key).  We need to hijack that script and replace it with a cut-down implementation, removing the domain format checking.

We unfortunately can't do this with a script injected only on logon pages (using the SignInPageDescriptionText location)!  That approach injects custom script above the out-of-the-box script, which means we can't modify the form validation behaviour.  Instead we have to change onload.js, which is run on every AD FS web page (the downside).

Here's the out-of-the-box validation script, which you can see by viewing the page source of the AD FS logon page.  Note that the Sign In page description text field is located above this (id="introduction").

[Screenshot: the out-of-the-box Login.submitLoginRequest validation script]
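Since the screenshot isn't reproduced here, the default handler looks roughly like the following.  To be clear, this is a paraphrased approximation from memory rather than the actual AD FS source: the InputUtil, LoginErrors and Login members are the same ones the override below relies on, but the exact format check and error text in the shipped script will differ.

// Approximation only (not the shipped AD FS script): the default handler
// rejects any username that isn't DOMAIN\username or user@domain (UPN).
Login.submitLoginRequest = function () {
    var u = new InputUtil();
    var e = new LoginErrors();
    var userName = document.getElementById(Login.userNameInput);
    var password = document.getElementById(Login.passwordInput);

    if (!userName.value || !userName.value.match(/[@\\]/)) {
        // The real script uses a localised message from LoginErrors here.
        u.setError(userName, "A complete user ID is required (domain\\user or user@domain)");
        return false;
    }
    if (!password.value) {
        u.setError(password, e.passwordEmpty);
        return false;
    }
    document.forms['loginForm'].submit();
    return false;
};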

What we will do is add an implementation to the onload.js file which replaces the OOB implementation – we do this by appending the following to the end of the onload.js file’s content:

// rewire form validation
function override_form_validation() {
    Login.submitLoginRequest = function () {
        var u = new InputUtil();
        var e = new LoginErrors();

        var userName = document.getElementById(Login.userNameInput);
        var password = document.getElementById(Login.passwordInput);

        if (!userName.value) {
            u.setError(userName, "Username must be supplied");
            return false;
        }

        if (!password.value) {
            u.setError(password, e.passwordEmpty);
            return false;
        }

        document.forms['loginForm'].submit();
        return false;
    };
}

if (location.href.indexOf('SignOn') > 0) {
    override_form_validation();
}

The last part executes the overridden form validation only if the page’s URL contains the text “SignOn”.

Publishing the Changes

Once we’re done modifying the JavaScript, we use a PowerShell console to publish the updated file back to AD FS.  Note that you need to do this on each AD FS server if you have multiple.

Set-AdfsWebTheme -TargetName Custom -AdditionalFileResource @{Uri="/adfs/portal/script/onload.js"; Path="c:\AdfsTheme\script\onload.js"}

[Screenshot: publishing onload.js with Set-AdfsWebTheme in PowerShell]

This script will run on each ADFS page.

Adding additional script files and referencing them

If you would like to add separate script files to the custom theme, you can do this too.  Simply use PowerShell and the following command:

Set-AdfsWebTheme -TargetName Custom -AdditionalFileResource @{Uri='/adfs/portal/script/yourfile.js'; Path="c:\AdfsTheme\script\yourfile.js"}

To reference the script, use another PowerShell command to inject a reference to load the script where appropriate:

Set-AdfsGlobalWebContent -SignInPageDescriptionText "<script type=""text/javascript"" src=""/adfs/portal/script/yourfile.js""></script>"

There's also a -SignOutPageDescriptionText option.  Check out the command help documentation for more places to inject your own custom scripts.


WordPress in a can

Introduction

Recently I was asked to look into solutions for moving some WordPress sites in-house for a client.  At first this looked fairly straightforward, until I realised that they wanted the ability to spin up new self-contained VM sites with little effort.

Naturally, I pursued the logical step of building a "base" virtual machine with a clean install of the latest copy of Ubuntu Server 14.04, configuring it with the LAMP (Linux/Apache/MySQL/PHP) stack and mail support.  At one point a friend of mine, Craig Harvey, asked if I'd considered a pre-built distribution image such as the ones available from Bitnami.

As it happened, I hadn't gone down that route at the time, but I'm glad I eventually did.


Enter Bitnami virtual machine images

Suppose you want a baseline application platform with a sizable array of applications and close to zero configuration effort.  Bitnami provides, for free, two awesome VMware/VirtualBox virtual machine images which are pre-configured to support single or multi-site instances of the latest version of WordPress (3.9.1 as of writing).

Can it be that simple?

Yes, it can.  You simply download the image of your choice (using an existing account or registering one), unzip the contents, attach the image to VMware/VirtualBox and then start the VM.

The version of Ubuntu is a little out of date (version 12.04) but is pre-configured.  Bitnami images are built from open source software and distributed for free.

As of the time of writing, the Bitnami WordPress stack ships with the following software versions:

  • WordPress 3.9.1
  • Apache 2.4.9
  • Varnish 3.0.5
  • MySQL 5.5.36
  • PHP 5.4.29
  • phpMyAdmin 4.2.2

One obvious advantage is that the Bitnami template virtual machine could be updated when newer versions of WordPress are released.

Understanding the Bitnami template

The Bitnami template provides a number of pre-installed applications, some of which may not necessarily be used for each WordPress installation.

Figure 1 – The Bitnami Console

The default root of the hosted site provides access to a range of applications:

Figure 2 – Default page of the out-of-the-box template

Adapting the Bitnami template for each WordPress site automatically provisions a pre-configured copy of WordPress 3.9.1:

Figure 3 – Default WordPress site

When you authenticate for the first time, you are forced to change the default password (which is always a good idea!).  From here you may roam the operating system at your leisure. 

One quick tip for those not familiar with Ubuntu – you don't log in as "root"; to perform administrative functions you prefix the commands you need to execute with "sudo" (as opposed to using "su").  There's a capable console text editor installed as standard, called nano, which you'll likely get used to.

Summary

It’s still early days for me, as I navigate the murky waters of Ubuntu.  I’ll be taking this image for a spin to determine whether it is fit for purpose, but at this stage it looks very promising.  I’ll most likely be posting a follow-up article to this one, so stay tuned for more updates.