Mar 20 2014
 

Well, this has been a busy week.  Today, I finished day four of a four-day course on The Open Group Architecture Framework (TOGAF) v9.1 Level 1 & 2 Enterprise Architecture certification.

The course has given me a very decent foundation of learning from which to build, and hopefully sit (and pass?) the Level 1 & 2 exams, probably next week.  It’s a tough load of work, and the framework specification is massive – even overwhelming.

In all honesty, I wasn’t expecting more than to be in a position to sit for the certification afterwards; instead, I found an even bigger reward – some sense of understanding of several principles which have surrounded or underpinned various projects I’ve been involved with over the past 10 years.

The framework isn’t prescriptive to the point of telling you how to architect; rather, it presents a methodology and a set of processes to follow in order to approach business capability needs and strategic goals from an architectural point of view.

There will be more TOGAF insights to come – once my brain recovers from the intensive four-day makeover.  So stay tuned.

Exam Update

Today I sat the TOGAF 9.1 Levels 1 & 2 exams (back-to-back) and I’m proud to say I achieved a score of 81, and passed.  This means I’m now TOGAF 9.1 certified, and at least in theory could start practicing TOGAF-style Enterprise Architecture.

The exam was, honestly, fairly terrifying.  The TOGAF 9.1 specification is very broad and contains considerable depth throughout the ADM, Enterprise Continuum and Governance Framework.  Therefore, Level 1 – 40 multiple-choice questions – can theoretically ask almost anything.

To prepare, I spent about a day doing practice exams and then boning up on areas where I got the answers wrong.  Unfortunately, a lot of the online practice exams are based on older versions of the specification, and led me down the garden path a few times.

Level 2 is tough.  The questions ask you to apply your knowledge of TOGAF to correctly identify the “best fit” answer from four multiple-choice options.  The scenarios in the actual exam, I found, were a little harder than in the practice exams.

I’ll update this article with more insights into how to prepare for the exam in a short while.

Jan 02 2014
 

If you’ve spent any amount of time in the IT industry – and particularly in software development – you’ve probably been asked to forecast the effort associated with some kind of work, outcome or deliverable.

Software development is tricky as it involves some degree of guesswork – requirements can often be vague or ill-defined, the solution(s) may not be clear or obvious, or you may be dealing with too many unknowns.

There are two main ways to estimate project/effort costs – “top-down estimating” and “bottom-up estimating”.  I’ve a lot of experience with both, but more so with bottom-up estimation.  Bottom-up estimation establishes effort forecasts based on the breakdown of work to be undertaken, with numbers given to granular units of work, which are then combined to form a ‘total effort’ estimate.
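As a trivial illustration (a sketch only, with made-up task names and hours, expressed in Python purely for clarity), the bottom-up roll-up is nothing more than summing per-task estimates:

    # Hypothetical granular units of work and their estimated hours (illustrative only).
    tasks = {
        "Design login screen": 8,
        "Build login screen": 16,
        "Test login screen": 6,
    }

    # The 'total effort' estimate is simply the sum of the granular estimates.
    total_hours = sum(tasks.values())
    print(f"Total estimated effort: {total_hours} hours")  # 30 hours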

As someone with a long history in software development (15+ years), I’ve become more comfortable working with tasks and specifically defined units of work – particularly since they tend to lend themselves to being applied to solution design in a more meaningful way.

When I worked at Microsoft from 2005 to 2007, I was introduced to a very handy methodology for developing bottom-up costing and effort estimations.  This methodology involved hand-built utilities in Excel and Visio, which combined spreadsheet calculations with a visual approach (called ‘feature maps’) to help guide clients in understanding their own requirements.

Using this methodology as a starting point, I’ve developed my own simple estimation tool, which takes a simplified view of the more complex tools I’ve used in the past.  To understand how to leverage this utility, I need to explain the concepts which underpin the numbers.

The Estimation Spreadsheet

The spreadsheet is split into a number of worksheets.  Each sheet contains information and values pertinent to the calculation of estimated effort (in single-person hours/weeks/years).

The idea is to produce a raw gross effort estimate, which can be divided by the number and type of resources.  This would complement an MS Project-style project plan or help to underpin effort estimates against tasks in Team Foundation Server, for example.

The worksheets are as follows:


Worksheet | Usage
Deliverables | Assigns effort to outcomes based on discipline responsible for primary work
Effort Calculations | Shows work breakdown (read only)
Complexity | Allows the calibration of the distribution of effort and the number of hours per complexity
Requirement Mapping | Assigns complexity estimates to requirements

 

Essentially, you just need to configure the Complexity page first, then assign complexity estimates to requirements.  This will produce all the effort calculations.  Sounds easy, right?

Step 1: Configure the effort complexities

The idea behind my own approach is to define a set of “complexities” which experienced subject matter experts (SMEs) can apply to units of work.  This isn’t rocket science: the definitions and their application to requirements are based on the discretion (and experience) of those participating in the forecasting exercise.

Conceptually, the idea is to define a range of values and balance them against the type of work which would need to be undertaken to finish a unit of work.  These definitions live in the spreadsheet on the Complexity worksheet:

[Screenshot of the Complexity worksheet]

The idea here is that for each unit of work, the effort will be split across a number of disciplines, i.e. “Design”, “Build”, “Test” and “Document”.  The weighting affects how the total effort for a complexity will be split between these disciplines.  To make this clearer, here are my rough definitions for each discipline:

Discipline | Definition
Design | Plan, design and document a solution for a requirement/feature/implementation – including solution design and any related artefacts
Build | The task of actually building or supporting a unit of work, including unit testing and documentation
Test | The task of fully testing the unit of work – could be automated/manual or include system integration testing
Document | This is strictly intended to capture effort for end-user documentation, user guides/administration and build notes/release notes etc.

 

Note that many different team members could play a role in each of these disciplines.  In some cases, a discipline may not even apply – for example, something estimated as “Easy” may have no “Document” effort, as it would either not require explicit documentation or could potentially be covered by a parent task/unit of work.

The important numbers on this tab are the “Total Hours” and the weight distribution between the disciplines.  Adjust these values until you get definitions for each complexity that you’re comfortable with.
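To make the weighting idea concrete, here’s a minimal sketch of the same calculation in Python (the hours and weights are invented for illustration, not values from the actual spreadsheet):

    # Hypothetical complexity definitions: total hours plus a weighting per discipline.
    # The weights for each complexity sum to 1.0; both are meant to be tuned.
    complexities = {
        "Easy":   {"total_hours": 8,  "weights": {"Design": 0.15, "Build": 0.60, "Test": 0.25, "Document": 0.00}},
        "Medium": {"total_hours": 24, "weights": {"Design": 0.25, "Build": 0.45, "Test": 0.20, "Document": 0.10}},
        "Hard":   {"total_hours": 60, "weights": {"Design": 0.30, "Build": 0.40, "Test": 0.20, "Document": 0.10}},
    }

    def split_by_discipline(name):
        """Split a complexity's total hours across the disciplines by weight."""
        c = complexities[name]
        return {d: round(c["total_hours"] * w, 1) for d, w in c["weights"].items()}

    print(split_by_discipline("Medium"))
    # {'Design': 6.0, 'Build': 10.8, 'Test': 4.8, 'Document': 2.4}

Note that in this made-up table an “Easy” item carries no Document hours at all, which matches the point above about a discipline sometimes not applying.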

Step 2: Assign complexity to requirements

The idea behind the requirement mapping and effort estimation worksheet is to allow subject matter experts (typically the team of people who would perform the work indicated) to assign an estimated complexity rating to units of work.

Complexity is assigned to areas of design and implementation:

Area | Definition
UI (User Interface) | Any work involving the development, documentation or delivery of applications or programs which users will use to interact with a solution.  May include workflow, validation rules etc.
Interface/Integration | Any interface which is defined between systems or applications, or between tiers.  May include workflow, validation rules, system integration etc.
Schema/DB | The definition of data or data storage which may be required, including supporting tasks like ETL.  May include reporting, maintenance etc.

 

This also allows for traceability from requirements (functional/non-functional) to effort, and can help the team define logical packages of work.  The complexity estimates are just that – estimates provided by a person (or people) with a background in developing and delivering the work stated in the requirements.

[Screenshot of the Requirement Mapping worksheet]

This effectively assigns gross average hours against units of work.  The definition of the complexities can be modified to suit the experience/expertise and performance of the team or people who might actually perform the work.

In this example, if parallel (peer) tasks are related and the team feels comfortable, a complexity rating can be assigned to the parent task/requirement instead of having to specify a complexity for each unit of work – or indeed, even for each part of a solution.

The idea is to continually review the tasks and complexity estimates (about three or four full passes) and to tweak the complexity breakdown a few times before you arrive at numbers which look halfway accurate.  Remember, this is quite flexible, but the aim is to try and account for all the effort.

This approach lets you take work in and out of scope simply by assigning, or not assigning, a complexity to a task or requirement.  That way you can come up with deltas which can be used in negotiations (e.g. X days of effort without requirements Y and Z).
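Here’s a minimal sketch of the mapping step (hypothetical requirement names and hours, not the real worksheet) showing how leaving a complexity unassigned takes an item out of scope, and how a delta falls out of the comparison:

    # Hypothetical hours per complexity (the "Total Hours" idea from the Complexity worksheet).
    hours_per_complexity = {"Easy": 8, "Medium": 24, "Hard": 60}

    # Hypothetical requirement-to-complexity mapping; None means no rating, i.e. out of scope.
    requirement_map = {
        "REQ-001 User login":      "Medium",
        "REQ-002 Password reset":  "Easy",
        "REQ-003 Audit reporting": "Hard",
        "REQ-004 Mobile client":   None,
    }

    def total_effort(mapping):
        """Sum the hours for every requirement that has a complexity assigned."""
        return sum(hours_per_complexity[c] for c in mapping.values() if c is not None)

    in_scope = total_effort(requirement_map)
    with_req4 = total_effort({**requirement_map, "REQ-004 Mobile client": "Hard"})
    print(f"In scope: {in_scope} h; delta if REQ-004 is included: {with_req4 - in_scope} h")
    # In scope: 92 h; delta if REQ-004 is included: 60 h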

Result: Outcomes

Once you have defined the work and set complexity estimates, the spreadsheet will start to produce some numbers for you.  This information does not need to be manually tweaked (consider it read-only).

[Screenshot of the Effort Calculations worksheet]

The Effort Calculations worksheet shows you how the numbers are calculated.  Essentially, the idea is to count up the number of rows which feature a complexity rating, and then multiply that count by the total hours which have been set for each complexity.

This also allows for a calculation of the total effort by discipline, e.g. total number of hours for design, development (build) and so on. 

Note that the numbers calculated are gross hours, and do not factor in the size of a team, etc.  It’s up to you to make a determination from the raw total (gross) hours, based on the size and makeup of your team.
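As a rough sketch of what the Effort Calculations worksheet does behind the scenes (reusing the hypothetical complexity definitions from the earlier sketch), the totals are just counts multiplied by hours, split by the discipline weights, with any team-size conversion left to you:

    from collections import Counter

    # Hypothetical complexity definitions (hours and discipline weights), as in the earlier sketch.
    complexities = {
        "Easy":   {"total_hours": 8,  "weights": {"Design": 0.15, "Build": 0.60, "Test": 0.25, "Document": 0.00}},
        "Medium": {"total_hours": 24, "weights": {"Design": 0.25, "Build": 0.45, "Test": 0.20, "Document": 0.10}},
        "Hard":   {"total_hours": 60, "weights": {"Design": 0.30, "Build": 0.40, "Test": 0.20, "Document": 0.10}},
    }

    # One entry per requirement row that has a complexity rating assigned.
    assignments = ["Medium", "Easy", "Hard", "Medium"]

    counts = Counter(assignments)  # e.g. Counter({'Medium': 2, 'Easy': 1, 'Hard': 1})
    gross_hours = sum(complexities[c]["total_hours"] * n for c, n in counts.items())

    by_discipline = {d: 0.0 for d in ("Design", "Build", "Test", "Document")}
    for c, n in counts.items():
        for d, w in complexities[c]["weights"].items():
            by_discipline[d] += complexities[c]["total_hours"] * w * n

    print(gross_hours)                                      # 116 gross hours
    print({d: round(h, 1) for d, h in by_discipline.items()})
    # {'Design': 31.2, 'Build': 50.4, 'Test': 23.6, 'Document': 10.8}
    print(round(gross_hours / 2 / 38, 1), "weeks for two people at 38 hours/week")  # 1.5 weeks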

Result: Deliverables

If you have a list of high-level deliverables which you need to provide indicative effort forecasts for, you can use the total effort numbers (by discipline) to derive some halfway decent estimates.  The key is to assign each deliverable as a percentage of the total effort from one discipline (typically, the discipline which would be most responsible for delivery of the item).

[Screenshot of the Deliverables worksheet]
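A minimal sketch of that allocation (hypothetical deliverable names and percentages, using the per-discipline totals from the previous sketch):

    # Hypothetical per-discipline effort totals (hours), taken from the previous sketch.
    discipline_totals = {"Design": 31.2, "Build": 50.4, "Test": 23.6, "Document": 10.8}

    # Each deliverable is expressed as a share of the total effort of one "owning" discipline.
    deliverables = [
        ("Solution design document", "Design",   0.40),  # 40% of all Design effort
        ("User guide",               "Document", 0.60),
        ("Test summary report",      "Test",     0.25),
    ]

    for name, discipline, share in deliverables:
        print(f"{name}: ~{discipline_totals[discipline] * share:.1f} h")
    # Solution design document: ~12.5 h
    # User guide: ~6.5 h
    # Test summary report: ~5.9 h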

Summary

This is obviously a very simple spreadsheet, and the calculations made aren’t in any way hyper-accurate.  The idea is that it provides a team with a handy mechanism to document and trace effort against requirements, and provides a bit of a framework for distributing effort to project tasks (like documentation, testing etc.) across the total effort.

It’s by no means a comprehensive tool, and it doesn’t factor in a number of different considerations – e.g. the skills and discrete role definitions on a delivery team, or whether the project is greenfield (new development) – and it doesn’t distinctly include contingency/buffer.

I hope that this provides you with some inspiration or at least some food for thought the next time you need to build some effort estimations.  All feedback welcomed!

Oh – and Happy New Year – 2014.

R

May 09 2013
 

If you’ve ever had any involvement with an Agile project (whether it was “pure” Agile or not), you’ll likely have encountered the beast which is effort forecasting and analysis.  This drives the initial estimate of the amount of work which your team thinks it can deliver within a given period.

Example of a scrum-style sprint [source]

It doesn’t really matter how big your project is: sizing up the amount of work which can be produced is a time-honoured tradition.  But how do you know if you’re even in the ballpark of getting your estimates right?

Over at ThoughtWorks Studios, Martin Fowler (and others) have spent significant time and effort in trying to document some conclusions about this very topic, and a PDF white paper can be found on the ThoughtWorks Studios website.

It’s hardly a light read – at 32 pages – but can you really afford to take estimation lightly?  In a world of commercial agreements, balancing customer or client expectations and attempting to meet tight delivery timelines, getting your estimations accurate is a key step in delivery.

However, to my mind, going into this document it really helps to have a decent view of what it is you are trying to build.  The more uncertainty going into any kind of sizing or storyboarding exercise, the rougher the estimation or analysis is going to be.

There’s no silver bullet, one-size-fits-all methodology at play here; however, this document is a really good read if you are looking to canvass different views and opinions about how to set expectations around Agile delivery.  Be prepared to have your designs challenged, and to field changes as they can (and do) present themselves!

If you aren’t quite ready to dip into the minefield which is Agile planning and forecasting, perhaps you’d find value in another e-book from ThoughtWorks – “How do you develop a shared understanding on an Agile project?”.  Remember, for an Agile project to succeed, everyone needs to play their part – the methodology isn’t just for programmers!

For those who haven’t already gone to visit the ThoughtWorks website, here’s a direct link to the PDF.

Further reading: