Sep 18 2013


Recently I began writing a series of articles demonstrating how you could take a blank database schema, bring it together with a fully integrated object relational mapping access model (Entity Framework), and then expose the entire object graph via WCF web services in a completely detached mode. 

A brief note: Generally, it’s a good idea to check back with the source of this article in case there have been any updates.  I have a habit of finding a few problems from time to time, and need to publish some minor modifications.

Note: the data access code has been updated since this article was first published.  Refer to this article for the latest code.

The last article was titled ‘Basic Data Operations’.  This article also follows on logically from ‘Solving the detached many-to-many problem with the Entity Framework’.

In this article, we’re going to go a lot deeper, and I’m going to introduce some techniques (in C#) for abstracting the EDMX/EF Entity Model away from consuming code but keeping a highly flexible access pattern available for more complex data access scenarios.

If you haven’t read the previous articles, they generally prepare you for the content of this article, but provided you have a solid grounding in the Entity Framework you should be able to follow this article in isolation.

Lastly – no warranty.  I’ve tried my best to anticipate a number of design and environmental factors, but I’ve found the odd issue from time to time.  I don’t warrant that this will necessarily be a solution for your specific scenario(s) and is therefore provided ‘AS-IS’.  I will post updates if I find a problem which can be overcome, so if you have any issues, come back and see if there’s an update or email me directly.

Now that we’ve gotten that out of the way…

The Current Solution

Presently, the solution is still fairly flat: it contains a Class Library assembly which houses the main data access logic, with a Web Service Application consuming it.  There are two sets of unit test projects providing partial test coverage, and a Database project for convenient schema management.



Data Access

This library contains the Entity Framework model, accompanied by various degrees of data access scaffolding.  The general design of this class has been explained in previous articles, and the focus of this article will be on drilling down further on developments to this project.

There are a number of key classes contained within the library, beginning with the basic POCO classes exposed by the Entity Framework model:


These classes are complemented by a series of ‘accessor’ classes which are loosely modelled after the repository pattern.  My main goal with these classes was to provide a flexible and, for the better part, generic approach to consuming the Entity Framework model.  Some of the approach was influenced by this article.


Note that most of the work is performed by the DataAccessor<T> class, from which the type-specific accessors are derived.  This main class implements a standardised interface so that additional functionality could be added with a different persistence store behind it (although we only support database persistence at the moment).

Each model within the Entity Framework model derives from a base class called EntityBase.  There’s a public property in the base class exposed which is an Enum type called ObjectState.  This property is used by consuming code to manually set the state of each object.

public enum ObjectState
{
    Unchanged = 0,
    Added     = 1,
    Modified  = 2,
    Deleted   = 3,
    Processed = 4
}
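As a quick illustration of how this is intended to be used (a sketch only – the `FetchDetachedCatalog` helper is hypothetical, while `CatalogDataAccessor`, `InsertOrUpdate` and `SaveChanges` appear later in this article):

```csharp
// Sketch: the client marks its own changes on the detached entity
// before handing it back to the accessor.
Catalog catalog = FetchDetachedCatalog();   // hypothetical helper
catalog.Description = "New description";
catalog.State = ObjectState.Modified;       // tell the accessor what changed

using (CatalogDataAccessor a = new CatalogDataAccessor())
{
    a.InsertOrUpdate(catalog);
    a.SaveChanges();
}
```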

The Data Access classes

My main objective was to provide as much flexibility to consuming code without having the Entity Framework’s Data Context object passed around too frequently or too far from the data access assembly.  The aim of the base generic class was to provide full CRUD support whilst restricting how much access the consuming code had to the underlying EF scaffolding.

A Generic Interface

The following interface defines a specific set of functionality which will be exposed by any base data accessor.  This allows for implementation of other abstract providers, assuming we wanted to support other persisted storage options with the same EF model.

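The original screenshot of the interface hasn’t survived here, but a minimal sketch, reconstructed from the operations exercised later in this article, might look like the following (treat the exact member list as an assumption):

```csharp
// Sketch only – reconstructed from the operations used later in
// this article; the real interface in the solution may differ.
public interface IDataAccessor<T> : IDisposable where T : EntityBase
{
    T GetEntity(Expression<Func<T, bool>> predicate);
    IList<T> GetEntities(Expression<Func<T, bool>> predicate);
    IQueryable<T> CreateQuery();
    void InsertOrUpdate(T entity);
    void Delete(T entity);
    void SaveChanges();
}
```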

The Generic Data Access Approach

Each of the above interface functions are implemented within the generic DataAccessor<T> class which derives from a base class (BaseAccessor).  The base class is now responsible for management of the EF Data Context, and implements the IDisposable interface, as well as controls the usage of database transactions.

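Conceptually, the base class looks something like this (a sketch – member names are assumptions; the actual implementation ships with the solution files):

```csharp
// Conceptual sketch of BaseAccessor: owns the EF Data Context,
// controls transactions, and implements IDisposable.
public abstract class BaseAccessor : IDisposable
{
    internal DbContext DataContext { get; set; }
    internal DbContextTransaction Transaction { get; set; }

    public void BeginTransaction()
    {
        // EF6's Database.BeginTransaction wraps an ADO.NET transaction
        Transaction = DataContext.Database.BeginTransaction();
    }

    public void CommitTransaction()
    {
        if (Transaction != null) Transaction.Commit();
    }

    public void Dispose()
    {
        if (Transaction != null) Transaction.Dispose();
        if (DataContext != null) DataContext.Dispose();
    }
}
```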

The idea is that individual type-specific classes can derive from the generic base class and provide pre-defined, reusable, type-specific queries.  The capabilities of the base classes are also available:


Generic Implementation Design

I’m not going to go through the class function by function (too time consuming) however I do want to walk through the design decisions I made when considering this approach.

  1. Reusable

    The original intention was to conserve and protect the usage of the EF’s DbContext.  However, I also wanted an ability to encapsulate common queries in classes deriving from the generic implementation.  In the end I came up with the solution presented here.  Chances are high that there’s a more elegant approach, and I’m happy to hear from folks with suggestions.

  2. Generic

    Another key was to try and encapsulate as much of the common ‘CRUD’ functionality as possible without having to do things that were entity or type-specific.  Generally, with the exception of schema details and relationships, the approach to data access should be as uniform as possible, and so it should be with the mechanisms controlling such access.

  3. Flexible

    As always, providing a useful and flexible interface is a design goal.  There’s not much point introducing a repository or interface based design if consumers will write hacky workarounds to do something they need to do.  Hence, the exposure of IQueryable<T> return types.

  4. Extendable

    Chances are you’ll never fully anticipate all needs, and this is certainly true with persistence.  The aim here is that the generic approach can be extended fairly easily to provide realistically any capability that might be required down the track.  For example, a type-specific accessor (repository) could be implemented on top of the generic class to provide execution of stored procedures.

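To make the stored procedure idea concrete, here’s a sketch (the procedure name `GetCatalogsByKeyword` and the `FindByKeyword` method are purely hypothetical, and I’m assuming the accessor can reach the base class’s DataContext):

```csharp
// Hypothetical example: a type-specific accessor layering a stored
// procedure call on top of the generic DataAccessor<T>.
public class CatalogDataAccessor : DataAccessor<Catalog>
{
    public IList<Catalog> FindByKeyword(string keyword)
    {
        // SqlQuery<T> materialises the proc's result set as entities
        return DataContext.Database
                          .SqlQuery<Catalog>(
                              "EXEC GetCatalogsByKeyword @p0", keyword)
                          .ToList();
    }
}
```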
Using the Generic Interface – Queries

For flexibility, it’s not necessary to create a concrete class for every model type in the entity model.  The generic DataAccessor<T> class can be constructed directly by specifying the type:

using (DataAccessor<Catalog> acc = new DataAccessor<Catalog>())
{
    var entity = acc.GetEntity(x => x.CatalogId == 1);
    Assert.IsNotNull(entity, "There should exist an entity with ID = 1");

    var entities = acc.GetEntities(x => x.CatalogId < 10);
    Assert.IsTrue(entities.Count > 0, "Should be at least one entity returned");
}

This provides all of the functionality defined in the IDataAccessor interface and more.  This simple example shows how easy it is to query for a single or multiple entities, and makes use of lambda/LINQ expressions to help build the query.

If that’s not enough flexibility for more detailed queries, you can have an IQueryable<T> returned, and from here you can specify almost any combination of filters and inclusions:

using (DataAccessor<Catalog> acc = new DataAccessor<Catalog>())
{
    var entities = acc.CreateQuery().Where(x => x.CatalogId < 10).ToList();
    Assert.IsTrue(entities.Count > 0, "Should be at least one entity returned");

    IQueryable<Catalog> extendedQuery = acc.CreateQuery();
    extendedQuery = extendedQuery.Include("Genres");
    extendedQuery = extendedQuery.Where(x => !String.IsNullOrEmpty(x.Title));
    extendedQuery = extendedQuery.Where(x => x.Genres.Count > 1);
    entities = extendedQuery.ToList(); //executes the query
    Assert.IsTrue(entities.Count > 0, "Should be at least one entity returned");
}

Using the Generic Interface – Insert/Modify

We can use the same interface to apply changes to an entity or entities contained within collections (i.e. within the object graph).  As mentioned in the previous article, the client side needs to set the ObjectState property for modified or added entities, for example:

using (CatalogDataAccessor a = new CatalogDataAccessor())
{
    string originalDescription = String.Empty;

    IQueryable<Catalog> query = a.CreateQuery();
    query = query.Take(5);
    query = query.Include("Sizes");
    var result = query.ToList();
    Assert.IsTrue(result.Count > 2, "Should find at least 2 results");
    Catalog item = result[1];
    Assert.IsTrue(item.Sizes.Count > 0, "Should be at least one size");
    Size editItem = item.Sizes.First();
    originalDescription = editItem.Description;
    editItem.Description = "Updated By Unit Test";
    editItem.State = ObjectState.Modified;

    //persist the change
    a.InsertOrUpdate(item);
    a.SaveChanges();

    //restore the original value
    editItem.Description = originalDescription;
    editItem.State = ObjectState.Modified;
    a.InsertOrUpdate(item);
    a.SaveChanges();
}



Using the Generic Interface – Delete

We can also use the data access interface to remove entities (in reality, one or more rows from a database table).  As noted earlier, the client can set the entity state to Deleted, although the Delete implementation sets the state to Deleted regardless.

using (SizeDataAccessor a = new SizeDataAccessor())
{
    Size obj = new Size();
    obj.Height = 0;
    obj.Width = 0;
    obj.SizeId = 100; //out of the standard range
    obj.IsCustom = false;
    obj.Description = "Unit Test";
    obj.Dimensions = DateTime.Now.ToString("yyyy-MM-dd hh:mm:ss");
    obj.State = ObjectState.Added;
    a.InsertOrUpdate(obj);
    a.SaveChanges();
}

using (SizeDataAccessor a = new SizeDataAccessor())
{
    var verify = a.GetEntity(x => x.SizeId == 100);
    Assert.IsNotNull(verify, "Should be saved to the database");

    a.Delete(verify);
    a.SaveChanges();

    verify = a.GetEntity(x => x.SizeId == 100);
    Assert.IsNull(verify, "Should be removed from the database");
}

Managing Entity Relationships

The main benefit of an ORM comes from relationships between entities, but it’s only useful if you can manipulate them succinctly.  Returning the full object graph is useful, but being able to manipulate the structure is better. 

Here’s how we can do this with disconnected POCO objects.

Adding or Updating Many-To-Many Relationships

For the most part, adding or updating related entities is as close as possible to the experience you’d have with connected entities, with one key exception: the client is responsible for setting the object state of each entity being modified.

Here’s an example of updating a related, many-to-many entity after retrieving a base entity.  Note that in this particular example, since the entities are not subsequently re-queried, the ObjectState changes are not strictly necessary, but they are included for completeness.

public void UpdateManyToMany()
{
    Catalog existing = null;
    Genre other = null;
    String existingValue = String.Empty;
    String existingOtherValue = String.Empty;

    using (CatalogDataAccessor a = new CatalogDataAccessor())
    {
        //Note that we include the navigation property in the query
        existing = a.CreateQuery().Include("Genres").FirstOrDefault();
        Assert.IsTrue(existing.Genres.Count() > 1,
                      "Should be at least 1 linked item");
    }

    //save the original description
    existingValue = existing.Description;
    //set a new dummy value (with a date/time so we can see it working)
    existing.Description = "Edit " + DateTime.Now.ToString("yyyyMMdd hh:mm:ss");
    existing.State = ObjectState.Modified;

    other = existing.Genres.First();
    //save the original value
    existingOtherValue = other.Description;
    //set a new value
    other.Description = "Edit " + DateTime.Now.ToString("yyyyMMdd hh:mm:ss");
    other.State = ObjectState.Modified;

    //a new data access class (new DbContext)
    using (CatalogDataAccessor b = new CatalogDataAccessor())
    {
        //single method to handle inserts and updates
        b.InsertOrUpdate(existing);
    }

    //return the values to the original ones
    existing.Description = existingValue;
    other.Description = existingOtherValue;
    existing.State = ObjectState.Modified;
    other.State = ObjectState.Modified;

    using (CatalogDataAccessor c = new CatalogDataAccessor())
    {
        //update the entities back to normal
        c.InsertOrUpdate(existing);
    }
}

Adding and Removing FK-based Relationships

The intention here is that a one-to-many relationship results in a single entity on one side of the join (or relationship) and you can set this as you would normally with the Entity Framework, by assigning the ID or entity to the target entity.

To remove the relationship you do the same thing, set the relationship property to NULL.

public void CreateRemoveFK()
{
    File newFile = CreateFile();
    Size newSize = CreateSize();

    using (FileDataAccessor a = new FileDataAccessor())
    {
        newFile.Size = newSize;
        newFile.SizeId = newSize.SizeId;
        newFile.State = ObjectState.Modified;
        a.InsertOrUpdate(newFile);
        a.SaveChanges();
    }

    using (FileDataAccessor b = new FileDataAccessor())
    {
        File result = b.GetEntity(x => x.FileId == newFile.FileId,
                                  x => x.Size);
        Assert.IsNotNull(result);
        Assert.IsNotNull(result.Size);
        newFile = result;
        result.Size = null;
        result.SizeId = null;
        result.State = ObjectState.Modified;
        b.InsertOrUpdate(result);
        b.SaveChanges();
    }

    using (FileDataAccessor c = new FileDataAccessor())
    {
        c.Delete(newSize);
        c.Delete(newFile);
        c.SaveChanges();
    }
}

Note that in this example I’ve set the FK relationship on only one side of the relationship.  It seems to work fine whether you set one side or both.

Removing Many-To-Many Relationships

Removing a related entity is not as easy as it should be.  Because many-to-many relationships make use of a “join table”, it’s not as simple as setting the relationship to NULL, as it is in the above example.  Instead you have to explicitly ‘delete’ the relationship and you have to specify the navigation property it applies to.

public void AddRemoveRelationship()
{
    Catalog existing = null;

    using (DataAccessor<Catalog> a = new DataAccessor<Catalog>())
    {
        existing = a.CreateQuery().Include("Genres").FirstOrDefault();
        Assert.IsNotNull(existing, "Should find at least one Catalog");
    }

    Genre newEntity = new Genre();
    newEntity.State = ObjectState.Added;
    newEntity.Title = "Unit";
    newEntity.Description = "Test";
    newEntity.GenreId = 1000;
    existing.Genres.Add(newEntity);

    using (DataAccessor<Catalog> a = new DataAccessor<Catalog>())
    {
        a.InsertOrUpdate(existing);
        a.SaveChanges();
    }

    newEntity.State = ObjectState.Unchanged;
    existing.State = ObjectState.Modified;

    using (DataAccessor<Catalog> b = new DataAccessor<Catalog>())
    {
        b.ModifyRelatedEntities<Genre>(existing, x => x.Genres,
                                       EntityState.Deleted, newEntity);
        b.SaveChanges();
    }

    using (DataAccessor<Genre> c = new DataAccessor<Genre>())
    {
        c.Delete(newEntity);
        c.SaveChanges();
    }
}

Therefore, I’ve implemented a function called ‘ModifyRelatedEntities’ which allows for the state to be explicitly set.  Note that you don’t need to call this to create the relationship, only to remove it.
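Under the hood, a function like this can drop down to the ObjectContext’s relationship state manager.  Here’s a sketch of the idea – not necessarily the exact implementation shipped with the solution, so treat the signature and helper logic as illustrative:

```csharp
// Sketch: how ModifyRelatedEntities might be implemented. The state
// change applies to the join-table row, not the entities themselves.
public void ModifyRelatedEntities<TRelated>(
    T entity,
    Expression<Func<T, object>> navigation,
    EntityState state,
    params TRelated[] related)
{
    // Drop down from DbContext to the underlying ObjectContext
    ObjectContext context =
        ((IObjectContextAdapter)DataContext).ObjectContext;

    // Resolve the navigation property name from the expression
    string navigationProperty =
        ((MemberExpression)navigation.Body).Member.Name;

    foreach (TRelated item in related)
    {
        context.ObjectStateManager.ChangeRelationshipState(
            entity, item, navigationProperty, state);
    }
}
```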


Finally, before we close out this article, I want to revisit an aspect of some of the previous articles.  When I was building a generic data access class, alongside the type/entity specific ones, it dawned on me that generic considerations should not be side-by-side, but should underpin ‘extended’ data access.

This brought one “minor” complication – generic base classes are hellish in terms of things like copy constructors, since you can’t cast without explicit types.  What I ended up having to do was have the generic implementation derive from another base class, which allows for reuse of an underlying Data Context.

An example:

using (CatalogDataAccessor a = new CatalogDataAccessor())
{
    var result = a.CreateQuery().Where(x => x.CatalogId < 10).ToList();
    SizeDataAccessor sa = new SizeDataAccessor(a);
    var more = sa.CreateQuery().Where(x => x.SizeId < 10).ToList();
}

This is accomplished by the SizeDataAccessor having the following constructor defined:

public SizeDataAccessor(BaseAccessor existing) : 
       base(existing.DataContext) { }

..and the generic data accessor defines a constructor like so:

public DataAccessor(BaseAccessor existing)
{
    DataContext = existing.DataContext;
    Transaction = existing.Transaction;
}

This arrangement also allows a transaction to be shared across accessors:

using (CatalogDataAccessor a = new CatalogDataAccessor())
{
    var result = a.CreateQuery().Where(x => x.CatalogId < 10).ToList();
    SizeDataAccessor sa = new SizeDataAccessor(a);
    var more = sa.CreateQuery().Where(x => x.SizeId < 10).ToList();
    a.CommitTransaction();
}



This is not the end.  Rather than writing more about the implementation, I’d prefer to upload this article and get the solution sample out as well and let you experiment with the concepts presented here (if you choose to do so).  I’ll write a follow up article examining specific scenarios shortly, and if there are any updates I’ll repost in this article.

This has been a long journey thus far, and there is still plenty to cover off.  The unit tests come in at around 75% code coverage, but a lot of the functionality needs to be tested through a WCF interface.  I’ve sort of simulated some of the scenarios by forcing entities into detached states (by disposing the original Data Context) but it’s far from foolproof.

Notes about the Solution

You’ll need to download the NuGet package for the Entity Framework v6 RC as I haven’t included it in the archive. The fastest way is to delete the packages.config file in one of the projects, and then install the EF NuGet package.

Feel free to E-mail me if you have feedback or have questions, or leave a comment on this article.

Solution [ Files ]

Check back for the next article shortly.

Sep 06 2013


In my last post, I explained how to add a NuGet package to a solution, and from there how to generate and lightly consume an Entity Framework data model.  We’re going to take it to the next level now, and do a bit more with the framework.

Expanding on the Solution

Since we had a tiny bit of read functionality in the last post, I’m going to build upon that functionality and introduce concepts we’ll need as we start to factor in layers – particularly service boundaries.  Luckily for us the latest version of the Entity Framework gives you one super-massive benefit over earlier versions: the entities are defined as Plain Old CLR Objects (POCO) out-of-the-box!

What are POCO objects?

Well, we should really say “what are POCOs?”, since the “O” stands for “Object”.  That said, POCO classes are basic class definitions with properties and little else; they are intended to represent plain data entities.

The main benefit is that they aren’t tied to any persistence implementation (i.e. scaffolding which handles the object’s persistence to a data store) which means the objects can be passed around and consumed without the risk of corrupting the data.

Here’s an example from the data model we generated in the last article:

namespace RSPhotography.DataAccess
{
    using System;
    using System.Collections.Generic;

    public partial class Genre
    {
        public Genre()
        {
            this.ChildGenres = new HashSet<Genre>();
            this.Catalogs = new HashSet<Catalog>();
        }

        public int GenreId { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public Nullable<int> ParentGenreId { get; set; }

        public virtual ICollection<Genre> ChildGenres { get; set; }
        public virtual Genre ParentGenre { get; set; }
        public virtual ICollection<Catalog> Catalogs { get; set; }
    }
}


Note that this example still has some more “complex” structure to it – namely the self join “navigation properties” which are correctly modelled as either a parent instance, or a collection of child elements.

These classes are generated by T4-templates which are shipped as part of the Entity Framework supporting files.  When you update the DB model, these files are automatically updated to reflect schema changes.  Nifty.

You can extend these classes by placing an implementation side-by-side within the same assembly, which is a handy way to define extra properties which are not blown away when the model is updated.  If you need properties which are common to all your entity classes, you can also specify a base class, by opening the data model and setting the class in the properties window:

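For instance, a side-by-side partial class might look like this (the `IsRootGenre` property is purely hypothetical, just to show the mechanism):

```csharp
// Hypothetical extension of the generated Genre POCO. This file
// lives beside the T4 output and survives model regeneration.
namespace RSPhotography.DataAccess
{
    public partial class Genre
    {
        // Convenience property; not part of the generated model.
        public bool IsRootGenre
        {
            get { return ParentGenreId == null; }
        }
    }
}
```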

I’ll come back to this in a future article.

Give me an example of why POCO objects are useful!

OK, fair enough.  It might not look like much of a drawcard, so I’ll demonstrate the usefulness by showing how the model can be exposed in a more real-world kind of fashion.  So, you’ll recall we have a data access assembly which contains the entity model and data access classes.

Within this assembly is a data accessor for data contained within the Catalog table, let’s expand on that class first.  I’m going to add a more useful implementation, like so:

public ReadOnlyCollection<Catalog> FindItemsByLocation(Location loc)
{
    if (loc.LocationId <= 0 && loc.RegionId <= 0)
    {
        throw new ArgumentException("LocationId or RegionId must be specified");
    }

    _context.Configuration.LazyLoadingEnabled = false;

    IQueryable<Catalog> query = (from a in _context.Catalogs
                                 select a);
    if (loc.LocationId > 0)
    {
        query = query.Where(x => x.Locations.Any(
                                 y => y.LocationId == loc.LocationId));
    }
    if (loc.RegionId > 0)
    {
        query = query.Where(x => x.Locations.Any(
                                 y => y.RegionId == loc.RegionId));
    }
    return query.ToList().AsReadOnly();
}


I’m going to explain this implementation in a little more detail.

  • First off, there are some basic argument checks, to ensure something is being passed in to actually filter the data.
  • Next, I’m setting “Lazy Loading” to false – which means that the returned entities will be disconnected from the data context.  More on that later.
  • Now I use an IQueryable<Catalog> to build up my actual data query, establishing the base query first.
  • Then I filter based on the arguments to ensure the parameters are applied as “WHERE” clauses.
  • When I’m done, the last step is to execute the query, by calling .ToList() on the IQueryable variable.

So I’ve updated my unit tests and added a new one which invokes this new data access method.  Let’s take a look at the impact of Lazy Loading, which I mentioned above.

Lazy Loading

Lazy loading is akin to “just read related data on demand”, and means that after the initial query, accessing a property or collection of an object will cause the database to be subsequently queried at runtime.  Here’s a visual difference:

Lazy Loading Enabled:


Lazy Loading Disabled:


Notice the difference?  You can expand the properties and collections when lazy loading is enabled, but not when it is disabled – we prefer the latter experience when dealing with service boundaries, where we want to return all relevant data at once.
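To make the behaviour concrete, here’s a short sketch using the model from this series (variable names are illustrative):

```csharp
// With lazy loading ON (the default), touching a navigation property
// silently issues a second database query at that moment.
_context.Configuration.LazyLoadingEnabled = true;
Catalog lazyCatalog = _context.Catalogs.First();
int count = lazyCatalog.Locations.Count;   // triggers a query here

// With lazy loading OFF, related data only arrives if you ask for
// it up front via Include() - ideal at a service boundary.
_context.Configuration.LazyLoadingEnabled = false;
Catalog eagerCatalog = _context.Catalogs
                               .Include("Locations")
                               .First();   // one query, graph included
```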

What is an Object Graph?

This was posed to me late last year, and I thought it was commonly known.  An object graph is a complex collection of related instance objects.  When you deal with more complex data structures, this becomes important, because you need to factor in how big they can get.

We’ll be bumping into this term a lot in this article – you’ve been warned!

Now let’s introduce a WCF service application so that I can demonstrate the differences when you start exposing your Entity Framework entities to the wire.

Add a WCF Service Application

In your solution explorer add a new WCF Service Application, as illustrated below:


I’m going to rename the default service to something more meaningful.  You can do this by renaming the files, and also by inline refactoring.


At this point, don’t forget to update the Web.config so that it includes the relevant Entity Framework configuration.  It’s easiest to open the App.config from the data access assembly and the Web.config side by side.  Copy the config into the top of the Web.config, just below the <configuration> node.


Add a project reference to the Data Access assembly, and then build some service plumbing.  Here’s mine, remembering to write the service interface first:

using RSPhotography.DataAccess;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace RSPhotography.WebServices
{
    [ServiceContract]
    public interface ICatalogService
    {
        [OperationContract]
        ReadOnlyCollection<Catalog> GetCatalogByLocation(Location value);
    }
}



using RSPhotography.DataAccess;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Diagnostics;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace RSPhotography.WebServices
{
    public class CatalogService : ICatalogService
    {
        public ReadOnlyCollection<Catalog> GetCatalogByLocation(Location value)
        {
            try
            {
                using (CatalogDataAccessor accessor = new CatalogDataAccessor())
                {
                    return accessor.FindItemsByLocation(value);
                }
            }
            catch (Exception ex)
            {
                Trace.WriteLine(ex.Message);
            }
            return null;
        }
    }
}


So far so good.  If you’ve done all this and can compile, you’re heading in the right direction.  However, WCF introduces a whole new set of complexity as you’re about to find out.  To make sure everything’s OK, right click on the .svc file in Solution Explorer and select “View in Browser (name of your default browser)” and just make sure the service resolves:


Note that, by default, the project will use IIS Express.

Adding another Unit Test project

The next step is to add a brand new Unit Test project and add a service reference to the newly minted WCF service.  You can just “Discover” the service once you have the Add Service Reference dialog open.  I called it “ServiceReference” for brevity.


So here’s my unit test:

public void TestService()
{
    using (CatalogServiceClient c = new CatalogServiceClient())
    {
        Location loc = new Location();
        loc.LocationId = 1;
        var result = c.GetCatalogByLocation(loc);
        Assert.IsTrue(result.Count > 0, "Should be at least one record");
    }
}

All pretty straightforward, hopefully.  Here’s where things go pear shaped.  If you’ve followed along, you’ll find I’ve led you down the garden path – on purpose.  I want to introduce you to some WCF debugging and analysis because, believe me, you’re going to need to know this.

Important: Determining the cause of WCF issues

Upon executing the Unit Test, we receive this ugly and unhelpful exception:


“A first chance exception of type ‘System.ServiceModel.CommunicationException’ occurred in mscorlib.dll” – which is pretty generic and extremely unhelpful.

So at this point, you might diligently head over to your WCF application’s Web.Config file and change the following:

            <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
            <serviceDebug includeExceptionDetailInFaults="true"/>


However, this won’t help.  What you need to do is enable tracing for the WCF service, which you can do manually in the service application’s Web.config.  However, doing this by hand is tedious – there’s a better way!

In Visual Studio, go to Tools –> WCF Service Configuration Editor.  Once it loads, browse to your Web.config, and then expand the “Diagnostics” node.  You can see the options, click to enable tracing.

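For reference, the editor emits configuration along these lines into the Web.config (illustrative – the switch values and log path below are just examples; yours will reflect whatever you selected in the tool):

```xml
<!-- Illustrative sketch of the tracing config the WCF Service
     Configuration Editor generates; the path is an example only. -->
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Warning, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\WebServices.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```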

Save and exit, and return to Visual Studio.  If you re-run the same unit test now, it will generate a service trace log in the location indicated in the config file.  If you browse to that location and double click on the trace file it should open in the WCF tracing tool.

Under “Activity” any issues will be in red text.  Double click on the red text and scan for exceptions:


As you can see from the below image, there has been an exception thrown:


This screen gives us the actual exception information:

Type 'System.Data.Entity.DynamicProxies.Catalog_1F5B5519A0772307B45546CB10E51B38AA4E82B656391DE726FA25F2B3B8A8B5' with data contract name is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.


The secret is that in our data access class, you need to ensure that when Entity Framework entities are created, EF proxy creation is off, as WCF cannot handle these non-static (that is, dynamically generated) types.

_context.Configuration.ProxyCreationEnabled = false;


Once you have applied this change and recompiled, there’s a good chance your test will succeed:


But wait – there’s more: Include()

So now we’re getting back a collection of results – exciting – but there’s one tiny problem when you look closely:


None of the related entities are returning.  That’s right, we’re extracting the base entities just fine, but the query doesn’t walk the object graph.  In the Entity Framework, when you query, you have to explicitly name “paths” to include in the query – particularly if you aren’t using lazy loading, as the query won’t automatically include all navigation properties.

To explicitly include a navigation property you have to specify it in an include directive, such as the below:

IQueryable<Catalog> query = (from a in _context.Catalogs.Include("Locations")
                             select a);


This forces the inclusion of one of the FK/collections we’re interested in.  Returning to the first unit test project we created, we can observe it working:


To extend the object graph to include other navigation properties, we add more “.Include()” directives – but use this sparingly!  It is not recommended to use more than two or three inclusions per query, due to the vast amount of data it could suck into your object graph.

IQueryable<Catalog> query = (from a in _context.Catalogs
                                 .Include("Locations")
                                 .Include("Genres") //a second navigation property
                             select a);


Now that we have that cleared up, you’d think we’re right as rain?  Not quite – the same sort of unit test through the WCF service implementation throws an exception:


It’s our unfriendly CommunicationException!  Switching to the WCF trace, we can quickly find the culprit.

“There was an error while trying to serialize parameter.  The InnerException message was ‘Object graph for type ‘System.Collections.Generic.HashSet`1[[RSPhotography.DataAccess.Location, RSPhotography.DataAccess, Version=, Culture=neutral, PublicKeyToken=null]]’ contains cycles and cannot be serialized if reference tracking is disabled.’.  Please see InnerException for more details.”

It’s my old nemesis, the cyclic reference issue.  Because the Location table has a “self-join”, WCF complains that if entities became self-referential, serialization could loop indefinitely.  That’s kind of funny, because you’d think it would be more of an Entity Framework problem than a service-level issue.

(Sort of) Solving the WCF Object Graph/Cycle Issue

Well, I’m working through this one step at a time – and I have prior experience with this particular issue way back in 2010 – however, at the moment the solution which at least works for reading data is fairly straightforward.

Could it be this simple?  Honestly, I hope so – it seems that the key is to decorate each class with the following attribute:

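The attribute in question is `[DataContract(IsReference = true)]` from System.Runtime.Serialization, which switches the DataContractSerializer to reference tracking (the Location class is used here purely as an example):

```csharp
// DataContract with IsReference=true turns on reference tracking,
// so self-referencing graphs (like Location's self-join) serialize.
[DataContract(IsReference = true)]
public partial class Location
{
    // ...generated members...
}
```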


Now obviously, we don’t want to spend too much time and effort on decorating each entity (you might have lots), so there’s a faster way to make this happen, and I alluded to it earlier in this post: base classes.

Adding a Base Class for your Entities

Create a new class in the same project as your EF entities.  It can be very simple, like mine:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text;
using System.Threading.Tasks;

namespace RSPhotography.DataAccess
{
    [DataContract(IsReference = true)]
    public partial class EntityBase
    {
    }
}


Heading back to Solution Explorer, expand the <Model>.edmx node and locate the <Model>.tt (T4 template) file beneath it.  Right click it, select “Open With” and pick XML (Text) Editor.


Locating the placeholder for the class definition, you can add the declaration to inherit from a base class, like so:

<#=codeStringGenerator.UsingDirectives(inHeader: false)#>
<#=codeStringGenerator.EntityClassOpening(entity)#> : EntityBase


This will mean that all of your entity classes will derive from this common base class.  Rebuilding will cause the T4 template to regenerate the entity class definitions.  Don’t forget to rebuild everything, and refresh your service reference in the service unit test project!
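After regeneration, each entity’s class opening should look something like the following sketch (the Catalog entity and namespace come from this series – yours will differ):

```csharp
namespace RSPhotography.DataAccess
{
    // Emitted by the T4 template – note the base class appended
    // by our edit to EntityClassOpening.
    public partial class Catalog : EntityBase
    {
        // ... generated scalar and navigation properties ...
    }
}
```

If an entity doesn’t show the base class after a rebuild, force the template to run again by right-clicking the .tt file and choosing “Run Custom Tool”.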

Re-running the same unit test from before shows us:


Now you might expect problems on the client side, but it appears that the client-generated classes (attached to the Service Reference) in the web service unit test include both the base class definition and the attribute necessary to avoid the cyclic graph issue:


Just to be sure, I added a service method to the service and passed a Catalog entity back – and it worked, no issues.  So much easier than before!


So there’s been a lot to cover off in this article.  It’s a work in progress at this stage, but we’ve accomplished quite a bit.  Let’s recap – we:

  • Reviewed POCO objects and their purpose,
  • Discussed lazy loading, and disabled it for transfer via WCF,
  • Looked at proxy classes and why they don’t work with WCF,
  • Established how to enable WCF tracing,
  • Used the WCF Configuration Tool and Trace Tool to locate the cause of exceptions,
  • Lastly, edited the T4 template to establish a base class for all EF POCO classes.

That’s probably a good place to finish up on for now.

Here’s the updated solution [ Solution Files ]

A note about the sample – you’ll have to re-add the Entity Framework NuGet package, as I excluded it for license and file size reasons.  The fastest way is to delete the entry in each project’s packages.config file and then re-add it through the NuGet package manager.

If anyone has a more elegant solution to establishing the DataContract attribute on entity classes (without hard coding it into the T4 template), I’d be keen to hear.

May 092013

If you’ve ever had any involvement with an Agile project (whether it was “pure” Agile or not), you’ll likely have encountered the beast which is effort forecasting and analysis.  This drives the initial estimate of the amount of work which your team thinks it can deliver within a given period.

Agile sprint
Example of a scrum style sprint

It doesn’t really matter how big your project is, sizing up the amount of work which can be produced is a time honoured tradition, but how do you know if you’re even in the ballpark of getting your estimates right?

Over at ThoughtWorks Studios, Martin Fowler (and others) have spent significant time and effort in trying to document some conclusions about this very topic, and a PDF white paper can be found on the ThoughtWorks Studios website.

It’s hardly a light read – at 32 pages – but can you really afford to take estimation lightly?  In a world of commercial agreements, balancing customer or client expectations and attempting to meet tight delivery timelines, getting your estimates accurate is a key step in delivery.

However, to my mind, going into this document it really helps to have a decent view of what it is you are trying to build.  The more uncertainty going into any kind of sizing or storyboarding exercise, the rougher the estimation or analysis is going to be.

There’s no silver bullet, one-size-fits-all methodology at play here, however this document is a really good read if you are looking to canvass different views and opinions about how to set expectations around Agile delivery.  Be prepared to have your designs challenged, and to field changes as they can (and do) present themselves!

If you aren’t quite ready to dip into the minefield which is Agile planning and forecasting, perhaps you’d find value in another e-book from ThoughtWorks – “How do you develop a shared understanding on an Agile project?”.  Remember, for an Agile project to succeed, everyone needs to play their part – the methodology isn’t just for programmers!

For those who haven’t already gone to visit the ThoughtWorks website, here’s a direct link to the PDF.

Further reading: