

Update: Entity Framework Generic Data Access

Introduction

Hello, and apologies for the long delay between articles; I am now returning to the Entity Framework.  Following on from the previous article (Entity Validation), I’m back to review the latest version of the generic data access class.

In the time since my original series of articles concerning disconnected POCO entities and a generic approach to handling them, I’ve refactored the class(es) a fair bit to make the code a little more pleasant.  If you want to jump straight to the sample solution, I’ll place a link to it here and at the end of the article.

This article was originally published at Sanders Technology.  Check the site for updates and additional technology goodness.

This is a direct update to this former article: Flexible Data Operations.

Re-introducing the Data Access Base Class™

The concept was clear: a base class which implements a common interface, provides the obvious functionality (Select, Insert, Update and Delete), and can be extended by derived classes.

Being generic, the data access class is designed to work on a per-entity-type basis, but when object graphs are introduced into the mix (courtesy of relationships or navigation properties), we end up having to deal with multiple types at once.

Now, to quote an earlier article, here are the design priorities yet again:

Generic Implementation Design

I’m not going to go through the class function by function (too time-consuming); however, I do want to walk through the design decisions I made when considering this approach.

  1. Reusable

    The original intention was to conserve and protect the usage of the EF’s DbContext.  However, I also wanted an ability to encapsulate common queries in classes deriving from the generic implementation.  In the end I came up with the solution presented here.  Chances are high that there’s a more elegant approach, and I’m happy to hear from folks with suggestions.

  2. Generic

    Another key aim was to encapsulate as much of the common ‘CRUD’ functionality as possible without having to do things that were entity- or type-specific.  Generally, with the exception of schema details and relationships, the approach to data access should be as uniform as possible, and so it should be with the mechanisms controlling such access.

  3. Flexible

    As always, providing a useful and flexible interface is a design goal.  There’s not much point introducing a repository or interface-based design if consumers will write hacky workarounds to do something they need to do.  Hence the exposure of IQueryable<T> return types.

  4. Extendable

    Chances are you’ll never fully anticipate all needs, and this is certainly true with persistence.  The aim here is that the generic approach can be extended fairly easily to provide realistically any capability that might be required down the track.  For example, a type-specific accessor (repository) could be implemented on top of the generic class to provide execution of stored procedures, as sketched below.
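
To make that extension point concrete, here’s a minimal sketch of a derived repository.  The Customer entity, the IsActive flag, the stored procedure name and the DataContext property are all illustrative assumptions, not the sample solution’s actual code:

using System.Collections.Generic;
using System.Linq;

// Hypothetical type-specific repository layered on the generic base class.
public class CustomerDataAccess : DataAccess<Customer>
{
    // Encapsulate a common query without exposing the DbContext directly.
    public IQueryable<Customer> GetActiveCustomers()
    {
        return CreateQuery().Where(c => c.IsActive);
    }

    // Stored procedure execution via the underlying context (assumes the
    // base class exposes its DbContext through a DataContext property).
    public IEnumerable<Customer> GetTopCustomers(int count)
    {
        return DataContext.Database
            .SqlQuery<Customer>("EXEC dbo.GetTopCustomers @p0", count)
            .ToList();
    }
}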

Class Definition

This is where the true complications start to occur, especially for disconnected or detached entities.  The Entity Framework is best used when a Data or Object context can remain instantiated and can subscribe to changes in entities which were created via the context.

Let’s take a look at what functionality is exposed by the generic class:

[Figure 1 – The Obligatory Class Diagrams]
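
In rough terms, the exposed surface looks something like the sketch below.  The member names follow the article’s descriptions, but the exact signatures in the sample solution may differ:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Illustrative shape of the generic data access class, not the actual source.
public interface IDataAccess<T> where T : class
{
    // Freeform, composable reads without exposing the DbContext.
    IQueryable<T> CreateQuery();
    IEnumerable<T> GetEntities(Expression<Func<T, bool>> predicate);
    T GetEntity(Expression<Func<T, bool>> predicate);

    // Writes: the caller sets each entity's Object State beforehand.
    void InsertOrUpdate(params T[] entities);
    void Delete(params T[] entities);
}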

Under the hood

To reduce repetition, I’ve combined “Insert” and “Update” into a single call (InsertOrUpdate), since the onus is on the calling code to have set the Object State for each entity correctly anyway.  That leaves us with Select and CreateQuery/GetEntities/GetEntity for reads, and Delete for deletions.
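
As a caller-side illustration (a sketch only; the Order entity, the OrderDataAccess subclass and the ObjectState values are assumptions based on the disconnected POCO pattern used in this series):

// The calling code marks each entity's state, then hands the lot to the
// accessor in one call; the accessor applies the states to the context.
var accessor = new OrderDataAccess();   // hypothetical DataAccess<Order> subclass

var existing = accessor.GetEntity(o => o.OrderId == 42);
existing.Quantity = 10;
existing.State = ObjectState.Modified;  // caller-supplied state

var added = new Order { Quantity = 5, State = ObjectState.Added };

accessor.InsertOrUpdate(existing, added);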

This hasn’t changed much since the earlier versions, but what *has* changed is how inserts, updates and deletes are handled by the class behind the scenes.  The ApplyStateChanges (private) function has been refactored, and a chunk of functionality has been moved into a separate function called ProcessEntityState.

Why refactor?  This makes the code a little more readable, and reduces the ApplyStateChanges function to orchestrating the processing, while ProcessEntityState handles each entity individually.  There were also some changes to the way linked entities were handled in a many-to-one (or one-to-many) collection.
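
The split looks roughly like this (an illustrative skeleton only; the IObjectWithState constraint is an assumption from the disconnected POCO pattern, and the real signatures may differ):

// ApplyStateChanges walks the entities and delegates the per-entity
// attach/state logic to ProcessEntityState.
private void ApplyStateChanges<T2>(DbSet<T2> dbSet, IEnumerable<T2> items)
    where T2 : class, IObjectWithState
{
    foreach (var item in items)
    {
        ProcessEntityState(dbSet, item);
    }
}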

Changes to Entity Processing

Handling Object State

One big change I’ve made is how an entity set is handled when processing multiple entities.  The original implementation assumed that anything not local to the Data Context should be safely Attached, as per the below:

// Original approach: attach anything not already tracked locally...
if (!ExistsLocal<T2>(item))
{
    dbSet.Attach(item);
}

// ...and then attach again purely on the basis of the caller-supplied state,
// ignoring whether the context already tracks the entity.
if (item.State == ObjectState.Modified)
{
    dbSet.Attach(item);
}

However, more detailed testing showed that the code should respect the Object State which the calling code has applied to the entity, rather than relying solely on the Data Context’s local cache:

// Revised approach: attach only when the caller has marked the entity as
// Modified and the context isn't already tracking it.
if (item.State == ObjectState.Modified)
{
    if (!ExistsLocal<T2>(item))
    {
        dbSet.Attach(item);
    }
}

Bug Fix: CreateQuery

A sample set of data I was using went over the 100 row mark at one point, which is when I discovered a little faux pas:

/// <summary>
/// Allows freeform data queries from outside the Data Access classes,
/// without exposing the data context directly
/// </summary>
/// <returns>An IQueryable of type T</returns>
public IQueryable<T> CreateQuery()
{
    IQueryable<T> _query = DataContext.Set<T>().AsQueryable<T>();
    // _query = _query.Take(100);  // removed - see below
    return _query;
}

Applying the Take operation at this stage was actually restricting the query to the first 100 rows in the table, not restricting the final query to 100 rows.  Consequently, this was removed!
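
If a row limit is needed, it belongs at the end of the composed query, applied by the caller.  A sketch (the entity and property names are illustrative):

// Filters and ordering compose first; Take(100) then limits the final
// query rather than the first 100 physical rows of the table.
var results = accessor.CreateQuery()
    .Where(o => o.Total > 1000m)
    .OrderByDescending(o => o.Total)
    .Take(100)
    .ToList();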

Finally

Let’s just jump right into the sample solution.  You can download a copy of the example solution.

Please email me at rob.sanders@sanderstechnology.com if you find any issues or have questions, or leave a comment here…


Entity Framework 6.1.0 Beta 1

Some exciting news to share: for those interested in pre-releases, the next iteration of the Entity Framework, v6.1.0, was released as a Beta just recently (Feb 11th, 2014).  For those who are curious about the new features, check out this link or read the summary I’ve copied from that original article, below.

As per normal, you can get the runtime assemblies via NuGet, but for Visual Studio tooling support you’ll need the separate downloads available from Microsoft.com.

What’s in Beta 1?

Entity Framework 6.1 is a minor update to Entity Framework 6 and includes a number of bug fixes and new features. The new features in this release include:

  • Tooling consolidation provides a consistent way to create a new EF model. This feature extends the ADO.NET Entity Data Model wizard to support creating Code First models, including reverse engineering from an existing database. These features were previously available in Beta quality in the EF Power Tools.

  • Handling of transaction commit failures provides the new System.Data.Entity.Infrastructure.CommitFailureHandler which makes use of the newly introduced ability to intercept transaction operations. The CommitFailureHandler allows automatic recovery from connection failures whilst committing a transaction.
  • IndexAttribute allows indexes to be specified by placing an [Index] attribute on a property (or properties) in your Code First model. Code First will then create a corresponding index in the database (see the sketch after this list).
  • The public mapping API provides access to the information EF has on how properties and types are mapped to columns and tables in the database. In past releases this API was internal.
  • Ability to configure interceptors via the App/Web.config file (allowing interceptors to be added without recompiling the application).
  • Migrations model change detection has been improved so that scaffolded migrations are more accurate; performance of the change detection process has also been greatly enhanced.
  • Performance improvements including reduced database operations during initialization, optimizations for null equality comparison in LINQ queries, faster view generation (model creation) in more scenarios, and more efficient materialization of tracked entities with multiple associations.
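
To make two of these concrete, here’s a minimal sketch based on the documented EF 6.1 surface.  The Customer entity and the configuration class name are my own illustrations:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;

public class Customer
{
    public int Id { get; set; }

    // IndexAttribute: Code First generates a unique database index on this
    // column (StringLength keeps the column indexable on SQL Server).
    [StringLength(256)]
    [Index("IX_Customer_Email", IsUnique = true)]
    public string Email { get; set; }
}

// Code-based configuration opting in to automatic recovery from
// transaction commit failures via the new CommitFailureHandler.
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        SetTransactionHandler(SqlProviderServices.ProviderInvariantName,
            () => new CommitFailureHandler());
    }
}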

What looks interesting in the Beta?

I’ll be looking closely at the public mapping API and the Interceptors.  I can theorise a couple of uses for the public mapping API, but I’ll need to do some personal evaluations to see if it is possible to do something I’ve long wanted to do with EF entities – model projection.

The interceptors will be a big boost to anyone who has had trouble diagnosing DB operations in a production environment.  I’ll take a closer look at those, too.

Worth a look, I’d suggest.


Data Modelling and the Entity Framework

Introduction

Last week I gave an internal presentation to my fellow consultants at CGI on the principles of data modelling/data architecture, modelling within Visual Studio 2013, and a history of the (ADO.NET) Entity Framework.

I’ve attached the slide deck to this article, and it’s also available on my presentations page.

Data Modelling – Concepts

Once we got past the initial introductions, I dove into some of the fundamental principles of data access design.  These are the key design considerations which every mature solution should take into account, particularly when building a design which marries with the ACID principles.

This part of the presentation wasn’t directly related to either data modelling in Visual Studio or implementing the Entity Framework ORM, but I don’t think it is a bad idea to restate some core data access design goals where possible.

Once we were past the concepts, we went straight into…

Visual Studio 2013 Database Projects

To be honest with you, I forced myself to use Visual Studio’s out-of-the-box database project as a design tool instead of jumping into SQL Management Studio as I normally would.  Partly this was to give the tools some fair use (the designer support is still a bit sluggish), but there are still some niceties to be had here.

The latest incarnation has some decent and attractive features: the SQL Compare functionality is simply superb for harmonizing T-SQL between instances or other code repositories, and the T-SQL import wizard helps with getting projects up and running quickly.

Possibly the best feature is the publishing wizard, which you can use to deploy easily to SQL Azure or to local instances, or to run as part of an automated build.

The Entity Framework

The second half of the presentation introduced the Entity Framework and covered off a bit of history.  I’ve used the EF since the first version, so I have some experiences to share here.

Besides showing how the entity model is generated from the database schema, I wanted to impress upon the audience the costs vs. benefits of adopting an ORM solution, particularly the quick wins weighed against the limitations and potential performance problems.

Ultimately this led into a review of a generic interface pattern which I’ve been working on for the past few weeks, and some of the power of consolidating common data access methods (e.g. Create, Read, Update and Delete) into a common implementation using generics.

The Surprise

At the end, I was planning to surprise the audience by “live switching” from accessing a local SQL instance to querying data from SQL Azure by simply changing a connection string.  However, due to having to move rooms at the last minute, the 4G connection I was using hadn’t been authorised on the SQL Azure Database, so the surprise failed.

The awesome takeaway (blown surprise aside) was that using the Entity Framework, there was no need to do any recompilation – the model worked seamlessly with local and Azure-based data stores.  I guess I’ll have to save that surprise for another audience at another time.

Summary

To be honest, I should have split this into two presentations.  There’s so much to discuss when it comes to decent data design principles that those and data modelling could have filled a session on their own.  The Entity Framework represents a large body of work in its own right; I could speak for hours about how it can be adapted and extended.

We didn’t even scratch the surface, so this may potentially lead to a follow-up presentation.  Here’s the slide deck from the day.