Nov 08 2013
 

Introduction

Last week I gave an internal presentation to my fellow consultants at CGI on the principles of data modelling/data architecture, modelling within Visual Studio 2013, and a history of the (ADO.NET) Entity Framework.

I’ve attached the slide deck to this article, and it’s also available on my presentations page.

Data Modelling – Concepts

Once we got past the initial introductions, I dove into some of the fundamental principles of data access design.  These are the key design considerations which every mature solution should take into account – particularly when building a design which aligns with the ACID principles.

This part of the presentation wasn’t directly related to either data modelling in Visual Studio or implementing the Entity Framework ORM, but I don’t think it is a bad idea to restate some core data access design goals where possible.

Once we were past the concepts, we went straight into…

Visual Studio 2013 Database Projects

To be honest with you, I forced myself to use Visual Studio’s out-of-the-box database project as a design tool instead of jumping into SQL Server Management Studio as I normally would.  Partly this was to give the tools some fair use – the designer support is still a bit sluggish – but there are still some niceties to be had here.

The latest incarnation has some decent and attractive features: the schema compare functionality is superb for synchronising T-SQL across instances or other code repositories, and the T-SQL import wizard helps to get projects up and running quickly.

Possibly the best feature is the publishing wizard, which you can use to deploy easily to SQL Azure or to on-premises instances, or to run as part of an automated build.

The Entity Framework

The second half of the presentation introduced the Entity Framework and covered a bit of its history.  I’ve used EF since the first version, so I had some experiences to share here.

Besides showing how the entity model is generated from the database schema, I wanted to impress upon the audience the costs versus the benefits of adopting an ORM solution – particularly the quick wins weighed against the limitations and potential performance problems.

Ultimately this led into a review of a generic interface pattern which I’ve been working on for the past few weeks, and some of the power of consolidating the common data access methods (Create, Read, Update and Delete) into a single implementation using generics.
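To give a flavour of the idea, here is a minimal sketch of such a pattern, assuming a DbContext-based Entity Framework model.  The IRepository&lt;T&gt; and EfRepository&lt;T&gt; names are illustrative rather than the exact types from the presentation.

    using System;
    using System.Data.Entity;
    using System.Linq;

    // Minimal generic repository sketch: one implementation of the common
    // CRUD operations, reused for every entity type in the model.
    public interface IRepository<T> where T : class
    {
        T Find(params object[] keys);
        IQueryable<T> Query();
        void Add(T entity);
        void Delete(T entity);
        void Save();
    }

    public class EfRepository<T> : IRepository<T>, IDisposable where T : class
    {
        private readonly DbContext _context;

        public EfRepository(DbContext context)
        {
            _context = context;
        }

        // Look up an entity by its primary key value(s).
        public T Find(params object[] keys)
        {
            return _context.Set<T>().Find(keys);
        }

        // Expose the set as IQueryable so callers can compose LINQ queries.
        public IQueryable<T> Query()
        {
            return _context.Set<T>();
        }

        public void Add(T entity)
        {
            _context.Set<T>().Add(entity);
        }

        public void Delete(T entity)
        {
            _context.Set<T>().Remove(entity);
        }

        // Persist pending changes (including updates to tracked entities).
        public void Save()
        {
            _context.SaveChanges();
        }

        public void Dispose()
        {
            _context.Dispose();
        }
    }

With something like this in place, every entity type gets the same Create, Read, Update and Delete surface from a single implementation (updates flow through change tracking plus Save), and entity-specific repositories only need to add their own query methods.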

The Surprise

At the end, I had planned to surprise the audience by “live switching” from accessing a local SQL instance to querying data from SQL Azure simply by changing a connection string.  Unfortunately, because we had to move rooms at the last minute, the 4G connection I was using hadn’t been authorised against the SQL Azure database, and the surprise failed.

The awesome takeaway (blown surprise aside) was that using the Entity Framework, there was no need to do any recompilation – the model worked seamlessly with local and Azure-based data stores.  I guess I’ll have to save that surprise for another audience at another time.
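For context, the switch itself amounts to something like the following sketch.  It assumes app.config defines two Entity Framework connection strings named "LocalEntities" and "AzureEntities", and that MyModelContainer is the generated context type with a Customers set – all hypothetical names.

    using System.Configuration;
    using System.Linq;

    public static class ConnectionSwitchDemo
    {
        public static void QueryCustomers(bool useAzure)
        {
            // Pick the connection string by name - nothing else changes.
            string name = useAzure ? "AzureEntities" : "LocalEntities";
            string connectionString =
                ConfigurationManager.ConnectionStrings[name].ConnectionString;

            using (var context = new MyModelContainer(connectionString))
            {
                // The same model and the same LINQ query run unchanged
                // against either the local instance or SQL Azure.
                var customers = context.Customers.Take(5).ToList();
            }
        }
    }

Because the model resolves its store from the connection string at runtime, no rebuild is needed to point the application at a different server.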

Summary

To be honest, I should have split this into two presentations.  There is so much to discuss when it comes to sound data design principles that those topics and data modelling could easily have filled a session on their own.  The Entity Framework represents a large body of work in its own right; I could speak for hours about how it can be adapted and extended.

We didn’t even scratch the surface, so this may well lead to a follow-up presentation.  Here’s the slide deck from the day.

Sep 13 2012
 

In this session, learn about the DPC (Database Private Cloud) Reference Architecture and how it can be used to consolidate thousands of databases on a single scalable platform.

The session will drill into the hardware and software configuration and cover how it’s wired together. We will cover the use cases, savings, workload types and migrating workloads to the solution, as well as discuss management options.

Presented by Danny Tambs

Disclaimer: These are conference session notes I compiled during various sessions at Microsoft Tech Ed 2012, September 11-14, 2012.  The majority of the content comprises notes taken from the presentation slides accompanied, occasionally, by my own narration.  Some of the content may be freehand.  Enjoy… Rob

Introduction

The session covers a whitepaper produced by HP and Microsoft, and is specifically about the DBC Reference Architecture (an appliance/recipe).  The following is a summary of the session:

  • Reference Whitepaper
  • Benefits
  • Architecture drilldown
    • Components
    • Manageability
    • Achieving high availability
    • Metering and chargeback

Consolidation

Different hardware profiles, ages and versions.  Silos and data formats create disparity across and within organisations.  It becomes costly to maintain, to upgrade regularly and to find experienced staff to support.  Older hardware is typically more expensive to run.

Private Cloud

NIST (US) put together a loose set of definitions of what to expect from a “private cloud”.  Capabilities include elasticity, resource pooling, self-service, and control and customisation.  Deployment times are much shorter (hours rather than weeks).

MS/HP DBC Reference Architecture

  • Complete – factory built, from virtualization to resource pooling, plus consulting and support
  • Optimise – Central console, tuned for SQL Server workload, migrate with near zero downtime
  • Agile – Provision on demand, meter and chargeback usage, modular for scale as you go

Vision

  • Box product (buy, install)
  • Appliances/Reference Architectures (recipes, buy premade or build your own)
  • The Cloud (SQL Azure)

Here is the throughput you are aiming to get.  How do you ensure you have enough I/O?

DBC Platform

Major benefits include reduced configuration steps – the appliances are prebuilt (thousands of man hours) and can meet the operational requirements.

  • Reduced power consumption
  • Reduced management complexity
  • Retirement and consolidation of older hardware

Not particularly recommended for databases over 1 TB; it is better suited to smaller, gigabyte-sized database solutions.  It appears to be targeted at consolidating hundreds or thousands of disparate servers into a managed virtual environment.  Additional benefits include skillset changes: DBAs can better manage applications, not just services.

Deep Dive: DBC Plug and Play

The base configuration is deliberately minimal in terms of disk, I/O, CPU and memory.


Software stack


Of course you can use your own monitoring and other software packages, but they must be certified for Windows Server.  Many anti-virus packages have caused blue screens because they were not certified to work with clusters, for example.

Hardware Profile Configuration


Built-in reliability and HA features – RAID controllers, spare disks, etc.  The software to manage the rack is built in; no extra software is required.

DBC Common Usage Scenario

Storage comprises ‘hardware blocks’ of storage.  The iSCSI storage cabling includes redundancy so that a missing cable doesn’t drop iSCSI availability (following best practice).

Networking – HP ProCurve switches (10 Gb) with redundancy, including iSCSI storage traffic connectivity.  Traffic is partitioned into general network and storage.


The baseline sizing used for the hardware and the solution.  A single rack configuration is optimized for a balanced mix of 200 database instances.  There are tools available to determine socket-to-VCPU translation in the virtual world.

Size depends on workloads – you need to baseline against your own specific workload needs.  In a 4-blade configuration, the majority of the management software sits on Blade 1, consuming less than 20% of that blade’s capacity.

Supports virtualized clusters for reduced downtime and higher availability.  How do you solve disaster recovery scenarios?  What’s your bottleneck?

Why isn’t this solved by hardware?  It’s important to note that this is a consolidation reference architecture, not an HA reference.  There are options, though: database mirroring, etc.

Do you need HA?  Sometimes no.  Do you need 1 minute failover? Not necessarily.

Linking to the customer network – Link Aggregation Control Protocol (LACP).  Twin switches, with built-in redundancy.
The only forced downtime is for upgrading/flashing firmware.  Workloads can automatically fail over, or be live-migrated before the upgrade.

Performance Figures


No caching!  24/7 full-uptime capability, baselined against true random I/O reads.  Want more? Windows Server 2012 brings a lot to the table:


1 million IOPS possible in a virtual environment. 

Toolkits

The MAP (Microsoft Assessment and Planning) toolkit will profile existing SQL Server instances, categorise them and build a catalogue of your SQL Server estate.  It helps build a consolidation strategy, captures I/O and performance information, and can advise what kind of VM profile is required for consolidation.  It is recommended to run the tool over a few weeks, especially during high-load periods.

Support

A range of HP and Microsoft support is available.

Jul 25 2012
 

Hi All.  This is a quick post to introduce you to SQL Server Data Tools – support and tools for database developers.


Introduction

Recently, I started a new solution in Visual Studio 2010.  There is a need to build and maintain a database schema (for SQL Server 2008 R2), so I decided to add what was formerly known as “DataDude” – the Database Project for Visual Studio.

This was in a copy of Visual Studio Professional 2010 with Service Pack 1 – and the out-of-the-box solution only supported SQL Server 2008 and prior.  A bit surprised, I did some digging.  You’ll recall there was a ‘Database Edition GDR’ which came out a few years back… well, there’s now an even better flavour of support.

Introducing SQL Server Data Tools

It’s called ‘Microsoft SQL Server Data Tools’ and you can get a copy from the following link on MSDN.  There are quite a number of new bits and pieces included, and it works with both Visual Studio 2010 (alert: apply Service Pack 1 beforehand) and Visual Studio 2012 (RC – although with some known issues if upgrading from the beta).

Although the installation takes a little while (depending on your connection speed), the wait is worthwhile.


I’ll borrow some text from the MSDN site in order to explain the purpose of SSDT:

Who is SSDT for, and what does it provide them?

SSDT is for SQL Server database developers, who often develop database schemas, views, stored procedures, and other database objects while developing their application logic.

  • Tooling for both SQL Server and SQL Azure Development: SSDT offers new capabilities in a single cohesive environment to compile, refactor, and deploy databases to specific editions of SQL Server and SQL Azure. The toolset makes it easy, for example, to migrate on-premise SQL Server schemas to the cloud on SQL Azure, and develop and maintain databases across both on premise and cloud deployments. SSDT can target SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, and SQL Azure databases, including all editions of these database servers.
  • For SQL Server DBAs: SSDT provides a central and unified toolset targeted to the specific needs of DBAs to develop and maintain databases, with visual tools for developing tables, schema compare, and rich T-SQL support for refactoring databases, building views, stored procedures, functions and triggers. The toolset provides both a live development mode, and an offline project mode that tracks and manages all artifacts associated with a database. This mode optionally fully integrates with Visual Studio 2010 for team development, source control and change tracking. All change operations are automatically transformed into optimized T-SQL alter scripts, and can optionally be applied immediately to the online database or saved for later execution.

[http://msdn.microsoft.com/en-us/data/hh322942]

It’s actually very easy to use. 

Real World Applications

For example, if you open a solution containing legacy database projects, the tools will automatically prompt you to upgrade the existing projects to the newer format.

The basic benefit is targeting SQL Server 2008 (and R2) as well as SQL Server 2012 and SQL Azure.  That last part might get your attention! That’s right – SQL Azure.  We’ll be checking this out soon and reporting back in a bit more detail. 

How Do I get SSDT?

You can download the “pre-installer” here.

For more information, check out the ‘Getting Started with SQL Data Tools’ guide located here, or stay tuned at this location for more!

But wait.. there’s a bug?

Today I got into full swing with SSDT on a real-world project.  During my work I came across a fairly horrible bug which has been documented on the MSDN Forums.

Basically, if you initialize (i.e. use) any of the SSDT tools, you can’t use the Entity Framework tools in the same session, and vice versa.  The interim workaround is to load two instances of Visual Studio (perhaps even of the same solution) and use the SSDT tools in one instance and the Entity Framework tools in the other.

A symptom which might lead you to this article?  When you have the SQL Server Data Tools installed and try to update or create an Entity Framework data model, you may receive errors such as ‘Object not set’.  In some cases it might crash the Visual Studio IDE.  In my experience, I received the ‘object not set’ errors, the Model Explorer was greyed out or did not render, and I was not able to refresh my EDMX properly.

Whilst being a right royal pain in the butt, believe it or not this approach does work – even if it does make for a very disjointed development experience.  According to the thread, a fix is in the works – but no word on when it will be released.

Jan 01 2012
 

Introduction

In December 2011, Microsoft released an early Release Candidate (RC) of SQL Server 2012 (formerly known as codename Denali).

There are a bunch of new things in SQL Server 2012; to find out more about what is coming, check out the “What’s New?” page.

Downloading SQL Server 2012 RC 0

I’ve found the easiest way to proceed with RC0 is to obtain a DVD .iso (image) of the whole kit; otherwise, you can download it in component parts to keep your download footprint minimal.

The following location is the “correct” download link: http://www.microsoft.com/download/en/details.aspx?id=28145 or you can just go directly to the beta experience website here: http://www.microsoft.com/betaexperience/pd/SQLDCTP3CTA/enus/

Installing SQL Server 2012 RC 0

SQL Server, in recent years, has had one of the best setup and installation experiences of any Microsoft product on the market.  Well, based on my experience with the 2012 RC 0 release, this is getting a little more complicated.

The next few screenshots will guide you through a fairly straightforward configuration, using the latest release.

Installation Screenshots

1. Execute the installation package
2. In the SQL Server Installation Center, select the Installation option
3. From here, jump right in and click on “..new installation”
4. You’ll get all the prerequisite checks and loading of setup files; just proceed as you would with a prior edition of SQL Server
5. Once the main installer kicks in (you passed all the prerequisite checks etc), you’ll get to the intro page
6. Click through to the licensing page – note we’ll keep it on Evaluation:


7. Next, we’re going to select the first option (feature installation):


8. In the feature selection page, I’m selecting all features, but you should select just what you’re going to want
    to evaluate:


9. Skipping ahead through some screens now (they are unchanged from 2008 R2), the Server Configuration page has changed and requires you to set service accounts individually.

Best Practice: Use a different account for each service
Best Practice II: In a domain environment, if you wish to use Kerberos, ensure you use a Domain account for the service identity, and ensure the proper ADSI settings are set


10. Assign some user accounts as System Admins.  I always assign the local machine admin (in non-Production environments) and a Domain Admin (in a Domain environment) so that we don’t get accidentally locked out.


11. Same procedure for Analysis Services (if applicable):


12. Reporting Services has changed a little since 2008 R2; you have a few options for configuring native mode or SharePoint Integrated mode.  The native configuration is handy if you don’t have any exotic configuration requirements:


13. The Distributed Replay Controller (optional) is new to 2012.  You can only assign user accounts (not groups).  If you accidentally add a group, it’ll complain – and then (in my experience) crash.


13.1. Oops, a soft crash when trying to remove the offending group (best to avoid this situation):


14. If you’ve chosen it, the Distributed Replay client needs a Controller name:


15. Finally, assuming you’ve accepted most of the defaults, you may arrive at the summary screen:


16. When you are happy, you can kick off the install.  It may take a while if you’ve selected a number of features.  Finally, if all goes to plan, you will hopefully end up with a successful installation:


..and it’s probably time for a reboot. 

Post-Install Sanity Check

After the system has restarted, we can check out all of the newly installed components.


Summary

Well, this was a lesson in patience from my perspective.  Whilst nowhere near as challenging as some other products, there are a few new things to consider when installing SQL Server 2012. 

As always, it helps to read the installation guide.  There are a number of new changes this time around, so even if you’ve done plenty of SQL Server 2008 installs, it might pay to do a quick skim of the install material first.

Lastly, as always, it pays to properly plan your infrastructure.  I’m usually installing into sandboxes, so what I’ve presented here is by no means what I’d recommend for a production system.  If you are planning a production system with a pre-release edition, all power to you (brave).

Please always keep these issues in mind when planning your SQL/Infrastructure:

  • Disaster recovery
  • Fault tolerance
  • Availability and scale
  • Persisted storage requirements
  • Physical hardware limitations
  • Backups – and test your backup strategy!

Further Reading

Tutorials and Samples for SQL Server 2012 RC 0

Microsoft SQL Server Data Tools (SSDT)

Dec 16 2011
 

What is FILESTREAM?

Much data is unstructured, such as text documents, images, and videos. This unstructured data is often stored outside the database, separate from its structured data. This separation can cause data management complexities. Or, if the data is associated with structured storage, the file streaming capabilities and performance can be limited.

FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.

FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.

SQL Server 2008’s FILESTREAM support can be very handy, but it is often overlooked during installation.  It is disabled by default in the installer, but can be enabled either at install time or post-installation.

Obviously, the easier option is to enable it during installation, but if you missed it, the steps to enable it afterwards are relatively straightforward.

Keep in mind that there are two areas you need to consider: SQL Server Configuration Manager and SQL Server Management Studio – both are involved in enabling FILESTREAM, but for different reasons.

To enable and change FILESTREAM settings (from MSDN)
  1. On the Start menu, point to All Programs, point to Microsoft SQL Server 2008 R2, point to Configuration Tools, and then click SQL Server Configuration Manager.

  2. In the list of services, right-click SQL Server Services, and then click Open.

  3. In the SQL Server Configuration Manager snap-in, locate the instance of SQL Server on which you want to enable FILESTREAM.

  4. Right-click the instance, and then click Properties.

  5. In the SQL Server Properties dialog box, click the FILESTREAM tab.

  6. Select the Enable FILESTREAM for Transact-SQL access check box.

  7. If you want to read and write FILESTREAM data from Windows, click Enable FILESTREAM for file I/O streaming access. Enter the name of the Windows share in the Windows Share Name box.

  8. If remote clients must access the FILESTREAM data that is stored on this share, select Allow remote clients to have streaming access to FILESTREAM data.

  9. Click Apply.

  10. In SQL Server Management Studio, click New Query to display the Query Editor.

  11. In Query Editor, enter the following Transact-SQL code:


    EXEC sp_configure filestream_access_level, 2
    RECONFIGURE
  12. Click Execute.

Notes

The reason for the two locations (Configuration Manager and SQL Server Management Studio) is that FILESTREAM needs to be enabled at the network configuration level and at the database engine level.  If you miss one or the other, FILESTREAM will not be accessible or enabled!

Also note that you cannot enable FILESTREAM on a 32-bit version of SQL Server running on a 64-bit operating system.
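For completeness, here is a rough sketch of the Win32 streaming access described above, as consumed from .NET.  It assumes a hypothetical table dbo.Documents with an INT key DocId and a FILESTREAM column DocContent; adjust the names and connection string for your own schema.

    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;

    public static class FilestreamDemo
    {
        public static byte[] ReadDocument(string connectionString, int docId)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // FILESTREAM access must take place inside a transaction.
                using (SqlTransaction transaction = connection.BeginTransaction())
                {
                    var command = new SqlCommand(
                        "SELECT DocContent.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                        "FROM dbo.Documents WHERE DocId = @id",
                        connection, transaction);
                    command.Parameters.AddWithValue("@id", docId);

                    string path;
                    byte[] txContext;
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if (!reader.Read())
                        {
                            return null;
                        }
                        path = reader.GetString(0);
                        txContext = (byte[])reader[1];
                    }

                    // SqlFileStream streams the BLOB via the NTFS file system,
                    // bypassing the database engine's buffer pool.
                    using (var stream = new SqlFileStream(path, txContext, FileAccess.Read))
                    using (var buffer = new MemoryStream())
                    {
                        stream.CopyTo(buffer);
                        transaction.Commit();
                        return buffer.ToArray();
                    }
                }
            }
        }
    }

The key point is that the read itself goes through a Win32 file handle rather than through T-SQL, which is what keeps FILESTREAM data out of the buffer pool as described earlier.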

Reference

http://msdn.microsoft.com/en-us/library/bb933993.aspx

http://msdn.microsoft.com/en-us/library/cc645923.aspx