MVP Series: Overlooked Features of SQL 2014

For the next couple of months we are thrilled to have special guest authors from the Canadian MVP Award Program contributing posts about their favourite tips, tricks & features of SQL Server 2014. Each week we will post a new article from one of our Canadian SQL Server MVPs. Please feel free to leave a comment. Thanks to Arthur Zubarev for this week's article!

It will soon be half a year since SQL Server 2014 was released to manufacturing. At this point I am hopeful you are reading this post with SQL Server 2014 merrily humming in your data centre, so now is the perfect time to take stock of the features you may have overlooked. By all means, it is not too late to harness several enhancements to propel your company to new highs while reaping even more benefits from the upgrade. And if you have not upgraded yet, read on anyway.

In short, this post aims to serve as an eye opener and a quick guide to the most commonly missed features in SQL Server 2014. I assume you already know about the shiny new In-Memory OLTP offering, so I can safely avoid repetition. But don't let a dozen (or so) other awesome additions hide under the In-Memory OLTP carpet!

As an aside, most of the features discussed below are available only in the Enterprise edition of the product.

Let me revisit the overlooked features, though. According to my own observations, 80% of businesses miss or simply don't use around 70% of the features offered by a software upgrade, and I surmise SQL Server 2014 is no different. I would REALLY like to see this situation change. In my opinion, it stems from a lack of educational content or advice, and thus can easily be addressed by offering more insight. Here are ten such potentially overlooked features in SQL Server 2014, in no particular order:

1) Delayed Durability: This is a feature you want to use when disk IO is slow and/or writes come in under contention, but you must ensure you can tolerate some small data loss (e.g. in the event of a power failure), because writes to the transaction log occur asynchronously. It is a per-database feature. Find out more at
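Enabling it is a one-liner; here is a sketch, assuming a hypothetical database named Sales:

```sql
-- Allow delayed durability at the database level, then opt in per transaction.
ALTER DATABASE Sales SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;
-- The commit returns before the log records are hardened to disk.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Or force it for every transaction in the database:
-- ALTER DATABASE Sales SET DELAYED_DURABILITY = FORCED;
```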

2) New Cardinality Estimator (CE): Bye-bye, SQL Server 7 era cardinality estimator; hello, modern-workloads CE! If you need a short intro: the CE is one of the components used to generate execution plans, specifically responsible for estimating (or predicting) row counts, which in turn determine what kinds of operators and joins are used to operate on the data. It is on by default for new databases, and off for databases kept at an older compatibility level. You WILL need to re-check for optimal plans after the upgrade. More on the CE is here
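Whether you get the new CE is driven by the database compatibility level, and you can fall back per query with a trace flag. A sketch, with hypothetical database/table names:

```sql
-- Opt the database into the new CE (compatibility level 120 = SQL Server 2014).
ALTER DATABASE Sales SET COMPATIBILITY_LEVEL = 120;

-- If a specific query regresses, force the legacy (pre-2014) CE for it only:
SELECT c.Name, COUNT(*)
FROM dbo.Orders o JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GROUP BY c.Name
OPTION (QUERYTRACEON 9481);  -- trace flag 9481 = use the old CE for this query
```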

3) The Columnstore Index can now be Clustered, and is now Updatable: this is a big win for my clients; the seemingly banal updatability in particular leads to noticeably more alive data warehouses. Perhaps it is time to review those drop-and-recreate columnstore index jobs, isn't it? The clustered in the index name means there is no more underlying rowstore structure; the index itself is the data, so speed gains are inevitable! Many professionals are also unaware of the further optimization in data compression, namely the new COLUMNSTORE_ARCHIVE setting (my $0.02: just apply it to each aged data partition). Explore yourself at
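A sketch, assuming a hypothetical partitioned fact table dbo.FactSales:

```sql
-- The clustered columnstore index replaces the rowstore; the table IS the index.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- The table stays fully writable:
INSERT INTO dbo.FactSales (SaleDate, ProductID, Amount)
VALUES ('2014-09-01', 17, 99.50);

-- Squeeze cold partitions harder with archival compression:
ALTER TABLE dbo.FactSales
REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```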

4) Backups to Azure Blob Storage: Need off-site data access? Migrating or moving a database into a new DC? This is the option to use. Reliable, no hardware (or tapes) involved, and inbound data does not cost a penny. A good white paper is a click away:
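Backup to URL takes a credential holding your storage account name and access key; a sketch with placeholder account, container, and database names:

```sql
-- Store the Azure storage account name and access key (placeholders here).
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',            -- storage account name
     SECRET = '<storage-account-access-key>';  -- access key

BACKUP DATABASE Sales
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/Sales.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;
```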

5) Backup Encryption: there is no more need for third-party tools; backups can be encrypted at rest (regardless of whether TDE was used or not), both on-prem and in the Cloud, using even military-grade encryption (AES 256). Just remember to back up the certificate or asymmetric key. Start from
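A sketch, with placeholder certificate, path, and database names; the master key and certificate live in master and must themselves be backed up:

```sql
-- A master key and certificate must exist first (placeholder names/passwords).
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = 'Backup encryption certificate';

BACKUP DATABASE Sales
TO DISK = N'D:\Backups\Sales_encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
```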

6) Azure for AlwaysOn Replicas and Azure + On-Prem Hybrid Operations Mode (data files in Azure): AlwaysOn is not only bumped up to 8 (from 4) Availability Replicas, it now has the Cloud Migration Wizard you can leverage to migrate an on-premises database to SQL Azure. One scenario I have in mind is offloading analytical loads to a read-only secondary (which would be in the Cloud). It is also possible to host some of your data files and/or log files in the Cloud while the database engine remains on premises, unaffected by slow external networks, or you can Cloud-source your backups there. A TDE-encrypted database can also operate in this mode without the need to share your encryption key. If this is a compelling feature to you, read further at
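The data-files-in-Azure part can be sketched as follows, with placeholder storage account, container, and database names; the credential is named after the container URL and holds a Shared Access Signature:

```sql
-- A credential named after the container URL holds a Shared Access Signature.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<sas-token>';

-- The database engine runs on premises; the files live in Azure blob storage.
CREATE DATABASE HybridDB
ON (NAME = HybridDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB.mdf')
LOG ON (NAME = HybridDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB.ldf');
```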

7) Resource Governor for Disk on a Per-Volume Basis and at a Process Level: Disk is a precious shared resource and the slowest one a DBA will ever deal with. Since its introduction in SQL Server 2008, the Resource Governor (RG) (I hope you use it already; hint: it is a great feature in a multi-tenant hosting scenario) has received a number of enhancements, e.g. CAP_CPU_PERCENT and better NUMA support in the 2012 release, and with SQL Server 2014 it is now possible to throttle disk IO as well. With a little bit of imagination you can now control how much disk your DBAs can consume at a time defragging indexes, or curb those pesky 'Mickey Mouse' applications chewing up too much disk. MSDN is a good start to grokking the RG
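The new knobs are MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME on a resource pool. A sketch, with hypothetical pool, group, and login names (the classifier function must live in master):

```sql
-- Cap IO for a pool (placeholder names); values are IOPS per disk volume.
CREATE RESOURCE POOL MaintenancePool
WITH (MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 200);

CREATE WORKLOAD GROUP MaintenanceGroup USING MaintenancePool;
GO
-- A classifier function routes the maintenance login into the throttled group.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_SNAME() = 'maint_login' THEN N'MaintenanceGroup'
                ELSE N'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```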

8) Buffer Pool Extension via SSD: Probably the hardest to use because it requires an SSD, but the idea is simple: utilize the near-zero seek time of a solid state disk (mostly due to the elimination of the disk controller arm) to extend the buffer pool cache onto it. It merely acts as an intermediary between the spinning disk and RAM, absorbing frequent, small IO. The potential benefits are great, especially under heavy-read scenarios (the feature only deals with clean, non-dirty pages). My personal note: it is hard to find a good all-around SSD even today, so make sure to compare vendors and test, a lot. My fellow SQL Server MVP Jeremiah Peschka has a very good post, short enough to comprehend the feature
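Turning it on is a single server-level statement; a sketch, with a placeholder file path and size:

```sql
-- Place the extension file on the SSD volume (path and size are placeholders).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\BufferPool.bpe', SIZE = 32 GB);

-- Check the current state:
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```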

9) Incremental Statistics: A very easy feature to pass by; be aware it only applies to partitioned tables. In essence, it allows the database engine to bypass scanning stale data when regenerating statistics, and another, less obvious side effect is that a change of less than 20% of the table's rows (within your active partition) can now trigger a statistics update. Great for those who have Auto-update statistics set to off; now it is time to leverage this feature, but do your homework beforehand to understand any intricacies. I recommend David Barbarin's post to skim through.
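A sketch, again against the hypothetical partitioned table dbo.FactSales:

```sql
-- Create statistics that track each partition separately (placeholder names).
CREATE STATISTICS Stat_FactSales_SaleDate
ON dbo.FactSales (SaleDate) WITH INCREMENTAL = ON;

-- After loading the active partition, refresh only that partition:
UPDATE STATISTICS dbo.FactSales (Stat_FactSales_SaleDate)
WITH RESAMPLE ON PARTITIONS (12);

-- Or make incremental the default for auto-created stats in the database:
-- ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);
```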

10) Lock Priority of Online Operations: This one is arguably the hardest feature to understand, but I see it becoming increasingly important to harness nowadays, with data volumes heading into the Big Data realm. It has two faces: one for Online Index Rebuilds, another for table partition switching. It is controlled via the new WAIT_AT_LOW_PRIORITY clause added to the ALTER INDEX and ALTER TABLE commands. It allows a DBA to control how much impact these expensive operations can have on a busy (typically OLTP) database. The options are to kill the blockers, or to abort the switch/rebuild itself. Use with caution, fellow DBAs and developers!
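Both faces can be sketched as follows, with placeholder table and index names:

```sql
-- Rebuild online, wait politely for up to 5 minutes, then abort our own
-- rebuild rather than block anyone (placeholder table/index names).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)));

-- For a partition switch, kill whoever blocks us after 1 minute instead:
ALTER TABLE dbo.FactSales
SWITCH PARTITION 1 TO dbo.FactSales_Archive PARTITION 1
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = BLOCKERS));
```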

That's it for now. I hope my article helps you extract even more potential out of SQL Server 2014, or otherwise serves as an inducement to get the upgrade discussion started.


About The Author: I specialize in BI, DM, DW, Big Data, SQL Server, .Net and OSS. I am an MVP, MCDBA, MCAD, MCTS and MCITP. I strive to be an early adopter of new technology and like to embrace technical challenges. Events organizer, speaker. Technical Development Editor for a prominent publisher.