SQL Azure Use Case: Shared Storage Application

This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx


On-premise data will be a part of computing for quite some time – perhaps permanently. Bandwidth requirements, security, or even financial considerations for large data sets often dictate that relational (or non-relational) systems will be maintained locally in many organizations, especially in enterprise computing.

But distributed data systems are useful in many situations. Organizations may wish to store a portion of data off-site, either for sharing the data with other applications (including web-based applications) or as a supplement to a High-Availability and Disaster Recovery (HADR) strategy.


SQL Azure can be used to add an additional option to an HADR strategy by copying off portions (or all) of an on-premise database system.


In this arrangement, on-premise systems remain as they are. Data can be replicated to a SQL Azure database using any of several technologies, such as SQL Server Integration Services (SSIS), scripts, or Microsoft’s Sync Framework. This data can be kept “cold”, meaning that a manual process is required to bring the data back, or used as a “warm” standby through connection string management in the application.
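The “warm” standby idea can be sketched as a simple fallback in the application’s connection logic: try the primary on-premise connection first, and switch to the cloud copy if it fails. This is an illustrative sketch only – the connection strings and the `connect()` helper below are hypothetical placeholders, not a real database driver.

```python
# Sketch of "warm standby" connection management. In a real application,
# connect() would be a call into a database library; here it is simulated.

PRIMARY = "Server=onprem-sql;Database=Sales"                     # hypothetical on-premise string
STANDBY = "Server=example.database.windows.net;Database=Sales"   # hypothetical SQL Azure string

def connect(conn_str):
    """Placeholder for a real connection call; simulates a primary outage."""
    if "onprem" in conn_str:
        raise ConnectionError("primary unreachable")
    return {"connected_to": conn_str}

def get_connection():
    # Try the primary first; on failure, fall back to the warm standby.
    for conn_str in (PRIMARY, STANDBY):
        try:
            return connect(conn_str)
        except ConnectionError:
            continue
    raise ConnectionError("no data source reachable")

conn = get_connection()
print(conn["connected_to"])
```

Because the fallback lives in one place, the rest of the program is unaware of which copy of the data it is using – which is exactly what makes the standby “warm” rather than “cold”.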

Recently we architected a solution where a company kept a rolling two-week window of data replicated to SQL Azure using the Sync Framework. The application, a compiled EXE running on users’ systems, had a “switch connections” button that allowed the users to take a laptop to another location, select that option, and continue working from anywhere they had Internet connectivity. This required forethought and planning, and did not replace their primary HADR systems, but it did allow them to continue operations in the case of a severe outage at multiple sites. Since they are an emergency services provider, this additional layer of redundancy was critical.
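The rolling-window part of that design can be sketched in a few lines: on each sync pass, only rows modified within the last two weeks are kept in the cloud copy, and older rows are pruned. The row shape and field names here are hypothetical, purely for illustration.

```python
from datetime import datetime, timedelta

def rows_to_sync(rows, now, window_days=14):
    # Keep only rows modified within the rolling window; older rows
    # are dropped from the cloud copy on each pass.
    cutoff = now - timedelta(days=window_days)
    return [r for r in rows if r["modified"] >= cutoff]

now = datetime(2011, 1, 18)
rows = [
    {"id": 1, "modified": datetime(2011, 1, 17)},  # inside the window, kept
    {"id": 2, "modified": datetime(2010, 12, 1)},  # too old, pruned
]
print([r["id"] for r in rows_to_sync(rows, now)])  # [1]
```

Keeping the cloud copy bounded this way controls storage cost while still giving users enough recent data to keep operating during an outage.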

Another option is to amalgamate data from disparate sources.


In this arrangement, two or more data services (one of which is SQL Azure) are accessed by a single program. The program queries each system independently, and with LINQ a single query can work across all of the data, assuming there is some sort of natural or artificial “key” that can join the data sets together. Client programs simply see the result as a single data source, unaware of the underlying data sets. This allows great flexibility and agility in the downstream program. The upstream data sources can change as long as the key elements are kept consistent.
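The join-on-key idea can be sketched outside of LINQ as well. The example below, in Python, joins a hypothetical on-premise customer list with a hypothetical cloud order list on a shared `customer_id` key; the sources and field names are invented for illustration.

```python
# Amalgamating two data sources on a shared key, in the spirit of a LINQ join.

on_premise_customers = [
    {"customer_id": 1, "name": "Contoso"},
    {"customer_id": 2, "name": "Fabrikam"},
]
cloud_orders = [
    {"order_id": 10, "customer_id": 1, "total": 250.0},
    {"order_id": 11, "customer_id": 2, "total": 80.0},
]

def join_on_key(left, right, key):
    # Index the right-hand source by the key, then merge row by row.
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

combined = join_on_key(cloud_orders, on_premise_customers, "customer_id")
for row in combined:
    print(row["name"], row["total"])
```

The downstream program sees only `combined` – a single data set – while each upstream source remains free to live where it lives, as long as the key stays consistent.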

There are performance and security implications to amalgamated data systems, but if architected carefully they provide multiple benefits. A few of these are that other systems can still access the individual data sources, reporting is simplified and standardized, and multiple copies of data are eliminated.


You can read more about the Sync Framework and SQL Azure here: http://social.technet.microsoft.com/wiki/contents/articles/sync-framework-sql-server-to-sql-azure-synchronization.aspx

If you are new to LINQ, you can find more resources on it here: http://msdn.microsoft.com/en-us/library/bb308959.aspx