June 2013

Volume 28 Number 6

Azure Insider - Architecting Multi-tenant Applications in Microsoft Azure

By Bruno Terkaly, Ricardo Villalobos

There’s a large and vibrant ecosystem of services providers that package up solutions and host them on cloud platforms. Generally speaking, these Software as a Service (SaaS) companies use multi-tenant architectures. You can find many real-world examples at bit.ly/vIPcyc. If you think about it, most of the popular applications on the Web, such as Facebook or Hotmail, are multi-tenant applications. This approach makes sense from a business perspective because compute and storage resources can be maximized by sharing them among multiple subscribers. After all, one of the main points of cloud computing is to maximize efficiencies by sharing a large pool of compute and storage resources.

We get frequent questions relating to best practices and patterns for implementing multi-tenant architectures using Windows Azure. This is the first of two columns that will introduce you to some key concepts and principles, as well as provide some guidance on where you can get the hands-on skills to start getting up to speed on some core building blocks.

The concept of multi-tenant software is simple. It’s a single, comprehensive system that supports multiple client organizations (tenants), where each tenant runs securely in parallel on the same hardware and software, completely unaware that there are others. Done properly, it allows services providers to offer a lot of functionality to tenants while realizing extremely efficient utilization of Windows Azure. Tenants are focused on security, low cost and good performance.

Cloud architects must carefully consider some key pillars when building multi-tenant systems:

  • Identity and security
  • Data isolation and segregation
  • Metering and performance analytics
  • Scaling up and down while maintaining SLAs

We’ll discuss the first two here and the latter two in our next column.


Identity plays a central role in multi-tenant architectures. For starters, tenants must be guaranteed that their private data isn’t accessible by other users, except in cases where the goal is to share among subscribers. Microsoft SharePoint provides an example. You might have documents that are visible only to a specific tenant, but you might also have some that are shared by a workgroup (two or more tenants that may span organizations). The bottom line—identity plays a central role with respect to the visibility of data.

In today’s world, identity management systems must be interoperable, adhering to open standards (as maintained by the Organization for the Advancement of Structured Information Standards, or OASIS), such as WS-Trust and WS-Security. Together, these standards allow multi-tenant systems to assess the presence of, and broker trust relationships between, tenant and services provider in a secure message exchange. Moreover, multi-tenant architectures typically support multiple identity providers, such as Google, Facebook, Yahoo! and Microsoft Live. In more sophisticated scenarios, services providers will support corporate identities, such as Active Directory. Support for Web single sign-on (SSO) is also important.

Claims-Based Identity Management

It’s generally accepted that claims-based approaches to identity management provide the most value. For starters, claims-based approaches centralize the disparate parts of identity into a single abstract token composed of claims and an issuer or authority. The value of such approaches is that they’re based on open standards and are highly interoperable. Claims-based identity management allows services providers to decouple applications from the vast, low-level plumbing code of identity management. Supporting Web services standards (WS-Trust, WS-Federation, WS-Security), token formats (Security Assertion Markup Language [SAML], JSON Web Token [JWT], Simple Web Token [SWT]) and cryptography (X.509) isn’t trivial. After all, these standards are constantly evolving, so encapsulating and abstracting identity management is crucial.

The answer to all these challenges is the Windows Azure Active Directory Access Control Service (ACS), which removes this low-level plumbing from a services provider’s code and makes it relatively easy to implement a claims-based identity management system. What makes the ACS so powerful is that much of the work can be done through the Web-based portal, which greatly streamlines the provisioning of new tenants.

This means you, the developer, do not need to deal with the many complexities of identity management inside your application. The ACS solves difficult, time-intensive problems, such as:

  • How to redirect unauthenticated requests to the chosen identity provider
  • How to validate the incoming token issued by the identity provider
  • How to parse the incoming token (read the claims)
  • How to implement authorization checks
  • How to transform tokens by adding, removing or changing the claims types and values

We recommend you run through a couple of Web-based tutorials to get started. You’ll find a good one on authenticating users at bit.ly/uyDLkt. If you’re running Visual Studio 2012, Vittorio Bertocci’s four-part post at bit.ly/12ZOwN9 will provide the needed guidance. The tooling within Visual Studio continues to improve, freeing developers from hand-editing configuration files. You can download the tooling and SDKs from bit.ly/NN9NVE.

Figure 1 depicts the experience of a tenant with support for multiple identity providers.

Figure 1 Supporting Multiple Identity Providers

Figure 2 represents what really takes place when you make use of the ACS. Tenants are completely unaware that the ACS is doing all the work behind the scenes. The workflow you see in Figure 2 ends with your multi-tenant application getting a security token, typically in SAML 2.0 format. The real beauty is that the token your multi-tenant application receives has been standardized, regardless of the identity provider chosen by the tenant. The ACS transparently brokers the transformation from an identity provider’s specific token format to a token format you specify at the ACS portal.

Figure 2 High-Level View of the Access Control Service Portal

JavaScript queries the ACS for the list of identity providers you want your application to support, and then generates a login link for each one. All the redirections necessary to get a standard security token to your application are handled by the system. The “relying party” is simply your multi-tenant application, because it relies on an issuer to provide information about identity.

If you go through the tutorials mentioned previously, you’ll see the screen shown in Figure 3. The Getting Started part of the ACS portal is divided into four sections, simplifying the process of integrating identity management into a cloud-based, multi-tenant application. Section 1 allows you to select from a list the identity providers you wish to support. For this walk-through, we’ll choose Google, Microsoft Live ID and Yahoo! as the identity providers. With a little extra work, you could also include Facebook and Active Directory identities. Section 2 is where the binding takes place. It’s where you’ll specify the URL to which ACS returns the security token, typically the starting page of the application. After all, the ACS needs an endpoint to pass a token to in the format that’s needed. In section 2 you also specify the desired token format.

Figure 3 The Windows Azure ACS Portal

Section 3 is where you select the claims you want the identity provider to include in the security token. The relying party (the services provider’s multi-tenant application) will use these claims to both uniquely identify a tenant and to make authorization decisions, defining a tenant’s privileges within the service. The available claims vary by identity provider; Google differs from Windows Live, for example. With Google and Yahoo!, the claims in the token can include the e-mail address, user name, nameidentifier and provider name. For Microsoft Live ID, however, you get only the nameidentifier and the provider name, not the e-mail or user name. You’ll need to handle those subtleties within the application.

Section 4 lets you integrate some metadata and code into the service. This is done within the Visual Studio tooling. You need metadata inside the application configuration files to bind the application to the ACS. The tooling in Visual Studio 2012 makes this step a point-and-click experience, whereas with Visual Studio 2010 you’ll need to manually edit the system.web section of web.config.

Microsoft recently released Windows Azure Active Directory, which allows you to take advantage of Web SSO with your line-of-business (LOB) applications, on-premises and in the cloud. If your goal is to implement identity management in an LOB application, where the application runs both on-premises and in the cloud, you’ll want to read the information at bit.ly/157yVPR. The documentation illustrates how Windows Azure Active Directory can take an existing LOB application and make it available for other Windows Azure Active Directory tenant administrators to use in their organizations.

The Bertocci blog post (previously mentioned) takes you to the screen in Figure 4, which shows that Identity and Access can be used to add configuration code in the multi-tenant application. Visual Studio 2012 completely eliminates the need to manually edit any configuration files. As you can see, we selected three identity providers and defined the realm and return URL for the multi-tenant application. The realm is simply a URI that identifies the application the user is logging in to. This URI also allows you to associate the claims for the application and the reply-to addresses. You’ll change the realm and return URL once you deploy your application to a Microsoft datacenter. Finally, you’ll add a management key that you get from the ACS portal to digitally sign tokens for security purposes.

Figure 4 The Visual Studio 2012 Identity and Access Dialog Box

Adding Code to the Application

Multi-tenant applications need to persist the logged-in user to permanent storage. This is initially necessary for the provisioning process but may also be necessary if tenant activity is being tracked. The data store used in this article is Windows Azure SQL Database, but you could easily change this to Windows Azure Tables and could also include on-premises data stores. In the Page_Load method of the Default.aspx.cs file, we can easily read the security token and parse the claims, as seen here:

protected void Page_Load(object sender, EventArgs e)
{
  // Grab the ClaimsIdentity object. Think of it as a token
  // and set of claims for the currently logged in user.
  ClaimsIdentity ci =
    Thread.CurrentPrincipal.Identity as ClaimsIdentity;
  // Create a TenantTokenManager object that parses the claims.
  TenantTokenManager tenantTokenManager = new TenantTokenManager(ci);
  // Save logged in user to persistent storage.
  tenantTokenManager.SaveTenantToDatabase();
}

To encapsulate this logic, we’ll add a TenantTokenManager class to our cloud project, shown in Figure 5.

Figure 5 The TenantTokenManager Class

public class TenantTokenManager
{
  // Claims that uniquely identify tenants.
  public System.Security.Claims.ClaimsIdentity SecurityToken { get; set; }
  public string IdentityProviderName                         { get; set; }
  public string IdentityProviderNameIdentifier               { get; set; }
  public string IdentityProviderEmail                        { get; set; }
  public string TenantId                                     { get; set; }
  public string TenantName                                   { get; set; }
  // Claim type the ACS uses to name the identity provider.
  private const string IdentityProviderClaimType =
    "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";
  public TenantTokenManager(System.Security.Claims.ClaimsIdentity ci)
  {
    try
    {
      // Save a copy of the ClaimsIdentity security token.
      this.SecurityToken = ci;
      // Extract the provider name (Yahoo, Google, Microsoft Live).
      IdentityProviderName = (from c in ci.Claims
        where c.Type == IdentityProviderClaimType
        select c.Value).SingleOrDefault();
      // Extract the nameidentifier (a unique identifier of
      // a tenant from the identity provider).
      IdentityProviderNameIdentifier = (from c in ci.Claims
        where c.Type == System.Security.Claims.ClaimTypes.NameIdentifier
        select c.Value).SingleOrDefault();
      // Extract the emailaddress (not available in Microsoft Live ID).
      IdentityProviderEmail = (from c in ci.Claims
        where c.Type == System.Security.Claims.ClaimTypes.Email
        select c.Value).SingleOrDefault();
      // Create a unique TenantId based on nameidentifier
      // and provider name.
      TenantId =
        IdentityProviderName + "-" + IdentityProviderNameIdentifier;
      // Extract name from security token (not available
      // from Microsoft Live ID).
      TenantName = SecurityToken.Name == 
        null ? "Not Avail." : SecurityToken.Name;
    }
    catch (Exception) { throw; }
  }
  public void SaveTenantToDatabase()
  {
    // Persist TenantId, TenantName and IdentityProviderEmail
    // to Windows Azure SQL Database (implementation omitted).
  }
}

The TenantTokenManager class encapsulates two important functions. First, it parses the claims from the security token. In our case, the token format is SAML, but it could just as easily be JWT or SWT. Note that simple LINQ queries can be used to extract individual claims from the token. And notice in the code that for Microsoft Live ID, you can’t get any name or e-mail data; in that case, the LINQ queries return null. The comments in the code explain what the claims mean. Second, it supports saving tenant information through the SaveTenantToDatabase method, so that tenants can be provisioned and tracked.

The next logical step in the application is to make authorization decisions based on identity or the role membership of identities. For example, as shown in Figure 6, the stored procedure only returns records based on TenantId. This illustrates how you might start thinking about data isolation and segregation.

Figure 6 A Stored Procedure That Returns Records Based on TenantId

CREATE PROCEDURE [dbo].[GetTenantData]
  @TenantId nvarchar(max)
AS
  SELECT *
  FROM TenantRecords
  WHERE TenantId = @TenantId
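From the application side, the stored procedure might be invoked as follows. This is a sketch, not code from the article: the method name, connection-string handling and use of a DataTable are assumptions; only the procedure name and @TenantId parameter come from Figure 6.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class TenantDataGateway
{
  // Returns the records belonging to a single tenant by calling the
  // GetTenantData stored procedure shown in Figure 6.
  public static DataTable GetTenantRecords(string tenantId, string connectionString)
  {
    var results = new DataTable();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.GetTenantData", conn))
    {
      cmd.CommandType = CommandType.StoredProcedure;
      // The tenant never supplies raw SQL; the TenantId is a parameter,
      // which also protects against SQL injection.
      cmd.Parameters.AddWithValue("@TenantId", tenantId);
      conn.Open();
      using (var adapter = new SqlDataAdapter(cmd))
        adapter.Fill(results);
    }
    return results;
  }
}
```

The key point is that the TenantId always comes from the parsed security token, never from user input, so one tenant can’t request another tenant’s rows.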

Data Isolation and Segregation

When it comes to combining identity with data in a multi-tenant system, there are a number of significant problems to solve. First, the architect must figure out how to keep sensitive data completely isolated from other tenants. Obviously, you can’t allow things like credit-card numbers, Social Security numbers, e-mail and so forth to be compromised in any way. Each tenant must absolutely be guaranteed that its data is kept away from prying eyes. At the same time, data may also need to be shared among tenants, such as the sharing of calendars, photos or simple text messages. Another problem to solve is the provisioning of audit trails, whereby services providers can track the usage and login patterns of tenants. This lets organizations track changes made by administrator-level users to help resolve unexpected application behavior.

In any multi-tenant system, there’s the additional technical challenge to enable the services provider to maximize cost efficiencies while providing tenants with well-performing and flexible data services and data models. This challenge is especially difficult due to the sheer variety of data types. Consider that Windows Azure alone supports a multitude of storage options, such as tables, blobs, queues, SQL Database, Hadoop and more. The technical challenge for the services provider is significant, because each storage type is very different. Architects need to balance cost, performance, scalability and, ultimately, the data needs of tenants.

Let’s take a quick look at the four common Windows Azure storage types. Blobs are used to store unstructured binary and text data. Typically, blobs are images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Queues are used for storing messages that may be accessed by a tenant. They provide reliable messaging with the scaled instances of the multi-tenant application. Tables offer NoSQL capabilities for applications that require storage of large amounts of unstructured data. Finally, SQL Database provides a full-featured, relational Database as a Service (DBaaS) for applications that need this.

Blobs These are comparatively easy to understand relative to Windows Azure Tables or SQL Database. You can read more about blobs at bit.ly/VGHszH. A Windows Azure account can have many blob containers. A blob container can have many blobs. Most services providers will have more than one account. The typical approach taken is to designate a container name for each tenant. This gives you the ability to define access rights, measure performance and quantify consumption of the blob service. Recently, Microsoft has revised its network topology to dramatically improve the bandwidth between compute and storage resources to support storage node network speeds up to 10Gbps.

Naming conventions for blobs may force you to maintain a map of container names to TenantIds. You could use e-mail addresses because they’re unique, but Microsoft Live ID doesn’t provide that claim within the security token. For Google and Yahoo!, you need to remove the “@” sign and “.” because blob container names can contain only lowercase letters, numbers and dashes.
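The convention just described might be implemented with a small helper like the following. This is a hypothetical sketch (the method name is ours, not from the article): it lowercases the e-mail address and strips every character that isn’t a lowercase letter, number or dash.

```csharp
using System.Text;

public static class ContainerNaming
{
  // Derives a valid blob container name from an e-mail address by
  // lowercasing it and dropping the "@" sign, periods and any other
  // character that container names don't allow.
  public static string ToContainerName(string email)
  {
    var sb = new StringBuilder();
    foreach (char c in email.ToLowerInvariant())
      if (char.IsLetterOrDigit(c) || c == '-')
        sb.Append(c);
    return sb.ToString();
  }
}
```

For example, “Alice@contoso.com” becomes “alicecontosocom.” Because stripping characters can cause collisions (“a.b@x.com” and “ab@x.com” map to the same name), a lookup table from TenantId to container name remains the safer design.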

The code in Figure 7 demonstrates an approach to provisioning blob containers. This code might be part of the sign-up and provisioning process for a tenant. Note that the TenantId is used as the container name. In a real-world scenario, of course, there would probably be some type of lookup table to provide container names based on TenantIds. In the code in Figure 7, the services provider has chosen to have a separate Windows Azure account (AccountTenantImages) to store the images of tenants. Note the code “blobClient.GetContainerReference(tm.TenantId),” which is where a blob container is provisioned for a new tenant.

Figure 7 Provisioning a Blob Container for a Tenant

// Grab the ClaimsIdentity object. Think of it as a token
// and set of claims for the currently logged in user.
ClaimsIdentity ci =
  Thread.CurrentPrincipal.Identity as ClaimsIdentity;
// Parse the claims.
TenantTokenManager tm = new TenantTokenManager(ci);
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
  CloudConfigurationManager.GetSetting("AccountTenantImages"));
// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference(tm.TenantId);
// Create the container if it doesn't already exist.
container.CreateIfNotExists();

Another important point is that blobs support user-defined metadata, meaning each blob can carry one or more name-value pairs. However, blob storage doesn’t let you query blobs globally on their metadata. To find information in metadata, you have two options. The first is to fetch all blobs from a blob container and then enumerate over them, searching the name-value pairs. The more scalable approach is to store the metadata in table storage, along with the blob URL. This lets you query table storage to find the desired blob and then retrieve just that single blob from storage.
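The scalable approach can be sketched with a table entity that indexes blobs. The entity and property names here are assumptions, not from the article; the pattern is what matters: one row per blob, partitioned by TenantId, carrying the searchable metadata plus the blob URL.

```csharp
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical index entity: table storage holds the searchable metadata,
// and the BlobUrl points back to the blob itself.
public class BlobIndexEntity : TableEntity
{
  public string BlobUrl { get; set; }
  public string Description { get; set; }

  public BlobIndexEntity() { }  // Required by the table client.

  public BlobIndexEntity(string tenantId, string blobName)
  {
    PartitionKey = tenantId;  // All of a tenant's blobs share a partition.
    RowKey = blobName;        // One row per blob.
  }
}
```

To locate a blob, you query this table by PartitionKey (the TenantId) and whatever metadata columns you need, then fetch only the matching blob by its BlobUrl, avoiding the fetch-and-enumerate pass over the whole container.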

Queues Because queues are typically used as mechanisms to provide reliable, asynchronous messaging within multi-tenant applications, tenants are generally insulated from them. With that said, an account has many queues and queues have many messages. Conceivably, the cloud architect could provision a separate queue for each tenant. But this decision would be based on the specific needs of the multi-tenant app. And keep in mind that this approach may not scale well and could become difficult to manage as the number of tenants rises beyond 100. 
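If you did choose a queue per tenant, provisioning might look like the following sketch. The method name is ours, and we assume the TenantId has already been lowercased and sanitized to satisfy queue-naming rules (lowercase letters, numbers and hyphens).

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class TenantQueueProvisioner
{
  // Provisions one queue per tenant; safeTenantId must already conform
  // to queue-naming rules.
  public static CloudQueue ProvisionTenantQueue(
    CloudStorageAccount storageAccount, string safeTenantId)
  {
    CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
    CloudQueue queue = queueClient.GetQueueReference("tenant-" + safeTenantId);
    // Create the queue at sign-up time; a no-op on subsequent calls.
    queue.CreateIfNotExists();
    return queue;
  }
}
```

Messages placed on a tenant’s queue are then visible only to that tenant’s workers, which keeps the asynchronous work streams isolated.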

Tables There are many data isolation and segregation options available within Windows Azure Tables. For example, the services provider could provision one storage account per tenant. Because each storage account appears as a line item on your Windows Azure bill, this approach can be useful if you want to identify the precise costs per tenant. Another option is to combine multiple tenants in a single storage account. This approach enables you to group tenants by geographic region, by regulatory requirements and potentially by replication requirements. However, you still need to layer in a partitioning scheme so the tenants within one account don’t get access to other tenants’ data—unless, of course, data-sharing strategies are desired. One approach to hosting multiple tenants in a single account is to include the TenantId in the table name and give each tenant its own copy of the tables.
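The table-per-tenant option mentioned above can be sketched as follows. The method and table-name prefix are assumptions; note that table names must be alphanumeric and begin with a letter, so the TenantId needs sanitizing before it can be embedded in the name.

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class TenantTableProvisioner
{
  // Gives each tenant its own copy of the tables by embedding a
  // sanitized, alphanumeric TenantId in the table name.
  public static CloudTable ProvisionTenantTable(
    CloudStorageAccount storageAccount, string alphanumericTenantId)
  {
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
    CloudTable table =
      tableClient.GetTableReference("Records" + alphanumericTenantId);
    table.CreateIfNotExists();
    return table;
  }
}
```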

Yet another approach is to put multiple tenants in a single table, in which case you’d probably use a table’s built-in partition keys and row keys to keep tenants’ data separate from one another. It’s common to use the TenantId as the partition key because it scales well while providing excellent query performance.

The code in Figure 8 demonstrates how multiple tenants could share a single table. The code illustrates how a tenant might retrieve an e-mail address and phone number using Windows Azure Tables. Notice that the PartitionKey is essentially the TenantId, which can be derived from the login credentials, either directly or through another lookup table.

Figure 8 Segregating Data in Windows Azure Tables Using a PartitionKey and a RowKey

// Grab the ClaimsIdentity object. Think of it as a token
// and set of claims for the currently logged in user.
ClaimsIdentity ci =
  Thread.CurrentPrincipal.Identity as ClaimsIdentity;
// Parse the claims.
TenantTokenManager tm = new TenantTokenManager(ci);
// Retrieve storage account from connection string
// (the setting name is arbitrary).
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
  CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create a cloud table client object.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
// Create an email address table object.
CloudTable emailAddressTable =
  tableClient.GetTableReference("EmailAddress");
// Set up the filter for the partition key.
string pkFilter = TableQuery.GenerateFilterCondition("PartitionKey",
  QueryComparisons.Equal, tm.TenantId);
// Set up the filter for the row key.
string rkFilter = TableQuery.GenerateFilterCondition("RowKey", 
  QueryComparisons.Equal, tm.IdentityProviderName);
// Combine the partition key filter and the row key filter.
string combinedFilter = TableQuery.CombineFilters(rkFilter,
  TableOperators.And, pkFilter);
// Construct a query.
TableQuery<EmailAddressEntity> query =
  new TableQuery<EmailAddressEntity>().Where(combinedFilter);
// Execute the query; a single entity per tenant is expected.
EmailAddressEntity emailAddressEntity =
  emailAddressTable.ExecuteQuery(query).SingleOrDefault();
// Pull the data out that you searched for;
// do something with emailAddress and phoneNumber.
string emailAddress = emailAddressEntity.EMailAddress;
string phoneNumber = emailAddressEntity.PhoneNumber;
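The EmailAddressEntity type used in Figure 8 isn’t defined in the article. A minimal definition consistent with the query code might look like this; the property names come from Figure 8, everything else is assumed.

```csharp
using Microsoft.WindowsAzure.Storage.Table;

// Each row stores one tenant's contact details; the PartitionKey (TenantId)
// and RowKey (identity provider name) are inherited from TableEntity.
public class EmailAddressEntity : TableEntity
{
  public string EMailAddress { get; set; }
  public string PhoneNumber  { get; set; }
}
```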

Wrapping Up

Architecting multi-tenant applications requires a good understanding of identity and data isolation and segregation issues. Services providers can insulate themselves from having to write a lot of the low-level plumbing code relating to identity management by taking advantage of the ACS. A multi-tenant application can support a variety of identity providers and avoid the obfuscation of its code base with some of the techniques provided here. Moreover, robust identity management simplifies the work it takes to either isolate or share tenant data.

In next month’s column, we’ll address the other key pillars of multi-tenant architectures—metering and performance analytics and scaling. In the meantime, we recommend that readers interested in more information read “Developing Multi-Tenant Applications for the Cloud, 3rd Edition,” available at bit.ly/asHI9I.

Bruno Terkaly is a developer evangelist for Microsoft. His depth of knowledge comes from years of experience in the field, writing code using a multitude of platforms, languages, frameworks, SDKs, libraries and APIs. He spends time writing code, blogging and giving live presentations on building cloud-based applications, specifically using the Windows Azure platform. He has published two applications to the Windows Store: Teach Kids Music and Kids Car Colors. You can read his blog at blogs.msdn.com/b/brunoterkaly.

Ricardo Villalobos is a seasoned software architect with more than 15 years of experience designing and creating applications for companies in the supply chain management industry. Holding different technical certifications, as well as a master’s degree in business administration from the University of Dallas, he works as a cloud architect in the Windows Azure CSV incubation group for Microsoft. You can read his blog at blog.ricardovillalobos.com.

Terkaly and Villalobos jointly present at large industry conferences; for availability, e-mail them at bterkaly@microsoft.com or Ricardo.Villalobos@microsoft.com.

Thanks to the following technical experts for reviewing this article: Patrick Butler Monterde (Microsoft) and Bhushan Nene (Microsoft)