Azure guidance for secure isolation

Microsoft Azure is a hyperscale public multi-tenant cloud services platform that provides customers with access to a feature-rich environment incorporating the latest cloud innovations such as artificial intelligence, machine learning, IoT services, big-data analytics, intelligent edge, and many more to help customers increase efficiency and unlock insights into their operations and performance.

A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.

Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles: (1) user access controls with authentication and identity separation, (2) compute isolation for processing, (3) networking isolation including data encryption in transit, (4) storage isolation with data encryption at rest, and (5) security assurance processes embedded in service design to correctly develop logically isolated services.

This article provides technical guidance to address common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.

Executive summary

Multi-tenancy in the public cloud improves efficiency by multiplexing resources among disparate customers at low costs; however, this approach introduces the perceived risk associated with resource sharing. Azure addresses this risk by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a multi-layered approach depicted in Figure 1.

Figure 1. Azure isolation approaches

A brief summary of isolation approaches is provided below.

  • User access controls with authentication and identity separation – All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure Active Directory (Azure AD) that a customer organization receives and owns when they sign up for a Microsoft cloud service. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users.
  • Compute isolation – Azure provides customers with both logical and physical compute isolation for processing. Logical isolation is implemented via:
    • Hypervisor isolation for services that provide cryptographically certain isolation by using separate virtual machines and leveraging Azure Hypervisor isolation.
    • Drawbridge isolation inside a Virtual Machine (VM) for services that provide cryptographically certain isolation for workloads running on the same virtual machine by leveraging isolation provided by Drawbridge. These services provide small units of processing using customer code.
    • User context-based isolation for services that consist solely of Microsoft-controlled code, where customer code is not allowed to run.
      In addition to robust logical compute isolation available by design to all Azure tenants, customers who desire physical compute isolation can utilize Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer.
  • Networking isolation – Azure Virtual Network (VNet) helps ensure that each customer’s private network traffic is logically isolated from traffic belonging to other customers. Services can communicate using public IPs or private (VNet) IPs. Communication between customer VMs remains private within a VNet. Customers can connect their VNets via VNet peering or VPN gateways, depending on their connectivity options, including bandwidth, latency, and encryption requirements. Customers can use Network Security Groups (NSGs) to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. Customers can use Virtual network service tags to define network access controls on network security groups or Azure Firewall. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules. Moreover, customers can use Azure Private Link to access Azure PaaS services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network, which eliminates the need to expose the service to the public Internet. Finally, Azure provides customers with options to encrypt data in transit, including Transport Layer Security (TLS) end-to-end encryption of network traffic with TLS termination using Azure Key Vault certificates, VPN encryption using IPsec, and ExpressRoute encryption using MACsec with customer-managed keys (CMK) support.
  • Storage isolation – To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers. This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure AD to ensure secure key access and centralized key management. Azure Storage Service Encryption (SSE) ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140-2 validated 256-bit AES encryption and customers have the option to use Azure Key Vault for customer-managed keys (CMK). Azure SSE encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk Encryption (ADE) may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes managed disks.
  • Security assurance processes and practices – Azure isolation assurance is further enforced by Microsoft’s internal use of the Security Development Lifecycle (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee.

In line with the shared responsibility model in cloud computing, as customer workloads get migrated from an on-premises datacenter to the cloud, the delineation of responsibility between the customer and cloud service provider varies depending on the cloud service model. For example, with the Infrastructure as a Service (IaaS) model, Microsoft’s responsibility ends at the Hypervisor layer, and customers are responsible for all layers above the virtualization layer, including maintaining the base operating system in guest VMs. Customers can leverage Azure isolation technologies to achieve the desired level of isolation for their applications and data deployed in the cloud.

Throughout this article, call-out boxes outline important considerations or actions considered to be part of the customer’s responsibility. For example, customers can use Azure Key Vault to store their secrets, including encryption keys that remain under customer control.

Note

Use of Azure Key Vault for customer-managed keys (CMK) is optional and is the customer’s responsibility.


Tip

For recommendations on how to improve the security of applications and data deployed in Azure, customers should review the Azure Security Benchmark.

Identity-based isolation

Azure Active Directory (Azure AD) is an identity repository and cloud service that provides authentication, authorization, and access control for an organization’s users, groups, and objects. Azure AD can be used as a standalone cloud directory or as an integrated solution with existing on-premises Active Directory to enable key enterprise features such as directory synchronization and single sign-on.

Each Azure subscription is associated with an Azure AD tenant. Using Role-Based Access Control (RBAC), users, groups, and applications from that directory can be granted access to resources in the Azure subscription. For example, a storage account can be placed in a resource group to control access to that specific storage account using Azure AD. Azure Storage defines a set of built-in RBAC roles that encompass common permissions used to access blob or queue data. A request to Azure Storage can be authorized using either the customer’s Azure AD account or the Storage Account Key. In this manner, only specific users can be given the ability to access data in Azure Storage.
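The scoped role assignments described above can be illustrated with a small model: a role granted at a broader scope (subscription or resource group) applies to all resources beneath it. This is a deliberately simplified sketch, not the Azure authorization engine; the principals, action names, and resource IDs are hypothetical, and only a subset of each built-in role's data actions is modeled.

```python
# Simplified model of Azure RBAC scope inheritance. Illustrative only --
# not the actual Azure authorization implementation.

ROLE_PERMISSIONS = {
    # hypothetical subset of each built-in role's data actions
    "Storage Blob Data Reader": {"read_blob"},
    "Storage Blob Data Contributor": {"read_blob", "write_blob"},
}

# (principal, role, scope) assignments; scopes are '/'-separated paths
ASSIGNMENTS = [
    ("alice", "Storage Blob Data Contributor",
     "/subscriptions/sub1/resourceGroups/rg1"),
    ("bob", "Storage Blob Data Reader",
     "/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/acct1"),
]

def is_authorized(principal: str, action: str, resource_scope: str) -> bool:
    """True if an assignment at this scope or an enclosing scope grants the action."""
    for who, role, scope in ASSIGNMENTS:
        if who == principal and resource_scope.startswith(scope):
            if action in ROLE_PERMISSIONS.get(role, set()):
                return True
    return False

acct = ("/subscriptions/sub1/resourceGroups/rg1"
        "/providers/Microsoft.Storage/storageAccounts/acct1")
print(is_authorized("alice", "write_blob", acct))  # True -- inherited from rg1
print(is_authorized("bob", "write_blob", acct))    # False -- reader role only
```

The key point the sketch captures is that an assignment made at the resource group level implicitly authorizes access to every resource in that group, which is why scope should be chosen as narrowly as practical.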

All data in Azure irrespective of the type or storage location is associated with a subscription. A cloud tenant can be viewed as a dedicated instance of Azure AD that a customer organization receives and owns when they sign up for a Microsoft cloud service. Authentication to the Azure portal is performed through Azure AD using an identity created either in Azure AD or federated with an on-premises Active Directory. The identity and access stack helps enforce isolation among subscriptions, including limiting access to resources within a subscription only to authorized users. This access restriction is an overarching goal of the Zero Trust model, which assumes that the network is compromised and requires a fundamental shift from the perimeter security model. When evaluating access requests, all requesting users, devices, and applications should be considered untrusted until their integrity can be validated in line with the Zero Trust design principles. Azure AD provides the strong, adaptive, standards-based identity verification required in a Zero Trust framework.

Tip

To learn more about how to implement Zero Trust architecture on Azure, read the 6-part blog series.

Azure Active Directory

The separation of the accounts used to administer cloud applications is critical to achieving logical isolation. Account isolation in Azure is achieved using Azure Active Directory (Azure AD) and its capabilities to support granular Role-Based Access Control (RBAC). Each Azure account is associated with one Azure AD tenant. Users, groups, and applications from that directory can manage resources in Azure. Customers can assign appropriate access rights using the Azure portal, Azure command-line tools, and Azure Management APIs. Each Azure AD tenant is distinct and separate from other Azure AD tenants. An Azure AD instance is logically isolated using security boundaries to prevent customer data and identity information from comingling, thereby ensuring that users and administrators of one Azure AD instance cannot access or compromise data in another Azure AD instance, either maliciously or accidentally. Azure AD runs physically isolated on dedicated servers that are logically isolated to a dedicated network segment and where host-level packet filtering and Windows Firewall services provide additional protections from untrusted traffic.

Azure AD implements extensive data protection features, including tenant isolation and access control, data encryption in transit, secrets encryption and management, disk-level encryption, advanced cryptographic algorithms used by various Azure AD components, data operational considerations for insider access, and more. Detailed information is available from the Active Directory Data Security Considerations whitepaper.

Tenant isolation in Azure AD involves two primary elements:

  • Preventing data leakage and access across tenants, which means that data belonging to Tenant A cannot in any way be obtained by users in Tenant B without explicit authorization by Tenant A.
  • Resource access isolation across tenants, which means that operations performed by Tenant A cannot in any way impact access to resources for Tenant B.

As shown in Figure 2, access via Azure AD requires user authentication through a Security Token Service (STS). The authorization system uses information on the user’s existence and enabled state (through the Directory Services API) and RBAC to determine whether the requested access to the target Azure AD instance is authorized for the user in the session. Aside from token-based authentication that is tied directly to the user, Azure AD further supports logical isolation in Azure through:

  • Azure AD instances are discrete containers and there is no relationship between them.
  • Azure AD data is stored in partitions and each partition has a pre-determined set of replicas that are considered the preferred primary replicas. Use of replicas provides high availability of Azure AD services to support identity separation and logical isolation.
  • Access is not permitted across Azure AD instances unless the Azure AD instance administrator grants it through federation or provisioning of user accounts from other Azure AD instances.
  • Physical access to servers that comprise the Azure AD service and direct access to Azure AD’s back-end systems is restricted to properly authorized Microsoft operational roles using Just-In-Time (JIT) privileged access management system.
  • Azure AD users have no access to physical assets or locations, and therefore it is not possible for them to bypass the logical RBAC policy checks.

Figure 2. Azure Active Directory logical tenant isolation

In summary, Azure’s approach to logical tenant isolation uses identity, managed through Azure Active Directory, as the first logical control boundary for providing tenant-level access to resources and authorization through Role-Based Access Control.

Data encryption key management

Azure has extensive support to safeguard customer data using data encryption, including various encryption models:

  • Server-side encryption that uses service-managed keys, customer-managed keys in Azure, or customer-managed keys on customer-controlled hardware.
  • Client-side encryption that enables customers to manage and store keys on-premises or in another secure location.

Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible. More information about data encryption in transit is provided in Networking isolation section, whereas data encryption at rest is covered in Storage isolation section.

Proper protection and management of encryption keys is essential for data security. Azure Key Vault is a multi-tenant key management service that Microsoft recommends for managing and controlling access to encryption keys when seamless integration with Azure services is required. Azure Key Vault enables customers to store their encryption keys in a Hardware Security Module (HSM). For customers who require single-tenant key management service, Microsoft provides Azure Dedicated HSM.

Azure Key Vault

Azure Key Vault is a multi-tenant secrets management service that uses Hardware Security Modules (HSMs) to store and safeguard secrets, encryption keys, and certificates. Key Vault uses Federal Information Processing Standard (FIPS) 140-2 Level 2 validated HSMs, which meet security requirements covering 11 areas related to the design and implementation of a cryptographic module. For each area, the cryptographic module receives a security level rating of 1 to 4 (from lowest to highest) depending on the requirements met. Key Vault uses the nCipher nShield family of HSMs, which have an overall Security Level 2 rating (certificate #2643) that includes requirements for physical tamper evidence and role-based authentication. However, these HSMs meet the Security Level 3 rating for several areas, including physical security, electromagnetic interference / electromagnetic compatibility (EMI/EMC), design assurance, and roles, services, and authentication.

The Azure Key Vault service provides an abstraction over the underlying HSMs. It provides a REST API to enable use from cloud applications and authentication through Azure AD to allow an organization to centralize and customize authentication, disaster recovery, high availability, and elasticity. Azure Key Vault supports RSA keys of sizes 2048-bit, 3072-bit and 4096-bit, as well as Elliptic Curve key types P-256, P-384, P-521, and P-256K (SECP256K1).

Azure Key Vault can handle requesting and renewing certificates, including Transport Layer Security (TLS) certificates, enabling customers to enroll and automatically renew certificates from supported public Certificate Authorities. Azure Key Vault certificate support provides for the management of a customer’s X.509 certificates, which are built on top of keys and provide an automated renewal feature. A certificate owner can create a certificate through Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.

With Azure Key Vault, customers can import or generate encryption keys in HSMs that never leave the HSM protection boundary to support Bring Your Own Key (BYOK) scenarios, as shown in Figure 3. Keys generated inside the Azure Key Vault HSMs are not exportable – there can be no cleartext version of the key outside the HSMs. This binding is enforced by the underlying HSM.

Figure 3. Azure Key Vault support for Bring Your Own Key (BYOK)

Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer cryptographic keys.

Azure Key Vault provides features for a robust solution for encryption key and certificate lifecycle management. Upon creation, every key vault is automatically associated with the Azure Active Directory (Azure AD) tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault must be authenticated by Azure AD, as described in Azure Key Vault security overview:

  • Authentication establishes the identity of the caller (user or application).
  • Authorization determines which operations the caller can perform, based on a combination of Azure AD Role-Based Access Control (RBAC) and Azure Key Vault policies.

Azure AD enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, as described previously in Azure Active Directory section. Access to a key vault is controlled through two interfaces or planes: management plane and data plane.

  • Management plane enables customers to manage Key Vault itself, e.g., create and delete key vaults, retrieve key vault properties, and update access policies.
  • Data plane enables customers to work with the data stored in their key vaults, including adding, deleting, and modifying their keys, secrets, and certificates.

To access a key vault in either plane, all callers (users or applications) must have proper authentication and authorization. Both planes use Azure AD for authentication. For authorization, the management plane uses RBAC and the data plane uses Key Vault access policy.
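The split between the two planes can be modeled in a few lines: management-plane operations consult RBAC role assignments, while data-plane operations consult the vault's access policy. The role names, permission names, and callers below are hypothetical simplifications of the real service behavior.

```python
# Simplified two-plane authorization model for a key vault. Illustrative
# only -- the real service evaluates Azure RBAC for the management plane
# and vault access policies for the data plane.

rbac_roles = {"alice": {"Key Vault Contributor"}}      # management plane
access_policy = {"bob": {"get_secret", "sign"}}        # data plane

MANAGEMENT_OPS = {"create_vault", "delete_vault", "set_access_policy"}
DATA_OPS = {"get_secret", "set_secret", "sign", "wrap_key"}

def authorize(caller: str, operation: str) -> bool:
    """Route the check to the plane that owns the operation."""
    if operation in MANAGEMENT_OPS:
        return "Key Vault Contributor" in rbac_roles.get(caller, set())
    if operation in DATA_OPS:
        return operation in access_policy.get(caller, set())
    return False

print(authorize("alice", "set_access_policy"))  # True  -- via RBAC
print(authorize("alice", "get_secret"))         # False -- no policy entry
print(authorize("bob", "get_secret"))           # True  -- via access policy
```

Note that a caller authorized for the management-plane set_access_policy operation can grant themselves a data-plane policy entry, which is why Contributor access to key vaults should be tightly controlled.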

When customers create a key vault in a resource group, they can manage access by using Azure AD, which enables customers to grant access at a specific scope level by assigning the appropriate RBAC roles. For example, to grant access to a user to manage key vaults, customers can assign a predefined key vault Contributor role to the user at a specific scope, including subscription, resource group, or specific resource.

Important

Customers should tightly control who has Contributor role access to their key vaults. If a user has Contributor permissions to a key vault management plane, the user can gain access to the data plane by setting a key vault access policy.

Azure customers control access permissions and can extract detailed activity logs from the Azure Key Vault service. Azure Key Vault logs the following information:

  • All authenticated REST API requests, including failed requests
    • Operations on the key vault such as creation, deletion, setting access policies, etc.
    • Operations on keys and secrets in the key vault, including a) creating, modifying, or deleting keys or secrets, and b) key operations such as signing, verifying, and encrypting
  • Unauthenticated requests such as requests that do not have a bearer token, are malformed or expired, or have an invalid token.
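A sketch of consuming such logs: the snippet below splits a few diagnostic records into authenticated and unauthenticated requests, surfacing the latter for review. The record layout is a reduced, hypothetical version of the real AuditEvent schema, and the values are invented for illustration.

```python
import json

# Reduced, hypothetical shape of Key Vault diagnostic (AuditEvent) records.
raw_logs = """
[{"operationName": "SecretGet", "resultSignature": "OK",
  "identity": {"claim_upn": "alice@contoso.com"}},
 {"operationName": "VaultPut", "resultSignature": "OK",
  "identity": {"claim_upn": "bob@contoso.com"}},
 {"operationName": "SecretGet", "resultSignature": "Unauthorized",
  "identity": {}}]
"""

records = json.loads(raw_logs)
# Requests with no identity claims correspond to unauthenticated calls
# (missing, malformed, or expired bearer tokens).
unauthenticated = [r for r in records if not r["identity"]]
authenticated = [r for r in records if r["identity"]]

print(len(authenticated), "authenticated,", len(unauthenticated), "unauthenticated")
for r in unauthenticated:
    print("review:", r["operationName"], r["resultSignature"])
```

In practice these records would be streamed from the configured diagnostic destination (such as a Log Analytics workspace) rather than embedded as a literal.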

Note

After creating one or more key vaults, customers can monitor how and when their key vaults are accessed and by whom.

Customers can also use the Azure Key Vault solution in Azure Monitor to review Azure Key Vault logs. To use this solution, customers need to enable logging of Azure Key Vault diagnostics and direct the diagnostics to a Log Analytics workspace. With this solution, it is not necessary to write logs to Azure Blob storage.

Azure Dedicated HSM

The HSMs behind Azure Key Vault are by default multi-tenant. For customers who require single-tenant HSMs, Microsoft provides Azure Dedicated HSM that has FIPS 140-2 Level 3 validation (certificate #3205), as well as Common Criteria EAL4+ certification and conformance with the Electronic Identification Authentication and Trust Services (eIDAS) requirements. The underlying HSM devices support up to 10,000 transactions per second with RSA-2048 keys. Operating in FIPS mode imposes a minimum RSA key length of 2048 bits; however, the maximum supported RSA key length is 8192 bits.

Azure Dedicated HSM is most suitable for scenarios where customers require full administrative control and sole access to their HSM device for administrative purposes. Dedicated HSMs are provisioned directly on the customer’s Virtual Network (VNet) and can be used by applications running inside that VNet. After a device is provisioned, only the customer has administrative or application-level access to the device. Customers are responsible for the management of the device, and they can get full activity logs directly from their devices.

Dedicated HSM can also connect to on-premises infrastructure via point-to-site or site-to-site Virtual Private Network (VPN). Listed below are the most common customer requirements for Dedicated HSM:

  • Migrating applications from on-premises to Azure Virtual Machines
  • Customer security posture requires they manage all aspects of the HSM
  • Need for HSMs validated to FIPS 140-2 Level 3
  • Proprietary HSMs features that cannot be abstracted in the Azure Key Vault service

Microsoft has no administrative control after the customer accesses the device for the first time, at which point the customer changes the password. Microsoft does not have any access to the keys stored in customer allocated Dedicated HSM. Microsoft maintains monitor-level access (which is not an admin role and can be disabled by the customer) for telemetry collection. This access covers hardware monitoring such as temperature, power supply health, and fan health. Customers who disable this monitoring service will no longer receive proactive health alerts from Microsoft.

Note

Microsoft provides detailed instructions for deploying Azure Dedicated HSM into an existing Virtual Network (VNet) using the Command Line Interface (CLI) and PowerShell.

Compute isolation

Microsoft Azure compute platform is based on machine virtualization. This approach means that customer code – whether it’s deployed in a PaaS Worker Role or an IaaS Virtual Machine – executes in a virtual machine hosted by a Windows Server Hyper-V hypervisor. On each Azure physical server, also known as a node, there is a Type 1 Hypervisor that runs directly over the hardware and divides the node into a variable number of Guest Virtual Machines (VMs), as shown in Figure 4. Each node also has one special Host VM, also known as Root VM, which runs the Host OS – a customized and hardened version of the latest Windows Server, which is stripped down to reduce the attack surface and include only those components necessary to manage the node. Isolation of the Root VM from the Guest VMs and the Guest VMs from one another is a key concept in Azure security architecture that forms the basis of Azure compute isolation, as described in Microsoft online documentation.

Figure 4. Isolation of Hypervisor, Root VM, and Guest VMs

Physical servers hosting VMs are grouped into clusters, and each cluster is independently managed by a scaled-out and redundant platform software component called the Fabric Controller (FC). Each FC manages the lifecycle of VMs running in its cluster, including provisioning and monitoring the health of the hardware under its control. For example, the FC is responsible for recreating VM instances on healthy servers when it determines that a server has failed. It also allocates infrastructure resources to tenant workloads and manages unidirectional communication from the Host to Virtual Machines. Dividing the compute infrastructure into clusters isolates faults at the FC level and prevents certain classes of errors from affecting servers beyond the cluster in which they occur.
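The recovery behavior described above can be sketched as a toy reconciliation step: when a node fails, the cluster's fabric controller recreates the displaced VMs on healthy nodes in the same cluster, and nothing outside that cluster is touched. All node and VM names are hypothetical, and the real placement algorithm is far more sophisticated (and deliberately unpredictable).

```python
# Toy Fabric Controller reconciliation: VMs on a failed node are recreated
# on healthy nodes within the same cluster. Illustrative model only.

cluster = {
    "node1": {"healthy": True, "vms": ["vm-a"]},
    "node2": {"healthy": True, "vms": ["vm-b", "vm-c"]},
}

def handle_node_failure(cluster: dict, failed: str) -> None:
    """Mark a node unhealthy and re-place its VMs on healthy cluster nodes."""
    cluster[failed]["healthy"] = False
    displaced = cluster[failed]["vms"]
    cluster[failed]["vms"] = []
    healthy = [n for n, state in cluster.items() if state["healthy"]]
    for i, vm in enumerate(displaced):
        # simple round-robin placement as a stand-in for the real algorithm
        cluster[healthy[i % len(healthy)]]["vms"].append(vm)

handle_node_failure(cluster, "node2")
print(cluster["node1"]["vms"])  # ['vm-a', 'vm-b', 'vm-c']
print(cluster["node2"]["vms"])  # []
```

Because each FC only knows about its own cluster, a failure (of a node or of the FC itself) cannot cascade into placement decisions for servers in other clusters, which is the fault-isolation property the text describes.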

The FC is the brain of the Azure compute platform and the Host Agent is its proxy, integrating servers into the platform so that the FC can deploy, monitor, and manage the virtual machines for customers and Azure cloud services. The Hypervisor/Host OS pairing leverages decades of Microsoft’s experience in operating system security, including security focused investments in Microsoft Hyper-V to provide strong isolation of Guest VMs. Hypervisor isolation is discussed later in this section, including assurances for strongly defined security boundaries enforced by the Hypervisor, defense-in-depth exploit mitigation, and strong security assurance processes.

Management network isolation

There are three Virtual Local Area Networks (VLANs) in each compute hardware cluster, as shown in Figure 5:

  • Main VLAN, which interconnects untrusted customer nodes,
  • Fabric Controller (FC) VLAN, which contains trusted FCs and supporting systems, and
  • Device VLAN, which contains trusted network and other infrastructure devices.

Communication is permitted from the FC VLAN to the main VLAN but cannot be initiated from the main VLAN to the FC VLAN. This bridge from the FC VLAN to the main VLAN is used to reduce the overall complexity and improve reliability and resiliency of the network. The connection is secured in several ways to ensure that commands are trusted and successfully routed:

  • Communication from an FC to a Fabric Agent (FA) is unidirectional and requires mutual authentication via certificates. The FA implements a TLS-protected service that only responds to requests from the FC. It cannot initiate connections to the FC or other privileged internal nodes.
  • The FC treats responses from the agent service as if they were untrusted. Communication with the agent is further restricted to a set of authorized IP addresses using firewall rules on each physical node, and routing rules at the border gateways.
  • Throttling is used to ensure that customer VMs cannot saturate the network and prevent management commands from being routed.

Communication is also blocked from the main VLAN to the device VLAN. This way, even if a node running customer code is compromised, it cannot attack nodes on either the FC or device VLANs.
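The asymmetric VLAN rules above reduce to a small reachability table: the FC VLAN may initiate connections to the main VLAN, but not the reverse, and the main VLAN can never reach the device VLAN. The sketch below models only the flows explicitly described in the text; the VLAN labels are shorthand.

```python
# One-way VLAN reachability in a compute hardware cluster. Illustrative
# model only -- only flows explicitly described in the text are listed.

ALLOWED_INITIATIONS = {
    ("fc", "main"),   # trusted FC VLAN may manage untrusted customer nodes
}

def can_initiate(src_vlan: str, dst_vlan: str) -> bool:
    """True if src_vlan is permitted to initiate a connection to dst_vlan."""
    return src_vlan == dst_vlan or (src_vlan, dst_vlan) in ALLOWED_INITIATIONS

print(can_initiate("fc", "main"))      # True  -- management traffic
print(can_initiate("main", "fc"))      # False -- compromised node cannot reach FCs
print(can_initiate("main", "device"))  # False -- nor infrastructure devices
```

The point of the one-way design is that compromising a node on the main VLAN yields no path from which to initiate connections against the trusted FC or device VLANs.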

These controls ensure that the management console’s access to the Hypervisor is always valid and available.

Figure 5. VLAN isolation

The Hypervisor and the Host OS provide network packet filters so untrusted VMs cannot generate spoofed traffic or receive traffic not addressed to them, direct traffic to protected infrastructure endpoints, or send/receive inappropriate broadcast traffic. By default, traffic is blocked when a VM is created, and then the FC agent configures the packet filter to add rules and exceptions to allow authorized traffic. More detailed information about network traffic isolation and separation of tenant traffic is provided in Networking isolation section.
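The default-deny behavior described above can be sketched as follows: a newly created VM starts with an empty rule set (all traffic blocked), and the FC agent then programs explicit allow rules for authorized flows. The class and rule shapes are hypothetical simplifications of the real packet filter.

```python
# Default-deny packet filter for a guest VM: traffic is blocked at VM
# creation until the fabric agent adds explicit allow rules. Sketch only.

class PacketFilter:
    def __init__(self):
        self.allow_rules = []               # empty rule set == default deny

    def add_allow(self, dst_ip: str, dst_port: int) -> None:
        """Called by the FC agent to authorize a specific flow."""
        self.allow_rules.append((dst_ip, dst_port))

    def permits(self, dst_ip: str, dst_port: int) -> bool:
        return (dst_ip, dst_port) in self.allow_rules

vm_filter = PacketFilter()
print(vm_filter.permits("10.0.0.4", 443))  # False -- blocked at creation

vm_filter.add_allow("10.0.0.4", 443)       # FC agent authorizes this flow
print(vm_filter.permits("10.0.0.4", 443))  # True
print(vm_filter.permits("10.0.0.9", 80))   # False -- everything else stays denied
```

Starting from deny-all and adding exceptions, rather than the reverse, means a misconfigured or compromised VM has no implicit reachability to infrastructure endpoints or to other tenants' traffic.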

Management console and management plane

The Azure Management Console and Management Plane follow strict security architecture principles of least privilege to secure and isolate tenant processing:

  • Management Console (MC) – The MC in Azure Cloud consists of the Azure portal GUI and the Azure Resource Manager API layers. Both use user credentials to authenticate and authorize all operations.
  • Management Plane (MP) – This layer performs the actual management actions and consists of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor (which has its own Hypervisor Agent to service communication). These layers all use system contexts that are granted the least permissions needed to perform their operations.

The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs comprise a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes (separate FCs manage compute and storage clusters). If a customer updates their application’s configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.

CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling customers to create and manage virtual machine resources and extensions via simple templates.

Communications among various components (e.g., Azure Resource Manager to and from CRP, CRP to and from FC, FC to and from Hypervisor Agent) all operate on different communication channels with different identities and different permissions sets. This design follows common least-privilege models to ensure that a compromise of any single layer will prevent additional actions. Separate communications channels ensure that communications cannot bypass any layer in the chain. Figure 6 illustrates how the MC and MP securely communicate within the Azure Cloud for Hypervisor interaction initiated by a user’s OAuth 2.0 authentication to Azure Active Directory.

Management Console and Management Plane interaction for secure management flow Figure 6. Management Console and Management Plane interaction for secure management flow

All management commands are authenticated via RSA signed certificate or JSON Web Token (JWT). Authentication and command channels are encrypted via Transport Layer Security (TLS) 1.2 as described in Data encryption in transit section. Server certificates are used to provide TLS connectivity to the authentication providers where a separate authorization mechanism is used, e.g., Azure Active Directory or datacenter Security Token Service (dSTS). dSTS is a token provider like Azure Active Directory that is isolated to the Microsoft datacenter and utilized for service level communications.
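Enforcing a TLS 1.2 floor on a management channel can be sketched with Python's standard `ssl` module. This is an illustrative client-side configuration under stated assumptions, not Azure's actual implementation:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# mirroring the minimum protocol version required on management channels.
def make_tls12_context() -> ssl.SSLContext:
    context = ssl.create_default_context()  # verifies server certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

# Any socket wrapped with this context will fail the handshake against
# servers that only offer TLS 1.0/1.1.
ctx = make_tls12_context()
```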

Figure 6 illustrates the management flow corresponding to a user command to stop a virtual machine. The steps enumerated in Table 1 apply to other management commands in the same way and utilize the same encryption and authentication flow.

Table 1. Management flow involving various MC and MP components

Step Description Authentication Encryption
1. User authenticates via Azure Active Directory (Azure AD) by providing credentials and is issued a token. User Credentials TLS 1.2
2. Browser presents token to Azure portal to authenticate user. Azure portal verifies token using token signature and valid signing keys. JSON Web Token (Azure AD) TLS 1.2
3. User issues “stop VM” request on Azure portal. Azure portal sends “stop VM” request to Azure Resource Manager and presents user’s token that was provided by Azure AD. Azure Resource Manager verifies token using token signature and valid signing keys and that the user is authorized to perform the requested operation. JSON Web Token (Azure AD) TLS 1.2
4. Azure Resource Manager requests a token from dSTS server based on the client certificate that Azure Resource Manager has, enabling dSTS to grant a JSON Web Token with the correct identity and roles. Client Certificate TLS 1.2
5. Azure Resource Manager sends request to CRP. Call is authenticated via OAuth using a JSON Web Token representing the Azure Resource Manager system identity from dSTS, thus transitioning from the user context to the system context. JSON Web Token (dSTS) TLS 1.2
6. CRP validates the request and determines which fabric controller can complete the request. CRP requests a token from dSTS based on its client certificate so that it can connect to the specific Fabric Controller (FC) that is the target of the command. The token grants permissions only to that specific FC, provided CRP is allowed to communicate with that FC. Client Certificate TLS 1.2
7. CRP then sends the request to the correct FC with the JSON Web Token that was created by dSTS. JSON Web Token (dSTS) TLS 1.2
8. FC then validates the command is allowed and comes from a trusted source. Then it establishes a secure TLS connection to the correct Fabric Agent (FA) in the cluster that can execute the command by using a certificate that is unique to the target FA and the FC. Once the secure connection is established the command is transmitted. Mutual Certificate TLS 1.2
9. The FA again validates the command is allowed and comes from a trusted source. Once validated, the FA will establish a secure connection using mutual certificate authentication and issue the command to the Hypervisor Agent that is only accessible by the FA. Mutual Certificate TLS 1.2
10. Hypervisor Agent on the host executes an internal call to stop the VM. System Context N.A.

Commands generated through all steps of the process identified in this section and sent to the FC and FA on each node are written to a local audit log and distributed to multiple analytics systems for stream processing to monitor system health and track security events and patterns. Tracking includes events that were processed successfully as well as events that were invalid. Invalid requests are processed by the intrusion detection systems to detect anomalies.
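The token verification in steps 2–3 (check the JSON Web Token signature against known signing keys, then read the claims) can be sketched as follows. This is a minimal HS256 (shared-secret) illustration: Azure AD actually issues tokens signed with asymmetric RS256 keys, and the `kid`-based key lookup here is a simplified stand-in:

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(data: str) -> bytes:
    # Restore stripped base64url padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def sign_jwt_hs256(claims: dict, kid: str, key: bytes) -> str:
    """Mint an HS256 token (illustrative stand-in for a token issuer)."""
    def enc(obj):
        raw = json.dumps(obj, separators=(",", ":")).encode()
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
    header_b64, payload_b64 = enc({"alg": "HS256", "kid": kid}), enc(claims)
    sig = hmac.new(key, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    return f"{header_b64}.{payload_b64}." + base64.urlsafe_b64encode(sig).rstrip(b"=").decode()

def verify_jwt_hs256(token: str, signing_keys: dict) -> dict:
    """Return the claims if the signature matches a known key; raise otherwise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    key = signing_keys.get(header.get("kid"))
    if key is None:
        raise ValueError("unknown signing key")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))
```

Only after the signature check succeeds would a service go on to authorize the requested operation against the claims, as the Azure portal and Azure Resource Manager do in steps 2 and 3.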

Logical isolation implementation options

Azure provides isolation of compute processing through a multi-layered approach, including:

  • Hypervisor isolation for services that provide cryptographically certain isolation by using separate virtual machines and leveraging Azure Hypervisor isolation. Examples: App Service, Azure Container Instances, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Machine Learning, Cloud Services, Data Factory, Service Fabric, Virtual Machines, Virtual Machine Scale Sets.
  • Drawbridge isolation inside a VM for services that provide cryptographically certain isolation to workloads running on the same virtual machine by leveraging isolation provided by Drawbridge. These services provide small units of processing using customer code. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (library OS) inside a pico-process. A pico-process is a secured process with no direct access to services or resources of the Host system. Examples: Automation, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL Database, Azure Stream Analytics, Azure Synapse Analytics (formerly Azure SQL Data Warehouse).
  • User context-based isolation for services that consist solely of Microsoft-controlled code. Customer code is not allowed to run. Examples: API Management, Application Gateway, Azure Active Directory, Azure Backup, Azure Cache for Redis, Azure DNS, Azure Information Protection, Azure IoT Hub, Azure Key Vault, Azure portal, Azure Monitor (including Log Analytics), Azure Security Center, Azure Site Recovery, Container Registry, Content Delivery Network, Event Grid, Event Hubs, Load Balancer, Service Bus, Storage, Virtual Network, VPN Gateway, Traffic Manager.

These logical isolation options are discussed in the rest of this section.

Hypervisor isolation

Hypervisor isolation in Azure is based on Microsoft Hyper-V technology, which enables Azure Hypervisor-based isolation to benefit from decades of Microsoft experience in operating system security and investments in Hyper-V technology for virtual machine isolation. Customers can review independent third-party assessment reports about Hyper-V security functions, including the National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS) reports such as the report published in Aug-2019 that is discussed herein.

The Target of Evaluation (TOE) was composed of Windows 10 and Windows Server Standard and Datacenter Editions (version 1903, May 2019 update), including Windows Server 2016 and 2019 Hyper-V evaluation platforms (“Windows”). The TOE enforces the following security policies as described in the report:

  • Security Audit – Windows has the ability to collect audit data, review audit logs, protect audit logs from overflow, and restrict access to audit logs. Audit information generated by the system includes the date and time of the event, the user identity that caused the event to be generated, and other event-specific data. Authorized administrators can review, search, and sort audit records. Authorized administrators can also configure the audit system to include or exclude potentially auditable events to be audited based on a wide range of characteristics. In the context of this evaluation, the protection profile requirements cover generating audit events, selecting which events should be audited, and providing secure storage for audit event entries.
  • Cryptographic Support – Windows provides FIPS 140-2 Cryptographic Algorithm Validation Program (CAVP) validated cryptographic functions that support encryption/decryption, cryptographic signatures, cryptographic hashing, cryptographic key agreement (which is not studied in this evaluation), and random number generation. The TOE additionally provides support for public keys, credential management, and certificate validation functions and provides support for the National Security Agency’s Suite B cryptographic algorithms. Windows also provides extensive auditing support of cryptographic operations, the ability to replace cryptographic functions and random number generators with alternative implementations, and a key isolation service designed to limit the potential exposure of secret and private keys. In addition to using cryptography for its own security functions, Windows offers access to the cryptographic support functions for user-mode and kernel-mode programs. Public key certificates generated and used by Windows authenticate users and machines as well as protect both user and system data in transit.
  • User Data Protection – In the context of this evaluation Windows protects user data and provides virtual private networking capabilities.
  • Identification and Authentication – Each Windows user must be identified and authenticated based on administrator-defined policy prior to performing any TSF-mediated functions. Windows maintains databases of accounts including their identities, authentication information, group associations, and privilege and logon rights associations. Windows account policy functions include the ability to define the minimum password length, the number of failed logon attempts, the duration of lockout, and password age.
  • Protection of the TOE Security Functions (TSF) – Windows provides several features to ensure the protection of TOE security functions. Specifically, Windows:
    • Protects against unauthorized data disclosure and modification by using a suite of Internet standard protocols including IPsec, IKE, and ISAKMP.
    • Ensures process isolation security for all processes through private virtual address spaces, execution context, and security context.
    • Uses protected kernel-mode memory to store data structures defining process address space, execution context, memory protection, and security context.
    • Includes self-testing features that ensure the integrity of executable program images and its cryptographic functions.
    • Provides a trusted update mechanism to update its own Windows binaries.
  • Session Locking – In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
  • TOE Access – Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.
  • Trusted Path for Communications – Windows uses TLS, HTTPS, DTLS, and EAP-TLS to provide a trusted path for communications.
  • Security Management – Windows includes several functions to manage security policies. Policy management is controlled through a combination of access control, membership in administrator groups, and privileges.

More information is available from the third-party certification report.

The critical Hypervisor isolation is provided through:

  • Strongly defined security boundaries enforced by the Hypervisor
  • Defense-in-depth exploit mitigations
  • Strong security assurance processes

These technologies are described in the rest of this section.

Strongly defined security boundaries

Customer code executes in a Hypervisor VM and benefits from Hypervisor enforced security boundaries, as shown in Figure 7. Azure Hypervisor is based on Microsoft Hyper-V technology. It divides an Azure node into a variable number of Guest VMs that have separate address spaces where they can load an operating system (OS) and applications operating in parallel to the Host OS that executes in the Root partition of the node.

Compute isolation with Azure Hypervisor Figure 7. Compute isolation with Azure Hypervisor (see online glossary of terms)

The Azure Hypervisor acts like a micro-kernel, passing all hardware access requests from Guest VMs using a Virtualization Service Client (VSC) to the Host OS for processing by using a shared-memory interface called VMBus. The Host OS proxies the hardware requests using a Virtualization Service Provider (VSP) that prevents users from obtaining raw read/write/execute access to the system and mitigates the risk of sharing system resources. The privileged Root partition (also known as Host OS) has direct access to the physical devices/peripherals on the system (e.g., storage controllers, GPUs, networking adapters, etc.). The Host OS allows Guest partitions to share the use of these physical devices by exposing virtual devices to each Guest partition. Consequently, an operating system executing in a Guest partition has access to virtualized peripheral devices that are provided by VSPs executing in the Root partition. These virtual device representations can take one of three forms:

  • Emulated devices – The Host OS may expose a virtual device with an interface identical to what would be provided by a corresponding physical device. In this case, an operating system in a Guest partition would use the same device drivers as it does when running on a physical system. The Host OS would emulate the behavior of a physical device to the Guest partition.
  • Para-virtualized devices – The Host OS may expose virtual devices with a virtualization-specific interface using the VMBus shared memory interface between the Host OS and the Guest. In this model, the Guest partition uses device drivers specifically designed to implement a virtualized interface. These para-virtualized devices are sometimes referred to as “synthetic” devices.
  • Hardware-accelerated devices – The Host OS may expose actual hardware peripherals directly to the Guest partition. This model allows for high I/O performance in a Guest partition, as the Guest partition can directly access hardware device resources without going through the Host OS. Azure Accelerated Networking is an example of a hardware accelerated device. Isolation in this model is achieved using input-output memory management units (I/O MMUs) to provide address space and interrupt isolation between each partition.
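As a toy model of the para-virtualized path, the guest-side Virtualization Service Client (VSC) posts requests onto a shared channel (standing in for VMBus) and the host-side Virtualization Service Provider (VSP) validates and services them, so the guest never touches the device directly. All class and field names here are illustrative, not actual Hyper-V interfaces:

```python
from collections import deque

class VmBusChannel:
    """Shared request ring between a guest partition and the host (toy VMBus)."""
    def __init__(self):
        self.ring = deque()

class GuestVsc:
    """Guest-side client: can only enqueue requests, never touch hardware."""
    def __init__(self, channel: VmBusChannel):
        self.channel = channel

    def request_io(self, op: str, block: int) -> None:
        self.channel.ring.append({"op": op, "block": block})

class HostVsp:
    """Host-side provider: validates each request before touching the 'device'."""
    def __init__(self, channel: VmBusChannel, disk: dict):
        self.channel, self.disk = channel, disk

    def service(self) -> list:
        results = []
        while self.channel.ring:
            req = self.channel.ring.popleft()
            if req["op"] != "read" or req["block"] not in self.disk:
                results.append(None)          # reject anything not allowed
            else:
                results.append(self.disk[req["block"]])
        return results
```

The point of the split is visible in the shape of the code: the guest has no reference to the disk at all, only to the channel, so every access is mediated by host-side validation.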

Virtualization extensions in the Host CPU enable the Azure Hypervisor to enforce isolation between partitions. The following fundamental CPU capabilities provide the hardware building blocks for Hypervisor isolation:

  • Second-level address translation – the Hypervisor controls what memory resources a partition is allowed to access through the use of second-level page tables provided by the CPU’s memory management unit (MMU). The CPU’s MMU uses second-level address translation under Hypervisor control to enforce protection on memory accesses performed by:
    • CPU when running under the context of a partition.
    • I/O devices that are being accessed directly by Guest partitions.
  • CPU context – the Hypervisor leverages virtualization extensions in the CPU to restrict privileges and CPU context that can be accessed while a Guest partition is running. The Hypervisor also uses these facilities to save and restore state when sharing CPUs between multiple partitions to ensure isolation of CPU state between the partitions.
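The two-stage lookup that second-level address translation performs can be sketched as a pair of page-table lookups: the guest's own table maps guest-virtual to guest-physical addresses, and the hypervisor-controlled second-level table maps guest-physical to host-physical. This toy model ignores multi-level walks, permission bits, and TLBs:

```python
PAGE = 4096  # 4 KiB pages

def translate(gva: int, guest_pt: dict, second_level_pt: dict) -> int:
    """Translate a guest-virtual address to a host-physical address.

    guest_pt:        guest page number -> guest-physical page number (guest-controlled)
    second_level_pt: guest-physical page -> host-physical page (hypervisor-controlled)
    """
    page, offset = divmod(gva, PAGE)
    gpa_page = guest_pt.get(page)
    if gpa_page is None:
        raise MemoryError("guest page fault")
    hpa_page = second_level_pt.get(gpa_page)
    if hpa_page is None:
        # The guest named a physical page the hypervisor never mapped for it.
        raise MemoryError("second-level fault: page not assigned to this partition")
    return hpa_page * PAGE + offset
```

Because the second-level table is populated only by the hypervisor, a guest cannot reach any host page outside its partition no matter what it writes into its own page tables.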

The Azure Hypervisor makes extensive use of these processor facilities to provide isolation between partitions. The emergence of speculative side channel attacks has identified potential weaknesses in some of these processor isolation capabilities. In a multi-tenant architecture, any cross-VM attack across different tenants involves two steps: placing an adversary-controlled VM on the same Host as one of the victim VMs, and then breaching the logical isolation boundary to perform a side-channel attack. Azure provides protection from both threat vectors by using an advanced VM placement algorithm enforcing memory and process separation for logical isolation, as well as secure network traffic routing with cryptographic certainty at the Hypervisor. As discussed in the section titled Exploitation of vulnerabilities in virtualization technologies later in this article, the Azure Hypervisor has been architected to provide robust isolation within the hypervisor itself that helps mitigate a wide range of sophisticated side channel attacks.

The Azure Hypervisor defined security boundaries provide the base level isolation primitives for strong segmentation of code, data, and resource between potentially hostile multi-tenants on shared hardware. These isolation primitives are used to create multi-tenant resource isolation scenarios including:

  • Isolation of network traffic between potentially hostile guests – Virtual Networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design, as described later in the Separation of tenant network traffic section. A VNet forms an isolation boundary where the VMs within a VNet can communicate only with each other. Any traffic destined for a VM, whether from within the VNet or from external senders, is dropped by the Host and not delivered to the VM unless the proper policy is configured.
  • Isolation for encryption keys and cryptographic material – Customers can further augment the isolation capabilities with the use of hardware security managers or specialized key storage, e.g., storing encryption keys in FIPS 140-2 validated Hardware Security Modules via Azure Key Vault.
  • Scheduling of system resources – Azure design includes guaranteed availability and segmentation of compute, memory, storage, and both direct and para-virtualized device access.
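The host-side enforcement of the first scenario above, dropping any traffic that crosses a VNet boundary without an allowing policy, can be sketched as a simple filter. The packet shape and the peering-policy representation are invented for illustration:

```python
def deliver(packet: dict, vm_to_vnet: dict, peered: set) -> bool:
    """Decide whether the host delivers a packet to its destination VM.

    vm_to_vnet: VM name -> VNet name
    peered:     set of (vnet_a, vnet_b) pairs explicitly allowed to talk
    """
    src_vnet = vm_to_vnet.get(packet["src"])
    dst_vnet = vm_to_vnet.get(packet["dst"])
    if src_vnet is None or dst_vnet is None:
        return False                          # unknown endpoint: drop by default
    if src_vnet == dst_vnet:
        return True                           # same VNet: allowed by design
    # Cross-VNet traffic passes only with an explicit peering policy.
    return (src_vnet, dst_vnet) in peered or (dst_vnet, src_vnet) in peered
```

The default-drop posture is the key property: with no policy configured, nothing crosses the boundary.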

The Azure Hypervisor meets the security objectives shown in Table 2.

Table 2. Azure Hypervisor security objectives

Objective Source
Isolation The Azure Hypervisor security policy mandates no information transfer between VMs. This policy requires capabilities in the Virtual Machine Manager (VMM) and hardware for the isolation of memory, devices, networking, and managed resources such as persisted data.
VMM integrity Integrity is a core security objective for virtualization systems. To achieve system integrity, the integrity of each Hypervisor component is established and maintained. This objective concerns only the integrity of the Hypervisor itself, not the integrity of the physical platform or software running inside VMs.
Platform integrity The integrity of the Hypervisor depends on the integrity of the hardware and software on which it relies. Although the Hypervisor does not have direct control over the integrity of the platform, Azure relies on hardware and firmware mechanisms such as the Cerberus security microcontroller to protect the underlying platform integrity, thereby preventing the VMM and Guests from running should platform integrity be compromised.
Management access Management functions are exercised only by authorized administrators, connected over secure connections with a principle of least privilege enforced by fine grained role access control mechanism.
Audit Azure provides audit capability to capture and protect system data so that it can later be inspected.
Defense-in-depth exploit mitigations

To further mitigate the risk of a security compromise, Microsoft has invested in numerous defense-in-depth mitigations in Azure systems software, hardware, and firmware to provide strong real-world isolation guarantees to Azure customers. The goal of these mitigations is to make weaponized exploitation of a vulnerability as expensive as possible for an attacker, limiting their impact and maximizing the window for detection. All exploit mitigations are evaluated for effectiveness by a thorough security review of the Azure Hypervisor attack surface using methods that adversaries may employ.

Moreover, Azure has adopted an assume-breach security strategy implemented via Red Teaming. This approach relies on a dedicated team of security researchers and engineers who conduct continuous ongoing testing of Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the Azure infrastructure and platform engineering or operations teams. This approach tests security detection and response capabilities and helps identify production vulnerabilities in Azure Hypervisor and other systems, including configuration errors, invalid assumptions, or other security issues in a controlled manner. Microsoft invests heavily in these innovative security measures for continuous Azure threat mitigation. Table 3 outlines some of the mitigations intended to protect the Hypervisor isolation boundaries and hardware host integrity.

Table 3. Azure Hypervisor defense-in-depth

Mitigation Security Impact Mitigation Details
Control flow integrity Increases cost to perform control flow integrity attacks (e.g., return-oriented programming exploits) Control Flow Guard (CFG) ensures indirect control flow transfers are instrumented at compile time and enforced by the kernel (user-mode) or secure kernel (kernel-mode), mitigating stack return vulnerabilities.
User-mode code integrity Protects against malicious and unwanted binary execution in user mode Address Space Layout Randomization (ASLR) forced on all binaries in host partition, all code compiled with SDL security checks (e.g. strict_gs), arbitrary code generation restrictions in place on host processes prevent injection of runtime-generated code.
Hypervisor enforced user and kernel mode code integrity No code loaded into code pages marked for execution until authenticity of code is verified Virtualization-based Security (VBS) leverages memory isolation to create a secure world to enforce policy and store sensitive code and secrets. With Hypervisor enforced Code Integrity (HVCI), the secure world is used to prevent unsigned code from being injected into the normal world kernel.
Hardware root-of-trust with platform secure boot Ensures host only boots exact firmware and OS image required Windows secure boot validates that Azure Hypervisor infrastructure is only bootable in a known good configuration, aligned to Azure firmware, hardware, and kernel production versions.
Reduced attack surface VMM Protects against escalation of privileges in VMM user functions The Azure Hypervisor Virtual Machine Manager (VMM) contains both user and kernel mode components. User mode components are isolated with a sandbox to prevent break-out into kernel mode functions in addition to numerous layered mitigations.
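The HVCI idea from Table 3, that a code page becomes executable only after the isolated secure world vouches for it, can be sketched with a keyed tag standing in for a real signature check. The key name and tag scheme are illustrative only; actual HVCI verifies Authenticode signatures from the VBS secure kernel:

```python
import hashlib
import hmac

# Illustrative stand-in for key material held only inside the VBS secure world.
SECURE_WORLD_KEY = b"held-only-in-secure-world"

def sign_page(code: bytes) -> bytes:
    """'Secure world' side: produce an integrity tag for a code page."""
    return hmac.new(SECURE_WORLD_KEY, code, hashlib.sha256).digest()

def mark_executable(code: bytes, tag: bytes) -> bool:
    """'Normal world' side: a page may be marked executable only if the
    secure-world tag matches the page contents exactly."""
    return hmac.compare_digest(sign_page(code), tag)
```

Any byte injected into the page after signing changes the digest, so the page is never granted execute permission.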
Strong security assurance processes

The attack surface in Hyper-V is well understood. It has been the subject of ongoing research and thorough security reviews. Microsoft has been transparent about the Hyper-V attack surface and underlying security architecture as demonstrated during a public presentation at a Black Hat conference in 2018. Microsoft stands behind the robustness and quality of Hyper-V isolation with a $250,000 bug bounty program for critical Remote Code Execution (RCE), information disclosure, and Denial of Service (DoS) vulnerabilities reported in Hyper-V. By leveraging the same Hyper-V technology in Windows Server and the Azure cloud platform, the publicly available documentation and bug bounty program ensure that security improvements will accrue to all users of Microsoft products and services. Table 4 summarizes the key attack surface points from the Black Hat presentation.

Table 4. Hyper-V attack surface details

Attack surface area Privileges granted if compromised High-level components
Hyper-V Hypervisor: full system compromise with the ability to compromise other Guests - Hypercalls
- Intercept handling
Host partition kernel-mode components System in kernel mode: full system compromise with the ability to compromise other Guests - Virtual Infrastructure Driver (VID) intercept handling
- Kernel-mode client library
- Virtual Machine Bus (VMBus) channel messages
- Storage Virtualization Service Provider (VSP)
- Network VSP
- Virtual Hard Disk (VHD) parser
- Azure Networking Virtual Filtering Platform (VFP) and Virtual Network (VNet)
Host partition user-mode components Worker process in user mode: limited compromise with ability to attack Host and elevate privileges - Virtual devices (VDEVs)

To protect these attack surfaces, Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee. As described in the Security assurance processes and practices section later in this article, the approach includes purpose-built fuzzing, penetration testing, security development lifecycle, mandatory security training, security reviews, security intrusion detection based on Guest-Host threat indicators, and automated build alerting of changes to the attack surface area. This mature, multi-dimensional assurance process helps augment the isolation guarantees provided by the Azure Hypervisor by mitigating the risk of security vulnerabilities.

Drawbridge isolation

For services that provide small units of processing using customer code, requests from multiple tenants are executed within a single VM and isolated using Microsoft Drawbridge technology. To provide security isolation, Drawbridge runs a user process together with a light-weight version of the Windows kernel (Library OS) inside a pico-process. A pico-process is a lightweight, secure isolation container with minimal kernel API surface and no direct access to services or resources of the Host system. The only external calls the pico-process can make are to the Drawbridge Security Monitor through the Drawbridge Application Binary Interface (ABI), as shown in Figure 8.

Process isolation using Drawbridge Figure 8. Process isolation using Drawbridge

The Security Monitor is divided into a system device driver and a user-mode component. The ABI is the interface between the Library OS and the Host. The entire interface consists of a closed set of fewer than 50 stateless function calls:

  • Down calls from the pico-process to the Host OS support abstractions such as threads, virtual memory, and I/O streams.
  • Up calls into the pico-process perform initialization, return exception information, and run in a new thread.

The semantics of the interface are fixed and support the general abstractions that applications require from any operating system. This design enables the Library OS and the Host to evolve separately.

The ABI is implemented within two components:

  • The Platform Adaptation Layer (PAL) runs as part of the pico-process.
  • The host implementation runs as part of the Host.

Pico-processes are grouped into isolation units called sandboxes. The sandbox defines the applications, file system, and external resources available to the pico-processes. When a process running inside a pico-process creates a new child process, it runs with its own Library OS in a separate pico-process inside the same sandbox. Each sandbox communicates with the Security Monitor and cannot communicate with other sandboxes except via allowed I/O channels (sockets, named pipes, etc.), which must be explicitly enabled in configuration; channels are opt-in depending on service needs. The outcome is that code running inside a pico-process can only access its own resources and cannot directly attack the Host system or any colocated sandboxes. It can only affect objects inside its own sandbox.

When the pico-process needs system resources, it must call into the Drawbridge host to request them. The normal path is for a virtual user process to call the Library OS to request resources, and the Library OS then calls into the ABI. Unless the policy for resource allocation is set up in the driver itself, the Security Monitor handles the ABI request by checking policy to see whether the request is allowed and then servicing it. This mechanism is used for all system primitives, ensuring that the code running in the pico-process cannot abuse the resources of the Host machine.
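That flow, every downcall funneled through one small dispatch point and checked against sandbox policy before being serviced, can be sketched as follows. The ABI call names and the policy shape are invented for illustration; the real Drawbridge ABI is a different, fixed set of fewer than 50 calls:

```python
# Illustrative closed set of downcalls (the real ABI has <50, all stateless).
ABI = {"open_stream", "alloc_memory", "create_thread"}

def security_monitor(call: str, args: dict, policy) -> dict:
    """Single chokepoint for all pico-process downcalls.

    policy: callable(call, args) -> bool, the sandbox's configuration.
    """
    if call not in ABI:
        raise PermissionError(f"unknown ABI call: {call}")
    if not policy(call, args):
        raise PermissionError(f"denied by sandbox policy: {call}")
    return {"serviced": call}   # stand-in for actually performing the work
```

Because the set of calls is closed and every one passes the same policy check, nothing the untrusted code does inside the pico-process can reach a Host resource that the sandbox configuration has not granted.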

In addition to being isolated inside sandboxes, pico-processes are also substantially isolated from each other. Each pico-process resides in its own virtual memory address space and runs its own copy of the Library OS with its own user-mode kernel. Each time a user process is launched in a Drawbridge sandbox, a fresh Library OS instance is booted. While this task is more time-consuming compared to launching a non-isolated process on Windows, it is substantially faster than booting a VM while accomplishing logical isolation.

A normal Windows process can call more than 1200 functions that result in access to the Windows kernel; however, the entire interface for a pico-process consists of fewer than 50 calls down to the Host. Most application requests for operating system services are handled by the Library OS within the address space of the pico-process. By providing a significantly smaller interface to the kernel, Drawbridge creates a more secure and isolated operating environment in which applications are much less vulnerable to changes in the Host system and incompatibilities introduced by new OS releases. More importantly, a Drawbridge pico-process is a strongly isolated container within which untrusted code from even the most malicious sources can be run without risk of compromising the Host system. The Host assumes that no code running within the pico-process can be trusted. The Host validates all requests from the pico-process with security checks.

Like a virtual machine interface, the pico-process interface is much easier to secure than a traditional OS interface because it is significantly smaller, stateless, and has fixed, easily described semantics. Another benefit of the small ABI/driver syscall interface is the ability to audit and fuzz the driver code with little effort. For example, syscall fuzzers can fuzz the ABI with high coverage in a relatively short amount of time.

User context-based isolation

In cases where an Azure service consists solely of Microsoft-controlled code and customer code is not allowed to run, the isolation is provided by a user context. These services accept only user configuration inputs and data for processing; arbitrary code is not allowed. For these services, a user context is provided to establish the data that can be accessed and what Role-Based Access Control (RBAC) operations are allowed. This context is established by Azure Active Directory (Azure AD) as described earlier in the Identity-based isolation section. Once the user has been identified and authorized, the Azure service creates an application user context that is attached to the request as it moves through execution, providing assurance that user operations are separated and properly isolated.
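A minimal sketch of that pattern: a user context travels with each request, and the service checks it against RBAC before every operation. The role names follow Azure's built-in Reader/Contributor convention, but the code itself is illustrative, not a real Azure SDK interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    """Immutable identity attached to a request after Azure AD authentication."""
    user_id: str
    roles: frozenset

# Illustrative role-to-operation mapping in the spirit of Azure built-in roles.
RBAC = {"Reader": {"read"}, "Contributor": {"read", "write"}}

def authorize(ctx: UserContext, operation: str) -> bool:
    return any(operation in RBAC.get(role, set()) for role in ctx.roles)

def handle_request(ctx: UserContext, operation: str, resource: str) -> str:
    # The context, not the caller's code, decides what is reachable.
    if not authorize(ctx, operation):
        raise PermissionError(f"{ctx.user_id} may not {operation} {resource}")
    return f"{operation} on {resource} as {ctx.user_id}"
```

Because the context is created once at authentication and carried through execution, every layer of the service applies the same per-user scoping without re-deriving identity.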

Physical isolation

In addition to robust logical compute isolation available by design to all Azure tenants, customers who desire physical compute isolation can utilize Azure Dedicated Host or Isolated Virtual Machines, which are both dedicated to a single customer.

Azure Dedicated Host

Azure Dedicated Host provides physical servers that can host one or more Azure VMs and are dedicated to one Azure subscription. Customers can provision dedicated hosts within a region, availability zone, and fault domain. They can then place Windows, Linux, and SQL Server on Azure VMs directly into provisioned hosts using whatever configuration best meets their needs. Dedicated Host provides hardware isolation at the physical server level, enabling customers to place their Azure VMs on an isolated and dedicated physical server that runs only their organization’s workloads to meet corporate compliance requirements.

Note

Customers can deploy a dedicated host using the Azure portal, Azure PowerShell, and Azure Command-Line Interface (CLI).

Customers can deploy both Windows and Linux virtual machines into dedicated hosts by selecting the server and CPU type, number of cores, and additional features. Dedicated Host enables control over platform maintenance events by allowing customers to opt in to a maintenance window to reduce potential impact to their provisioned services. Most maintenance events have little to no impact on customer VMs; however, customers in highly regulated industries or with sensitive workloads may want to have control over any potential maintenance impact.

Note

Microsoft provides detailed customer guidance on Windows and Linux Azure Virtual Machine provisioning using the Azure portal, Azure PowerShell, and Azure CLI.

Table 5 summarizes available security guidance for customer virtual machines provisioned in Azure.

Table 5. Security guidance for Azure virtual machines

VM      | Security guidance
--------|------------------------------------------------------------------------------------------
Windows | Secure policies, Azure Disk Encryption, Built-in security controls, Security recommendations
Linux   | Secure policies, Azure Disk Encryption, Built-in security controls, Security recommendations

Isolated Virtual Machines

Azure Compute offers Virtual Machine sizes that are isolated to a specific hardware type and dedicated to a single customer. These Virtual Machine instances allow customer workloads to be deployed on dedicated physical servers. Utilizing Isolated VMs essentially guarantees that a customer VM will be the only one running on that specific server node. Customers can also choose to further subdivide the resources on these Isolated VMs by using Azure support for nested Virtual Machines.

Networking isolation

The logical isolation of customer infrastructure in a public cloud is fundamental to maintaining security. The overarching principle for a virtualized solution is to allow only connections and communications that are necessary for that virtualized solution to operate, blocking all other ports and connections by default. Azure Virtual Network (VNet) helps ensure that each customer’s private network traffic is logically isolated from traffic belonging to other customers. Virtual Machines (VMs) in one VNet cannot communicate directly with VMs in a different VNet even if both VNets are created by the same customer. Networking isolation ensures that communication between customer VMs remains private within a VNet. Customers can connect their VNets via VNet peering or VPN gateways, depending on their connectivity requirements, including bandwidth, latency, and encryption.

This section describes how Azure provides isolation of network traffic among tenants and enforces that isolation with cryptographic certainty.

Separation of tenant network traffic

Virtual networks (VNets) provide isolation of network traffic between tenants as part of their fundamental design. A customer subscription can contain multiple logically isolated private networks and include firewall, load balancing, and network address translation functionality. Each VNet is isolated from other VNets by default. Multiple deployments inside the same subscription can be placed on the same VNet, and then communicate with each other through private IP addresses.

Network access to VMs is limited by packet filtering at the network edge, at load balancers, and at the Host OS level. Customers can additionally configure their host firewalls to further limit connectivity, specifying for each listening port whether connections are accepted from the Internet or only from role instances within the same cloud service or VNet.

Azure provides network isolation for each deployment and enforces the following rules:

  • Traffic between VMs always traverses through trusted packet filters.
    • Protocols such as Address Resolution Protocol (ARP), Dynamic Host Configuration Protocol (DHCP), and other OSI Layer-2 traffic from a VM are controlled using rate-limiting and anti-spoofing protection.
    • VMs cannot capture any traffic on the network that is not intended for them.
  • Customer VMs cannot send traffic to Azure private interfaces and infrastructure services, or to other customers’ VMs. Customer VMs can only communicate with other VMs owned or controlled by the same customer and with Azure infrastructure service endpoints meant for public communications.
  • When customers put VMs on a VNet, those VMs get their own address spaces that are invisible, and hence, not reachable from VMs outside of a deployment or virtual network (unless configured to be visible via public IP addresses). Customer environments are open only through the ports that customers specify for public access; if the VM is defined to have a public IP address, then all ports are open for public access.

Packet flow and network path protection

Azure’s hyperscale network is designed to provide uniform high capacity between servers, performance isolation between services (including customers), and Ethernet Layer-2 semantics. Azure uses a number of networking implementations to achieve these goals: flat addressing to allow service instances to be placed anywhere in the network; load balancing to spread traffic uniformly across network paths; and end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane.

These implementations give each service the illusion that all the servers assigned to it, and only those servers, are connected by a single non-interfering Ethernet switch – a Virtual Layer 2 (VL2) – and maintain this illusion even as the size of each service varies from one server to hundreds of thousands. This VL2 implementation achieves traffic performance isolation, ensuring that the traffic of one service cannot be affected by the traffic of any other service, as if each service were connected by a separate physical switch.

This section explains how packets flow through the Azure network, and how the topology, routing design, and directory system combine to virtualize the underlying network fabric - creating the illusion that servers are connected to a large, non-interfering datacenter-wide Layer-2 switch.

The Azure network uses two different IP-address families:

  • Customer address (CA) is the customer defined/chosen VNet IP address, also referred to as Virtual IP (VIP). The network infrastructure operates using CAs, which are externally routable. All switches and interfaces are assigned CAs, and switches run an IP-based (Layer-3) link-state routing protocol that disseminates only these CAs. This design allows switches to obtain the complete switch-level topology, as well as forward packets encapsulated with CAs along shortest paths.
  • Provider address (PA) is the Azure assigned internal fabric address that is not visible to users and is also referred to as Dynamic IP (DIP). No traffic goes directly from the Internet to a server; all traffic from the Internet must go through a Software Load Balancer (SLB) and be encapsulated to protect the internal Azure address space by only routing packets to valid Azure internal IP addresses and ports. Network Address Translation (NAT) separates internal network traffic from external traffic. Internal traffic uses RFC 1918 address space or private address space – the provider addresses (PAs) – that is not externally routable. The translation is performed at the SLBs. Customer addresses (CAs) that are externally routable are translated into internal provider addresses (PAs) that are only routable within Azure. These addresses remain unaltered no matter how their servers’ locations change due to virtual-machine migration or reprovisioning.

Each PA is associated with a CA, which is the identifier of the Top of Rack (ToR) switch to which the server is connected. VL2 uses a scalable, reliable directory system to store and maintain the mapping of PAs to CAs, and this mapping is created when servers are provisioned to a service and assigned PA addresses. An agent running in the network stack on every server, called the VL2 agent, invokes the directory system’s resolution service to learn the actual location of the destination and then tunnels the original packet there.

Azure assigns servers IP addresses that act as names alone, with no topological significance. Azure’s VL2 addressing scheme separates these server names (PAs) from their locations (CAs). The crux of offering Layer-2 semantics is having servers believe they share a single large IP subnet – i.e., the entire PA space – with other servers in the same service, while eliminating the Address Resolution Protocol (ARP) and Dynamic Host Configuration Protocol (DHCP) scaling bottlenecks that plague large Ethernet deployments.

Figure 9 depicts a sample packet flow where sender S sends packets to destination D via a randomly chosen intermediate switch using IP-in-IP encapsulation. PAs are from 20/8, and CAs are from 10/8. H(ft) denotes a hash of the 5-tuple, which consists of source IP, source port, destination IP, destination port, and protocol type. The ToR translates the PA to the CA and sends the packet to the Intermediate switch, which forwards it to the destination CA ToR switch, which in turn translates it to the destination PA.
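To illustrate how hashing the 5-tuple spreads flows uniformly across intermediate switches while keeping each flow on one path, the following Python sketch uses made-up addresses and a stand-in hash function; the actual hash and switch-selection logic are internal to the Azure fabric:

```python
import hashlib

def five_tuple_hash(src_ip, src_port, dst_ip, dst_port, protocol):
    """Stand-in for H(ft) in Figure 9: a hash over the flow's 5-tuple."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def pick_intermediate_switch(flow_hash, intermediate_switches):
    """Every packet of a flow maps to the same switch; distinct flows spread out."""
    return intermediate_switches[flow_hash % len(intermediate_switches)]

# Hypothetical CA addresses for three intermediate switches (10/8 space).
switches = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
h = five_tuple_hash("20.0.5.7", 49152, "20.1.9.3", 443, "TCP")
# The same flow always hashes to the same intermediate switch:
assert pick_intermediate_switch(h, switches) == pick_intermediate_switch(h, switches)
```

This mirrors the general idea of flow-consistent load spreading (as in ECMP-style hashing), not Azure's exact algorithm.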

Figure 9. Sample packet flow

A server cannot send packets to a PA if the directory service refuses to provide it with a CA through which it can route its packets, which means that the directory service enforces access control policies. Further, since the directory system knows which server is making the request when handling a lookup, it can enforce fine-grained isolation policies. For example, it could enforce the policy that only servers belonging to the same service can communicate with each other.

Traffic flow patterns

To route traffic between servers, which use PA addresses, on an underlying network that knows routes for CA addresses, the VL2 agent on each server captures packets from the host, and encapsulates them with the CA address of the ToR switch of the destination. Once the packet arrives at the CA (i.e., the destination ToR switch), the destination ToR switch decapsulates the packet and delivers it to the destination PA carried in the inner header. The packet is first delivered to one of the Intermediate switches, decapsulated by the switch, delivered to the ToR’s CA, decapsulated again, and finally sent to the destination. This approach is depicted in Figure 10 using two possible traffic patterns: 1) external traffic (orange line) traversing over ExpressRoute or the Internet to a VNet, and 2) internal traffic (blue line) between two VNets. Both traffic flows follow a similar pattern to isolate and protect network traffic.
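The lookup-then-tunnel flow just described can be sketched as follows. The directory contents, addresses, and service names are hypothetical; the refusal path illustrates how the directory lookup doubles as the access-control check noted earlier:

```python
# Hypothetical directory: maps a server's PA -> (its ToR switch's CA, owning service).
DIRECTORY = {
    "20.0.5.7": ("10.0.1.1", "serviceA"),
    "20.1.9.3": ("10.0.2.1", "serviceA"),
    "20.2.4.8": ("10.0.3.1", "serviceB"),
}

def resolve(requester_pa, dest_pa):
    """Directory lookup that also enforces policy: no CA is returned
    unless the requester and destination belong to the same service."""
    _, req_service = DIRECTORY[requester_pa]
    dest_ca, dest_service = DIRECTORY[dest_pa]
    if req_service != dest_service:
        return None  # refusal: without a CA, the sender cannot route the packet
    return dest_ca

def send(src_pa, dst_pa, payload):
    """What a VL2-agent-like shim might do: resolve, then encapsulate."""
    dest_ca = resolve(src_pa, dst_pa)
    if dest_ca is None:
        raise PermissionError("directory refused lookup; packet cannot be tunneled")
    # IP-in-IP style: outer header carries the ToR's CA, inner header keeps the PAs.
    return {"outer_dst": dest_ca,
            "inner": {"src": src_pa, "dst": dst_pa, "data": payload}}
```

For example, `send("20.0.5.7", "20.1.9.3", b"x")` succeeds within serviceA, while a cross-service send raises `PermissionError`.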

Figure 10. Separation of tenant network traffic using VNets

External traffic (orange line) – For external traffic, Azure provides multiple layers of assurance to enforce isolation depending on traffic patterns. When a customer places a public IP on their VNet gateway, traffic from the public Internet or customer on-premises network that is destined for that IP address will be routed to an Internet Edge Router. Alternatively, when a customer establishes private peering over an ExpressRoute connection, it is connected with an Azure VNet via VNet Gateway. This setup extends connectivity from the physical circuit and makes the on-premises private IP address space addressable. Azure then uses Border Gateway Protocol (BGP) to share routing details with the on-premises network to establish end-to-end connectivity. When communication begins with a resource within the VNet, the network traffic traverses as normal until it reaches a Microsoft ExpressRoute Edge (MSEE) Router. In both cases, VNets provide the means for Azure VMs to act as part of customer’s on-premises network. A cryptographically protected IPsec/IKE tunnel is established between Azure and customer’s internal network (e.g., via Azure VPN Gateway or Azure ExpressRoute Private Peering), enabling the VM to connect securely to customer’s on-premises resources as though it were directly on that network.

At the Internet Edge Router or the MSEE Router, the packet is encapsulated using Generic Routing Encapsulation (GRE). This encapsulation uses a unique identifier specific to the VNet destination and the destination address, which is used to appropriately route the traffic to the identified VNet. Upon reaching the VNet Gateway, which is a special VNet used only to accept traffic from outside of an Azure VNet, the encapsulation is verified by the Azure network fabric to ensure: a) the endpoint receiving the packet is a match to the unique VNet ID used to route the data, and b) the destination address requested exists in this VNet. Once verified, the packet is routed as internal traffic from the VNet Gateway to the final requested destination address within the VNet. This approach ensures that traffic from external networks travels only to Azure VNet for which it is destined, enforcing isolation.

Internal traffic (blue line) – Internal traffic also uses GRE encapsulation/tunneling. When two resources in an Azure VNet attempt to establish communications between each other, the Azure network fabric reaches out to the Azure VNet routing directory service that is part of the Azure network fabric. The directory services use the customer address (CA) and the requested destination address to determine the provider address (PA). This information, including the VNet identifier, CA, and PA, is then used to encapsulate the traffic with GRE. The Azure network uses this information to properly route the encapsulated data to the appropriate Azure host using the PA. The encapsulation is reviewed by the Azure network fabric to confirm: (1) the PA is a match, (2) the CA is located at this PA, and (3) the VNet identifier is a match. Once all three are verified, the encapsulation is removed and routed to the CA as normal traffic (e.g., to a VM endpoint). This approach provides VNet isolation assurance based on correct traffic routing between cloud services.

Azure VNets implement several mechanisms to ensure secure traffic between tenants. These mechanisms align to existing industry standards and security practices, and prevent well-known attack vectors including:

  • Prevent IP address spoofing – Whenever encapsulated traffic is transmitted by a VNet, the service reverifies the information on the receiving end of the transmission. The traffic is looked up and encapsulated independently at the start of the transmission, as well as reverified at the receiving endpoint to ensure the transmission was performed appropriately. This verification is done with an internal VNet feature called SpoofGuard, which verifies that the source and destination are valid and allowed to communicate, thereby preventing mismatches in expected encapsulation patterns that might otherwise permit spoofing. The GRE encapsulation processes prevent spoofing as any GRE encapsulation and encryption not done by the Azure network fabric is treated as dropped traffic.
  • Provide network segmentation across customers with overlapping network spaces – Azure VNet’s implementation relies on established tunneling standards such as GRE, which in turn allows the use of customer-specific unique identifiers (VNet IDs) throughout the cloud. The VNet identifiers are used as scoping identifiers. This approach ensures that a customer always operates within their own unique address space, even when it overlaps with the address spaces of other tenants or of the Azure network fabric. Anything that has not been encapsulated with a valid VNet ID is blocked within the Azure network fabric. In the example described above, any encapsulated traffic not performed by the Azure network fabric is discarded.
  • Prevent traffic from crossing between VNets – Preventing traffic from crossing between VNets is done through the same mechanisms that handle address overlap and prevent spoofing. Traffic crossing between VNets is rendered infeasible by using unique VNet IDs established per tenant in combination with verification of all traffic at the source and destination. Users do not have access to the underlying transmission mechanisms that rely on these IDs to perform the encapsulation. Consequently, any attempt to encapsulate and simulate these mechanisms would lead to dropped traffic.
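A simplified sketch of the three-part encapsulation check described above (field names, addresses, and VNet identifiers are illustrative, not the fabric's actual wire format):

```python
def verify_encapsulation(packet, host):
    """The three fabric checks before decapsulation (simplified):
    (1) the PA matches this host, (2) the CA is located at this PA,
    (3) the VNet identifier matches the one registered for that CA."""
    return (packet["pa"] == host["pa"]
            and packet["ca"] in host["vnet_of_ca"]
            and packet["vnet_id"] == host["vnet_of_ca"][packet["ca"]])

# Hypothetical host state: one customer address registered to one VNet.
host = {"pa": "100.64.1.5", "vnet_of_ca": {"10.0.0.4": "vnet-42"}}
good = {"pa": "100.64.1.5", "ca": "10.0.0.4", "vnet_id": "vnet-42"}
spoofed = {"pa": "100.64.1.5", "ca": "10.0.0.4", "vnet_id": "vnet-99"}
```

Here `verify_encapsulation(good, host)` passes, while the `spoofed` packet, carrying a VNet ID that does not match the registered mapping, is rejected, which is the behavior that renders cross-VNet traffic infeasible.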

In addition to these key protections, all unexpected traffic originating from the Internet is dropped by default. Any packet entering the Azure network first encounters an Edge router. Edge routers intentionally allow all inbound traffic into the Azure network except spoofed traffic. This basic traffic filtering protects the Azure network from known malicious traffic. Azure also implements DDoS protection at the network layer, collecting logs to throttle or block traffic based on real-time and historical data analysis, and mitigates attacks on demand.

Moreover, the Azure network fabric blocks traffic from any IPs originating in the Azure network fabric space that are spoofed. The Azure network fabric uses GRE and Virtual Extensible LAN (VXLAN) to validate that all allowed traffic is Azure-controlled traffic and all non-Azure GRE traffic is blocked. By using GRE tunnels and VXLAN to segment traffic using customer unique keys, Azure meets RFC 3809 and RFC 4110. When using Azure VPN Gateway in combination with ExpressRoute, Azure meets RFC 4111 and RFC 4364. With a comprehensive approach for isolation encompassing external and internal network traffic, Azure VNets provide customers with assurance that Azure successfully routes traffic between VNets, allows proper network segmentation for tenants with overlapping address spaces, and prevents IP address spoofing.

Customers can also use Azure services to further isolate and protect their resources. Using Network Security Groups (NSGs), a feature of Azure Virtual Network, customers can filter traffic by source and destination IP address, port, and protocol via multiple inbound and outbound security rules – essentially a distributed virtual firewall and IP-based network Access Control List (ACL). Customers can apply an NSG to each NIC in a Virtual Machine, to the subnet that a NIC or another Azure resource is connected to, and directly to Virtual Machine Scale Sets, allowing finer control over the customer infrastructure.
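NSG security rules are evaluated in priority order, the first matching rule wins, and unmatched traffic falls through to an implicit deny. The following sketch, with hypothetical rules, illustrates that evaluation model (it is a conceptual model, not the NSG implementation):

```python
import ipaddress

# Hypothetical inbound rules: lower priority number is evaluated first.
rules = [
    {"priority": 100, "src": "10.0.0.0/24", "port": 443, "action": "Allow"},
    {"priority": 200, "src": "0.0.0.0/0",   "port": 443, "action": "Deny"},
]

def evaluate(src_ip, dst_port):
    """First-match-wins evaluation in priority order, then implicit deny."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["port"]):
            return rule["action"]
    return "Deny"  # traffic matching no rule is denied by default
```

With these rules, HTTPS from the 10.0.0.0/24 subnet is allowed, HTTPS from anywhere else is denied, and any other port is denied by the implicit default.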

At the infrastructure layer, Azure implements a Hypervisor firewall to protect all tenants running on top of the Hypervisor within virtual machines from unauthorized access. This Hypervisor firewall is distributed as part of the NSG rules deployed to the Host, implemented in the Hypervisor, and configured by the Fabric Controller agent, as shown in Figure 4. The Host OS instances utilize the built-in Windows Firewall to implement fine-grained ACLs at a greater granularity than router ACLs and are maintained by the same software that provisions tenants, so they are never out of date. They are applied using the Machine Configuration File (MCF) to Windows Firewall.

At the top of the operating system stack is the Guest OS, which customers utilize as their operating system. By default, this layer does not allow any inbound communication to a cloud service or virtual network, essentially making it part of a private network. For PaaS Web and Worker roles, remote access is not permitted by default. It is possible for customers to enable Remote Desktop Protocol (RDP) access as an explicit option. For IaaS VMs created using the Azure portal, RDP and remote PowerShell ports are opened by default; however, port numbers are assigned randomly. For IaaS VMs created via PowerShell, RDP and remote PowerShell ports must be opened explicitly. If the administrator chooses to keep the RDP and remote PowerShell ports open to the Internet, the account allowed to create RDP and PowerShell connections should be secured with a strong password. Even if ports are open, customers can define ACLs on the public IPs for additional protection if desired.

Service tags

Customers can use Virtual network service tags to achieve network isolation and protect their Azure resources from the Internet while accessing Azure services that have public endpoints. With service tags, customers can define network access controls on network security groups or Azure Firewall. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, thereby reducing the complexity of frequent updates to network security rules.
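Conceptually, a rule that references a service tag behaves like a rule over the tag's current set of address prefixes, which Microsoft keeps up to date. The sketch below uses a made-up tag name and prefixes to illustrate the match logic:

```python
import ipaddress

# Hypothetical snapshot of prefixes behind a service tag; in Azure, Microsoft
# maintains these prefix lists and updates them automatically as addresses change.
SERVICE_TAGS = {
    "Storage.ExampleRegion": ["13.83.0.0/16", "52.180.0.0/17"],
}

def allowed_by_tag(dest_ip, tag):
    """True if the destination IP falls within any prefix of the tag."""
    return any(ipaddress.ip_address(dest_ip) in ipaddress.ip_network(prefix)
               for prefix in SERVICE_TAGS[tag])
```

An outbound rule "allow to Storage.ExampleRegion" would then admit `13.83.1.4` but not an arbitrary Internet address, without the customer ever editing prefix lists by hand.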

Note

Customers can create inbound/outbound network security group rules to deny traffic to/from the Internet and allow traffic to/from Azure. Service tags are available for a wide range of Azure services for use in network security group rules.


Customers can use Azure Private Link to access Azure PaaS services and Azure-hosted customer/partner services over a private endpoint in their VNet, ensuring that traffic between their VNet and the service travels across the Microsoft global backbone network. This approach eliminates the need to expose the service to the public Internet. Customers can also create their own private link service in their own VNet and deliver it to their customers.

Azure private endpoint is a network interface that connects customers privately and securely to a service powered by Azure Private Link. Private endpoint uses a private IP address from customer’s VNet, effectively bringing the service into customer’s VNet.

From the networking isolation standpoint, key benefits of Azure Private Link include:

  • Customers can connect their VNet to services in Azure without a public IP address at the source or destination. Azure Private Link handles the connectivity between the service and its consumers over the Microsoft global backbone network.
  • Customers can access services running in Azure from on-premises over ExpressRoute private peering, VPN tunnels, and peered virtual networks using private endpoints. Azure Private Link eliminates the need to set up public peering or traverse the Internet to reach the service.
  • Customers can connect privately to services running in other Azure regions.

Note

Customers can use the Azure portal to manage private endpoint connections on Azure PaaS resources. For customer/partner owned Private Link services, Azure PowerShell and Azure CLI are the preferred methods for managing private endpoint connections.


Data encryption in transit

Azure provides many options for encrypting data in transit. Data encryption in transit isolates customer network traffic from other traffic and helps protect data from interception. Data in transit applies to scenarios involving data traveling between:

  • Customer’s end users and Azure service
  • Customer’s on-premises datacenter and Azure region
  • Microsoft datacenters as part of expected Azure service operation

Customer’s end-user connections to Azure services

Transport Layer Security (TLS): Azure uses the TLS protocol to help protect data when it is traveling between customers and Azure services. Most customer end users will connect to Azure over the Internet, and the precise routing of network traffic will depend on the many network providers that contribute to Internet infrastructure. As stated in the Online Services Data Protection Addendum (DPA), Microsoft does not control or limit the regions from which customer or customer’s end users may access or move customer data.

Important

Customers can increase security by enabling encryption in transit. For example, customers can use Azure Application Gateway to configure end-to-end encryption of network traffic and rely on Azure Key Vault integration for TLS termination.

Across Azure services, traffic to and from the service is protected by TLS 1.2 leveraging RSA-2048 for key exchange and AES-256 for data encryption. The corresponding crypto modules are FIPS 140-2 validated as part of the Microsoft Windows FIPS validation program.

TLS provides strong authentication, message privacy, and integrity. Perfect Forward Secrecy (PFS) protects connections between customer’s client systems and Microsoft cloud services by generating a unique session key for every session a customer initiates. PFS protects past sessions against potential future key compromises. This combination makes it more difficult to intercept and access data in transit.
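On the client side, enforcing a TLS 1.2 floor is straightforward. The following Python snippet uses the standard library's ssl module to build a context that refuses anything older than TLS 1.2 while keeping certificate and hostname verification on (the cipher suite and forward-secret key exchange are then negotiated with the service endpoint):

```python
import ssl

# Client context: certificate verification on, hostname checking on,
# and a minimum protocol version of TLS 1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables strict verification:
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

Such a context can then be passed to, for example, `http.client.HTTPSConnection(host, context=ctx)` so every connection the client opens meets the TLS 1.2 floor.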

In-transit encryption for VMs: Remote sessions to Windows and Linux VMs deployed in Azure can be conducted over protocols that ensure data encryption in transit. For example, the Remote Desktop Protocol (RDP) initiated from a client computer to Windows and Linux VMs enables TLS protection for data in transit. Customers can also use Secure Shell (SSH) to connect to Linux VMs running in Azure. SSH is an encrypted connection protocol available by default for remote management of Linux VMs hosted in Azure.

Important

Customers should review best practices for network security, including guidance for disabling RDP/SSH access to Virtual Machines from the Internet to mitigate brute force attacks to gain access to Azure Virtual Machines. Accessing VMs for remote management can then be accomplished via point-to-site VPN, site-to-site VPN, or ExpressRoute.

Azure Storage transactions: When interacting with Azure Storage through the Azure portal, all transactions take place over HTTPS. Moreover, customers can configure their storage accounts to accept requests only from secure connections by setting the “secure transfer required” property for the storage account. The “secure transfer required” option is enabled by default when creating a Storage account in the Azure portal.

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol. By default, all Azure storage accounts have encryption in transit enabled. Consequently, when mounting a share over SMB or accessing it through the Azure portal (or PowerShell, CLI, and Azure SDKs), Azure Files will only allow the connection if it is made with SMB 3.0+ with encryption or over HTTPS.

Customer’s datacenter connection to Azure region

VPN encryption: Virtual Network (VNet) provides a means for Azure Virtual Machines (VMs) to act as part of a customer’s internal (on-premises) network. With VNet, customers choose the address ranges of non-globally-routable IP addresses to be assigned to the VMs so that they will not collide with addresses the customer is using elsewhere. Customers have options to securely connect to a VNet from their on-premises infrastructure or remote locations.

  • Site-to-Site (IPsec/IKE VPN tunnel) – A cryptographically protected “tunnel” is established between Azure and the customer’s internal network, allowing an Azure VM to connect to the customer’s back-end resources as though it were directly on that network. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. Customers can use Azure VPN Gateway to send encrypted traffic between their VNet and their on-premises infrastructure across the public Internet, e.g., a site-to-site VPN relies on IPsec for transport encryption. Azure VPN Gateway supports a wide range of encryption algorithms that are FIPS 140-2 validated. Moreover, customers can configure Azure VPN Gateway to use custom IPsec/IKE policy with specific cryptographic algorithms and key strengths instead of relying on the default Azure policies. IPsec encrypts data at the IP level (Network Layer 3).
  • Point-to-Site (VPN over SSTP, OpenVPN, and IPsec) – A secure connection is established from an individual client computer to customer’s VNet using Secure Socket Tunneling Protocol (SSTP), OpenVPN, or IPsec. As part of the Point-to-Site VPN configuration, customers need to install a certificate and a VPN client configuration package, which allow the client computer to connect to any VM within the VNet. Point-to-Site VPN connections do not require a VPN device or a public facing IP address.

In addition to controlling the type of algorithm that is supported for VPN connections, Azure provides customers with the ability to enforce that all traffic leaving a VNet may only be routed through a VNet Gateway (e.g., Azure VPN Gateway). This enforcement allows customers to ensure that traffic may not leave a VNet without being encrypted. A VPN Gateway can be used for VNet-to-VNet connections while also providing a secure tunnel with IPsec/IKE. Azure VPN uses Pre-Shared Key (PSK) authentication whereby Microsoft generates a PSK when the VPN tunnel is created. Customers can change the autogenerated PSK to their own.

ExpressRoute encryption: ExpressRoute allows customers to create private connections between Microsoft datacenters and their on-premises infrastructure or colocation facility. ExpressRoute connections do not go over the public Internet and offer lower latency and higher reliability than IPsec protected VPN connections. ExpressRoute locations are the entry points to Microsoft’s global network backbone and they may or may not match the location of Azure regions. Once the network traffic enters the Microsoft backbone, it is guaranteed to traverse that private networking infrastructure instead of the public Internet. Customers can use ExpressRoute with several data encryption options, including MACsec that enables customers to store MACsec encryption keys in Azure Key Vault. MACsec encrypts data at the Media Access Control (MAC) level, i.e., data link layer (Network Layer 2). Both AES-128 and AES-256 block ciphers are supported for encryption. Customers can use MACsec to encrypt the physical links between their network devices and Microsoft network devices when they connect to Microsoft via ExpressRoute Direct.

Customers can enable IPsec in addition to MACsec on their ExpressRoute Direct ports, as shown in Figure 11. Using Azure VPN Gateway, customers can set up an IPsec tunnel over Microsoft Peering of customer’s ExpressRoute circuit between customer’s on-premises network and customer’s Azure VNet. MACsec secures the physical connection between customer’s on-premises network and Microsoft. IPsec secures the end-to-end connection between customer’s on-premises network and their VNets in Azure. MACsec and IPsec can be enabled independently.

Figure 11. VPN and ExpressRoute encryption for data in transit

Traffic across Microsoft global network backbone

Azure services such as Storage and SQL Database can be configured for geo-replication to help ensure durability and high availability especially for disaster recovery scenarios. Azure relies on paired regions to deliver geo-redundant storage (GRS) and paired regions are also recommended when configuring active geo-replication for Azure SQL Database. Paired regions are located within the same Geo; however, network traffic is not guaranteed to always follow the same path from one Azure region to another. To provide the reliability needed for the Azure cloud, Microsoft has many physical networking paths with automatic routing around failures for optimal reliability.

Moreover, all Azure traffic traveling within a region or between regions is encrypted by Microsoft using MACsec, which relies on AES-128 block cipher for encryption. This traffic stays entirely within the Microsoft global network backbone and never enters the public Internet. The backbone is one of the largest in the world with more than 160,000 km of lit fiber optic and undersea cable systems.

Important

Customers should review Azure best practices for the protection of data in transit to help ensure that all data in transit is encrypted. For key Azure PaaS storage services (e.g., Azure SQL Database), data encryption in transit is enforced by default.

Third-party network virtual appliances

Azure provides customers with many features to help them achieve their security and isolation goals, including Azure Security Center, Azure Monitor, Azure Firewall, Azure VPN Gateway, Network Security Groups, Azure Application Gateway, Azure DDoS Protection, Network Watcher, Azure Sentinel, and Azure Policy. In addition to the built-in capabilities that Azure provides, customers can use third-party network virtual appliances to accommodate their specific network isolation requirements while at the same time leveraging existing in-house skills. Azure supports a wide range of appliances, including offerings from F5, Palo Alto Networks, Cisco, Check Point, Barracuda, Citrix, Fortinet, and many others. Network appliances support network functionality and services in the form of VMs in customer virtual networks and deployments.

The cumulative effect of network isolation restrictions is that each cloud service acts as though it were on an isolated network where VMs within the cloud service can communicate with one another, identifying one another by their source IP addresses with confidence that no other parties can impersonate their peer VMs. They can also be configured to accept incoming connections from the Internet over specific ports and protocols and to ensure that all network traffic leaving customer Virtual Networks is always encrypted.

Tip

Customers should review published Azure networking documentation for guidance on how to use native security features to help protect their data.


Storage isolation

Microsoft Azure separates customer VM-based computation resources from storage as part of its fundamental design. The separation allows computation and storage to scale independently, making it easier to provide multi-tenancy and isolation. Consequently, Azure Storage runs on separate hardware with no network connectivity to Azure Compute except logically.

Each Azure subscription can have one or more storage accounts. Azure storage supports various authentication options, including:

  • Shared symmetric keys: Upon storage account creation, Azure generates two 512-bit storage account keys that control access to the storage account. These keys can be rotated and regenerated by customers at any point thereafter without coordination with their applications.
  • Azure AD based authentication: Access to Azure Storage can be controlled by Azure Active Directory (Azure AD), which enforces tenant isolation and implements robust measures to prevent access by unauthorized parties, including Microsoft insiders. More information about Azure AD tenant isolation is available from a white paper Azure Active Directory Data Security Considerations.
  • Shared access signatures (SAS): Shared access signatures or “pre-signed URLs” can be created from the shared symmetric keys. These URLs can be significantly limited in scope to reduce the available attack surface, but at the same time allow applications to grant storage access to another user, service, or device.
  • User delegation SAS: Delegated authentication is similar to SAS but is based on Azure AD tokens rather than the shared symmetric keys. This approach allows a service that authenticates with Azure AD to create a pre-signed URL with limited scope and grant temporary access to another user, service, or device.
  • Anonymous public read access: Customers can allow a small portion of their storage to be publicly accessible without authentication or authorization. This capability can be disabled at the subscription level for customers who desire more stringent control.
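The shared-key and SAS options above both reduce to the same cryptographic primitive: an HMAC-SHA256 signature over a canonicalized request description, computed with the account key. The sketch below illustrates that idea only; the field names and string-to-sign layout are simplified stand-ins for the service-defined canonicalization, not Azure's actual SAS format.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def make_sas_token(account_key_b64: str, string_to_sign: str) -> str:
    """Sign a canonicalized string with the account key (HMAC-SHA256),
    the same primitive used to derive SAS signatures from shared keys."""
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode("utf-8")

# Toy string-to-sign: a real SAS canonicalizes permissions, expiry,
# resource path, and API version in a service-defined order.
fields = {"sp": "r", "se": "2024-01-01T00:00:00Z", "sr": "b"}
string_to_sign = "\n".join(fields.values())
account_key = base64.b64encode(b"0" * 64).decode()  # stand-in for a 512-bit key
token = urlencode({**fields, "sig": make_sas_token(account_key, string_to_sign)})
```

Because the signature covers the permission and expiry fields, tampering with either invalidates the token, which is what lets a narrowly scoped SAS be handed to another user or device safely.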

Azure Storage provides storage for a wide variety of workloads, including:

  • Azure Virtual Machines (disk storage)
  • Big data analytics (HDFS for HDInsight, Azure Data Lake Storage)
  • Storing application state, user data (Blob, Queue, Table storage)
  • Long-term data storage (Azure Archive Storage)
  • Network file shares in the cloud (File storage)
  • Serving web pages on the Internet (static websites)

While Azure Storage supports a wide range of different externally facing customer storage scenarios, internally, the physical storage for the above services is managed by a common set of APIs. To provide durability and availability, Azure Storage relies on data replication and data partitioning across storage resources that are shared among tenants. To ensure cryptographic certainty of logical data isolation, Azure Storage relies on data encryption at rest using advanced algorithms with multiple ciphers as described in this section.

Data replication

Customer data in an Azure Storage account is always replicated to help ensure durability and high availability. Azure Storage copies customer data to protect it from transient hardware failures, network or power outages, and even massive natural disasters. Customers can typically choose to replicate their data within the same data center, across availability zones within the same region, or across geographically separated regions. Specifically, when creating a storage account, customers can select one of the following redundancy options:

  • Locally redundant storage (LRS) replicates three copies (or the erasure coded equivalent, as described later) of customer data within a single data center. A write request to an LRS storage account returns successfully only after the data is written to all three replicas. Each replica resides in separate fault and upgrade domains within a scale unit (set of storage racks within a data center).
  • Zone-redundant storage (ZRS) replicates customer data synchronously across three storage clusters in a single region. Each storage cluster is physically separated from the others and is in its own Availability Zone (AZ). A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
  • Geo-redundant storage (GRS) replicates customer data to a secondary (paired) region that is hundreds of miles away from the primary region. GRS storage accounts are durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. For a storage account with GRS or RA-GRS enabled, all data is first replicated with LRS. An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.
  • Read-access geo-redundant storage (RA-GRS) is based on GRS. It provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. With RA-GRS, customers can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.
  • Geo-zone-redundant storage (GZRS) combines the high availability of ZRS with protection from regional outages as provided by GRS. Data in a GZRS storage account is replicated across three AZs in the primary region and also replicated to a secondary geographic region for protection from regional disasters. Each Azure region is paired with another region within the same geography, together making a regional pair.
  • Read-access geo-zone-redundant storage (RA-GZRS) is based on GZRS. Customers can optionally enable read access to data in the secondary region with RA-GZRS if their applications need to be able to read data in the event of a disaster in the primary region.
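The synchronous options above (LRS, ZRS) share one behavior worth making concrete: a write is acknowledged only after every replica has committed it. The toy model below illustrates that all-replicas fan-out; the class and function names are invented for the example and do not reflect Azure's internal implementation.

```python
# Toy model of a synchronous (LRS-style) write path: the write is
# acknowledged only after all three replicas, each in a separate fault
# and upgrade domain, have committed it.
class Replica:
    def __init__(self):
        self.data = {}
        self.healthy = True

    def write(self, key, value):
        if not self.healthy:
            raise IOError("replica unavailable")
        self.data[key] = value

def lrs_write(replicas, key, value):
    # Synchronous fan-out: success requires every replica to commit;
    # a single unhealthy replica fails the whole write.
    for r in replicas:
        r.write(key, value)
    return True

replicas = [Replica() for _ in range(3)]
lrs_write(replicas, "blob1", b"hello")
assert all(r.data["blob1"] == b"hello" for r in replicas)
```

GRS and GZRS differ precisely in that the cross-region hop is asynchronous: the primary-region quorum acknowledges the write first, and geo-replication catches up in the background.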

High-level Azure Storage architecture

Azure Storage production systems consist of storage stamps and the location service (LS), as shown in Figure 12. A storage stamp is a cluster of racks of storage nodes, where each rack is built as a separate fault domain with redundant networking and power. The LS manages all the storage stamps, as well as the account namespace across all stamps. It allocates accounts to storage stamps and manages them across the storage stamps for load balancing and disaster recovery. The LS itself is distributed across two geographic locations for its own disaster recovery (Calder, et al., 2011).

Figure 12. High-level Azure Storage architecture (Source: Calder, et al., 2011)

There are three layers within a storage stamp: front-end, partition, and stream, which are described in the rest of this section.

Front-end layer

The front-end (FE) layer consists of a set of stateless servers that take the incoming requests, authenticate and authorize the requests, and then route them to a partition server in the Partition Layer. The FE layer knows what partition server to forward each request to, since each front-end server caches a Partition Map. The Partition Map keeps track of the partitions for the service being accessed and what partition server is controlling (serving) access to each partition in the system. The FE servers also stream large objects directly from the stream layer.

Transferring large volumes of data across the Internet is inherently unreliable. Using the Azure block blob service, users can upload and store large files efficiently by breaking up large files into smaller blocks of data. In this manner, block blobs allow partitioning of data into individual blocks for reliability of large uploads, as shown in Figure 13. Each block can be up to 100 MB in size with up to 50,000 blocks in the block blob. If a block fails to transmit correctly, only that particular block needs to be resent rather than the entire file. In addition, with a block blob, multiple blocks can be sent in parallel to decrease upload time.

Figure 13. Block blob partitioning of data into individual blocks

Customers can upload blocks in any order and determine their sequence in the final block list commitment step. Customers can also upload a new block to replace an existing uncommitted block of the same block ID.
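The split-upload-commit flow described above can be sketched as follows. This is a conceptual illustration with invented helper names, not the Blob storage API: blocks carry opaque IDs, may arrive in any order, and the final commit step fixes their sequence.

```python
import base64
import os

MAX_BLOCK_SIZE = 100 * 1024 * 1024  # each block can be up to 100 MB
MAX_BLOCKS = 50_000                 # up to 50,000 blocks per block blob

def split_into_blocks(data: bytes, block_size: int):
    """Split a payload into (block_id, chunk) pairs; each block can be
    retransmitted independently if its upload fails."""
    assert block_size <= MAX_BLOCK_SIZE
    blocks = []
    for i in range(0, len(data), block_size):
        block_id = base64.b64encode(f"block-{i // block_size:08d}".encode()).decode()
        blocks.append((block_id, data[i:i + block_size]))
    assert len(blocks) <= MAX_BLOCKS
    return blocks

def commit_block_list(blocks, order):
    """Blocks may be uploaded in any order; the commit step fixes the
    final sequence by listing block IDs."""
    by_id = dict(blocks)
    return b"".join(by_id[bid] for bid in order)

data = os.urandom(10_000)
blocks = split_into_blocks(data, 4096)
reassembled = commit_block_list(blocks, [bid for bid, _ in blocks])
assert reassembled == data
```

Replacing an uncommitted block is then just uploading a new chunk under the same block ID before the commit step runs.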

Partition layer

The partition layer is responsible for a) managing higher-level data abstractions (Blob, Table, Queue), b) providing a scalable object namespace, c) providing transaction ordering and strong consistency for objects, d) storing object data on top of the stream layer, and e) caching object data to reduce disk I/O. This layer also provides asynchronous geo-replication of data and is focused on replicating data across stamps. Inter-stamp replication is done in the background to keep a copy of the data in two locations for disaster recovery purposes.

Once a blob is ingested by the FE layer, the partition layer is responsible for tracking and storing where data is placed in the stream layer. Each storage tenant can have approximately 200 – 300 individual partition layer nodes and each node is responsible for tracking and serving a partition of the data stored in that Storage tenant. The High Throughput Block Blob (HTBB) feature enables data to be sharded within a single blob, which allows the workload for large blobs to be shared across multiple partition layer servers (Figure 14). Distributing the load among multiple partition layer servers greatly improves availability, throughput, and durability.

Figure 14. High Throughput Block Blobs spread traffic and data across multiple partition servers and streams

Stream layer

The stream layer stores the bits on disk and is responsible for distributing and replicating the data across many servers to keep data durable within a storage stamp. It acts as a distributed file system layer within a stamp. It handles files, called streams, which are ordered lists of data blocks called extents that are analogous to extents on physical hard drives. Large blob objects can be stored in multiple extents, potentially on multiple physical extent nodes (ENs). The data is stored in the stream layer, but it is accessible from the partition layer. Partition servers and stream servers are colocated on each storage node in a stamp.

The stream layer provides synchronous replication (intra-stamp) across different nodes in different fault domains to keep data durable within the stamp. It is responsible for creating the three local replicated copies of each extent. The stream layer manager makes sure that all three copies are distributed across different physical racks and nodes on different fault and upgrade domains so that copies are resilient to individual disk/node/rack failures and planned downtime due to upgrades.

Erasure Coding – Azure Storage uses a technique called Erasure Coding, which allows for the reconstruction of data even if some of the data is missing due to disk failure. This approach is similar to the concept of RAID striping for individual disks where data is spread across multiple disks so that if a disk is lost, the missing data can be reconstructed using the parity bits from the data on the other disks. Erasure Coding splits an extent into equal data and parity fragments that are stored on separate ENs, as shown in Figure 15.

Figure 15. Erasure Coding further shards extent data across EN servers to protect against failure
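The reconstruction idea behind erasure coding can be shown with the simplest possible code: a single XOR parity fragment, which lets any one lost fragment be rebuilt from the survivors. Azure Storage uses far more sophisticated codes with multiple parity fragments; this toy example only illustrates the principle.

```python
# Toy single-parity erasure code: one parity fragment allows rebuilding
# any one lost data fragment from the surviving shards.
def encode(fragments):
    """Append a parity shard that is the XOR of all data fragments."""
    parity = bytes(len(fragments[0]))
    for frag in fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return fragments + [parity]

def reconstruct(shards, lost_index):
    """Rebuild the shard at lost_index by XOR-ing all other shards."""
    rebuilt = bytes(len(shards[0]))
    for i, shard in enumerate(shards):
        if i != lost_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, shard))
    return rebuilt

data_fragments = [b"AAAA", b"BBBB", b"CCCC"]
shards = encode(data_fragments)          # 3 data shards + 1 parity shard
assert reconstruct(shards, 1) == b"BBBB" # lost data fragment recovered
```

The storage saving comes from the same arithmetic: three data fragments plus one parity fragment tolerate a single loss at 1.33x overhead, versus 3x for full triple replication.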

All data blocks stored in stream extent nodes have a 64-bit cyclic redundancy check (CRC) and a header protected by a hash signature to provide extent node (EN) data integrity. The CRC and signature are checked before every disk write, disk read, and network receive. In addition, scrubber processes read all data at regular intervals verifying the CRC and looking for “bit rot”. If a bad extent is found, a new copy of that extent is created to replace it.
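The scrubbing check amounts to recomputing a checksum over the stored bytes and comparing it with the one recorded at write time. The sketch below uses the standard library's 32-bit CRC as a stand-in for the 64-bit CRC the stream layer actually uses; the function names are invented for illustration.

```python
import zlib

def store_block(payload: bytes) -> dict:
    """Record the CRC alongside the payload at write time."""
    return {"payload": payload, "crc": zlib.crc32(payload)}

def scrub(block: dict) -> bool:
    """Re-read a block and verify its checksum; a mismatch indicates
    'bit rot' and would trigger re-replication from a healthy copy."""
    return zlib.crc32(block["payload"]) == block["crc"]

block = store_block(b"extent data")
assert scrub(block)
block["payload"] = b"extent dat\x00"  # simulate a flipped byte on disk
assert not scrub(block)
```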

Customer data in Azure Storage relies on data encryption at rest to provide cryptographic certainty for logical data isolation. Customers can choose between platform-managed encryption keys or customer-managed encryption keys. The handling of data encryption and decryption is transparent to customers, as discussed in the next section.

Data encryption at rest

Azure provides extensive options for data encryption at rest to help customers safeguard their data and meet their compliance needs using both Microsoft-managed encryption keys, as well as customer-managed encryption keys. For more information, see data encryption models. This process relies on multiple encryption keys, as well as services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management.

Note

Customers who require additional security and isolation assurances for their most sensitive customer data stored in Azure services can encrypt it using their own encryption keys they control in Azure Key Vault.

In general, controlling key access and ensuring efficient bulk encryption and decryption of data is accomplished via the following types of encryption keys (as shown in Figure 16), although additional encryption keys can be used as described in the Storage Service Encryption (SSE) section.

  • Data Encryption Key (DEK) is a symmetric AES-256 key that is utilized for bulk encryption and decryption of a partition or a block of data. The cryptographic modules are FIPS 140-2 validated as part of the Windows FIPS validation program. Access to DEKs is needed by the resource provider or application instance that is responsible for encrypting and decrypting a specific block of data. A single resource may have many partitions and many DEKs. When a DEK is replaced with a new key, only the data in its associated block must be re-encrypted with the new key. DEK is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
  • Key Encryption Key (KEK) is an asymmetric RSA key that is optionally provided by the customer. This key is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. As mentioned previously in the Data encryption key management section, Azure Key Vault relies on FIPS 140-2 validated Hardware Security Modules (HSMs) for key storage and management (certificate #2643). These keys are not exportable and there can be no clear version of the KEK outside the HSMs – the binding is enforced by the underlying HSM. KEK is never exposed directly to the resource provider or other services. Access to KEK is controlled by permissions in Azure Key Vault and access to Azure Key Vault must be authenticated through Azure Active Directory. These permissions can be revoked to block access to this key and, by extension, the data that is encrypted using this key as the root of the key chain.

Figure 16. Data Encryption Keys are encrypted using customer’s key stored in Azure Key Vault

The key hierarchy therefore involves both DEKs and KEKs. DEKs are encrypted with KEKs and stored separately for efficient access by resource providers in bulk encryption and decryption operations. However, only an entity with access to the KEKs can decrypt the DEKs. The entity that has access to the KEK may be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted via deletion of the KEK.
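The envelope pattern above (data encrypted with a DEK, and only the KEK-wrapped DEK persisted) can be sketched in a few lines. To keep the example self-contained, the toy XOR keystream below is a deliberately insecure stand-in for the real ciphers (AES-256 for the DEK, RSA for the KEK); the point is the key hierarchy, not the cryptography.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR keystream cipher: a TOY stand-in for AES/RSA,
    used only to illustrate wrapping and unwrapping keys."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

dek = secrets.token_bytes(32)       # bulk data key (AES-256 in Azure)
kek = secrets.token_bytes(32)       # key-encryption key (held in Key Vault)

ciphertext = toy_cipher(dek, b"customer record")
wrapped_dek = toy_cipher(kek, dek)  # only the wrapped DEK is persisted

# Decryption must first unwrap the DEK with the KEK; deleting or
# revoking the KEK renders every DEK, and all data under it, unreadable.
recovered = toy_cipher(toy_cipher(kek, wrapped_dek), ciphertext)
assert recovered == b"customer record"
```

This makes the "single point of deletion" property concrete: discarding `kek` leaves `wrapped_dek` and `ciphertext` as unrecoverable bytes.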

Detailed information about various encryption models, as well as specifics on key management for a wide range of Azure platform services is available in online documentation. Moreover, some Azure services provide additional encryption models, including client-side encryption, to further encrypt their data using more granular controls. The rest of this section covers encryption implementation for key Azure storage scenarios such as Storage Service Encryption and Azure Disk Encryption for IaaS Virtual Machines, including server-side encryption for managed disks.

Tip

Customers should review published Azure data encryption documentation for guidance on how to protect their data.


Storage Service Encryption (SSE)

Azure Storage Service Encryption for data at rest ensures that data is automatically encrypted before persisting it to Azure Storage and decrypted before retrieval. All data written to Azure Storage is encrypted through FIPS 140-2 validated 256-bit AES encryption, and the handling of encryption, decryption, and key management in Storage Service Encryption (SSE) is transparent to customers. By default, Microsoft controls the encryption keys and is responsible for key rotation, usage, and access. Keys are stored securely and protected inside a Microsoft key store. This option provides the most convenience for customers given that all Azure Storage services are supported.

However, customers can also choose to manage encryption with their own keys by specifying:

  • Customer-managed key for managing Azure Storage encryption whereby the key is stored in Azure Key Vault. This option provides a lot of flexibility for customers to create, rotate, disable, and revoke access controls. Customers must use Azure Key Vault to store customer-managed keys.
  • Customer-provided key for encrypting and decrypting Blob storage only whereby the key can be stored in Azure Key Vault or in another key store on customer premises to meet regulatory compliance requirements. Customer-provided keys enable customers to pass an encryption key to Storage service using Blob APIs as part of read or write operations.

Note

Customers can configure customer-managed keys (CMK) with Azure Key Vault using the Azure portal, PowerShell, or Azure CLI command-line tool. Customers can use .NET to specify a customer-provided key on a request to Blob storage.

SSE is enabled by default for all new and existing storage accounts and it cannot be disabled. As shown in Figure 17, the encryption process leverages the following keys to help ensure cryptographic certainty of data isolation at rest:

  • Data Encryption Key (DEK) is a symmetric AES-256 key that is utilized for bulk encryption and it is unique per storage account in Azure Storage. It is generated by the Azure Storage service as part of the storage account creation. This key is encrypted by the Key Encryption Key (KEK) and is never stored unencrypted.
  • Key Encryption Key (KEK) is an asymmetric RSA-2048 key that is utilized to encrypt the Data Encryption Key (DEK) using Azure Key Vault and exists only in Azure Key Vault. It is never exposed directly to the Azure Storage service or other services. Customers must use Azure Key Vault to store their customer-managed keys for Storage Service Encryption.
  • Stamp Key (SK) is a symmetric AES-256 key that provides a third layer of encryption key security and is unique to each Azure Storage stamp, i.e., cluster of storage hardware. This key is used to perform a final wrap of the DEK and results in the following key chain hierarchy: SK(KEK(DEK)).

These three keys are combined to protect any data that is written to Azure Storage and provide cryptographic certainty for logical data isolation in Azure Storage. As mentioned previously, Azure SSE is enabled by default and it cannot be disabled.

Figure 17. Encryption flow for Storage Service Encryption

Storage accounts are encrypted regardless of their performance tier (standard or premium) or deployment model (Azure Resource Manager or classic). All Azure Storage redundancy options support encryption and all copies of a storage account are encrypted. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted.

Because data encryption is performed by the Storage service, server-side encryption with CMK enables customers to use any operating system types and images for their VMs. For Windows and Linux customer IaaS VMs, Azure also provides Azure Disk Encryption (ADE) that enables customers to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining SSE and ADE effectively enables double encryption of data at rest.

Azure Disk Encryption (ADE)

Azure Storage Service Encryption encrypts the page blobs that store Azure Virtual Machine disks. Additionally, Azure Disk Encryption (ADE) may optionally be used to encrypt Azure Windows and Linux IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of customer data stored in Azure. This encryption includes managed disks, as described later in this section. ADE leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.

Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys – it is commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to computer boot process.

For managed disks, ADE allows customers to encrypt the OS and data disks used by an IaaS Virtual Machine; however, data disks cannot be encrypted without first encrypting the OS volume. The solution relies on Azure Key Vault to help customers control and manage the disk encryption keys. Customers can supply their own encryption keys, which are safeguarded in Azure Key Vault to support Bring Your Own Key (BYOK) scenarios, as described previously in the Data encryption key management section.

Currently, it is not possible to use an on-premises key management service or standalone Hardware Security Modules (including Azure Dedicated HSM) to safeguard the encryption keys. Only the Azure Key Vault service can be used to safeguard the customer-managed encryption keys for ADE.

Note

Detailed instructions are available for creating and configuring a key vault for Azure Disk Encryption with both Windows and Linux VMs.

ADE relies on two encryption keys for implementation, as described previously:

  • Data Encryption Key (DEK) is a symmetric AES-256 key used to encrypt OS and Data volumes through BitLocker or DM-Crypt. DEK itself is encrypted and stored in an internal location close to the data.
  • Key Encryption Key (KEK) is an asymmetric RSA-2048 key used to encrypt the Data Encryption Keys. KEK is kept in Azure Key Vault under customer control including granting access permissions through Azure Active Directory.

The DEK, encrypted with the KEK, is stored separately and only an entity with access to the KEK can decrypt the DEK. Access to the KEK is guarded by Azure Key Vault where customers can choose to store their keys in FIPS 140-2 validated Hardware Security Modules.

For Windows VMs, ADE selects the encryption method in BitLocker based on the version of Windows, e.g., XTS-AES 256 bit for Windows Server 2012 or greater. These crypto modules are FIPS 140-2 validated as part of the Microsoft Windows FIPS validation program. For Linux VMs, ADE uses the DM-Crypt default of aes-xts-plain64 with a 256-bit volume master key that is FIPS 140-2 validated as part of DM-Crypt validation obtained by suppliers of Linux IaaS VM images in Microsoft Azure Marketplace.

Server-side encryption for managed disks

Azure managed disks are block-level storage volumes that are managed by Azure and used with Azure Windows and Linux Virtual Machines. They simplify disk management for Azure IaaS VMs by handling storage account management transparently for customers. Azure managed disks automatically encrypt customer data by default using 256-bit AES encryption that is FIPS 140-2 validated. For encryption key management, customers have the following choices:

  • Platform-managed keys are the default choice, providing transparent data encryption at rest for managed disks with keys managed by Microsoft.
  • Customer-managed keys enable customers to have control over their own keys that can be imported into Azure Key Vault or generated inside Azure Key Vault. This approach relies on two sets of keys as described previously: DEK and KEK. DEK encrypts the data using AES-256 encryption and is in turn encrypted by an RSA-2048 KEK that is stored in Azure Key Vault.

Customer-managed keys (CMK) enable customers to have full control over their data and encryption keys. Customers can grant access to managed disks in their Azure Key Vault so that their keys can be used for encrypting and decrypting the DEK. Customers can also disable their keys or revoke access to managed disks at any time. Finally, customers have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing their encryption keys.

Customers are always in control of their customer data in Azure. They can access, extract, and delete their customer data stored in Azure at will. When a customer terminates their Azure subscription, Microsoft takes the necessary steps to ensure that the customer continues to own their customer data. A common customer concern upon data deletion or subscription termination is whether another customer or Azure administrator can access their deleted data. The following sections explain how data deletion, retention, and destruction works in Azure.

Data deletion

Storage is allocated sparsely, which means that when a virtual disk is created, disk space is not allocated for its entire capacity. Instead, a table is created that maps addresses on the virtual disk to areas on the physical disk and that table is initially empty. The first time a customer writes data on the virtual disk, space on the physical disk is allocated and a pointer to it is placed in the table.
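The sparse-allocation behavior just described can be modeled directly: the mapping table starts empty, physical space is allocated on first write, and reads of never-written regions return zeroes rather than whatever bytes a previous tenant may have left on the physical media. The class below is an illustrative toy, not Azure's virtual disk implementation.

```python
BLOCK = 512  # illustrative sector size

class SparseDisk:
    """Toy sparse virtual disk: a table maps virtual blocks to
    physical storage, and the table starts empty."""
    def __init__(self, size_blocks: int):
        self.size_blocks = size_blocks
        self.table = {}  # virtual block number -> stored bytes

    def write(self, vblock: int, data: bytes):
        # Physical space is allocated on first write to a block.
        self.table[vblock] = data.ljust(BLOCK, b"\x00")

    def read(self, vblock: int) -> bytes:
        # Unallocated region: no physical sector backs it, so the
        # virtualization layer synthesizes zeroes.
        return self.table.get(vblock, b"\x00" * BLOCK)

disk = SparseDisk(1024)
disk.write(7, b"secret")
assert disk.read(7).startswith(b"secret")
assert disk.read(8) == b"\x00" * BLOCK  # never written: all zeroes
```

This is the mechanism behind the isolation claim later in this section: a tenant can only address virtual blocks, and unbacked virtual blocks cannot leak another tenant's physical data.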

When the customer deletes a blob or table entity, it will immediately get deleted from the index used to locate and access the data on the primary location, and then the deletion is done asynchronously at the geo-replicated copy of the data (for customers who provisioned geo-redundant storage). At the primary location, the customer can immediately try to access the blob or entity, and they won’t find it in their index, since Azure provides strong consistency for the delete. So, the customer can verify directly that the data has been deleted.

In Azure Storage, all disk writes are sequential. This approach minimizes the number of disk “seeks” but requires updating the pointers to objects every time they are written (new versions of pointers are also written sequentially). A side effect of this design is that it is not possible to ensure that a secret on disk is gone by overwriting it with other data. The original data will remain on the disk and the new value will be written sequentially. Pointers will be updated such that there is no way to find the deleted value anymore.

Once the disk is full, however, the system has to write new logs onto disk space that has been freed up by the deletion of old data. Instead of allocating log files directly from disk sectors, log files are created in a file system running NTFS. A background thread running on Azure Storage nodes frees up space by going through the oldest log file, copying blocks that are still referenced from that oldest log file to the current log file (and updating all pointers as it goes). It then deletes the oldest log file. Consequently, there are two categories of free disk space on the disk: (1) space that NTFS knows is free, where it allocates new log files from this pool; and (2) space within those log files that Azure Storage knows is free since there are no current pointers to it.
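The log-structured write behavior is worth making concrete, because it explains why "overwriting" a secret does not physically erase it. In the toy model below (invented names, not the Azure Storage codebase), every write appends a new record and deletion merely removes the pointer, leaving the old bytes in the log until garbage collection reclaims them.

```python
class LogStore:
    """Toy log-structured store: an append-only log plus a pointer table."""
    def __init__(self):
        self.log = []       # append-only sequence of records
        self.pointers = {}  # key -> index of the current record in the log

    def put(self, key, value: bytes):
        self.log.append(value)               # writes are always appends
        self.pointers[key] = len(self.log) - 1

    def delete(self, key):
        self.pointers.pop(key, None)         # the record itself is NOT erased

    def get(self, key):
        idx = self.pointers.get(key)
        return None if idx is None else self.log[idx]

store = LogStore()
store.put("secret", b"old value")
store.put("secret", b"new value")   # an "overwrite" appends a new record
assert store.get("secret") == b"new value"
assert b"old value" in store.log    # old bytes still physically present
store.delete("secret")
assert store.get("secret") is None  # unreachable via any pointer
```

Reachability, not physical erasure, is what the system guarantees immediately; physical reclamation happens later when the garbage-collection thread rewrites and drops old log files.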

The sectors on the physical disk associated with the deleted data become immediately available for reuse and are overwritten when the corresponding storage block is reused for storing other data. The time to overwrite varies depending on disk utilization and activity. This process is consistent with the operation of a log-structured file system where all writes are written sequentially to disk. This process is not deterministic and there is no guarantee when particular data will be gone from physical storage. However, exactly when deleted data is overwritten, or when the corresponding physical storage is allocated to another customer, is irrelevant to the key isolation assurance that no data can be recovered after deletion:

  • A customer cannot read deleted data of another customer.
  • If anyone tries to read a region on a virtual disk that they have not yet written to, physical space will not have been allocated for that region and therefore only zeroes would be returned.

Customers are not provided with direct access to the underlying physical storage. Since customer software only addresses virtual disks, there is no way to express a request to read from or write to a physical address that is allocated to a different customer or a physical address that is free. For more information, see the blog post on data cleansing and leakage.

Conceptually, this rationale applies regardless of the software that keeps track of reads and writes. In the case of Azure SQL Database, it is the SQL Database software that does this enforcement. For Azure Storage, it is the Azure Storage software. In the case of non-durable drives of a VM, it is the VHD handling code of the Host OS. The mapping from virtual to physical address takes place outside of the customer VM.

Finally, as described in the Data encryption at rest section and depicted in Figure 16, the encryption key hierarchy relies on the Key Encryption Key (KEK) which can be kept in Azure Key Vault under customer control (i.e., customer-managed key – CMK) and used to encrypt the Data Encryption Key (DEK), which in turn encrypts data at rest using AES-256 symmetric encryption. Data in Azure Storage is encrypted at rest by default and customers can choose to have encryption keys under their own control. In this manner, customers can also prevent access to their data stored in Azure. Moreover, since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be deleted via deletion of the KEK.

Data retention

At all times during the term of their Azure subscription, customers can access, extract, and delete customer data stored in Azure.

If a subscription expires or is terminated, Microsoft will preserve customer data for a 90-day retention period to permit customers to extract data or renew their subscriptions. After this retention period, Microsoft will delete all customer data within an additional 90 days, i.e., customer data will be permanently deleted 180 days after expiration or termination. Given the data retention procedure, customers can control how long their data is stored by timing when they end the service with Microsoft. It is recommended that customers do not terminate their service until they have extracted all data so that the initial 90-day retention period can act as a safety buffer should customers later realize they missed something.
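The retention timeline above reduces to simple date arithmetic: 90 days of retention for data extraction, then permanent deletion within a further 90 days, i.e. no later than 180 days after expiration or termination. A small sketch of that calculation:

```python
from datetime import date, timedelta

def retention_milestones(termination: date) -> dict:
    """Compute the two milestones from the retention policy: the end of
    the 90-day extraction window and the 180-day deletion deadline."""
    return {
        "retention_ends": termination + timedelta(days=90),
        "permanently_deleted_by": termination + timedelta(days=180),
    }

m = retention_milestones(date(2024, 1, 1))
assert m["retention_ends"] == date(2024, 3, 31)
assert m["permanently_deleted_by"] == date(2024, 6, 29)
```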

If a customer deletes an entire storage account by mistake, they should contact Azure Support promptly for assistance with recovery. Customers can create and manage support requests in the Azure portal. A storage account deleted within a subscription is retained for two weeks to allow for recovery from accidental deletion, after which it is permanently deleted. However, when a storage object (e.g., blob, file, queue, table) is itself deleted, the delete operation is immediate and irreversible. Unless the customer made a backup, deleted storage objects cannot be recovered. For Blob storage, customers can implement additional protection against accidental or erroneous modification or deletion by enabling soft delete. When soft delete is enabled for a storage account, blobs, blob versions, and snapshots in that storage account can be recovered after they are deleted, within a retention period specified by the customer. To avoid retention of data after storage account or subscription deletion, customers can delete storage objects individually before deleting the storage account or subscription.

For accidental deletion involving Azure SQL Database, customers should rely on the backups that the service makes automatically (e.g., full database backups weekly and differential database backups every 12 hours) and use point-in-time restore. Also, individual services (e.g., Azure DevOps) can have their own policies for accidental data deletion.

Data destruction

If a disk drive used for storage suffers a hardware failure, it is securely erased or destroyed before decommissioning so that its data cannot be recovered by any means. When such devices are decommissioned, Microsoft follows the NIST SP 800-88 R1 disposal process with data classification aligned to FIPS 199 Moderate. Magnetic, electronic, or optical media are purged or destroyed in accordance with the requirements established in NIST SP 800-88 R1, where the terms are defined as follows:

  • Purge: “a media sanitization process that protects the confidentiality of information against a laboratory attack”, which involves “resources and knowledge to use nonstandard systems to conduct data recovery attempts on media outside their normal operating environment” using “signal processing equipment and specially trained personnel.” Note: For hard disk drives (including ATA, SCSI, SATA, SAS, etc.) a firmware-level secure-erase command (single-pass) is acceptable, or a software-level three-pass overwrite and verification (ones, zeros, random) of the entire physical media including recovery areas, if any. For solid state disks (SSD), a firmware-level secure-erase command is necessary.
  • Destroy: “a variety of methods, including disintegration, incineration, pulverizing, shredding, and melting” after which the media “cannot be reused as originally intended.”

Purge and Destroy operations must be performed using tools and processes approved by the Microsoft Cloud + AI Security Group. Records must be kept of the erasure and destruction of assets. Devices that fail to complete the Purge successfully must be degaussed (for magnetic media only) or Destroyed.
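The software-level three-pass overwrite pattern noted above (ones, zeros, random, with verification) can be sketched as follows. This is a simplified illustration against an ordinary file; actual media sanitization must cover the entire physical medium, including recovery areas, and use tooling approved per the process above.

```python
import os
import tempfile

def three_pass_overwrite(path: str) -> bytes:
    """Overwrite a file in place with ones, zeros, then random bytes,
    syncing each pass to stable storage. Returns the final random
    pattern so a verification pass can compare against it."""
    size = os.path.getsize(path)
    final = os.urandom(size)
    with open(path, "r+b") as f:
        for pattern in (b"\xff" * size, b"\x00" * size, final):
            f.seek(0)
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())
    return final

# Example: sanitize a temporary file, then verify the final pass.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"sensitive data")
path = tmp.name

expected = three_pass_overwrite(path)
with open(path, "rb") as f:
    assert f.read() == expected  # original content is gone
os.remove(path)
```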

In addition to technical implementation details that enable Azure compute, networking, and storage isolation, Microsoft has invested heavily in security assurance processes and practices to correctly develop logically isolated services and systems, as described in the next section.

Security assurance processes and practices

Azure isolation assurance is further enforced by Microsoft’s internal use of the Security Development Lifecycle (SDL) and other strong security assurance processes to protect attack surfaces and mitigate threats. Microsoft has established industry-leading processes and tooling that provide high confidence in the Azure isolation guarantee.

  • Security Development Lifecycle (SDL) – The Microsoft SDL introduces security and privacy considerations throughout all phases of the development process, helping developers build highly secure software, address security compliance requirements, and reduce development costs. The guidance, best practices, tools, and processes in the Microsoft SDL are practices used internally to build all Azure services and create more secure products and services. This process is also publicly documented to share Microsoft’s learnings with the broader industry and incorporate industry feedback to create a stronger security development process.
  • Tooling and processes – All Azure code is subject to an extensive set of both static and dynamic analysis tools that identify potential vulnerabilities, ineffective security patterns, memory corruption, user privilege issues, and other critical security problems.
    • Purpose built fuzzing – A testing technique used to find security vulnerabilities in software products and services. It consists of repeatedly feeding modified, or fuzzed, data to software inputs to trigger hangs, exceptions, and crashes, i.e., fault conditions that could be leveraged by an attacker to disrupt or take control of applications and services. The Microsoft SDL recommends fuzzing all attack surfaces of a software product, especially those surfaces that expose a data parser to untrusted data.
    • Live-site penetration testing – Microsoft conducts ongoing live-site penetration testing to improve cloud security controls and processes, as part of the Red Teaming program described later in this section. Penetration testing is a security analysis of a software system performed by skilled security professionals simulating the actions of a hacker. The objective of a penetration test is to uncover potential vulnerabilities resulting from coding errors, system configuration faults, or other operational deployment weaknesses. The tests are conducted against Azure infrastructure and platforms as well as Microsoft’s own tenants, applications, and data. Customer tenants, applications, and data hosted in Azure are never targeted; however, customers can conduct their own penetration testing of their applications deployed in Azure.
    • Threat modeling – A core element of the Microsoft SDL. It’s an engineering technique used to help identify threats, attacks, vulnerabilities, and countermeasures that could affect applications and services. Threat modeling is part of the Azure routine development lifecycle.
    • Automated build alerting of changes to attack surface area – Attack Surface Analyzer is a Microsoft-developed open-source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. The core feature of Attack Surface Analyzer is the ability to “diff” an operating system's security configuration, before and after a software component is installed. This feature is important because most installation processes require elevated privileges, and once granted, can lead to unintended system configuration changes.
  • Mandatory security training – The Microsoft Azure security training and awareness program requires all personnel responsible for Azure development and operations to take essential training as well as any additional training based on individual job requirements. These procedures provide a standard approach, tools, and techniques used to implement and sustain the awareness program. Microsoft has implemented a security awareness program called STRIKE that provides monthly e-mail communication to all Azure engineering personnel about security awareness and allows employees to register for in-person or online security awareness training. STRIKE offers a series of security training events throughout the year, as well as STRIKE Central, which is a centralized online resource for security awareness, training, documentation, and community engagement.
  • Bug Bounty Program – Microsoft strongly believes that close partnership with academic and industry researchers drives a higher level of security assurance for customers and their data. Security researchers play an integral role in the Azure ecosystem by discovering vulnerabilities missed in the software development process. The Microsoft Bug Bounty Program is designed to supplement and encourage research in relevant technologies (e.g., encryption, spoofing, hypervisor isolation, elevation of privileges, etc.) to better protect Azure’s infrastructure and customer data. As an example, for each critical vulnerability identified in the Azure Hypervisor, Microsoft compensates security researchers up to $250,000 – a significant amount to incentivize participation and vulnerability disclosure. The bounty range for vulnerability reports on Azure services is up to $300,000.
  • Red Team activities – Microsoft utilizes Red Teaming, a form of live site penetration testing against Microsoft-managed infrastructure, services, and applications. Microsoft simulates real-world breaches, continuously monitors security, and practices security incident response to test and improve the security of Azure. Red Teaming is predicated on the Assume Breach security strategy and executed by two core groups: Red Team (attackers) and Blue Team (defenders). The approach is designed to test Azure systems and operations using the same tactics, techniques, and procedures as real adversaries against live production infrastructure, without the foreknowledge of the infrastructure and platform Engineering or Operations teams. This approach tests security detection and response capabilities, and helps identify production vulnerabilities, configuration errors, invalid assumptions, or other security issues in a controlled manner. Every Red Team breach is followed by full disclosure between the Red Team and Blue Team to identify gaps, address findings, and significantly improve breach response.
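As an illustration of the fuzzing technique described in the tooling list above, the sketch below mutates a seed input and feeds it to a parser, recording anything other than a clean rejection as a potential fault condition. The standard-library JSON parser is used only for illustration; this is not Microsoft's fuzzing tooling, and the function names are hypothetical.

```python
import json
import random

def mutate(data: bytes, flips: int = 3) -> bytes:
    """Randomly replace a few bytes of a seed input to create a fuzzed variant."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse, seed: bytes, iterations: int = 1000):
    """Feed fuzzed inputs to a parser; a well-behaved parser rejects bad
    input with ValueError, so any other exception is a fault condition."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse(sample)
        except ValueError:
            pass                      # expected rejection of malformed input
        except Exception as exc:      # unexpected fault worth investigating
            crashes.append((sample, exc))
    return crashes

random.seed(0)  # deterministic run for reproducibility
crashes = fuzz(json.loads, b'{"name": "azure", "value": 42}')
print(f"{len(crashes)} unexpected faults")
```

Real fuzzers add coverage feedback, corpus management, and crash triage on top of this basic mutate-and-observe loop.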

When migrating to the cloud, customers accustomed to traditional on-premises data center deployment will usually conduct a risk assessment to gauge their threat exposure and formulate mitigating measures. In many of these instances, security considerations for traditional on-premises deployment tend to be well understood whereas the corresponding cloud options tend to be new. The next section is intended to help customers with this comparison.

Logical isolation considerations

A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping enforce controls designed to keep customers from accessing one another's data or applications. This section addresses concerns common to customers who are migrating from traditional on-premises physically isolated infrastructure to the cloud.

Physical versus logical security considerations

Table 6 provides a summary of key security considerations for physically isolated on-premises deployments (e.g., bare metal) versus logically isolated cloud-based deployments (e.g., Azure). It’s useful to review these considerations before examining the risks specific to shared cloud environments.

Table 6. Key security considerations for physical versus logical isolation

| Security consideration | On-premises | Azure |
|---|---|---|
| Firewalls, networking | Physical network enforcement (switches, etc.); physical host-based firewall can be manipulated by a compromised application; 2 layers of enforcement | Physical network enforcement (switches, etc.); Hyper-V host virtual network switch enforcement cannot be changed from inside the VM; VM host-based firewall can be manipulated by a compromised application; 3 layers of enforcement |
| Attack surface area | Large hardware attack surface exposed to complex workloads, which enables firmware-based advanced persistent threats (APT) | Hardware not directly exposed to the VM, so no potential for an APT to persist in firmware from the VM; small software-based Hyper-V attack surface area with low historical bug counts exposed to the VM |
| Side channel attacks | Side channel attacks may be a factor, although reduced vs. shared hardware | Side channel attacks assume control over VM placement across applications, which may not be practical in a large cloud service |
| Patching | Varied effective patching policy applied across host systems; highly varied/fragile updating for hardware and firmware | Uniform patching policy applied across hosts and VMs |
| Security analytics | Security analytics dependent on host-based security solutions, which assume the host/security software has not been compromised | Outside-VM (hypervisor-based) forensics/snapshot capability allows assessment of potentially compromised workloads |
| Security policy | Security policy verification (patch scanning, vulnerability scanning, etc.) subject to tampering by a compromised host; inconsistent security policy applied across customer entities | Outside-VM verification of security policies; possible to enforce uniform security policies across customer entities |
| Logging and monitoring | Varied logging and security analytics solutions | Common Azure platform logging and security analytics solutions; most existing on-premises/varied logging and security analytics solutions also work |
| Malicious insider | Persistent threat because system admins have elevated access rights, typically for the duration of employment | Greatly reduced threat because admins have no default access rights |

Listed below are key risks, unique to shared cloud environments, that may need to be addressed when accommodating sensitive data and workloads.

Exploitation of vulnerabilities in virtualization technologies

Compared to traditional on-premises hosted systems, Azure provides a greatly reduced attack surface by using a locked-down Windows Server core for the Host OS layered over the Hypervisor. Moreover, by default, guest PaaS VMs do not have any user accounts to accept incoming remote connections and the default Windows administrator account is disabled. Customer software in PaaS VMs is restricted by default to running under a low-privilege account, which helps protect customer’s service from attacks by its own end users. These permissions can be modified by customers, and they can also choose to configure their VMs to allow remote administrative access.

PaaS VMs offer more advanced protection against persistent malware infections than traditional physical server solutions, which if compromised by an attacker can be difficult to clean, even after the vulnerability is corrected. The attacker may have left behind modifications to the system that allow re-entry, and it is a challenge to find all such changes. In the extreme case, the system must be reimaged from scratch with all software reinstalled, sometimes resulting in the loss of application data. With PaaS VMs, reimaging is a routine part of operations, and it can help clean out intrusions that have not even been detected. This approach makes it much more difficult for a compromise to persist.

When VMs belonging to different customers are running on the same physical server, it is the Hypervisor’s job to ensure that they cannot learn anything important about what the other customer’s VMs are doing. Azure helps block unauthorized direct communication by design; however, there are subtle effects where one customer might be able to characterize the work being done by another customer. The most important of these effects are timing effects when different VMs are competing for the same resources. By carefully comparing operations counts on CPUs with elapsed time, a VM can learn something about what other VMs on the same server are doing. Known as side-channel attacks, these exploits have received plenty of attention in the academic press where researchers have been seeking to learn much more specific information about what is going on in a peer VM. Of particular interest are efforts to learn the cryptographic keys of a peer VM by measuring the timing of certain memory accesses and inferring which cache lines the victim’s VM is reading and updating. Under controlled conditions with VMs using hyper-threading, successful attacks have been demonstrated against commercially available implementations of cryptographic algorithms. There are several mitigations in Azure that reduce the risk of such an attack:

  • The standard Azure cryptographic libraries have been designed to resist such attacks by not having cache access patterns depend on the cryptographic keys being used.
  • Azure uses a VM host placement algorithm that is highly sophisticated and nearly impossible to predict, which helps reduce the chances of an adversary-controlled VM being placed on the same host as the target VM.
  • All Azure servers have at least eight physical cores and some have many more. Increasing the number of cores that share the load placed by various VMs adds noise to an already weak signal.
  • Customers can provision Virtual Machines on hardware dedicated to a single customer by using Azure Dedicated Host or Isolated VMs, as described in Physical isolation section.
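The first mitigation rests on making execution behavior independent of secret data. The same principle is easiest to see with secret comparison: a naive check returns at the first mismatching byte, leaking timing information about the secret, while a constant-time check examines every byte regardless. This sketch uses Python's `hmac.compare_digest` for the constant-time variant; it illustrates the principle only and is not Azure's cryptographic library.

```python
import hmac

def naive_equals(a: bytes, b: bytes) -> bool:
    """Leaky comparison: exits at the first mismatch, so running time
    depends on how many leading bytes of the secret are correct."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """Data-independent comparison: hmac.compare_digest inspects every
    byte, so timing reveals nothing about where a mismatch occurs."""
    return hmac.compare_digest(a, b)

secret = b"supersecrettoken"
assert naive_equals(secret, b"supersecrettoken")
assert constant_time_equals(secret, b"supersecrettoken")
assert not constant_time_equals(secret, b"supersecretXoken")
```

Constant-time cryptographic code extends this idea beyond comparisons, keeping branches and memory (cache) access patterns independent of key material.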

Overall, PaaS (or any workload that auto-creates VMs) contributes to churn in VM placement, which leads to randomized VM allocation. Random placement of customer VMs makes it much harder for attackers to get onto the same host. In addition, host access is hardened with a greatly reduced attack surface, which makes these types of exploits difficult to sustain.

Summary

A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate each customer's applications and data. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent customers from accessing one another's data or applications.

Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:

  • User access controls with authentication and identity separation that leverages Azure Active Directory and Role-Based Access Control (RBAC)
  • Compute isolation for processing, including both logical and physical compute isolation
  • Networking isolation including separation of network traffic and data encryption in transit
  • Storage isolation with data encryption at rest using advanced algorithms with multiple ciphers and encryption keys, as well as provisions for customer-managed keys (CMK) under customer control in Azure Key Vault
  • Security assurance processes embedded in service design to correctly develop logically isolated services, including Security Development Lifecycle (SDL) and other strong security assurance processes to protect attack surfaces and mitigate risks

In line with the shared responsibility model in cloud computing, this article provides customer guidance for activities that are part of customer responsibility. It also explores design principles and technologies available in Azure to help customers achieve their secure isolation objectives.

Next steps

Learn more about: