Planning for an Azure Files deployment

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard SMB protocol. Because Azure Files is fully managed, deploying it in production scenarios is much easier than deploying and managing a file server or NAS device. This article addresses the topics to consider when deploying an Azure file share for production use within your organization.

Management concepts

The following diagram illustrates the Azure Files management constructs:

File Structure

  • Storage Account: All access to Azure Storage is done through a storage account. See Scalability and Performance Targets for details about storage account capacity.

  • Share: A File Storage share is an SMB file share in Azure. All directories and files must be created in a parent share. An account can contain an unlimited number of shares, and a share can store an unlimited number of files, up to the total capacity of the file share. The total capacity for premium and standard file shares is 100 TiB.

  • Directory: An optional hierarchy of directories.

  • File: A file in the share. A file may be up to 1 TiB in size.

  • URL format: For requests to an Azure file share made with the File REST protocol, files are addressable using the following URL format (a short example of composing such a URL follows this list):

    https://<storage account>.file.core.windows.net/<share>/<directory>/<file>
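
As a quick illustration of this addressing scheme, the following minimal Python sketch composes a file URL from its components; the storage account, share, and path names are hypothetical placeholders.

    from urllib.parse import quote

    def file_url(account: str, share: str, *path_parts: str) -> str:
        """Build the File REST URL for a file or directory in an Azure file share."""
        path = "/".join(quote(part) for part in path_parts)
        return f"https://{account}.file.core.windows.net/{share}/{path}"

    # Hypothetical names, used purely for illustration.
    print(file_url("mystorageaccount", "myshare", "reports", "2019", "summary.docx"))
    # https://mystorageaccount.file.core.windows.net/myshare/reports/2019/summary.docx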
    

Data access method

Azure Files offers two built-in, convenient data access methods that you can use separately or in combination with each other to access your data:

  1. Direct cloud access: Any Azure file share can be mounted by Windows, macOS, and/or Linux with the industry standard Server Message Block (SMB) protocol or via the File REST API. With SMB, reads and writes to files on the share are made directly on the file share in Azure. To mount from a VM in Azure, the SMB client in the OS must support at least SMB 2.1. To mount on-premises, such as on a user's workstation, the workstation's SMB client must support at least SMB 3.0 (with encryption). In addition to SMB, new applications or services may directly access the file share via File REST, which provides an easy and scalable application programming interface for software development; a short sketch of this path appears after this list.
  2. Azure File Sync: With Azure File Sync, shares can be replicated to Windows Servers on-premises or in Azure. Your users would access the file share through the Windows Server, such as through an SMB or NFS share. This is useful for scenarios in which data will be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices. Finally, data may be tiered to Azure Files, such that all data is still accessible via the Server, but the Server does not have a full copy of the data. Rather, data is seamlessly recalled when opened by your user.
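
To make the direct File REST path concrete, here is a minimal sketch using the azure-storage-file-share Python SDK. It is an illustration only: the connection string, share, directory, and file names are placeholders, and error handling (for example, the share already existing) is omitted.

    from azure.storage.fileshare import ShareClient

    # Placeholder connection string; replace with your storage account's own.
    CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

    share = ShareClient.from_connection_string(CONN_STR, share_name="myshare")
    share.create_share()  # one-time setup; raises if the share already exists

    directory = share.get_directory_client("reports")
    directory.create_directory()

    file_client = directory.get_file_client("summary.txt")
    file_client.upload_file(b"hello from File REST")  # written directly to the Azure file share

    # The same file is now also visible to any SMB client that mounts the share.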

The following table illustrates how your users and applications can access your Azure file share:

Consideration | Direct cloud access | Azure File Sync
What protocols do you need to use? | Azure Files supports SMB 2.1, SMB 3.0, and File REST API. | Access your Azure file share via any supported protocol on Windows Server (SMB, NFS, FTPS, etc.)
Where are you running your workload? | In Azure: Azure Files offers direct access to your data. | On-premises with slow network: Windows, Linux, and macOS clients can mount a local on-premises Windows file share as a fast cache of your Azure file share.
What level of ACLs do you need? | Share and file level. | Share, file, and user level.

Data security

Azure Files has several built-in options for ensuring data security:

  • Support for encryption in both over-the-wire protocols: SMB 3.0 encryption and File REST over HTTPS. By default:

    • Clients that support SMB 3.0 encryption send and receive data over an encrypted channel.
    • Clients that do not support SMB 3.0 with encryption can communicate intra-datacenter over SMB 2.1 or SMB 3.0 without encryption. SMB clients are not allowed to communicate inter-datacenter over SMB 2.1 or SMB 3.0 without encryption.
    • Clients can communicate over File REST with either HTTP or HTTPS.
  • Encryption at-rest (Azure Storage Service Encryption): Storage Service Encryption (SSE) is enabled for all storage accounts. Data at-rest is encrypted with fully-managed keys. Encryption at-rest does not increase storage costs or reduce performance.

  • Optional requirement of encrypted data in-transit: When selected, Azure Files rejects access to the data over unencrypted channels. Specifically, only HTTPS and SMB 3.0 with encryption connections are allowed.

    Important

    Requiring secure transfer of data will cause older SMB clients that are not capable of communicating over SMB 3.0 with encryption to fail. For more information, see Mount on Windows, Mount on Linux, and Mount on macOS.

For maximum security, we strongly recommend always enabling both encryption at-rest and encryption of data in-transit whenever you are using modern clients to access your data. The exception is when you must support older clients: for example, if you need to mount a share on a Windows Server 2008 R2 VM, which only supports SMB 2.1, you need to allow unencrypted traffic to your storage account, since SMB 2.1 does not support encryption.
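
If you decide to require secure transfer, one way to turn the setting on is through the storage management plane. The following sketch uses the azure-mgmt-storage Python SDK; the subscription, resource group, and account names are placeholders, and the exact client construction can vary with the SDK version you have installed.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    # Placeholder identifiers; substitute your own values.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "my-resource-group"
    ACCOUNT_NAME = "mystorageaccount"

    client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Reject unencrypted SMB and HTTP traffic by requiring secure transfer.
    client.storage_accounts.update(
        RESOURCE_GROUP,
        ACCOUNT_NAME,
        {"enable_https_traffic_only": True},
    )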

If you are using Azure File Sync to access your Azure file share, we will always use HTTPS and SMB 3.0 with encryption to sync your data to your Windows Servers, regardless of whether you require encryption of data at-rest.

File share performance tiers

Azure Files offers two performance tiers: standard and premium.

Standard file shares

Standard file shares are backed by hard disk drives (HDDs). They provide reliable performance for IO workloads that are less sensitive to performance variability, such as general-purpose file shares and dev/test environments. Standard file shares are only available in a pay-as-you-go billing model.

Important

If you want to use file shares larger than 5 TiB, see the Onboard to larger file shares (standard tier) section for steps to onboard, as well as regional availability and restrictions.

Premium file shares

Premium file shares are backed by solid-state drives (SSDs). Premium file shares provide consistent high performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive workloads. This makes them suitable for a wide variety of workloads like databases, website hosting, and development environments. Premium file shares are only available in a provisioned billing model. Premium file shares use a deployment model separate from standard file shares.

Azure Backup is available for premium file shares and Azure Kubernetes Service supports premium file shares in version 1.13 and above.

If you'd like to learn how to create a premium file share, see our article on the subject: How to create an Azure premium file storage account.

Currently, you cannot directly convert between a standard file share and a premium file share. If you would like to switch to either tier, you must create a new file share in that tier and manually copy the data from your original share to the new share you created. You can do this using any of the Azure Files supported copy tools, such as Robocopy or AzCopy.

Important

Premium file shares are available with LRS in most regions that offer storage accounts and with ZRS in a smaller subset of regions. To find out if premium file shares are currently available in your region, see the products available by region page for Azure. To find out what regions support ZRS, see Support coverage and regional availability.

To help us prioritize new regions and premium tier features, please fill out this survey.

Provisioned shares

Premium file shares are provisioned based on a fixed GiB/IOPS/throughput ratio. For each GiB provisioned, the share is issued one IOPS and 0.1 MiB/s of throughput, up to the maximum limits per share. The minimum allowed provisioned size is 100 GiB, with the corresponding minimum IOPS/throughput.

On a best effort basis, all shares can burst up to three IOPS per GiB of provisioned storage for 60 minutes or longer depending on the size of the share. New shares start with the full burst credit based on the provisioned capacity.

Shares must be provisioned in 1 GiB increments. The minimum size is 100 GiB, the next size is 101 GiB, and so on.

Tip

Baseline IOPS = 1 * provisioned GiB (up to a maximum of 100,000 IOPS).

Burst limit = 3 * baseline IOPS (up to a maximum of 100,000 IOPS).

Egress rate = 60 MiB/s + 0.06 * provisioned GiB

Ingress rate = 40 MiB/s + 0.04 * provisioned GiB
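
For quick capacity planning, these formulas translate directly into code. The sketch below simply evaluates them for a requested provisioned size; the 100,000 IOPS cap and the 100 GiB minimum come from the limits stated above.

    def premium_share_targets(provisioned_gib: int) -> dict:
        """Compute baseline IOPS, burst IOPS, and throughput for a premium file share."""
        if provisioned_gib < 100:
            raise ValueError("The minimum provisioned size is 100 GiB")
        baseline_iops = min(1 * provisioned_gib, 100_000)
        burst_iops = min(3 * baseline_iops, 100_000)
        return {
            "baseline_iops": baseline_iops,
            "burst_iops": burst_iops,
            "egress_mibps": 60 + 0.06 * provisioned_gib,
            "ingress_mibps": 40 + 0.04 * provisioned_gib,
        }

    print(premium_share_targets(1024))
    # -> baseline 1024 IOPS, burst 3072 IOPS, ~122 MiB/s egress, ~81 MiB/s ingress
    #    (matches the 1,024 GiB row in the table below)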

Provisioned share size is specified by share quota. Share quota can be increased at any time but can be decreased only after 24 hours since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as many times as you like, until you increase it again. IOPS/Throughput scale changes will be effective within a few minutes after the size change.

It is possible to decrease the size of your provisioned share below your used GiB. If you do this, you will not lose data, but you will still be billed for the size used and receive the performance (baseline IOPS, throughput, and burst IOPS) of the provisioned share, not that of the size used.
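
When you are ready to resize, the share quota can be updated programmatically. The following is a minimal sketch with the azure-storage-file-share SDK; the connection string and share name are placeholders, and the 24-hour rule above still applies before any decrease.

    from azure.storage.fileshare import ShareClient

    CONN_STR = "<connection-string>"  # placeholder

    share = ShareClient.from_connection_string(CONN_STR, share_name="myshare")
    share.set_share_quota(quota=2048)  # provision 2 TiB; IOPS/throughput scale within minutes

    print(share.get_share_properties().quota)  # 2048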

The following table illustrates a few examples of these formulae for the provisioned share sizes:

Capacity (GiB) | Baseline IOPS | Burst IOPS | Egress (MiB/s) | Ingress (MiB/s)
100 | 100 | Up to 300 | 66 | 44
500 | 500 | Up to 1,500 | 90 | 60
1,024 | 1,024 | Up to 3,072 | 122 | 81
5,120 | 5,120 | Up to 15,360 | 368 | 245
10,240 | 10,240 | Up to 30,720 | 675 | 450
33,792 | 33,792 | Up to 100,000 | 2,088 | 1,392
51,200 | 51,200 | Up to 100,000 | 3,132 | 2,088
102,400 | 100,000 | Up to 100,000 | 6,204 | 4,136

Note

File share performance is subject to machine network limits, available network bandwidth, IO sizes, parallelism, and many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a single Windows virtual machine (Standard F16s_v2) connected to a premium file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. To achieve maximum performance scale, spread the load across multiple VMs. Please refer to the troubleshooting guide for some common performance issues and workarounds.

Bursting

Premium file shares can burst their IOPS up to a factor of three. Bursting is automated and operates based on a credit system. Bursting works on a best effort basis: the burst limit is not a guarantee, and file shares can burst only up to the limit.

Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. For example, a 100 GiB share has 100 baseline IOPS. If actual traffic on the share is 40 IOPS for a specific 1-second interval, then the 60 unused IOPS are credited to the burst bucket. These credits are then used later, when operations would exceed the baseline IOPS.

Tip

Size of the burst bucket = Baseline IOPS * 2 * 3600.

Whenever a share exceeds the baseline IOPS and has credits in its burst bucket, it will burst. Shares can continue to burst as long as credits remain, though shares smaller than 50 TiB will only stay at the burst limit for up to an hour. Shares larger than 50 TiB can technically exceed this one-hour limit, up to two hours, but this depends on the number of burst credits accrued. Each IO beyond the baseline IOPS consumes one credit, and once all credits are consumed the share returns to its baseline IOPS.

Share credits have three states:

  • Accruing, when the file share is using less than the baseline IOPS.
  • Declining, when the file share is bursting.
  • Remaining constant, when the file share has no credits remaining or is operating at exactly its baseline IOPS.

New file shares start with the full number of credits in their burst bucket. Burst credits will not be accrued if the share IOPS fall below baseline IOPS due to throttling by the server.
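
The credit mechanics above can be captured in a simple per-second simulation. The sketch below is purely illustrative; the service's internal accounting may differ in detail.

    def simulate_bursting(provisioned_gib, demand_iops):
        """Per-second sketch of burst credit accrual and consumption for a premium share."""
        baseline = min(provisioned_gib, 100_000)
        burst_limit = min(3 * baseline, 100_000)
        bucket_size = credits = baseline * 2 * 3600  # new shares start with a full bucket
        served = []
        for demand in demand_iops:
            if demand <= baseline:
                # Accruing: unused baseline IOPS are credited, up to the bucket size.
                credits = min(bucket_size, credits + (baseline - demand))
                served.append(demand)
            else:
                # Declining: each IO beyond baseline consumes one credit, capped at the burst limit.
                extra = min(min(demand, burst_limit) - baseline, credits)
                credits -= extra
                served.append(baseline + extra)
        return served

    # A 100 GiB share idling at 40 IOPS for one second, then demanding 400 IOPS.
    print(simulate_bursting(100, [40, 400]))  # [40, 300] -- bursting is capped at 3x baseline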

File share redundancy

Azure Files standard shares support four data redundancy options: locally redundant storage (LRS), zone-redundant storage (ZRS), geo-redundant storage (GRS), and geo-zone-redundant storage (GZRS) (preview).

Azure Files premium shares support both LRS and ZRS; ZRS is currently available in a smaller subset of regions.
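
Redundancy is chosen when the storage account is created, through the account's SKU. The sketch below uses the azure-mgmt-storage Python SDK to show the pairing described in this article (general-purpose v2 accounts for standard shares, FileStorage accounts for premium shares); all names are placeholders, and the begin_create call shown assumes a recent SDK version.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Standard file shares live in a general-purpose v2 account; choose LRS/ZRS/GRS/GZRS here.
    client.storage_accounts.begin_create(
        "my-resource-group",
        "mystandardaccount",
        {"location": "westeurope", "kind": "StorageV2", "sku": {"name": "Standard_ZRS"}},
    ).result()

    # Premium file shares require a FileStorage account, with LRS (or ZRS where available).
    client.storage_accounts.begin_create(
        "my-resource-group",
        "mypremiumaccount",
        {"location": "westeurope", "kind": "FileStorage", "sku": {"name": "Premium_LRS"}},
    ).result()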

The following sections describe the differences between the different redundancy options:

Locally redundant storage

Locally redundant storage (LRS) replicates your data three times within a single data center. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year. LRS is the lowest-cost replication option and offers the least durability compared to other options.

If a datacenter-level disaster (for example, fire or flooding) occurs, all replicas in a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using zone-redundant storage (ZRS), geo-redundant storage (GRS), or geo-zone-redundant storage (GZRS).

A write request to an LRS storage account returns successfully only after the data is written to all three replicas.

You may wish to use LRS in the following scenarios:

  • If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.
  • Some applications are restricted to replicating data only within a country/region due to data governance requirements. In some cases, the paired regions across which the data is replicated for GRS accounts may be in another country/region. For more information on paired regions, see Azure regions.

Zone redundant storage

Zone-redundant storage (ZRS) replicates your data synchronously across three storage clusters in a single region. Each storage cluster is physically separated from the others and is located in its own availability zone (AZ). Each availability zone—and the ZRS cluster within it—is autonomous and includes separate utilities and networking features. A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.

When you store your data in a storage account using ZRS replication, you can continue to access and manage your data if an availability zone becomes unavailable. ZRS provides excellent performance and low latency. ZRS offers the same scalability targets as locally redundant storage (LRS).

Consider ZRS for scenarios that require consistency, durability, and high availability. Even if an outage or natural disaster renders an availability zone unavailable, ZRS offers durability for storage objects of at least 99.9999999999% (12 9's) over a given year.

Geo-zone-redundant storage (GZRS) (preview) replicates your data synchronously across three Azure availability zones in the primary region, then replicates the data asynchronously to the secondary region. GZRS provides high availability together with maximum durability. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year. For read access to data in the secondary region, enable read-access geo-zone-redundant storage (RA-GZRS). For more information about GZRS, see Geo-zone-redundant storage for high availability and maximum durability (preview).

For more information about availability zones, see Availability Zones overview.

Geo-redundant storage

Warning

If you are using your Azure file share as a cloud endpoint in a GRS storage account, you shouldn't initiate storage account failover. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. In the case of loss of an Azure region, Microsoft will trigger the storage account failover in a way that is compatible with Azure File Sync.

Geo-redundant storage (GRS) is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year by replicating your data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.

Azure Files does not currently support read-access geo-redundant storage (RA-GRS) in any region. If you opt for RA-GRS anyway, file shares in the RA-GRS storage account work like they would in GRS accounts and are charged GRS prices.

GRS replicates your data to another data center in a secondary region, but that data is available to be read only if Microsoft initiates a failover from the primary to secondary region.

For a storage account with GRS enabled, all data is first replicated with locally redundant storage (LRS). An update is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region using GRS. When data is written to the secondary location, it's also replicated within that location using LRS.

Both the primary and secondary regions manage replicas across separate fault domains and upgrade domains within a storage scale unit. The storage scale unit is the basic replication unit within the datacenter. Replication at this level is provided by LRS; for more information, see Locally redundant storage (LRS): Low-cost data redundancy for Azure Storage.

Keep these points in mind when deciding which replication option to use:

  • Geo-zone-redundant storage (GZRS) (preview) provides high availability together with maximum durability by replicating data synchronously across three Azure availability zones and then replicating data asynchronously to the secondary region. You can also enable read access to the secondary region. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year. For more information on GZRS, see Geo-zone-redundant storage for high availability and maximum durability (preview).
  • Zone-redundant storage (ZRS) provides high availability with synchronous replication and may be a better choice for some scenarios than GRS. For more information on ZRS, see ZRS.
  • Asynchronous replication involves a delay from the time that data is written to the primary region, to when it is replicated to the secondary region. In the event of a regional disaster, changes that haven't yet been replicated to the secondary region may be lost if that data can't be recovered from the primary region.
  • With GRS, the replica isn't available for read or write access unless Microsoft initiates a failover to the secondary region. In the case of a failover, you'll have read and write access to that data after the failover has completed. For more information, please see Disaster recovery guidance.

Onboard to larger file shares (standard tier)

This section only applies to standard file shares. All premium file shares are available with 100 TiB capacity.

Restrictions

  • LRS/ZRS to GRS/GZRS account conversion will not be possible for any storage account with large file shares enabled.

Regional availability

Standard file shares are available in all regions with sizes up to 5 TiB. In certain regions, they are available with a 100 TiB limit; those regions are listed in the following table:

Region Supported redundancy
Australia East LRS
Australia Southeast LRS
Canada Central LRS
Canada East LRS
Central India LRS
Central US* LRS
East Asia LRS
East US* LRS, ZRS
East US 2* LRS
France Central LRS, ZRS
France South LRS
Japan East LRS
North Central US LRS
North Europe LRS
South India LRS
Southeast Asia LRS, ZRS
UAE Central LRS
UK South LRS
UK West LRS
West Central US LRS
West Europe* LRS, ZRS
West US* LRS
West US 2 LRS, ZRS

* Supported for new accounts; not all existing accounts have completed the upgrade process. You can check whether your existing storage accounts have completed the upgrade process by attempting to Enable large file shares.

To help us prioritize new regions and features, please fill out this survey.

Enable and create larger file shares

To begin using larger file shares, see our article How to enable and create large file shares.

Data growth pattern

Today, the maximum size for an Azure file share is 100 TiB. Because of this current limitation, you must consider the expected data growth when deploying an Azure file share.

It is possible to sync multiple Azure file shares to a single Windows File Server with Azure File Sync. This allows you to ensure that older, large file shares that you may have on-premises can be brought into Azure File Sync. For more information, see Planning for an Azure File Sync Deployment.

Data transfer method

There are many easy options to bulk transfer data from an existing file share, such as an on-premises file share, into Azure Files. A few popular ones include (non-exhaustive list):

  • Azure File Sync: As part of a first sync between an Azure file share (a "Cloud Endpoint") and a Windows directory namespace (a "Server Endpoint"), Azure File Sync will replicate all data from the existing file share to Azure Files.
  • Azure Import/Export: The Azure Import/Export service allows you to securely transfer large amounts of data into an Azure file share by shipping hard disk drives to an Azure datacenter.
  • Robocopy: Robocopy is a well-known copy tool that ships with Windows and Windows Server. Robocopy may be used to transfer data into Azure Files by mounting the file share locally, and then using the mounted location as the destination in the Robocopy command (a small example of driving Robocopy this way appears after this list).
  • AzCopy: AzCopy is a command-line utility designed for copying data to and from Azure Files, as well as Azure Blob storage, using simple commands with optimal performance.
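
As an illustration of the Robocopy approach, the sketch below drives Robocopy from Python against the share's UNC path. The source folder, storage account, and share names are examples to adapt, and it assumes the share is reachable (mounted, or directly over port 445) from the machine running the copy.

    import subprocess

    # Hypothetical source folder and Azure file share UNC path; adjust to your environment.
    SOURCE = r"D:\departmental-share"
    DESTINATION = r"\\mystorageaccount.file.core.windows.net\myshare"

    # /MIR mirrors the directory tree, /Z copies in restartable mode, /MT:16 uses 16 threads.
    result = subprocess.run(
        ["robocopy", SOURCE, DESTINATION, "/MIR", "/Z", "/MT:16"],
        check=False,  # Robocopy exit codes 0-7 indicate success; 8 and above indicate failures
    )
    print("Robocopy exit code:", result.returncode)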

Next steps