Azure Files scalability and performance targets

Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard SMB protocol. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.

The scalability and performance targets listed here are high-end targets, but may be affected by other variables in your deployment. For example, the throughput for a file may also be limited by your available network bandwidth, not just the servers hosting the Azure Files service. We strongly recommend testing your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements. We are also committed to increasing these limits over time. Please don't hesitate to give us feedback, either in the comments below or on the Azure Files UserVoice, about which limits you would like to see us increase.

Azure storage account scale targets

The parent resource for an Azure file share is an Azure storage account. A storage account represents a pool of storage in Azure that can be used by multiple storage services, including Azure Files, to store data. Other services that store data in storage accounts are Azure Blob storage, Azure Queue storage, and Azure Table storage. The following targets apply to all storage services storing data in a storage account:

The following table describes default limits for Azure general-purpose v1, v2, Blob storage, and block blob storage accounts. The ingress limit refers to all data that is sent to a storage account. The egress limit refers to all data that is received from a storage account.


You can request higher capacity and ingress limits. To request an increase, contact Azure Support.

Resource Limit
Number of storage accounts per region per subscription, including standard and premium storage accounts 250
Maximum storage account capacity 5 PiB1
Maximum number of blob containers, blobs, file shares, tables, queues, entities, or messages per storage account No limit
Maximum request rate1 per storage account 20,000 requests per second
Maximum ingress1 per storage account (US, Europe regions) 10 Gbps
Maximum ingress1 per storage account (regions other than US and Europe) 5 Gbps if RA-GRS/GRS is enabled, 10 Gbps for LRS/ZRS2
Maximum egress for general-purpose v2 and Blob storage accounts (all regions) 50 Gbps
Maximum egress for general-purpose v1 storage accounts (US regions) 20 Gbps if RA-GRS/GRS is enabled, 30 Gbps for LRS/ZRS2
Maximum egress for general-purpose v1 storage accounts (non-US regions) 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS2
Maximum number of virtual network rules per storage account 200
Maximum number of IP address rules per storage account 200

1 Azure Storage standard accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact Azure Support.

2 If your storage account has read-access enabled with geo-redundant storage (RA-GRS) or geo-zone-redundant storage (RA-GZRS), then the egress targets for the secondary location are identical to those of the primary location. For more information, see Azure Storage replication.


Microsoft recommends that you use a general-purpose v2 storage account for most scenarios. You can easily upgrade a general-purpose v1 or an Azure Blob storage account to a general-purpose v2 account with no downtime and without the need to copy data. For more information, see Upgrade to a general-purpose v2 storage account.

All storage accounts run on a flat network topology regardless of when they were created. For more information on the Azure Storage flat network architecture and on scalability, see Microsoft Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency.

The following limits apply only when you perform management operations by using Azure Resource Manager with Azure Storage.

Resource Limit
Storage account management operations (read) 800 per 5 minutes
Storage account management operations (write) 10 per second / 1200 per hour
Storage account management operations (list) 100 per 5 minutes
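When these management-operation limits are exceeded, Azure Resource Manager throttles requests and responds with HTTP 429, typically with a Retry-After header. The sketch below shows one generic way a client might honor that header, falling back to exponential backoff; `call_with_backoff` and its `op` callback are hypothetical helpers for illustration (the azure-mgmt SDKs include their own retry policies).

```python
import time

def call_with_backoff(op, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a management operation while it is throttled (HTTP 429).

    `op` is a hypothetical callable returning (status_code, retry_after),
    where retry_after is the server-supplied delay in seconds or None.
    """
    for attempt in range(max_retries):
        status, retry_after = op()
        if status != 429:
            return status
        # Honor Retry-After when present; otherwise back off exponentially.
        sleep(retry_after if retry_after is not None else base_delay * 2 ** attempt)
    return status
```

Real deployments should prefer the SDK's built-in retry handling; this only illustrates the throttling contract implied by the per-5-minute and per-hour limits above.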


Utilization of a general-purpose storage account by other storage services affects the Azure file shares in that storage account. For example, if you reach the maximum storage account capacity with Azure Blob storage, you will not be able to create new files on your Azure file share, even if your Azure file share is below the maximum share size.

Azure Files scale targets

There are three categories of limitations to consider for Azure Files: storage accounts, shares, and files.

For example, with premium file shares, a single share can achieve 100,000 IOPS while a single file can scale up to 5,000 IOPS. So if you have three files in one share, the maximum IOPS you can get from that share is 15,000.
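The arithmetic in the example above can be sketched as a small helper; the per-file and per-share figures are taken from the example, not from a guaranteed API:

```python
def max_share_iops(file_count, per_file_iops=5_000, share_limit=100_000):
    # Each file contributes at most per_file_iops, and the share
    # itself cannot exceed its own share-level limit.
    return min(share_limit, file_count * per_file_iops)

max_share_iops(3)   # three files at 5,000 IOPS each -> 15,000
max_share_iops(25)  # capped by the 100,000 IOPS share-level limit
```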

Standard storage account limits

See the Azure storage account scale targets section for these limits.

Premium FileStorage account limits

Premium file shares use a unique storage account kind called FileStorage. This account kind is designed for workloads that need high IOPS and high throughput with consistent low latency. Premium file shares scale with the provisioned share size.

Area Target
Max provisioned size 100 TiB
Shares Unlimited
IOPS 100,000
Ingress 4,136 MiB/s
Egress 6,204 MiB/s


Storage account limits apply to all shares. Scaling up to the maximum for FileStorage accounts is achievable only if there is just one share per FileStorage account.

File share and file scale targets


Standard file shares larger than 5 TiB have certain limitations. For a list of limitations and instructions to enable larger file share sizes, see the enable larger file shares on standard file shares section of the planning guide.

Resource Standard file shares* Premium file shares
Minimum size of a file share No minimum; pay as you go 100 GiB; provisioned
Maximum size of a file share 100 TiB**, 5 TiB 100 TiB
Maximum size of a file in a file share 1 TiB 4 TiB
Maximum number of files in a file share No limit No limit
Maximum IOPS per share 10,000 IOPS**, 1,000 IOPS or 100 requests in 100ms 100,000 IOPS
Maximum number of stored access policies per file share 5 5
Target throughput for a single file share Up to 300 MiB/sec**, up to 60 MiB/sec See premium file share ingress and egress values
Maximum egress for a single file share See standard file share target throughput Up to 6,204 MiB/s
Maximum ingress for a single file share See standard file share target throughput Up to 4,136 MiB/s
Maximum open handles per file or directory 2,000 open handles 2,000 open handles
Maximum number of share snapshots 200 share snapshots 200 share snapshots
Maximum object (directories and files) name length 2,048 characters 2,048 characters
Maximum pathname component (in the path \A\B\C\D, each letter is a component) 255 characters 255 characters
Hard link limit (NFS only) N/A 178
Maximum number of SMB Multichannel channels N/A 4

* The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction optimized, hot, and cool.

** The default limit on standard file shares is 5 TiB; see Enable and create large file shares for details on how to increase the standard file share scale up to 100 TiB.

Additional premium file share level limits

Area Target
Minimum size increase/decrease 1 GiB
Baseline IOPS 400 + 1 IOPS per GiB, up to 100,000
IOPS bursting Max(4,000, 3x IOPS per GiB), up to 100,000
Egress rate 60 MiB/s + 0.06 * provisioned GiB
Ingress rate 40 MiB/s + 0.04 * provisioned GiB

File level limits

Area Standard file Premium file
Size 1 TiB 4 TiB
Max IOPS per file 1,000 Up to 8,000*
Concurrent handles 2,000 2,000
Egress See standard file throughput values 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel preview)**
Ingress See standard file throughput values 200 MiB/sec (Up to 1 GiB/s with SMB Multichannel preview)**
Throughput Up to 60 MiB/sec See premium file ingress/egress values

* Applies to read and write IOs (typically IO sizes of 64 KiB or smaller). Metadata operations, other than reads and writes, may be lower.

** Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see SMB Multichannel performance.

Azure File Sync scale targets

Azure File Sync has been designed with the goal of limitless usage, but limitless usage is not always possible. The following table indicates the boundaries of Microsoft's testing and also indicates which targets are hard limits:

Resource Target Hard limit
Storage Sync Services per region 100 Storage Sync Services Yes
Sync groups per Storage Sync Service 200 sync groups Yes
Registered servers per Storage Sync Service 99 servers Yes
Cloud endpoints per sync group 1 cloud endpoint Yes
Server endpoints per sync group 100 server endpoints Yes
Server endpoints per server 30 server endpoints Yes
File system objects (directories and files) per sync group 100 million objects No
Maximum number of file system objects (directories and files) in a directory 5 million objects Yes
Maximum object (directories and files) security descriptor size 64 KiB Yes
File size 100 GiB No
Minimum file size for a file to be tiered V9 and newer: based on the file system cluster size (double the cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. V8 and older: 64 KiB
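The minimum tiered file size rule above can be sketched as a small helper; the function name and signature are illustrative, not part of the Azure File Sync agent:

```python
def min_tiered_file_size(cluster_size_bytes, agent_major_version):
    # V9 and newer tie the minimum to the volume's cluster size
    # (double it); V8 and older use a fixed 64 KiB floor.
    if agent_major_version >= 9:
        return 2 * cluster_size_bytes
    return 64 * 1024

min_tiered_file_size(4 * 1024, 9)  # 8 KiB on a 4 KiB-cluster volume
min_tiered_file_size(4 * 1024, 8)  # 64 KiB on older agents
```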


An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate.

Azure File Sync performance metrics

Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works at the file level, the performance characteristics of an Azure File Sync-based solution are better measured in the number of objects (files and directories) processed per second.

For Azure File Sync, performance is critical in two stages:

  1. Initial one-time provisioning: To optimize performance on initial provisioning, refer to Onboarding with Azure File Sync for optimal deployment details.
  2. Ongoing sync: After the data is initially seeded in the Azure file shares, Azure File Sync keeps multiple endpoints in sync.

To help you plan your deployment for each of the stages, below are the results observed during internal testing on a system with the following configuration:

System configuration Details
CPU 64 Virtual Cores with 64 MiB L3 cache
Memory 128 GiB
Disk SAS disks with RAID 10 with battery backed cache
Network 1 Gbps Network
Workload General Purpose File Server
Initial one-time provisioning Details
Number of objects 25 million objects
Dataset Size ~4.7 TiB
Average File Size ~200 KiB (Largest File: 100 GiB)
Initial cloud change enumeration 20 objects per second
Upload Throughput 20 objects per second per sync group
Namespace Download Throughput 400 objects per second

Initial one-time provisioning

Initial cloud change enumeration: When a new sync group is created, initial cloud change enumeration is the first step that executes. In this process, the system enumerates all the items in the Azure file share. During this process, there is no sync activity: no items are downloaded from the cloud endpoint to the server endpoint, and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once initial cloud change enumeration completes. The rate is 20 objects per second. You can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formula to get the time in days.

Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(20 * 60 * 60 * 24)
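The formula above translates directly into code. As a quick check against the test dataset described earlier, the 25 million-object namespace works out to roughly 14.5 days of enumeration:

```python
OBJECTS_PER_SECOND = 20  # initial cloud change enumeration rate

def cloud_enumeration_days(object_count):
    # Time (in days) = objects / (20 * 60 * 60 * 24), per the formula above.
    return object_count / (OBJECTS_PER_SECOND * 60 * 60 * 24)

cloud_enumeration_days(25_000_000)  # the 25 million-object test dataset
```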

Namespace download throughput: When a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.

Ongoing sync Details
Number of objects synced 125,000 objects (~1% churn)
Dataset Size 50 GiB
Average File Size ~500 KiB
Upload Throughput 20 objects per second per sync group
Full Download Throughput* 60 objects per second

*If cloud tiering is enabled, you are likely to observe better performance as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the endpoints. For any tiered or newly created files, the agent does not download the file data, and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.


The numbers above are not an indication of the performance that you will experience. The actual performance will depend on multiple factors as outlined in the beginning of this section.

As a general guide for your deployment, you should keep a few things in mind:

  • The object throughput scales approximately in proportion to the number of sync groups on the server. Splitting data into multiple sync groups on a server yields better throughput, though it is still limited by the server and network.
  • The object throughput is inversely proportional to the MiB per second throughput. For smaller files, you will experience higher throughput in terms of the number of objects processed per second, but lower MiB per second throughput. Conversely, for larger files, you will get fewer objects processed per second, but higher MiB per second throughput. The MiB per second throughput is limited by the Azure Files scale targets.
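The trade-off in the last bullet can be sketched as follows. This is illustrative only: the 20 objects/sec and 60 MiB/sec defaults are assumptions drawn from the ongoing-sync table above, not guaranteed figures, and the real limits interact in more complex ways:

```python
def effective_upload_rate(avg_file_mib, object_rate=20.0, mib_rate=60.0):
    # Effective objects/sec when both an object-rate cap and a
    # MiB/s throughput cap apply: whichever cap bites first wins.
    return min(object_rate, mib_rate / avg_file_mib)

effective_upload_rate(0.2)   # small files: limited by the object rate
effective_upload_rate(30.0)  # large files: limited by the MiB/s cap
```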

See also