Resource limits for single databases using the DTU-based purchasing model
This article provides the detailed resource limits for Azure SQL Database single databases using the DTU-based purchasing model.
For DTU-based purchasing model resource limits for elastic pools, see DTU-based resource limits - elastic pools. For vCore-based resource limits, see vCore-based resource limits - single databases and vCore-based resource limits - elastic pools. For more information regarding the different purchasing models, see Purchasing models and service tiers.
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see Manage file space in Azure SQL Database.
Single database: Storage sizes and compute sizes
The following tables show the resources available for a single database at each service tier and compute size. You can set the service tier, compute size, and storage amount for a single database using the Azure portal, Transact-SQL, PowerShell, the Azure CLI, or the REST API.
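For example, the service tier, compute size, and max size can all be set together with a single Transact-SQL statement. A minimal sketch, assuming an existing database named `db1` (the name and sizes are illustrative):

```sql
-- Move database db1 to the Standard tier at compute size S3
-- with a 250 GB max size.
ALTER DATABASE [db1]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
```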
Basic service tier
|Compute size|Basic|
|:---|:---|
|Max DTUs|5|
|Included storage (GB)|2|
|Max storage choices (GB)|2|
|Max in-memory OLTP storage (GB)|N/A|
|Max concurrent workers (requests)|30|
|Max concurrent sessions|300|
Standard service tier
|Compute size|S0|S1|S2|S3|
|:---|:---|:---|:---|:---|
|Max DTUs|10|20|50|100|
|Included storage (GB)|250|250|250|250|
|Max storage choices (GB)|250|250|250|250, 500, 750, 1024|
|Max in-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|
|Max concurrent workers (requests)|60|90|120|200|
|Max concurrent sessions|600|900|1200|2400|
Standard service tier (continued)
|Compute size|S4|S6|S7|S9|S12|
|:---|:---|:---|:---|:---|:---|
|Max DTUs|200|400|800|1600|3000|
|Included storage (GB)|250|250|250|250|250|
|Max storage choices (GB)|250, 500, 750, 1024|250, 500, 750, 1024|250, 500, 750, 1024|250, 500, 750, 1024|250, 500, 750, 1024|
|Max in-memory OLTP storage (GB)|N/A|N/A|N/A|N/A|N/A|
|Max concurrent workers (requests)|400|800|1600|3200|6000|
|Max concurrent sessions|4800|9600|19200|30000|30000|
Premium service tier
|Compute size|P1|P2|P4|P6|P11|P15|
|:---|:---|:---|:---|:---|:---|:---|
|Max DTUs|125|250|500|1000|1750|4000|
|Included storage (GB)|500|500|500|500|4096|4096|
|Max storage choices (GB)|500, 750, 1024|500, 750, 1024|500, 750, 1024|500, 750, 1024|4096|4096|
|Max in-memory OLTP storage (GB)|1|2|4|8|14|32|
|Max concurrent workers (requests)|200|400|800|1600|2400|6400|
|Max concurrent sessions|30000|30000|30000|30000|30000|30000|
More than 1 TB of storage in the Premium tier is currently available in all regions except the following: China East, China North, Germany Central, Germany Northeast, UK North, UK South, US DOD Central, US DOD East, US Government Central, and West Central US. In these regions, the storage max in the Premium tier is limited to 1 TB. See P11-P15 Current Limitations.
Single database: Change storage size
- The DTU price for a single database includes a certain amount of storage at no additional cost. Extra storage beyond the included amount can be provisioned for an additional cost up to the max size limit in increments of 250 GB up to 1 TB, and then in increments of 256 GB beyond 1 TB. For included storage amounts and max size limits, see Single database: Storage sizes and compute sizes.
- Extra storage for a single database can be provisioned by increasing its max size using the Azure portal, Transact-SQL, PowerShell, the Azure CLI, or the REST API.
- The price of extra storage for a single database is the extra storage amount multiplied by the extra storage unit price of the service tier. For details on the price of extra storage, see SQL Database pricing.
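The increments described above map directly onto the MAXSIZE argument of ALTER DATABASE. A hedged sketch, assuming a Standard S3 database named `db1`:

```sql
-- Provision extra storage for an S3 database by raising its max size
-- from the included 250 GB to 750 GB; the extra 500 GB is billed
-- at the Standard tier's extra storage unit price.
ALTER DATABASE [db1]
MODIFY (MAXSIZE = 750 GB);
```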
Single database: Change DTUs
After initially picking a service tier, compute size, and storage amount, you can scale a single database up or down dynamically based on actual experience using the Azure portal, Transact-SQL, PowerShell, the Azure CLI, or the REST API.
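As a sketch of such a dynamic scale-up in Transact-SQL (the database name `db1` is a placeholder):

```sql
-- Scale db1 to Premium P2 (250 DTUs). The operation is online;
-- connections are only dropped briefly at the final switch-over.
ALTER DATABASE [db1]
MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');
```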
The following video shows dynamically changing the service tier and compute size to increase available DTUs for a single database.
Changing the service tier and/or compute size of a database creates a replica of the original database at the new compute size, and then switches connections over to the replica. No data is lost during this process, but during the brief moment when the switch to the replica occurs, connections to the database are disabled, so some in-flight transactions may be rolled back. The length of time for the switch-over varies, but is less than 30 seconds 99% of the time. If there are large numbers of transactions in flight at the moment connections are disabled, the switch-over may take longer.
The duration of the entire scale-up process depends on both the size and the service tier of the database before and after the change. For example, a 250-GB database changing to, from, or within the Standard service tier should complete within six hours. For a database of the same size changing compute sizes within the Premium service tier, the scale-up should complete within three hours.
To monitor in-progress operations, see Manage operations using the SQL REST API, Manage operations using CLI, Monitor operations using T-SQL, and these two PowerShell commands: Get-AzureRmSqlDatabaseActivity and Stop-AzureRmSqlDatabaseActivity.
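For the T-SQL option, in-progress management operations can be observed through the sys.dm_operation_status dynamic management view. A minimal sketch, run in the master database:

```sql
-- List recent database management operations (resource_type 0 = database)
-- with their state and percent complete.
SELECT major_resource_id AS database_name,
       operation,
       state_desc,
       percent_complete,
       start_time
FROM sys.dm_operation_status
WHERE resource_type = 0
ORDER BY start_time DESC;
```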
- If you are upgrading to a higher service tier or compute size, the database max size does not increase unless you explicitly specify a larger size (maxsize).
- To downgrade a database, the database used space must be smaller than the maximum allowed size of the target service tier and compute size.
- When downgrading from Premium to the Standard tier, an extra storage cost applies if both (1) the max size of the database is supported in the target compute size, and (2) the max size exceeds the included storage amount of the target compute size. For example, if a P1 database with a max size of 500 GB is downsized to S3, then an extra storage cost applies since S3 supports a max size of 500 GB and its included storage amount is only 250 GB. So, the extra storage amount is 500 GB – 250 GB = 250 GB. For pricing of extra storage, see SQL Database pricing. If the actual amount of space used is less than the included storage amount, then this extra cost can be avoided by reducing the database max size to the included amount.
- When upgrading a database with geo-replication enabled, upgrade its secondary databases to the desired service tier and compute size before upgrading the primary database (general guidance for best performance). When upgrading to a different edition, upgrading the secondary database first is required.
- When downgrading a database with geo-replication enabled, downgrade its primary databases to the desired service tier and compute size before downgrading the secondary database (general guidance for best performance). When downgrading to a different edition, downgrading the primary database first is required.
- The restore service offerings are different for the various service tiers. If you are downgrading to the Basic tier, there is a lower backup retention period. See Azure SQL Database Backups.
- The new properties for the database are not applied until the changes are complete.
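Before a downgrade, the database's current used space can be checked with a query such as the following sketch, which sums used pages in the data files; the result must be below the max size allowed by the target service tier and compute size:

```sql
-- Approximate used space (GB) in the current database.
-- FILEPROPERTY returns used 8-KB pages, so pages * 8 / 1024 / 1024 = GB.
SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8 / 1024.0 / 1024.0
       AS used_space_gb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```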
Single database: Limitations of P11 and P15 when the maximum size is greater than 1 TB
The following considerations and limitations apply to P11 and P15 databases with a maximum size greater than 1 TB:
- If you choose a maximum size greater than 1 TB when creating a database (using a value of 4 TB or 4096 GB), the create command fails with an error if the database is provisioned in an unsupported region.
- For existing P11 and P15 databases located in one of the supported regions, you can increase the maximum storage to beyond 1 TB in increments of 256 GB up to 4 TB. To see if a larger size is supported in your region, use the DATABASEPROPERTYEX function or inspect the database size in the Azure portal. Upgrading an existing P11 or P15 database can only be performed by a server-level principal login or by members of the dbmanager database role.
- If an upgrade operation is executed in a supported region, the configuration is updated immediately. The database remains online during the upgrade process. However, you cannot utilize the full amount of storage beyond 1 TB until the actual database files have been upgraded to the new maximum size. The length of time required depends on the size of the database being upgraded.
- When creating or updating a P11 or P15 database, you can only choose a maximum size between 1 TB and 4 TB in increments of 256 GB. When creating a P11/P15, the default storage option of 1 TB is pre-selected. For databases located in one of the supported regions, you can increase the maximum storage up to 4 TB for a new or existing single database. For all other regions, the maximum size cannot be increased above 1 TB. The price does not change when you select 4 TB of included storage.
- If the maximum size of a database is set to greater than 1 TB, then it cannot be changed back to 1 TB even if the actual storage used is below 1 TB. Thus, you cannot downgrade a P11 or P15 with a maximum size larger than 1 TB to a 1-TB P11 or P15, or to a lower compute size (such as P1-P6). This restriction also applies to the restore and copy scenarios, including point-in-time restore, geo-restore, long-term backup retention, and database copy. Once a database is configured with a maximum size greater than 1 TB, all restore operations of this database must target a P11/P15 with a maximum size greater than 1 TB.
- For active geo-replication scenarios:
- Setting up a geo-replication relationship: If the primary database is P11 or P15, the secondary(ies) must also be P11 or P15; lower compute sizes are rejected as secondaries since they are not capable of supporting more than 1 TB.
- Upgrading the primary database in a geo-replication relationship: Changing the maximum size to more than 1 TB on a primary database triggers the same change on the secondary database. Both upgrades must be successful for the change on the primary to take effect. Region limitations for the more than 1-TB option apply. If the secondary is in a region that does not support more than 1 TB, the primary is not upgraded.
- Using the Import/Export service for loading P11/P15 databases with more than 1 TB is not supported. Use SqlPackage.exe to import and export data.
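The current maximum size of a database can be checked with DATABASEPROPERTYEX, and in a supported region a database larger than 1 TB can be created directly. A hedged sketch (database names `db1` and `db2` are illustrative):

```sql
-- Check the current max size, in bytes, of database db1.
SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS max_size_bytes;

-- In a supported region, create a 4-TB P15 database directly.
CREATE DATABASE [db2]
(EDITION = 'Premium', SERVICE_OBJECTIVE = 'P15', MAXSIZE = 4096 GB);
```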
Next steps
- For vCore resource limits for a single database, see resource limits for single databases using the vCore-based purchasing model.
- For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore-based purchasing model.
- For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU-based purchasing model.
- For resource limits for managed instances, see managed instance resource limits.
- For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
- For information about limits at the server and subscription levels, see overview of resource limits on a SQL Database server.