Storage considerations for Azure Functions

Azure Functions requires an Azure Storage account when you create a function app instance. The following storage services may be used by your function app:

  • Azure Blob Storage: Maintains bindings state and function keys. Also used by task hubs in Durable Functions.
  • Azure Files: File share used to store and run your function app code in Consumption and Premium plans. Azure Files is set up by default, but you can create an app without Azure Files under certain conditions.
  • Azure Queue Storage: Used by task hubs in Durable Functions.
  • Azure Table Storage: Used by task hubs in Durable Functions.

Important

When using a Consumption or Premium hosting plan, your function code and binding configuration files are stored in Azure Files in the main storage account. When you delete the main storage account, this content is deleted and cannot be recovered.

Storage account requirements

When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage.

To learn more about storage account types, see Storage account overview.

While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you are only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see Storage account location.
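
If you create the storage account yourself, a general-purpose v2 account satisfies these requirements. As a minimal sketch (the names, location, and SKU below are placeholder assumptions), you could create one with the Azure CLI:

az storage account create \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --location <region> \
    --kind StorageV2 \
    --sku Standard_LRS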

Storage account guidance

Every function app requires a storage account to operate. If that account is deleted, your function app won't run. To troubleshoot storage-related issues, see How to troubleshoot storage-related issues. The following additional considerations apply to the storage account used by function apps.

Storage account location

For best performance, your function app should use a storage account in the same region, which reduces latency. The Azure portal enforces this best practice. If, for some reason, you need to use a storage account in a region different than your function app, you must create your function app outside of the portal.
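
For example, one way to create a function app outside the portal is with the Azure CLI, which lets you pass the storage account explicitly; the names, location, and runtime below are placeholder assumptions for illustration:

az functionapp create \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --consumption-plan-location <function-app-region> \
    --storage-account <storage-account-name> \
    --os-type Linux \
    --runtime python \
    --functions-version 4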

Storage account connection setting

The storage account connection is maintained in the AzureWebJobsStorage application setting.

The storage account connection string must be updated when you regenerate storage keys. To learn more, see Manage storage account access keys.
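
As a sketch of one way to do this with the Azure CLI (the app and account names are placeholders), you can pull the refreshed connection string and write it back to the setting:

# Get the connection string that reflects the regenerated key (placeholder names).
connection_string=$(az storage account show-connection-string \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --query connectionString --output tsv)

# Update the function app's AzureWebJobsStorage setting with it.
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings AzureWebJobsStorage="$connection_string"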

Shared storage accounts

It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the Azure Storage Emulator. In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.

Lifecycle management policy considerations

Functions uses Blob storage to persist important information, such as function access keys. When you apply a lifecycle management policy to your Blob Storage account, the policy may remove blobs needed by the Functions host. Because of this, you shouldn't apply such policies to the storage account used by Functions. If you do need to apply such a policy, remember to exclude containers used by Functions, which are usually prefixed with azure-webjobs or scm.
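
As an illustrative check (not a required procedure), you can list the containers that carry these prefixes before authoring a policy; the account name is a placeholder, and the commands assume you're signed in with access to the account:

# Containers the Functions host typically owns start with azure-webjobs or scm.
az storage container list \
    --account-name <storage-account-name> \
    --prefix azure-webjobs \
    --query "[].name" --output tsv

az storage container list \
    --account-name <storage-account-name> \
    --prefix scm \
    --query "[].name" --output tsv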

Optimize storage performance

To maximize performance, use a separate storage account for each function app. This is particularly important when you have Durable Functions or Event Hubs-triggered functions, both of which generate a high volume of storage transactions. When your application logic interacts with Azure Storage, either directly (using the Storage SDK) or through one of the storage bindings, you should use a dedicated storage account. For example, if you have an Event Hubs-triggered function writing data to Blob storage, use two storage accounts: one for the function app and another for the blobs being stored by the function.
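
The following Azure CLI sketch shows that separation: a second account dedicated to application data, surfaced to the function app through its own connection setting. The setting name MyDataStorage is a hypothetical example; your bindings would reference whatever name you choose:

# Create a dedicated storage account for application data (placeholder names).
az storage account create \
    --name <data-storage-account> \
    --resource-group <resource-group> \
    --location <region> \
    --kind StorageV2 \
    --sku Standard_LRS

# Expose it to the function app through a separate connection setting,
# so bindings can target it instead of AzureWebJobsStorage.
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings MyDataStorage="<data-account-connection-string>"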

Storage data encryption

Azure Storage encrypts all data in a storage account at rest. For more information, see Azure Storage encryption for data at rest.

By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of blob and file data. These keys must be present in Azure Key Vault for Functions to be able to access the storage account. To learn more, see Encryption at rest using customer-managed keys.
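
As a rough sketch (assuming the storage account already has a managed identity that's been granted access to the key in Azure Key Vault), customer-managed keys can be configured with the Azure CLI; the vault and key names are placeholders:

az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group> \
    --encryption-key-source Microsoft.Keyvault \
    --encryption-key-vault https://<key-vault-name>.vault.azure.net \
    --encryption-key-name <key-name>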

In-region data residency

When all customer data must remain within a single region, the storage account associated with the function app must be one with in-region redundancy. An in-region redundant storage account must also be used with Azure Durable Functions.

Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see ASE zone redundancy.

Create an app without Azure Files

Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is deployed using an external package URL, the app content is served from a separate read-only file system, so Azure Files can be omitted if desired. In such cases, a writeable file system is provided, but it is not guaranteed to be shared with all function app instances.

When Azure Files isn't used, you must account for the following:

  • You must deploy from an external package URL.
  • Your app can't rely on a shared writeable file system.
  • The app can't use Functions runtime v1.
  • Log streaming experiences in clients such as the Azure portal default to file system logs. You should instead rely on Application Insights logs.

If the preceding considerations are properly accounted for, you may create the app without Azure Files. Create the function app without specifying the WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE application settings. You can do this by generating an ARM template for a standard deployment, removing these two settings, and then deploying the template.
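
As a sketch of that flow (file and resource names are placeholders), you could deploy the edited template and then confirm the two settings are absent:

# Deploy the template after removing the two Azure Files settings.
az deployment group create \
    --resource-group <resource-group> \
    --template-file <edited-template.json>

# Verify that neither setting exists on the new app (the query should return an empty list).
az functionapp config appsettings list \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --query "[?name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING' || name=='WEBSITE_CONTENTSHARE']"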

Because Functions uses Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Premium plans.

Mount file shares

This functionality is currently only available when running on Linux.

You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. You can use the az webapp config storage-account add command to mount an existing share to your Linux function app.
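
The following example shows the general shape of the command; the resource group, app name, storage account, and access key values are placeholders you supply:

az webapp config storage-account add \
    --resource-group <resource-group> \
    --name <function-app-name> \
    --custom-id <custom-id> \
    --storage-type AzureFiles \
    --share-name <share-name> \
    --account-name <storage-account-name> \
    --access-key <access-key> \
    --mount-path <mount-path>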

In this command, share-name is the name of the existing Azure Files share, and custom-id can be any string that uniquely defines the share when mounted to the function app. Also, mount-path is the path from which the share is accessed in your function app. mount-path must be in the format /dir-name, and it can't start with /home.

For a complete example, see the scripts in Create a Python function app and mount an Azure Files share.

Currently, only a storage-type of AzureFiles is supported. You can mount a maximum of five shares to a given function app. Mounting a file share may increase cold start time by at least 200-300 ms, or even more when the storage account is in a different region.

The mounted share is available to your function code at the mount-path specified. For example, when mount-path is /path/to/mount, you can access the target directory by file system APIs, as in the following Python example:

import os
...

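# List the files available at the mounted path.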
files_in_share = os.listdir("/path/to/mount")

Next steps

Learn more about Azure Functions hosting options.