BlobServiceClient Class
A client to interact with the Blob Service at the account level.
This client provides operations to retrieve and configure the account properties as well as list, create and delete containers within the account. For operations relating to a specific container or blob, clients for those entities can also be retrieved using the get_client functions.
- Inheritance
- azure.storage.blob._shared.base_client_async.AsyncStorageAccountHostsMixin
- azure.storage.blob._blob_service_client.BlobServiceClient
- azure.storage.blob._encryption.StorageEncryptionMixin
Constructor
BlobServiceClient(account_url: str, credential: Optional[Any] = None, **kwargs: Any)
Parameters
- account_url
- str
The URL to the blob storage account. Any other entities included in the URL path (e.g. container or blob) will be discarded. This URL can be optionally authenticated with a SAS token.
- credential
The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an instance of AzureSasCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. If the resource URI already contains a SAS token, an explicit credential will take precedence, except in the case of AzureSasCredential, where conflicting SAS tokens will raise a ValueError.
- api_version
- str
The Storage API version to use for requests. Default value is the most recent service version that is compatible with the current SDK. Setting to an older version may result in reduced feature compatibility.
New in version 12.2.0.
- secondary_hostname
- str
The hostname of the secondary endpoint.
- max_block_size
- int
The maximum chunk size for uploading a block blob in chunks.
Defaults to 4*1024*1024, or 4MB.
- max_single_put_size
- int
If the blob size is less than or equal to max_single_put_size, the blob will be
uploaded with a single HTTP PUT request. If the blob size is larger than max_single_put_size,
the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.
- min_large_block_upload_threshold
- int
The minimum chunk size required to use the memory efficient
algorithm when uploading a block blob. Defaults to 4*1024*1024+1.
- use_byte_buffer
- bool
Use a byte buffer for block blob uploads. Defaults to False.
- max_page_size
- int
The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.
- max_single_get_size
- int
The maximum size for a blob to be downloaded in a single call; blobs larger than
this are downloaded in chunks (possibly in parallel). Defaults to 32*1024*1024, or 32MB.
- max_chunk_get_size
- int
The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024,
or 4MB.
Examples
Creating the BlobServiceClient with account url and credential.
from azure.storage.blob.aio import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.url, credential=self.shared_access_key)
Creating the BlobServiceClient with Azure Identity credentials.
# Get a token credential for authentication
from azure.identity.aio import ClientSecretCredential
token_credential = ClientSecretCredential(self.active_directory_tenant_id, self.active_directory_application_id, self.active_directory_application_secret)
# Instantiate a BlobServiceClient using a token credential
from azure.storage.blob.aio import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.oauth_url, credential=token_credential)
Methods
| create_container |
Creates a new container under the specified account. If the container with the same name already exists, a ResourceExistsError will be raised. This method returns a client with which to interact with the newly created container. |
| delete_container |
Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection. If the container is not found, a ResourceNotFoundError will be raised. |
| find_blobs_by_tags |
The Filter Blobs operation enables callers to list blobs across all containers whose tags match a given search expression. Filter blobs searches across all containers within a storage account but can be scoped within the expression to a single container. |
| get_account_information |
Gets information related to the storage account. The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'. |
| get_blob_client |
Get a client to interact with the specified blob. The blob need not already exist. |
| get_container_client |
Get a client to interact with the specified container. The container need not already exist. |
| get_service_properties |
Gets the properties of a storage account's Blob service, including Azure Storage Analytics. |
| get_service_stats |
Retrieves statistics related to replication for the Blob service. It is only available when read-access geo-redundant replication is enabled for the storage account. With geo-redundant replication, Azure Storage keeps your data durable in two locations. In both locations, Azure Storage constantly maintains multiple healthy replicas of your data. The location where you read, create, update, or delete data is the primary storage account location. The primary location exists in the region you choose at the time you create an account via the Azure portal, for example, North Central US. The location to which your data is replicated is the secondary location. The secondary location is automatically determined based on the location of the primary; it is in a second data center that resides in the same region as the primary location. Read-only access is available from the secondary location if read-access geo-redundant replication is enabled for your storage account. |
| get_user_delegation_key |
Obtain a user delegation key for the purpose of signing SAS tokens. A token credential must be present on the service object for this request to succeed. |
| list_containers |
Returns a generator to list the containers under the specified account. The generator will lazily follow the continuation tokens returned by the service and stop when all containers have been returned. |
| set_service_properties |
Sets the properties of a storage account's Blob service, including Azure Storage Analytics. If an element (e.g. analytics_logging) is left as None, the existing settings on the service for that functionality are preserved. |
| undelete_container |
Restores soft-deleted container. Operation will only be successful if used within the specified number of days set in the delete retention policy. New in version 12.4.0: This operation was introduced in API version '2019-12-12'. |
create_container
Creates a new container under the specified account.
If the container with the same name already exists, a ResourceExistsError will be raised. This method returns a client with which to interact with the newly created container.
async create_container(name: str, metadata: Optional[Dict[str, str]] = None, public_access: Optional[Union[PublicAccess, str]] = None, **kwargs) -> ContainerClient
Parameters
- name
- str
The name of the new container.
- metadata
- dict(str, str)
A dict with name-value pairs to associate with the container as metadata. Example: {'Category':'test'}
- public_access
- PublicAccess or str
Possible values include: 'container', 'blob'.
- container_encryption_scope
- dict or ContainerEncryptionScope
Specifies the default encryption scope to set on the container and use for all future writes.
New in version 12.2.0.
- timeout
- int
The timeout parameter is expressed in seconds.
Return type
ContainerClient
Examples
Creating a container in the blob service.
try:
    new_container = await blob_service_client.create_container("containerfromblobserviceasync")
    properties = await new_container.get_container_properties()
except ResourceExistsError:
    print("Container already exists.")
delete_container
Marks the specified container for deletion.
The container and any blobs contained within it are later deleted during garbage collection. If the container is not found, a ResourceNotFoundError will be raised.
async delete_container(container: Union[ContainerProperties, str], lease: Optional[Union[BlobLeaseClient, str]] = None, **kwargs) -> None
Parameters
- container
- str or ContainerProperties
The container to delete. This can either be the name of the container, or an instance of ContainerProperties.
- lease
- BlobLeaseClient or str
If specified, delete_container only succeeds if the container's lease is active and matches this ID. Required if the container has an active lease.
- if_modified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.
- if_unmodified_since
- datetime
A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.
- etag
- str
An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.
- match_condition
- MatchConditions
The match condition to use upon the etag.
- timeout
- int
The timeout parameter is expressed in seconds.
Return type
None
Examples
Deleting a container in the blob service.
# Delete container if it exists
try:
    await blob_service_client.delete_container("containerfromblobserviceasync")
except ResourceNotFoundError:
    print("Container already deleted.")
find_blobs_by_tags
The Filter Blobs operation enables callers to list blobs across all containers whose tags match a given search expression. Filter blobs searches across all containers within a storage account but can be scoped within the expression to a single container.
find_blobs_by_tags(filter_expression: str, **kwargs: Any) -> AsyncItemPaged[FilteredBlob]
Parameters
- filter_expression
- str
The expression to find blobs whose tags match the specified condition. For example: "yourtagname"='firsttag' and "yourtagname2"='secondtag'. To scope the search to a single container, include it in the expression: @container='containerName' and "Name"='C'.
- results_per_page
- int
The maximum number of results per page when paginating.
- timeout
- int
The timeout parameter is expressed in seconds.
Returns
An iterable (auto-paging) response of FilteredBlob.
Return type
AsyncItemPaged[FilteredBlob]
get_account_information
Gets information related to the storage account.
The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'.
async get_account_information(**kwargs: Any) -> Dict[str, str]
Returns
A dict of account information (SKU and account type).
Return type
dict
Examples
Getting account information for the blob service.
account_info = await blob_service_client.get_account_information()
print('Using Storage SKU: {}'.format(account_info['sku_name']))
get_blob_client
Get a client to interact with the specified blob.
The blob need not already exist.
get_blob_client(container: Union[ContainerProperties, str], blob: Union[BlobProperties, str], snapshot: Optional[Union[Dict[str, Any], str]] = None) -> BlobClient
Parameters
- container
- str or ContainerProperties
The container that the blob is in. This can either be the name of the container, or an instance of ContainerProperties.
- blob
- str or BlobProperties
The blob with which to interact. This can either be the name of the blob, or an instance of BlobProperties.
- snapshot
- str or dict(str, Any)
The optional blob snapshot on which to operate. This can either be the ID of the snapshot, or a dictionary output returned by create_snapshot.
Returns
A BlobClient.
Return type
BlobClient
Examples
Getting the blob client to interact with a specific blob.
blob_client = blob_service_client.get_blob_client(container="containertestasync", blob="my_blob")
try:
    stream = await blob_client.download_blob()
except ResourceNotFoundError:
    print("No blob found.")
get_container_client
Get a client to interact with the specified container.
The container need not already exist.
get_container_client(container: Union[ContainerProperties, str]) -> ContainerClient
Parameters
- container
- str or ContainerProperties
The container. This can either be the name of the container, or an instance of ContainerProperties.
Returns
A ContainerClient.
Return type
ContainerClient
Examples
Getting the container client to interact with a specific container.
# Get a client to interact with a specific container - though it may not yet exist
container_client = blob_service_client.get_container_client("containertestasync")
try:
    blobs_list = []
    async for blob in container_client.list_blobs():
        blobs_list.append(blob)
    for blob in blobs_list:
        print("Found blob: ", blob.name)
except ResourceNotFoundError:
    print("Container not found.")
get_service_properties
Gets the properties of a storage account's Blob service, including Azure Storage Analytics.
async get_service_properties(**kwargs: Any) -> Dict[str, Any]
Parameters
- timeout
- int
The timeout parameter is expressed in seconds.
Returns
An object containing blob service properties such as analytics logging, hour/minute metrics, cors rules, etc.
Return type
Dict[str, Any]
Examples
Getting service properties for the blob service.
properties = await blob_service_client.get_service_properties()
get_service_stats
Retrieves statistics related to replication for the Blob service.
It is only available when read-access geo-redundant replication is enabled for the storage account.
With geo-redundant replication, Azure Storage keeps your data durable in two locations. In both locations, Azure Storage constantly maintains multiple healthy replicas of your data. The location where you read, create, update, or delete data is the primary storage account location. The primary location exists in the region you choose at the time you create an account via the Azure portal, for example, North Central US. The location to which your data is replicated is the secondary location. The secondary location is automatically determined based on the location of the primary; it is in a second data center that resides in the same region as the primary location. Read-only access is available from the secondary location if read-access geo-redundant replication is enabled for your storage account.
async get_service_stats(**kwargs: Any) -> Dict[str, Any]
Parameters
- timeout
- int
The timeout parameter is expressed in seconds.
Returns
The blob service stats.
Return type
Dict[str, Any]
Examples
Getting service stats for the blob service.
stats = await blob_service_client.get_service_stats()
get_user_delegation_key
Obtain a user delegation key for the purpose of signing SAS tokens. A token credential must be present on the service object for this request to succeed.
async get_user_delegation_key(key_start_time: datetime, key_expiry_time: datetime, **kwargs: Any) -> UserDelegationKey
Parameters
- key_start_time
- datetime
A DateTime value. Indicates when the key becomes valid.
- key_expiry_time
- datetime
A DateTime value. Indicates when the key stops being valid.
- timeout
- int
The timeout parameter is expressed in seconds.
Returns
The user delegation key.
Return type
UserDelegationKey
list_containers
Returns a generator to list the containers under the specified account.
The generator will lazily follow the continuation tokens returned by the service and stop when all containers have been returned.
list_containers(name_starts_with: Optional[str] = None, include_metadata: Optional[bool] = False, **kwargs) -> AsyncItemPaged[ContainerProperties]
Parameters
- name_starts_with
- str
Filters the results to return only containers whose names begin with the specified prefix.
- include_metadata
- bool
Specifies that container metadata be returned in the response. The default value is False.
- include_deleted
- bool
Specifies that deleted containers be returned in the response. This is for accounts with container soft delete enabled. The default value is False. New in version 12.4.0.
- include_system
- bool
A flag specifying that system containers should be included. New in version 12.10.0.
- results_per_page
- int
The maximum number of container names to retrieve per API call. If the request does not specify a value, the server will return up to 5,000 items.
- timeout
- int
The timeout parameter is expressed in seconds.
Returns
An iterable (auto-paging) of ContainerProperties.
Return type
AsyncItemPaged[ContainerProperties]
Examples
Listing the containers in the blob service.
# List all containers
all_containers = []
async for container in blob_service_client.list_containers(include_metadata=True):
    all_containers.append(container)
for container in all_containers:
    print(container['name'], container['metadata'])

# Filter results with name prefix
test_containers = []
async for container in blob_service_client.list_containers(name_starts_with='test-'):
    test_containers.append(container)
for container in test_containers:
    print(container['name'])
set_service_properties
Sets the properties of a storage account's Blob service, including Azure Storage Analytics.
If an element (e.g. analytics_logging) is left as None, the existing settings on the service for that functionality are preserved.
async set_service_properties(analytics_logging: Optional[BlobAnalyticsLogging] = None, hour_metrics: Optional[Metrics] = None, minute_metrics: Optional[Metrics] = None, cors: Optional[List[CorsRule]] = None, target_version: Optional[str] = None, delete_retention_policy: Optional[RetentionPolicy] = None, static_website: Optional[StaticWebsite] = None, **kwargs) -> None
Parameters
- analytics_logging
- BlobAnalyticsLogging
Groups the Azure Analytics Logging settings.
- hour_metrics
- Metrics
The hour metrics settings provide a summary of request statistics grouped by API in hourly aggregates for blobs.
- minute_metrics
- Metrics
The minute metrics settings provide request statistics for each minute for blobs.
- cors
- list[CorsRule]
You can include up to five CorsRule elements in the list. If an empty list is specified, all CORS rules will be deleted, and CORS will be disabled for the service.
- target_version
- str
Indicates the default version to use for requests if an incoming request's version is not specified.
- delete_retention_policy
- RetentionPolicy
The delete retention policy specifies whether to retain deleted blobs. It also specifies the number of days and versions of blob to keep.
- static_website
- StaticWebsite
Specifies whether the static website feature is enabled, and if yes, indicates the index document and 404 error document to use.
- timeout
- int
The timeout parameter is expressed in seconds.
Return type
None
Examples
Setting service properties for the blob service.
# Create service properties
from azure.storage.blob import BlobAnalyticsLogging, Metrics, CorsRule, RetentionPolicy
# Create logging settings
logging = BlobAnalyticsLogging(read=True, write=True, delete=True, retention_policy=RetentionPolicy(enabled=True, days=5))
# Create metrics for requests statistics
hour_metrics = Metrics(enabled=True, include_apis=True, retention_policy=RetentionPolicy(enabled=True, days=5))
minute_metrics = Metrics(enabled=True, include_apis=True, retention_policy=RetentionPolicy(enabled=True, days=5))
# Create CORS rules
cors_rule = CorsRule(['www.xyz.com'], ['GET'])
cors = [cors_rule]
# Set the service properties
await blob_service_client.set_service_properties(logging, hour_metrics, minute_metrics, cors)
undelete_container
Restores soft-deleted container.
Operation will only be successful if used within the specified number of days set in the delete retention policy.
New in version 12.4.0: This operation was introduced in API version '2019-12-12'.
async undelete_container(deleted_container_name: str, deleted_container_version: str, **kwargs: Any) -> ContainerClient
Parameters
- deleted_container_name
- str
Specifies the name of the deleted container to restore.
- deleted_container_version
- str
Specifies the version of the deleted container to restore.
- timeout
- int
The timeout parameter is expressed in seconds.
Return type
ContainerClient