ContainerClient Class

A client to interact with a specific container, although that container may not yet exist.

For operations relating to a specific blob within this container, a blob client can be retrieved using the get_blob_client function.

Inheritance
azure.storage.blob._shared.base_client.StorageAccountHostsMixin
ContainerClient

Constructor

ContainerClient(account_url, container_name, credential=None, **kwargs)

Parameters

account_url
str
Required

The URI to the storage account. In order to create a client given the full URI to the container, use the from_container_url classmethod.

container_name
str
Required

The name of the container for the blob.

credential
Optional

The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an instance of an AzureSasCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the resource URI already contains a SAS token, it will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError.

api_version
str
Optional

The Storage API version to use for requests. Default value is '2019-07-07'. Setting to an older version may result in reduced feature compatibility.

New in version 12.2.0.

secondary_hostname
str
Optional

The hostname of the secondary endpoint.

max_block_size
int
Optional

The maximum chunk size for uploading a block blob in chunks. Defaults to 4*1024*1024, or 4MB.

max_single_put_size
int
Optional

If the blob size is less than or equal to max_single_put_size, the blob will be uploaded with a single HTTP PUT request. If the blob size is larger than max_single_put_size, the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.

min_large_block_upload_threshold
int
Optional

The minimum chunk size required to use the memory efficient algorithm when uploading a block blob. Defaults to 4*1024*1024+1.

use_byte_buffer
bool
Optional

Use a byte buffer for block blob uploads. Defaults to False.

max_page_size
int
Optional

The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.

max_single_get_size
int
Optional

The maximum size for a blob to be downloaded in a single call; if a blob exceeds this size, the remainder will be downloaded in chunks (potentially in parallel). Defaults to 32*1024*1024, or 32MB.

max_chunk_get_size
int
Optional

The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024, or 4MB.

Examples

Get a ContainerClient from an existing BlobServiceClient.


   # Instantiate a BlobServiceClient using a connection string
   from azure.storage.blob import BlobServiceClient
   blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)

   # Instantiate a ContainerClient
   container_client = blob_service_client.get_container_client("mynewcontainer")

Creating the container client directly.


   from azure.storage.blob import ContainerClient

   sas_url = "https://account.blob.core.windows.net/mycontainer?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
   container = ContainerClient.from_container_url(sas_url)

Methods

acquire_lease

Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.

create_container

Creates a new container under the specified account. If a container with the same name already exists, the operation fails.

delete_blob

Marks the specified blob or snapshot for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blob or snapshot and retains it for the specified number of days. After that period, the blob's data is removed from the service during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs by specifying the include=["deleted"] option, and can be restored using BlobClient.undelete.

delete_blobs

Marks the specified blobs or snapshots for deletion.

The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots. You can delete both at the same time with the delete_blobs operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blobs or snapshots and retains them for the specified number of days. After that period, the blobs' data is removed from the service during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs by specifying the include=["deleted"] option, and can be restored using BlobClient.undelete.

delete_container

Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

download_blob

Downloads a blob to a StorageStreamDownloader. Use readall() to read all the content, or readinto() to download the blob into a stream. chunks() returns an iterator, which allows the user to iterate over the content in chunks.

exists

Returns True if the container exists, and False otherwise.

from_connection_string

Create ContainerClient from a Connection String.

from_container_url

Create ContainerClient from a container URL.

get_account_information

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'.

get_blob_client

Get a client to interact with the specified blob.

The blob need not already exist.

get_container_access_policy

Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.

get_container_properties

Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container's list of blobs.

list_blobs

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.

set_container_access_policy

Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.

set_container_metadata

Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.

set_premium_page_blob_tier_blobs

Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts.

set_standard_blob_tier_blobs

This operation sets the tier on block blobs.

A block blob's tier determines Hot/Cool/Archive storage type. This operation does not update the blob's ETag.

upload_blob

Creates a new blob from a data source with automatic chunking.

walk_blobs

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.

acquire_lease

Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.

acquire_lease(lease_duration=-1, lease_id=None, **kwargs)

Parameters

lease_duration
int
Optional

Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).

lease_id
str
Optional

Proposed lease ID, in a GUID string format. The Blob service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

match_condition
MatchConditions

The match condition to use upon the etag.

timeout
int

The timeout parameter is expressed in seconds.

Returns

A BlobLeaseClient object that can be used in a context manager.

Return type

Examples

Acquiring a lease on the container.


   # Acquire a lease on the container
   lease = container_client.acquire_lease()

   # Delete container by passing in the lease
   container_client.delete_container(lease=lease)

create_container

Creates a new container under the specified account. If a container with the same name already exists, the operation fails.

create_container(metadata=None, public_access=None, **kwargs)

Parameters

metadata
dict[str, str]
Optional

A dict with name-value pairs to associate with the container as metadata. Example: {'Category': 'test'}

public_access
PublicAccess
Optional

Possible values include: 'container', 'blob'.

container_encryption_scope
dict or ContainerEncryptionScope

Specifies the default encryption scope to set on the container and use for all future writes.

New in version 12.2.0.

timeout
int

The timeout parameter is expressed in seconds.

Return type

Examples

Creating a container to store blobs.


   container_client.create_container()

delete_blob

Marks the specified blob or snapshot for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blob or snapshot and retains it for the specified number of days. After that period, the blob's data is removed from the service during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs by specifying the include=["deleted"] option, and can be restored using BlobClient.undelete.

delete_blob(blob, delete_snapshots=None, **kwargs)

Parameters

blob
str or BlobProperties
Required

The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

delete_snapshots
str
Required

Required if the blob has associated snapshots. Values include:

  • "only": Deletes only the blob's snapshots.

  • "include": Deletes the blob along with all snapshots.

version_id
str

The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to delete.

New in version 12.4.0.

This keyword argument was introduced in API version '2019-12-12'.

lease
BlobLeaseClient or str

Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

match_condition
MatchConditions

The match condition to use upon the etag.

if_tags_match_condition
str

Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".

New in version 12.4.0.

timeout
int

The timeout parameter is expressed in seconds.

Return type

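A minimal sketch of the call, assuming an existing container_client; the blob name is a placeholder. Passing delete_snapshots="include" removes the blob together with all of its snapshots:

```python
# Sketch only: container_client is assumed to exist; "my_blob" is a placeholder name.
def delete_blob_and_snapshots(container_client, blob_name):
    # "include" deletes the blob along with all of its snapshots.
    container_client.delete_blob(blob_name, delete_snapshots="include")
```
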
delete_blobs

Marks the specified blobs or snapshots for deletion.

The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots. You can delete both at the same time with the delete_blobs operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blobs or snapshots and retains them for the specified number of days. After that period, the blobs' data is removed from the service during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs by specifying the include=["deleted"] option, and can be restored using BlobClient.undelete.

delete_blobs(*blobs, **kwargs)

Parameters

blobs
list[str], list[dict], or list[BlobProperties]
Required

The blobs to delete. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

Note

When a blob is specified as a dict, the following keys are supported:

  • 'name' (str): the blob name.

  • 'snapshot' (str): the snapshot to delete.

  • 'delete_snapshots' ('include' or 'only'): whether to delete snapshots along with the blob.

  • 'if_modified_since' / 'if_unmodified_since' (datetime): conditions on the blob's last-modified time.

  • 'etag' (str): an ETag value to check against.

  • 'match_condition' (MatchConditions): the match condition to use upon the etag.

  • 'if_tags_match_condition' (str): a SQL where clause on blob tags.

  • 'lease_id' (str or LeaseClient): an active lease ID.

  • 'timeout' (int): the timeout for the subrequest, in seconds.

delete_snapshots
str

Required if a blob has associated snapshots. Values include:

  • "only": Deletes only the blob's snapshots.

  • "include": Deletes the blob along with all snapshots.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

if_tags_match_condition
str

Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".

New in version 12.4.0.

raise_on_any_failure
bool

A boolean that defaults to True. When True, an exception is raised if any single operation fails.

timeout
int

The timeout parameter is expressed in seconds.

Returns

An iterator of responses, one for each blob, in order.

Return type

Iterator[HttpResponse]

Examples

Deleting multiple blobs.


   # Delete multiple blobs in the container by name
   container_client.delete_blobs("my_blob1", "my_blob2")

   # Delete multiple blobs by properties iterator
   my_blobs = container_client.list_blobs(name_starts_with="my_blob")
   container_client.delete_blobs(*my_blobs)
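
The per-blob dict form described in the parameters above can be sketched as follows; blob names and key values are illustrative:

```python
# Sketch: builds the per-blob dict form accepted by delete_blobs.
# Blob names, the snapshot policy, and the timeout are placeholders.
def build_delete_entries(blob_names, delete_snapshots="include", timeout=30):
    return [
        {"name": name, "delete_snapshots": delete_snapshots, "timeout": timeout}
        for name in blob_names
    ]

# container_client.delete_blobs(*build_delete_entries(["my_blob1", "my_blob2"]))
```
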

delete_container

Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

delete_container(**kwargs)

Parameters

lease
BlobLeaseClient or str

If specified, delete_container only succeeds if the container's lease is active and matches this ID. Required if the container has an active lease.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

match_condition
MatchConditions

The match condition to use upon the etag.

timeout
int

The timeout parameter is expressed in seconds.

Return type

Examples

Delete a container.


   container_client.delete_container()

download_blob

Downloads a blob to a StorageStreamDownloader. Use readall() to read all the content, or readinto() to download the blob into a stream. chunks() returns an iterator, which allows the user to iterate over the content in chunks.

download_blob(blob, offset=None, length=None, **kwargs)

Parameters

blob
str or BlobProperties
Required

The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

offset
int
Optional

Start of byte range to use for downloading a section of the blob. Must be set if length is provided.

length
int
Optional

Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

validate_content
bool

If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

lease
BlobLeaseClient or str

Required if the blob has an active lease. If specified, download_blob only succeeds if the blob's lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

match_condition
MatchConditions

The match condition to use upon the etag.

if_tags_match_condition
str

Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".

New in version 12.4.0.

cpk
CustomerProvidedEncryptionKey

Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

max_concurrency
int

The number of parallel connections with which to download.

encoding
str

Encoding to decode the downloaded bytes. Default is None, i.e. no decoding.

timeout
int

The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

Returns

A streaming object (StorageStreamDownloader)

Return type

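The two read patterns mentioned above can be sketched as follows (container_client and the blob name are assumed placeholders):

```python
# Sketch only: container_client is assumed to exist; "my_blob" is a placeholder.
def read_whole_blob(container_client, blob_name):
    # readall() reads the entire content into memory.
    return container_client.download_blob(blob_name).readall()

def read_blob_in_chunks(container_client, blob_name):
    # chunks() yields the content piece by piece.
    return [chunk for chunk in container_client.download_blob(blob_name).chunks()]
```
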
exists

Returns True if the container exists, and False otherwise.

exists(**kwargs)

Parameters

timeout
int

The timeout parameter is expressed in seconds.

Returns

bool
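
A common create-if-missing pattern, sketched with an assumed container_client:

```python
# Sketch only: container_client is assumed to exist.
def ensure_container(container_client):
    # Create the container only when it does not already exist.
    if not container_client.exists():
        container_client.create_container()
        return True
    return False
```
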

from_connection_string

Create ContainerClient from a Connection String.

from_connection_string(conn_str, container_name, credential=None, **kwargs)

Parameters

conn_str
str
Required

A connection string to an Azure Storage account.

container_name
str
Required

The container name for the blob.

credential
default value: None

The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an instance of an AzureSasCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

Returns

A container client.

Return type

Examples

Creating the ContainerClient from a connection string.


   from azure.storage.blob import ContainerClient
   container_client = ContainerClient.from_connection_string(
       self.connection_string, container_name="mycontainer")

from_container_url

Create ContainerClient from a container URL.

from_container_url(container_url, credential=None, **kwargs)

Parameters

container_url
str
Required

The full endpoint URL to the container, including SAS token if used. This could be either the primary endpoint or the secondary endpoint, depending on the current location_mode.

credential
default value: None

The credentials with which to authenticate. This is optional if the container URL already has a SAS token. The value can be a SAS token string, an instance of an AzureSasCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the resource URI already contains a SAS token, it will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError.

Returns

A container client.

Return type

get_account_information

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include 'sku_name' and 'account_kind'.

get_account_information(**kwargs)

Returns

A dict of account information (SKU and account type).

Return type
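
A small sketch reading the two documented keys (container_client is assumed; the returned values are examples only):

```python
# Sketch only: container_client is assumed to exist.
def describe_account(container_client):
    info = container_client.get_account_information()
    # The returned dict includes 'sku_name' and 'account_kind'.
    return "{} ({})".format(info["sku_name"], info["account_kind"])
```
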

get_blob_client

Get a client to interact with the specified blob.

The blob need not already exist.

get_blob_client(blob, snapshot=None)

Parameters

blob
str or BlobProperties
Required

The blob with which to interact.

snapshot
str
default value: None

The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from BlobClient.create_snapshot.

Returns

A BlobClient.

Return type

Examples

Get the blob client.


   # Get the BlobClient from the ContainerClient to interact with a specific blob
   blob_client = container_client.get_blob_client("mynewblob")

get_container_access_policy

Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.

get_container_access_policy(**kwargs)

Parameters

lease
BlobLeaseClient or str

If specified, get_container_access_policy only succeeds if the container's lease is active and matches this ID.

timeout
int

The timeout parameter is expressed in seconds.

Returns

Access policy information in a dict.

Return type

dict[str, Any]

Examples

Getting the access policy on the container.


   policy = container_client.get_container_access_policy()

get_container_properties

Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container's list of blobs.

get_container_properties(**kwargs)

Parameters

lease
BlobLeaseClient or str

If specified, get_container_properties only succeeds if the container's lease is active and matches this ID.

timeout
int

The timeout parameter is expressed in seconds.

Returns

Properties for the specified container within a container object.

Return type

Examples

Getting properties on the container.


   properties = container_client.get_container_properties()

list_blobs

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.

list_blobs(name_starts_with=None, include=None, **kwargs)

Parameters

name_starts_with
str
Optional

Filters the results to return only blobs whose names begin with the specified prefix.

include
list[str] or str
Optional

Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.

timeout
int

The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

Examples

List the blobs in the container.


   blobs_list = container_client.list_blobs()
   for blob in blobs_list:
       print(blob.name + '\n')

set_container_access_policy

Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.

set_container_access_policy(signed_identifiers, public_access=None, **kwargs)

Parameters

signed_identifiers
dict[str, AccessPolicy]
Required

A dictionary of access policies to associate with the container. The dictionary may contain up to 5 elements. An empty dictionary will clear the access policies set on the service.

public_access
PublicAccess
Optional

Possible values include: 'container', 'blob'.

lease
BlobLeaseClient or str

Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

if_modified_since
datetime

A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified date/time.

if_unmodified_since
datetime

A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

timeout
int

The timeout parameter is expressed in seconds.

Returns

Container-updated property dict (Etag and last modified).

Return type

Examples

Setting access policy on the container.


   # Create access policy
   from datetime import datetime, timedelta
   from azure.storage.blob import AccessPolicy, ContainerSasPermissions
   access_policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                                expiry=datetime.utcnow() + timedelta(hours=1),
                                start=datetime.utcnow() - timedelta(minutes=1))

   identifiers = {'test': access_policy}

   # Set the access policy on the container
   container_client.set_container_access_policy(signed_identifiers=identifiers)

set_container_metadata

Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.

set_container_metadata(metadata=None, **kwargs)

Parameters

metadata
dict[str, str]
Optional

A dict containing name-value pairs to associate with the container as metadata. Example: {'category':'test'}

lease
BlobLeaseClient or str

If specified, set_container_metadata only succeeds if the container's lease is active and matches this ID.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

timeout
int

The timeout parameter is expressed in seconds.

Returns

Container-updated property dict (Etag and last modified).

Return type

Examples

Setting metadata on the container.


   # Create key, value pairs for metadata
   metadata = {'type': 'test'}

   # Set metadata on the container
   container_client.set_container_metadata(metadata=metadata)

set_premium_page_blob_tier_blobs

Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts.

set_premium_page_blob_tier_blobs(premium_page_blob_tier, *blobs, **kwargs)

Parameters

premium_page_blob_tier
PremiumPageBlobTier
Required

A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

Note

If you want to set a different tier on each blob, set this positional parameter to None; the blob tier specified on each BlobProperties (or dict entry) will then be used.

blobs
list[str], list[dict], or list[BlobProperties]
Required

The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

Note

When a blob is passed as a dict, the following keys are supported:

blob name:

key: 'name', value type: str

premium blob tier:

key: 'blob_tier', value type: PremiumPageBlobTier

lease:

key: 'lease_id', value type: Union[str, LeaseClient]

timeout for subrequest:

key: 'timeout', value type: int

timeout
int

The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

raise_on_any_failure
bool

Defaults to True. When True, an exception is raised if any single operation fails.

Returns

An iterator of responses, one for each blob in order.

Return type

Iterator[HttpResponse]
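The raise_on_any_failure contract described above can be illustrated with stand-in objects. This is a hedged sketch: FakeResponse and check_batch are hypothetical, the real iterator yields HttpResponse objects, and the SDK raises its own exception type rather than RuntimeError.

```python
# Illustration of the raise_on_any_failure behavior, using stand-in
# response objects. The real call returns an iterator of HttpResponse,
# one per blob, in order.
class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

def check_batch(responses, raise_on_any_failure=True):
    results = list(responses)
    failed = [r for r in results if r.status_code >= 400]
    if failed and raise_on_any_failure:
        # The SDK raises its own error type here; RuntimeError stands in.
        raise RuntimeError(f"{len(failed)} sub-request(s) failed")
    return results

# With raise_on_any_failure=False, failed sub-responses are returned
# in the results rather than raised as an exception.
mixed = check_batch([FakeResponse(200), FakeResponse(404)],
                    raise_on_any_failure=False)
```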

set_standard_blob_tier_blobs

This operation sets the tier on block blobs.

A block blob's tier determines Hot/Cool/Archive storage type. This operation does not update the blob's ETag.

set_standard_blob_tier_blobs(standard_blob_tier, *blobs, **kwargs)

Parameters

standard_blob_tier
str or StandardBlobTier
Required

Indicates the tier to be set on all blobs. Options include 'Hot', 'Cool', 'Archive'. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.

Note

To set a different tier on each blob, set this positional parameter to None.

The tier will then be taken from each blob's own BlobProperties (or dict entry).

blobs
list[str], list[dict], or list[BlobProperties]
Required

The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

Note

When a blob is passed as a dict, the following keys are supported:

blob name:

key: 'name', value type: str

standard blob tier:

key: 'blob_tier', value type: StandardBlobTier

rehydrate priority:

key: 'rehydrate_priority', value type: RehydratePriority

lease:

key: 'lease_id', value type: Union[str, LeaseClient]

snapshot:

key: 'snapshot', value type: str

version id:

key: 'version_id', value type: str

tags match condition:

key: 'if_tags_match_condition', value type: str

timeout for subrequest:

key: 'timeout', value type: int

rehydrate_priority
RehydratePriority

Indicates the priority with which to rehydrate an archived blob.

if_tags_match_condition
str

Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".

New in version 12.4.0.

timeout
int

The timeout parameter is expressed in seconds.

raise_on_any_failure
bool

Defaults to True. When True, an exception is raised if any single operation fails.

Returns

An iterator of responses, one for each blob in order.

Return type

Iterator[HttpResponse]
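The per-blob dict form described above can be sketched as plain data. This is a hedged illustration with no service call: make_blob_entry is a hypothetical helper, and tier values are shown as strings where the real API accepts StandardBlobTier.

```python
# Building the per-blob dict entries accepted by set_standard_blob_tier_blobs.
# Key names follow the rules listed above; only 'name' and 'blob_tier' are
# required, the rest are optional per-blob settings.
def make_blob_entry(name, blob_tier, **optional):
    allowed = {"rehydrate_priority", "lease_id", "snapshot",
               "version_id", "if_tags_match_condition", "timeout"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unsupported keys: {sorted(unknown)}")
    return {"name": name, "blob_tier": blob_tier, **optional}

entries = [
    make_blob_entry("logs/2020-01.log", "Archive"),
    make_blob_entry("logs/2020-02.log", "Cool", timeout=30),
]
# Pass None as the tier so each entry's own 'blob_tier' is used:
# container_client.set_standard_blob_tier_blobs(None, *entries)
```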

upload_blob

Creates a new blob from a data source with automatic chunking.

upload_blob(name, data, blob_type=<BlobType.BlockBlob: 'BlockBlob'>, length=None, metadata=None, **kwargs)

Parameters

name
str or BlobProperties
Required

The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

data
Required

The blob data to upload.

blob_type
BlobType
Required

The type of the blob. This can be either BlockBlob, PageBlob or AppendBlob. The default value is BlockBlob.

length
int
Required

Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

metadata
dict(str, str)
Required

Name-value pairs associated with the blob as metadata.

overwrite
bool

Whether the blob to be uploaded should overwrite the current data. If True, upload_blob will overwrite the existing data. If False, the operation will fail with ResourceExistsError. The exception is with append blob types: if False and the data already exists, no error is raised and the data is appended to the existing blob. If overwrite=True, the existing append blob is deleted and a new one is created. Defaults to False.

content_settings
ContentSettings

ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

validate_content
bool

If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived against the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default) already validates. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, which defeats the purpose of the memory-efficient algorithm.

lease
BlobLeaseClient or str

Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

if_modified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

if_unmodified_since
datetime

A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

etag
str

An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

match_condition
MatchConditions

The match condition to use upon the etag.

if_tags_match_condition
str

Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "\"tagname\"='my tag'".

New in version 12.4.0.

timeout
int

The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

premium_page_blob_tier
PremiumPageBlobTier

A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

standard_blob_tier
StandardBlobTier

A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.

maxsize_condition
int

Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).

max_concurrency
int

Maximum number of parallel connections to use when the blob size exceeds 64MB.

cpk
CustomerProvidedEncryptionKey

Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

encryption_scope
str

A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

New in version 12.2.0.

encoding
str

Defaults to UTF-8.

Returns

A BlobClient to interact with the newly uploaded blob.

Return type

BlobClient

Examples

Upload blob to the container.


   with open(SOURCE_FILE, "rb") as data:
       blob_client = container_client.upload_blob(name="myblob", data=data)

   properties = blob_client.get_blob_properties()
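In-memory data can be uploaded the same way, combined with the overwrite and metadata keyword arguments. A hedged sketch follows: the service call is commented out because it needs a live container_client, and the blob name and metadata values are illustrative.

```python
from io import BytesIO

# Prepare in-memory bytes to upload, replacing any existing blob of
# the same name (overwrite=True).
payload = b"hello, world"
stream = BytesIO(payload)
metadata = {"category": "sample"}
# blob_client = container_client.upload_blob(
#     name="greeting.txt", data=stream, overwrite=True,
#     metadata=metadata, length=len(payload))
```

Supplying length up front lets the client pick the single-PUT path for small payloads instead of probing the stream.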

walk_blobs

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.

walk_blobs(name_starts_with=None, include=None, delimiter='/', **kwargs)

Parameters

name_starts_with
str
Required

Filters the results to return only blobs whose names begin with the specified prefix.

include
list[str]
Required

Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted'.

delimiter
str
Required

When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.
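The grouping a delimiter produces can be illustrated in plain Python. This is a hedged stand-in that mirrors, rather than calls, the service behavior: group_by_delimiter is a hypothetical helper, and the set of prefixes plays the role of the BlobPrefix placeholders.

```python
# How a '/' delimiter groups blob names: names containing the delimiter
# collapse into prefix placeholders (like BlobPrefix); the rest are
# listed as ordinary blobs at this level of the hierarchy.
def group_by_delimiter(names, delimiter="/"):
    blobs, prefixes = [], set()
    for name in names:
        head, sep, _tail = name.partition(delimiter)
        if sep:
            prefixes.add(head + delimiter)
        else:
            blobs.append(name)
    return blobs, sorted(prefixes)

names = ["a.txt", "logs/jan.log", "logs/feb.log", "img/cat.png"]
blobs, prefixes = group_by_delimiter(names)
# blobs -> ["a.txt"]; prefixes -> ["img/", "logs/"]
```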

timeout
int

The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type