aio Package

Classes

BlobClient

A client to interact with a specific blob, although that blob may not yet exist.

BlobLeaseClient

Creates a new BlobLeaseClient.

This client provides lease operations on a BlobClient or ContainerClient.
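A minimal sketch of lease usage, assuming a placeholder blob URL and account key; the lease is acquired for 15 seconds and released explicitly:

import asyncio
from azure.storage.blob.aio import BlobClient

async def lease_example():
    # Placeholder URL and credential; substitute your own account details.
    blob = BlobClient.from_blob_url(
        "https://myaccount.blob.core.windows.net/mycontainer/myblob",
        credential="<account-key>")
    async with blob:
        # acquire_lease returns a BlobLeaseClient.
        lease = await blob.acquire_lease(lease_duration=15)
        try:
            pass  # operations performed while the lease is held
        finally:
            await lease.release()

asyncio.run(lease_example())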

BlobPrefix

An Iterable of Blob properties.

Returned from walk_blobs when a delimiter is used. Can be thought of as a virtual blob directory.
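A hedged sketch of hierarchical listing with walk_blobs; the container URL and SAS token are placeholders:

import asyncio
from azure.storage.blob.aio import ContainerClient

async def walk_example():
    container = ContainerClient.from_container_url(
        "https://myaccount.blob.core.windows.net/mycontainer?<sas-token>")
    async with container:
        # With a delimiter, walk_blobs yields BlobPrefix items for each
        # virtual directory alongside the blob properties at this level.
        async for item in container.walk_blobs(delimiter="/"):
            print(item.name)

asyncio.run(walk_example())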

BlobServiceClient

A client to interact with the Blob Service at the account level.

This client provides operations to retrieve and configure the account properties as well as list, create and delete containers within the account. For operations relating to a specific container or blob, clients for those entities can also be retrieved using the get_client functions.
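For example, a minimal sketch (account URL and key are placeholders) that lists containers and retrieves entity-specific clients:

import asyncio
from azure.storage.blob.aio import BlobServiceClient

async def service_example():
    # Placeholder account URL and key; replace with your own.
    async with BlobServiceClient(
            "https://myaccount.blob.core.windows.net",
            credential="<account-key>") as service:
        async for container in service.list_containers():
            print(container.name)
        # Clients for a specific container or blob:
        container_client = service.get_container_client("mycontainer")
        blob_client = service.get_blob_client("mycontainer", "myblob")

asyncio.run(service_example())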

ContainerClient

A client to interact with a specific container, although that container may not yet exist.

For operations relating to a specific blob within this container, a blob client can be retrieved using the get_blob_client function.
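A minimal sketch, assuming a placeholder account URL, container name, and key:

import asyncio
from azure.storage.blob.aio import ContainerClient

async def container_example():
    async with ContainerClient(
            "https://myaccount.blob.core.windows.net", "mycontainer",
            credential="<account-key>") as container:
        # Retrieve a client scoped to one blob within this container.
        blob = container.get_blob_client("myblob")
        await blob.upload_blob(b"hello", overwrite=True)

asyncio.run(container_example())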

ExponentialRetry

Exponential retry.

Constructs an Exponential retry object. The initial_backoff is used for the first retry. Subsequent retries occur after initial_backoff + increment_power^retry_count seconds. For example, by default the first retry occurs after 15 seconds, the second after (15 + 3^1) = 18 seconds, and the third after (15 + 3^2) = 24 seconds.
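A sketch of plugging a configured policy into a client; passing it via the retry_policy keyword is the assumed wiring here:

from azure.storage.blob.aio import BlobServiceClient, ExponentialRetry

# Three total retries, starting 15 seconds after the first failure.
retry = ExponentialRetry(initial_backoff=15, retry_total=3)
service = BlobServiceClient(
    "https://myaccount.blob.core.windows.net",  # placeholder account URL
    credential="<account-key>",
    retry_policy=retry)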

LinearRetry

Linear retry.

Constructs a Linear retry object. Retries occur at a fixed backoff interval.

StorageStreamDownloader

A streaming object to download from Azure Storage.
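A hedged sketch of obtaining and consuming the stream (the blob URL is a placeholder); download_blob on a BlobClient returns a StorageStreamDownloader:

import asyncio
from azure.storage.blob.aio import BlobClient

async def stream_example():
    blob = BlobClient.from_blob_url(
        "https://myaccount.blob.core.windows.net/mycontainer/myblob?<sas-token>")
    async with blob:
        stream = await blob.download_blob()
        # Read the whole blob into memory at once...
        data = await stream.readall()
        # ...or iterate chunk by chunk to bound memory use
        # (a fresh downloader, since the first stream was consumed).
        stream = await blob.download_blob()
        async for chunk in stream.chunks():
            print(len(chunk))

asyncio.run(stream_example())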

Functions

download_blob_from_url

Download the contents of a blob to a local file or stream.

async download_blob_from_url(blob_url: str, output: str, credential: Optional[Union[str, Dict[str, str], AzureNamedKeyCredential, AzureSasCredential, "TokenCredential"]] = None, **kwargs) -> None

Parameters

Name Description
blob_url
Required
str

The full URI to the blob. This can also include a SAS token.

output
Required
str or writable stream

Where the data should be downloaded to. This could be either a file path to write to, or an open IO handle to write to.

credential

The credentials with which to authenticate. This is optional if the blob URL already has a SAS token or the blob is public. The value can be a SAS token string, an instance of AzureSasCredential or AzureNamedKeyCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. If the resource URI already contains a SAS token, it will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError. If using an instance of AzureNamedKeyCredential, "name" should be the storage account name, and "key" should be the storage account key.
default value: None

Keyword-Only Parameters

Name Description
overwrite
bool

Whether the local file should be overwritten if it already exists. The default value is False, in which case a ValueError will be raised if the file already exists. If set to True, an attempt will be made to write to the existing file. If a stream handle is passed in, this value is ignored.

max_concurrency
int

The number of parallel connections with which to download.

offset
int

Start of byte range to use for downloading a section of the blob. Must be set if length is provided.

length
int

Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

validate_content
bool

If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived against the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default) will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

Returns

Type Description
None
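A usage sketch, assuming a placeholder blob URL carrying its own SAS token:

import asyncio
from azure.storage.blob.aio import download_blob_from_url

async def main():
    await download_blob_from_url(
        "https://myaccount.blob.core.windows.net/mycontainer/myblob?<sas-token>",
        output="./myblob.bin",
        overwrite=True,      # allow replacing an existing local file
        max_concurrency=4)   # parallel connections for large blobs

asyncio.run(main())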

upload_blob_to_url

Upload data to a given URL.

The data will be uploaded as a block blob.

async upload_blob_to_url(blob_url: str, data: Union[Iterable[AnyStr], IO[AnyStr]], credential: Optional[Union[str, Dict[str, str], AzureNamedKeyCredential, AzureSasCredential, "TokenCredential"]] = None, **kwargs) -> dict[str, Any]

Parameters

Name Description
blob_url
Required
str

The full URI to the blob. This can also include a SAS token.

data
Required
bytes or str or Iterable

The data to upload. This can be bytes, text, an iterable, or a file-like object.

credential

The credentials with which to authenticate. This is optional if the blob URL already has a SAS token. The value can be a SAS token string, an instance of AzureSasCredential or AzureNamedKeyCredential from azure.core.credentials, an account shared access key, or an instance of a TokenCredential class from azure.identity. If the resource URI already contains a SAS token, it will be ignored in favor of an explicit credential, except in the case of AzureSasCredential, where the conflicting SAS tokens will raise a ValueError. If using an instance of AzureNamedKeyCredential, "name" should be the storage account name, and "key" should be the storage account key.
default value: None

Keyword-Only Parameters

Name Description
overwrite
bool

Whether the blob to be uploaded should overwrite the current data. If True, upload_blob_to_url will overwrite any existing data. If set to False, the operation will fail with a ResourceExistsError.

max_concurrency
int

The number of parallel connections with which to upload.

length
int

Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

metadata
dict(str, str)

Name-value pairs associated with the blob as metadata.

validate_content
bool

If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived against the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default) will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

encoding
str

Encoding to use if text is supplied as input. Defaults to UTF-8.

Returns

Type Description
dict(str, Any)

Blob-updated property dict (Etag and last modified).
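A usage sketch, assuming placeholder account details; the returned dict is expected to carry the updated Etag and last-modified values:

import asyncio
from azure.storage.blob.aio import upload_blob_to_url

async def main():
    with open("./myblob.bin", "rb") as data:
        result = await upload_blob_to_url(
            "https://myaccount.blob.core.windows.net/mycontainer/myblob",
            data,
            credential="<account-key>",
            overwrite=True)
    print(result)

asyncio.run(main())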