BlobAsyncClient Class

Definition

This class provides a client that contains generic blob operations for Azure Storage Blobs. Operations allowed by the client are uploading and downloading, copying a blob, retrieving and setting metadata, retrieving and setting HTTP headers, and deleting and un-deleting a blob.

This client is instantiated through BlobClientBuilder or retrieved via getBlobAsyncClient(String blobName).

For operations on a specific blob type (i.e. append, block, or page), use getAppendBlobAsyncClient(), getBlockBlobAsyncClient(), or getPageBlobAsyncClient() to construct a client that allows blob-specific operations.

Please refer to the Azure Docs for more information.
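As a minimal sketch of the instantiation path described above (assuming the azure-storage-blob dependency is on the classpath; the endpoint, SAS token, container, and blob names are placeholders):

```java
import com.azure.storage.blob.BlobAsyncClient;
import com.azure.storage.blob.BlobContainerAsyncClient;
import com.azure.storage.blob.BlobServiceAsyncClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class BlobClientCreationSketch {
    public static void main(String[] args) {
        // Placeholder endpoint and SAS token; substitute real values.
        String endpoint = "https://myaccount.blob.core.windows.net";
        String sasToken = "<sas-token>";

        // Build a service-level client, then navigate to a container and a blob.
        BlobServiceAsyncClient serviceClient = new BlobServiceClientBuilder()
                .endpoint(endpoint)
                .sasToken(sasToken)
                .buildAsyncClient();

        BlobContainerAsyncClient containerClient =
                serviceClient.getBlobContainerAsyncClient("my-container");

        // getBlobAsyncClient(String blobName) yields the generic BlobAsyncClient.
        BlobAsyncClient blobClient = containerClient.getBlobAsyncClient("my-blob.txt");
        System.out.println(blobClient.getBlobUrl());
    }
}
```

From the resulting client, the blob-type-specific clients are reachable via getBlockBlobAsyncClient() and its siblings.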

public class BlobAsyncClient extends BlobAsyncClientBase
Inheritance
java.lang.Object
BlobAsyncClientBase
BlobAsyncClient

Inherited Members

BlobAsyncClientBase.abortCopyFromUrl(String copyId)
BlobAsyncClientBase.abortCopyFromUrlWithResponse(String copyId, String leaseId)
BlobAsyncClientBase.beginCopy(BlobBeginCopyOptions options)
BlobAsyncClientBase.beginCopy(String sourceUrl, Duration pollInterval)
BlobAsyncClientBase.beginCopy(String sourceUrl, Map<String,String> metadata, AccessTier tier, RehydratePriority priority, RequestConditions sourceModifiedRequestConditions, BlobRequestConditions destRequestConditions, Duration pollInterval)
BlobAsyncClientBase.copyFromUrl(String copySource)
BlobAsyncClientBase.copyFromUrlWithResponse(BlobCopyFromUrlOptions options)
BlobAsyncClientBase.copyFromUrlWithResponse(String copySource, Map<String,String> metadata, AccessTier tier, RequestConditions sourceModifiedRequestConditions, BlobRequestConditions destRequestConditions)
BlobAsyncClientBase.createSnapshot()
BlobAsyncClientBase.createSnapshotWithResponse(Map<String,String> metadata, BlobRequestConditions requestConditions)
BlobAsyncClientBase.delete()
BlobAsyncClientBase.deleteWithResponse(DeleteSnapshotsOptionType deleteBlobSnapshotOptions, BlobRequestConditions requestConditions)
BlobAsyncClientBase.download()
BlobAsyncClientBase.downloadContent()
BlobAsyncClientBase.downloadContentWithResponse(DownloadRetryOptions options, BlobRequestConditions requestConditions)
BlobAsyncClientBase.downloadStream()
BlobAsyncClientBase.downloadStreamWithResponse(BlobRange range, DownloadRetryOptions options, BlobRequestConditions requestConditions, boolean getRangeContentMd5)
BlobAsyncClientBase.downloadToFile(String filePath)
BlobAsyncClientBase.downloadToFile(String filePath, boolean overwrite)
BlobAsyncClientBase.downloadToFileWithResponse(BlobDownloadToFileOptions options)
BlobAsyncClientBase.downloadToFileWithResponse(String filePath, BlobRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, BlobRequestConditions requestConditions, boolean rangeGetContentMd5)
BlobAsyncClientBase.downloadToFileWithResponse(String filePath, BlobRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, BlobRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
BlobAsyncClientBase.downloadWithResponse(BlobRange range, DownloadRetryOptions options, BlobRequestConditions requestConditions, boolean getRangeContentMd5)
BlobAsyncClientBase.exists()
BlobAsyncClientBase.existsWithResponse()
BlobAsyncClientBase.generateSas(BlobServiceSasSignatureValues blobServiceSasSignatureValues)
BlobAsyncClientBase.generateSas(BlobServiceSasSignatureValues blobServiceSasSignatureValues, Context context)
BlobAsyncClientBase.generateUserDelegationSas(BlobServiceSasSignatureValues blobServiceSasSignatureValues, UserDelegationKey userDelegationKey)
BlobAsyncClientBase.generateUserDelegationSas(BlobServiceSasSignatureValues blobServiceSasSignatureValues, UserDelegationKey userDelegationKey, String accountName, Context context)
BlobAsyncClientBase.getAccountInfo()
BlobAsyncClientBase.getAccountInfoWithResponse()
BlobAsyncClientBase.getAccountName()
BlobAsyncClientBase.getAccountUrl()
BlobAsyncClientBase.getBlobName()
BlobAsyncClientBase.getBlobUrl()
BlobAsyncClientBase.getContainerAsyncClient()
BlobAsyncClientBase.getContainerName()
BlobAsyncClientBase.getCustomerProvidedKey()
BlobAsyncClientBase.getEncryptionScope()
BlobAsyncClientBase.getHttpPipeline()
BlobAsyncClientBase.getProperties()
BlobAsyncClientBase.getPropertiesWithResponse(BlobRequestConditions requestConditions)
BlobAsyncClientBase.getServiceVersion()
BlobAsyncClientBase.getSnapshotClient(String snapshot)
BlobAsyncClientBase.getSnapshotId()
BlobAsyncClientBase.getTags()
BlobAsyncClientBase.getTagsWithResponse(BlobGetTagsOptions options)
BlobAsyncClientBase.getVersionClient(String versionId)
BlobAsyncClientBase.getVersionId()
BlobAsyncClientBase.isSnapshot()
BlobAsyncClientBase.query(String expression)
BlobAsyncClientBase.queryWithResponse(BlobQueryOptions queryOptions)
BlobAsyncClientBase.setAccessTier(AccessTier tier)
BlobAsyncClientBase.setAccessTierWithResponse(AccessTier tier, RehydratePriority priority, String leaseId)
BlobAsyncClientBase.setAccessTierWithResponse(BlobSetAccessTierOptions options)
BlobAsyncClientBase.setHttpHeaders(BlobHttpHeaders headers)
BlobAsyncClientBase.setHttpHeadersWithResponse(BlobHttpHeaders headers, BlobRequestConditions requestConditions)
BlobAsyncClientBase.setMetadata(Map<String,String> metadata)
BlobAsyncClientBase.setMetadataWithResponse(Map<String,String> metadata, BlobRequestConditions requestConditions)
BlobAsyncClientBase.setTags(Map<String,String> tags)
BlobAsyncClientBase.setTagsWithResponse(BlobSetTagsOptions options)
BlobAsyncClientBase.undelete()
BlobAsyncClientBase.undeleteWithResponse()
java.lang.Object.clone()
java.lang.Object.equals(java.lang.Object)
java.lang.Object.finalize()
java.lang.Object.getClass()
java.lang.Object.hashCode()
java.lang.Object.notify()
java.lang.Object.notifyAll()
java.lang.Object.toString()
java.lang.Object.wait()
java.lang.Object.wait(long)
java.lang.Object.wait(long,int)

Constructors

BlobAsyncClient(HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey)

Protected constructor for use by BlobClientBuilder.

BlobAsyncClient(HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, EncryptionScope encryptionScope)

Protected constructor for use by BlobClientBuilder.

BlobAsyncClient(HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey, EncryptionScope encryptionScope, String versionId)

Protected constructor for use by BlobClientBuilder.

Fields

BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE

If a blob is known to be greater than 100 MB, using a larger block size triggers some server-side optimizations. If the block size is not set and the blob is known to be greater than 100 MB, this value will be used.

BLOB_DEFAULT_NUMBER_OF_BUFFERS

The number of buffers to use if none is specified on the buffered upload method.

BLOB_DEFAULT_UPLOAD_BLOCK_SIZE

The block size to use if none is specified in parallel operations.

Methods

getAppendBlobAsyncClient()

Creates a new AppendBlobAsyncClient associated with this blob.

getBlockBlobAsyncClient()

Creates a new BlockBlobAsyncClient associated with this blob.

getPageBlobAsyncClient()

Creates a new PageBlobAsyncClient associated with this blob.

getSnapshotClient(String snapshot)

Creates a new BlobAsyncClient linked to the snapshot of this blob resource.

getVersionClient(String versionId)

Creates a new BlobAsyncClient linked to the versionId of this blob resource.

upload(BinaryData data)

Creates a new block blob. By default this method will not overwrite an existing blob.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.upload#BinaryData}
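A minimal sketch of this call, assuming an existing BlobAsyncClient named blobClient and the azure-storage-blob dependency:

```java
import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobAsyncClient;

class UploadSketch {
    // Uploads a small payload; by default the call fails if the blob already exists.
    static void upload(BlobAsyncClient blobClient) {
        BinaryData data = BinaryData.fromString("Hello, Azure Blob Storage!");
        blobClient.upload(data).subscribe(
                item -> System.out.println("Uploaded, ETag: " + item.getETag()),
                error -> System.err.println("Upload failed: " + error.getMessage()));
    }
}
```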

upload(BinaryData data, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.upload#BinaryData-boolean}

upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)

Creates a new block blob. By default this method will not overwrite an existing blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock(String base64BlockId, Flux<ByteBuffer> data, long length) and commitBlockList(List<String> base64BlockIds). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

Unlike other upload methods, the data passed here need not be replayable (that is, support multiple subscriptions) when retries are enabled, and the length of the data need not be known in advance. This method therefore supports uploading any arbitrary data source, including network streams. This is possible because the method performs internal buffering, as configured by the blockSize and numBuffers parameters; while it may offer additional convenience, it will not be as performant as the other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks to stage and therefore fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.upload#Flux-ParallelTransferOptions}
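A sketch of a buffered upload with tuned transfer options (the block size and concurrency values are illustrative, and the setter names assume a recent azure-storage-blob version):

```java
import java.nio.ByteBuffer;

import com.azure.storage.blob.BlobAsyncClient;
import com.azure.storage.blob.models.ParallelTransferOptions;

import reactor.core.publisher.Flux;

class BufferedUploadSketch {
    static void upload(BlobAsyncClient blobClient, Flux<ByteBuffer> data) {
        ParallelTransferOptions options = new ParallelTransferOptions()
                .setBlockSizeLong(4L * 1024 * 1024) // 4 MiB per staged block
                .setMaxConcurrency(8);              // up to 8 blocks in flight

        blobClient.upload(data, options)
                .subscribe(item -> System.out.println("Committed, ETag: " + item.getETag()));
    }
}
```

Because the data is buffered internally, the Flux source does not need to be replayable, at the cost of extra memory use.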

upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock(String base64BlockId, Flux<ByteBuffer> data, long length) and commitBlockList(List<String> base64BlockIds). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

Unlike other upload methods, the data passed here need not be replayable (that is, support multiple subscriptions) when retries are enabled, and the length of the data need not be known in advance. This method therefore supports uploading any arbitrary data source, including network streams. This is possible because the method performs internal buffering, as configured by the blockSize and numBuffers parameters; while it may offer additional convenience, it will not be as performant as the other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks to stage and therefore fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.upload#Flux-ParallelTransferOptions-boolean}

uploadFileResourceSupplier(String filePath)

RESERVED FOR INTERNAL USE. Resource Supplier for UploadFile.

uploadFromFile(String filePath)

Creates a new block blob with the content of the specified file. By default this method will not overwrite an existing blob.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadFromFile#String}
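A sketch of uploading from a local file (the path is a placeholder; assumes an existing BlobAsyncClient named blobClient):

```java
import com.azure.storage.blob.BlobAsyncClient;

class UploadFromFileSketch {
    static void upload(BlobAsyncClient blobClient) {
        // uploadFromFile returns Mono<Void>; by default it fails if the blob exists.
        blobClient.uploadFromFile("data/report.bin")
                .doOnSuccess(ignored -> System.out.println("Upload complete"))
                .doOnError(e -> System.err.println("Upload failed: " + e.getMessage()))
                .subscribe();
    }
}
```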

uploadFromFile(String filePath, boolean overwrite)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadFromFile#String-boolean}

uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions requestConditions)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadFromFile#String-ParallelTransferOptions-BlobHttpHeaders-Map-AccessTier-BlobRequestConditions}
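A sketch of the no-overwrite pattern mentioned above, passing "*" to setIfNoneMatch so the service rejects the upload when the blob already exists (the file path is a placeholder; unused optional parameters are passed as null):

```java
import com.azure.storage.blob.BlobAsyncClient;
import com.azure.storage.blob.models.BlobRequestConditions;

class NoOverwriteUploadSketch {
    static void upload(BlobAsyncClient blobClient) {
        // If-None-Match: "*" means "only succeed if no blob exists at this name".
        BlobRequestConditions conditions = new BlobRequestConditions().setIfNoneMatch("*");
        blobClient.uploadFromFile("data/report.bin", null, null, null, null, conditions)
                .subscribe(
                        unused -> { },
                        e -> System.err.println("Blob already exists or upload failed: " + e));
    }
}
```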

uploadFromFileWithResponse(BlobUploadFromFileOptions options)

Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadFromFileWithResponse#BlobUploadFromFileOptions}

uploadWithResponse(BlobParallelUploadOptions options)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock(String base64BlockId, Flux<ByteBuffer> data, long length) and commitBlockList(List<String> base64BlockIds). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

Unlike other upload methods, the data passed here need not be replayable (that is, support multiple subscriptions) when retries are enabled, and the length of the data need not be known in advance. This method therefore supports uploading any arbitrary data source, including network streams. This is possible because the method performs internal buffering, as configured by the blockSize and numBuffers parameters; while it may offer additional convenience, it will not be as performant as the other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks to stage and therefore fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadWithResponse#BlobParallelUploadOptions}

Using Progress Reporting

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadWithResponse#BlobParallelUploadOptions.ProgressReporter}
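A sketch of progress reporting via a listener on the transfer options (setProgressListener assumes a recent azure-core/azure-storage-blob version; older releases expose a similar setProgressReceiver method instead):

```java
import java.nio.ByteBuffer;

import com.azure.storage.blob.BlobAsyncClient;
import com.azure.storage.blob.models.ParallelTransferOptions;
import com.azure.storage.blob.options.BlobParallelUploadOptions;

import reactor.core.publisher.Flux;

class ProgressReportingSketch {
    static void upload(BlobAsyncClient blobClient, Flux<ByteBuffer> data) {
        // The listener receives the cumulative byte count as the upload progresses.
        ParallelTransferOptions transferOptions = new ParallelTransferOptions()
                .setProgressListener(bytesTransferred ->
                        System.out.println("Uploaded so far: " + bytesTransferred + " bytes"));

        BlobParallelUploadOptions options = new BlobParallelUploadOptions(data)
                .setParallelTransferOptions(transferOptions);

        blobClient.uploadWithResponse(options)
                .subscribe(response -> System.out.println("HTTP " + response.getStatusCode()));
    }
}
```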

uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions requestConditions)

Creates a new block blob, or updates the content of an existing block blob.

Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock(String base64BlockId, Flux<ByteBuffer> data, long length) and commitBlockList(List<String> base64BlockIds). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

Unlike other upload methods, the data passed here need not be replayable (that is, support multiple subscriptions) when retries are enabled, and the length of the data need not be known in advance. This method therefore supports uploading any arbitrary data source, including network streams. This is possible because the method performs internal buffering, as configured by the blockSize and numBuffers parameters; while it may offer additional convenience, it will not be as performant as the other options, which should be preferred when possible.

Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean fewer blocks to stage and therefore fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

To avoid overwriting, pass "*" to setIfNoneMatch(String ifNoneMatch).

Code Samples

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadWithResponse#Flux-ParallelTransferOptions-BlobHttpHeaders-Map-AccessTier-BlobRequestConditions}

Using Progress Reporting

{@codesnippet com.azure.storage.blob.BlobAsyncClient.uploadWithResponse#Flux-ParallelTransferOptions-BlobHttpHeaders-Map-AccessTier-BlobRequestConditions.ProgressReporter}

Applies to