SparkBatchOperations Class

Async operations for Spark batch jobs.

You should not instantiate this class directly. Instead, create a client instance; it instantiates this class for you and attaches it as an attribute.
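A minimal sketch of how this operation group is typically reached, assuming the async `SparkClient` from `azure.synapse.spark.aio` and `DefaultAzureCredential` from `azure-identity`; the endpoint and pool name are hypothetical placeholders:

```python
import asyncio

async def show_spark_batch_access() -> None:
    # Imports are deferred so the sketch reads top-to-bottom; assumes
    # azure-synapse-spark and azure-identity are installed.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",  # hypothetical workspace
        spark_pool_name="mysparkpool",                        # hypothetical pool
    ) as client:
        # client.spark_batch is the SparkBatchOperations instance the
        # client attaches for you; do not construct it yourself.
        await client.spark_batch.get_spark_batch_jobs()
    await credential.close()

# asyncio.run(show_spark_batch_access())  # requires a live workspace
```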

Inheritance
builtins.object
SparkBatchOperations

Constructor

SparkBatchOperations(client, config, serializer, deserializer)

Parameters

client
Required

Client for service requests.

config
Required

Configuration of service client.

serializer
Required

An object model serializer.

deserializer
Required

An object model deserializer.

Variables

models

Alias to model classes used in this operation group.

Methods

cancel_spark_batch_job

Cancels a running Spark batch job.

create_spark_batch_job

Creates a new Spark batch job.

get_spark_batch_job

Gets a single Spark batch job.

get_spark_batch_jobs

Lists all Spark batch jobs running under a particular Spark pool.

cancel_spark_batch_job

Cancels a running Spark batch job.

async cancel_spark_batch_job(batch_id: int, **kwargs: Any) -> None

Parameters

batch_id
int
Required

Identifier for the batch job.

cls
callable

A custom type or function that will be passed the direct response.

Returns

None, or the result of cls(response)

Return type

None

Exceptions
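A hedged usage sketch for cancelling a batch job; it assumes the async `SparkClient` from `azure.synapse.spark.aio`, and the endpoint, pool name, and batch id are hypothetical:

```python
import asyncio

async def cancel_batch(batch_id: int) -> None:
    # Deferred imports; assumes azure-synapse-spark and azure-identity
    # are installed. Endpoint and pool name below are hypothetical.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",
        spark_pool_name="mysparkpool",
    ) as client:
        # Returns None on success; raises on an error response.
        await client.spark_batch.cancel_spark_batch_job(batch_id)
    await credential.close()

# asyncio.run(cancel_batch(42))  # requires a live workspace
```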

create_spark_batch_job

Creates a new Spark batch job.

async create_spark_batch_job(spark_batch_job_options: azure.synapse.spark.models._models_py3.SparkBatchJobOptions, detailed: Optional[bool] = None, **kwargs: Any) -> azure.synapse.spark.models._models_py3.SparkBatchJob

Parameters

spark_batch_job_options
SparkBatchJobOptions
Required

Livy-compatible batch job request payload.

detailed
bool
default value: None

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy response.

cls
callable

A custom type or function that will be passed the direct response.

Returns

SparkBatchJob, or the result of cls(response)

Return type

SparkBatchJob

Exceptions
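A hedged sketch of submitting a batch job via `SparkBatchJobOptions`; the job name, file URI, endpoint, and pool name are hypothetical, and the driver/executor sizes are illustrative values, not defaults:

```python
import asyncio

async def submit_batch() -> None:
    # Deferred imports; assumes azure-synapse-spark and azure-identity
    # are installed.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient
    from azure.synapse.spark.models import SparkBatchJobOptions

    # Hypothetical job name and ADLS file URI.
    options = SparkBatchJobOptions(
        name="wordcount-example",
        file="abfss://jobs@mystorageaccount.dfs.core.windows.net/wordcount.py",
        driver_memory="4g",
        driver_cores=2,
        executor_memory="4g",
        executor_cores=2,
        executor_count=2,
    )

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",
        spark_pool_name="mysparkpool",
    ) as client:
        job = await client.spark_batch.create_spark_batch_job(
            options, detailed=True
        )
        print(job.id, job.state)
    await credential.close()

# asyncio.run(submit_batch())  # requires a live workspace
```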

get_spark_batch_job

Gets a single Spark batch job.

async get_spark_batch_job(batch_id: int, detailed: Optional[bool] = None, **kwargs: Any) -> azure.synapse.spark.models._models_py3.SparkBatchJob

Parameters

batch_id
int
Required

Identifier for the batch job.

detailed
bool
default value: None

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy response.

cls
callable

A custom type or function that will be passed the direct response.

Returns

SparkBatchJob, or the result of cls(response)

Return type

SparkBatchJob

Exceptions
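A hedged sketch of fetching a single batch job by id; endpoint, pool name, and batch id are hypothetical:

```python
import asyncio

async def poll_batch(batch_id: int) -> None:
    # Deferred imports; assumes azure-synapse-spark and azure-identity
    # are installed.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",
        spark_pool_name="mysparkpool",
    ) as client:
        job = await client.spark_batch.get_spark_batch_job(
            batch_id, detailed=True
        )
        print(job.id, job.state)
    await credential.close()

# asyncio.run(poll_batch(42))  # requires a live workspace
```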

get_spark_batch_jobs

Lists all Spark batch jobs running under a particular Spark pool.

async get_spark_batch_jobs(from_parameter: Optional[int] = None, size: Optional[int] = None, detailed: Optional[bool] = None, **kwargs: Any) -> azure.synapse.spark.models._models_py3.SparkBatchJobCollection

Parameters

from_parameter
int
default value: None

Optional parameter specifying the index from which the list should begin.

size
int
default value: None

Optional parameter specifying the size of the returned list. The default is 20, which is also the maximum.

detailed
bool
default value: None

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy response.

cls
callable

A custom type or function that will be passed the direct response.

Returns

SparkBatchJobCollection, or the result of cls(response)

Return type

SparkBatchJobCollection

Exceptions
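A hedged sketch of listing batch jobs with the paging parameters; endpoint and pool name are hypothetical, and the `sessions` attribute on the returned collection is an assumption about the model shape:

```python
import asyncio

async def list_batches() -> None:
    # Deferred imports; assumes azure-synapse-spark and azure-identity
    # are installed.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",
        spark_pool_name="mysparkpool",
    ) as client:
        # Fetch one page of up to 20 jobs (the default and maximum size),
        # starting from index 0.
        collection = await client.spark_batch.get_spark_batch_jobs(
            from_parameter=0, size=20, detailed=False
        )
        for session in collection.sessions or []:  # assumed attribute name
            print(session.id, session.state)
    await credential.close()

# asyncio.run(list_batches())  # requires a live workspace
```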

Attributes

models

models = <module 'azure.synapse.spark.models'>