SparkBatchOperations Class

Async operations for Spark batch jobs.

You should not instantiate this class directly. Instead, create a Client instance; the client instantiates this class for you and attaches it as an attribute.
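
A minimal sketch of that intended usage, assuming the async `SparkClient` from `azure.synapse.spark.aio` and `DefaultAzureCredential` from `azure.identity.aio`; the endpoint and pool name below are placeholders, and `spark_batch` is the attribute under which the client attaches this operation group:

```python
# Sketch: obtain SparkBatchOperations through the async SparkClient rather
# than constructing it directly. Endpoint and pool name are placeholders.

async def show_job(batch_id):
    # Imports are local so this sketch stays importable without the SDK.
    from azure.identity.aio import DefaultAzureCredential
    from azure.synapse.spark.aio import SparkClient

    credential = DefaultAzureCredential()
    async with SparkClient(
        credential=credential,
        endpoint="https://myworkspace.dev.azuresynapse.net",  # placeholder
        spark_pool_name="mysparkpool",                        # placeholder
    ) as client:
        # client.spark_batch is the SparkBatchOperations instance.
        job = await client.spark_batch.get_spark_batch_job(batch_id)
        print(job.state)
    await credential.close()
```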

Inheritance
builtins.object
SparkBatchOperations

Constructor

SparkBatchOperations(client, config, serializer, deserializer)

Parameters

Name Description
client
Required

Client for service requests.

config
Required

Configuration of service client.

serializer
Required

An object model serializer.

deserializer
Required

An object model deserializer.

Variables

Name Description
models

Alias to model classes used in this operation group.

Methods

cancel_spark_batch_job

Cancels a running Spark batch job.

create_spark_batch_job

Creates a new Spark batch job.

get_spark_batch_job

Gets a single Spark batch job.

get_spark_batch_jobs

Lists all Spark batch jobs running under a particular Spark pool.

cancel_spark_batch_job

Cancels a running Spark batch job.

async cancel_spark_batch_job(batch_id: int, **kwargs: Any) -> None

Parameters

Name Description
batch_id
Required
int

Identifier for the batch job.

Keyword-Only Parameters

Name Description
cls

A custom type or function that will be passed the direct response.

Returns

Type Description

None, or the result of cls(response)

Exceptions

Type Description
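
As a hedged sketch of calling this operation, assuming `client` is an async `SparkClient` exposing this operation group as `client.spark_batch`, and that service failures surface as the usual `azure.core.exceptions.HttpResponseError` (an assumption; the Exceptions table above does not name a type):

```python
# Sketch: cancel a batch job, returning whether the call succeeded.

async def cancel_quietly(client, batch_id):
    # Local import so this sketch stays importable without azure-core.
    from azure.core.exceptions import HttpResponseError  # assumed error type

    try:
        await client.spark_batch.cancel_spark_batch_job(batch_id)
        return True
    except HttpResponseError:
        return False
```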

create_spark_batch_job

Creates a new Spark batch job.

async create_spark_batch_job(spark_batch_job_options: SparkBatchJobOptions, detailed: bool | None = None, **kwargs: Any) -> SparkBatchJob

Parameters

Name Description
spark_batch_job_options
Required

Livy-compatible batch job request payload.

detailed

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy output.

default value: None

Keyword-Only Parameters

Name Description
cls

A custom type or function that will be passed the direct response.

Returns

Type Description

SparkBatchJob, or the result of cls(response)

Exceptions

Type Description
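
Since the request body is a Livy-compatible payload, its shape can be sketched as a plain dict whose field names follow the Livy `POST /batches` protocol; the real call takes a `SparkBatchJobOptions` model with matching attributes. The storage path, class name, and resource sizes below are illustrative placeholders:

```python
# Sketch of a Livy-compatible batch payload as a plain dict.

def build_batch_payload(name, file, class_name=None, args=None):
    payload = {
        "name": name,            # friendly job name
        "file": file,            # main definition file, e.g. an abfss:// path
        "driverMemory": "4g",    # example sizing, adjust per pool
        "driverCores": 2,
        "executorMemory": "4g",
        "executorCores": 2,
        "numExecutors": 2,
    }
    if class_name is not None:
        payload["className"] = class_name  # entry class for JVM jobs
    if args:
        payload["args"] = list(args)       # command-line arguments
    return payload

payload = build_batch_payload(
    "wordcount",
    "abfss://jobs@myaccount.dfs.core.windows.net/wordcount.jar",  # placeholder
    class_name="com.example.WordCount",
)
```

With the real model, the equivalent call is roughly `await client.spark_batch.create_spark_batch_job(SparkBatchJobOptions(...), detailed=True)`.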

get_spark_batch_job

Gets a single Spark batch job.

async get_spark_batch_job(batch_id: int, detailed: bool | None = None, **kwargs: Any) -> SparkBatchJob

Parameters

Name Description
batch_id
Required
int

Identifier for the batch job.

detailed

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy output.

default value: None

Keyword-Only Parameters

Name Description
cls

A custom type or function that will be passed the direct response.

Returns

Type Description

SparkBatchJob, or the result of cls(response)

Exceptions

Type Description
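
Because the returned `SparkBatchJob` carries a Livy state, a common pattern is to poll this operation until the job reaches a terminal state. In the sketch below, `fetch` stands in for `client.spark_batch.get_spark_batch_job`, and the state names follow common Livy batch states (an assumption; they are not listed in this document):

```python
import asyncio

# Sketch: poll a job until it reaches a terminal Livy state.

TERMINAL_STATES = {"success", "dead", "killed", "error"}

async def wait_for_job(fetch, batch_id, interval=0.0):
    while True:
        job = await fetch(batch_id)
        # Accept either a model with a .state attribute or a plain dict.
        state = job["state"] if isinstance(job, dict) else job.state
        if state in TERMINAL_STATES:
            return state
        await asyncio.sleep(interval)

# Usage with a stub that finishes on the third poll:
_states = iter(["not_started", "running", "success"])

async def _fake_fetch(batch_id):
    return {"state": next(_states)}

final_state = asyncio.run(wait_for_job(_fake_fetch, 42))
```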

get_spark_batch_jobs

Lists all Spark batch jobs running under a particular Spark pool.

async get_spark_batch_jobs(from_parameter: int | None = None, size: int | None = None, detailed: bool | None = None, **kwargs: Any) -> SparkBatchJobCollection

Parameters

Name Description
from_parameter
int

Optional parameter specifying the index at which the returned list should begin.

default value: None
size
int

Optional parameter specifying the size of the returned list. The default is 20, which is also the maximum.

default value: None
detailed

Optional query parameter specifying whether a detailed response is returned beyond the plain Livy output.

default value: None

Keyword-Only Parameters

Name Description
cls

A custom type or function that will be passed the direct response.

Returns

Type Description

SparkBatchJobCollection, or the result of cls(response)

Exceptions

Type Description
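
Because each call returns at most 20 jobs, callers typically page through the full list with `from_parameter` and `size`. In this sketch, `list_page(from_, size)` stands in for a call like `client.spark_batch.get_spark_batch_jobs(from_parameter=from_, size=size)` reduced to the list of jobs in that window:

```python
import asyncio

# Sketch: page through all batch jobs using the from/size window semantics.

async def list_all_jobs(list_page, page_size=20):
    jobs, index = [], 0
    while True:
        page = await list_page(index, page_size)
        jobs.extend(page)
        if len(page) < page_size:   # a short page means we are done
            return jobs
        index += page_size

# Usage with a stub backed by 45 fake job ids (3 pages: 20 + 20 + 5):
_all_jobs = list(range(45))

async def _fake_page(from_, size):
    return _all_jobs[from_:from_ + size]

all_jobs = asyncio.run(list_all_jobs(_fake_page))
```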

Attributes

models

models = <module 'azure.synapse.spark.models' from 'C:\\hostedtoolcache\\windows\\Python\\3.11.9\\x64\\Lib\\site-packages\\azure\\synapse\\spark\\models\\__init__.py'>