AzureDatabricksLinkedService Class

Azure Databricks linked service.

All required parameters must be populated in order to send to Azure.

Inheritance
azure.mgmt.datafactory.models._models_py3.LinkedService
AzureDatabricksLinkedService

Constructor

AzureDatabricksLinkedService(*, domain: Any, additional_properties: Optional[Dict[str, Any]] = None, connect_via: Optional[_models.IntegrationRuntimeReference] = None, description: Optional[str] = None, parameters: Optional[Dict[str, _models.ParameterSpecification]] = None, annotations: Optional[List[Any]] = None, access_token: Optional[_models.SecretBase] = None, authentication: Optional[Any] = None, workspace_resource_id: Optional[Any] = None, existing_cluster_id: Optional[Any] = None, instance_pool_id: Optional[Any] = None, new_cluster_version: Optional[Any] = None, new_cluster_num_of_worker: Optional[Any] = None, new_cluster_node_type: Optional[Any] = None, new_cluster_spark_conf: Optional[Dict[str, Any]] = None, new_cluster_spark_env_vars: Optional[Dict[str, Any]] = None, new_cluster_custom_tags: Optional[Dict[str, Any]] = None, new_cluster_log_destination: Optional[Any] = None, new_cluster_driver_node_type: Optional[Any] = None, new_cluster_init_scripts: Optional[Any] = None, new_cluster_enable_elastic_disk: Optional[Any] = None, encrypted_credential: Optional[Any] = None, policy_id: Optional[Any] = None, credential: Optional[_models.CredentialReference] = None, **kwargs)
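The cluster-selection parameters are interrelated, as described under Variables below. As a minimal sketch, those documented dependencies can be expressed as a validator (`validate_cluster_options` is a hypothetical helper, not part of azure-mgmt-datafactory):

```python
def validate_cluster_options(props: dict) -> list:
    """Check the documented dependencies between the cluster parameters.

    Hypothetical helper -- not part of the SDK. Returns a list of
    human-readable violations (empty when the combination is valid).
    """
    errors = []
    # new_cluster_version is required when an instance pool is used.
    if props.get("instance_pool_id") and not props.get("new_cluster_version"):
        errors.append("new_cluster_version is required when instance_pool_id is set")
    if props.get("new_cluster_version"):
        # A worker count is always required for a new job cluster or instance pool.
        if not props.get("new_cluster_num_of_worker"):
            errors.append("new_cluster_num_of_worker is required when new_cluster_version is set")
        # The node type applies only when not drawing nodes from an instance pool.
        if not props.get("instance_pool_id") and not props.get("new_cluster_node_type"):
            errors.append("new_cluster_node_type is required when instance_pool_id is not set")
    return errors

# An existing interactive cluster needs no new-cluster settings.
assert validate_cluster_options({"existing_cluster_id": "0123-456789-abcde123"}) == []
```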

Variables

additional_properties
dict[str, any]

Unmatched properties from the message are deserialized to this collection.

type
str

Required. Type of linked service. Constant filled by server.

connect_via
IntegrationRuntimeReference

The integration runtime reference.

description
str

Linked service description.

parameters
dict[str, ParameterSpecification]

Parameters for linked service.

annotations
list[any]

List of tags that can be used for describing the linked service.

domain
any

Required. The domain name of your Databricks deployment, e.g. <REGION>.azuredatabricks.net. Type: string (or Expression with resultType string).

access_token
SecretBase

Access token for databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).
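In the JSON definition of the linked service, the access token is a SecretBase: either an inline SecureString or a reference into Azure Key Vault. A sketch of both forms (the linked service name and secret name are hypothetical placeholders):

```python
# Hypothetical Key Vault reference for the Databricks personal access token.
access_token = {
    "type": "AzureKeyVaultSecret",
    "store": {
        "referenceName": "MyKeyVaultLinkedService",  # placeholder Key Vault linked service
        "type": "LinkedServiceReference",
    },
    "secretName": "databricks-pat",  # placeholder secret name
}

# The inline alternative stores the token value directly in the definition.
inline_token = {"type": "SecureString", "value": "<personal-access-token>"}
```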

authentication
any

Required to be set to 'MSI' when using the workspace resource id to authenticate against the Databricks REST API. Type: string (or Expression with resultType string).

workspace_resource_id
any

Workspace resource id for databricks REST API. Type: string (or Expression with resultType string).
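To authenticate with a managed identity instead of an access token, set authentication to 'MSI' and supply the workspace resource id. A sketch of the resulting typeProperties, in the camelCase form used by the Data Factory JSON definition (the subscription, resource group, workspace, and cluster values are placeholders):

```python
# Hypothetical typeProperties for MSI authentication against the workspace.
type_properties = {
    "domain": "https://adb-1234567890123456.7.azuredatabricks.net",  # placeholder URL
    "authentication": "MSI",
    "workspaceResourceId": (
        "/subscriptions/00000000-0000-0000-0000-000000000000"
        "/resourceGroups/my-rg/providers/Microsoft.Databricks/workspaces/my-ws"
    ),
    "existingClusterId": "0123-456789-abcde123",  # placeholder interactive cluster id
}

# With MSI authentication, no accessToken property is needed.
assert "accessToken" not in type_properties
```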

existing_cluster_id
any

The id of an existing interactive cluster that will be used for all runs of this activity. Type: string (or Expression with resultType string).

instance_pool_id
any

The id of an existing instance pool that will be used for all runs of this activity. Type: string (or Expression with resultType string).

new_cluster_version
any

If not using an existing interactive cluster, this specifies the Spark version of a new job cluster or instance pool nodes created for each run of this activity. Required if instancePoolId is specified. Type: string (or Expression with resultType string).

new_cluster_num_of_worker
any

If not using an existing interactive cluster, this specifies the number of worker nodes to use for the new job cluster or instance pool. For new job clusters, this is a string-formatted Int32, where '1' means numOfWorker is 1 and '1:10' means auto-scale from 1 (min) to 10 (max). For instance pools, this is a string-formatted Int32 that can only specify a fixed number of worker nodes, such as '2'. Required if newClusterVersion is specified. Type: string (or Expression with resultType string).
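The string format above ('1' for a fixed size, '1:10' for auto-scale) can be parsed as follows (`parse_num_of_worker` is a hypothetical helper, not part of the SDK):

```python
def parse_num_of_worker(value: str):
    """Return (min_workers, max_workers) for a newClusterNumOfWorker string."""
    if ":" in value:
        low, high = value.split(":", 1)  # 'min:max' enables auto-scale
        return int(low), int(high)
    n = int(value)  # a plain integer string means a fixed cluster size
    return n, n

assert parse_num_of_worker("1") == (1, 1)      # fixed: one worker
assert parse_num_of_worker("1:10") == (1, 10)  # auto-scale from 1 to 10 workers
```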

new_cluster_node_type
any

The node type of the new job cluster. This property is required if newClusterVersion is specified and instancePoolId is not specified. If instancePoolId is specified, this property is ignored. Type: string (or Expression with resultType string).

new_cluster_spark_conf
dict[str, any]

A set of optional, user-specified Spark configuration key-value pairs.

new_cluster_spark_env_vars
dict[str, any]

A set of optional, user-specified Spark environment variables key-value pairs.

new_cluster_custom_tags
dict[str, any]

Additional tags for cluster resources. This property is ignored in instance pool configurations.

new_cluster_log_destination
any

Specify a location to deliver Spark driver, worker, and event logs. Type: string (or Expression with resultType string).

new_cluster_driver_node_type
any

The driver node type for the new job cluster. This property is ignored in instance pool configurations. Type: string (or Expression with resultType string).

new_cluster_init_scripts
any

User-defined initialization scripts for the new cluster. Type: array of strings (or Expression with resultType array of strings).

new_cluster_enable_elastic_disk
any

Enable the elastic disk on the new cluster. This property is now ignored, and takes the default elastic disk behavior in Databricks (elastic disks are always enabled). Type: boolean (or Expression with resultType boolean).
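Putting the new-cluster properties together, here is a sketch of typeProperties for a job cluster created per activity run, in the camelCase form used by the Data Factory JSON definition (all values are illustrative placeholders, not taken from a real workspace):

```python
# Hypothetical typeProperties for a per-run job cluster.
type_properties = {
    "domain": "https://adb-1234567890123456.7.azuredatabricks.net",  # placeholder URL
    "accessToken": {"type": "SecureString", "value": "<personal-access-token>"},
    "newClusterVersion": "13.3.x-scala2.12",   # placeholder runtime version
    "newClusterNumOfWorker": "1:4",            # auto-scale from 1 to 4 workers
    "newClusterNodeType": "Standard_DS3_v2",   # placeholder worker VM size
    "newClusterSparkConf": {"spark.speculation": "true"},
    "newClusterSparkEnvVars": {"PYSPARK_PYTHON": "/databricks/python3/bin/python3"},
    "newClusterCustomTags": {"team": "data-eng"},  # ignored for instance pools
}

# Per the dependencies documented above, specifying a version implies a
# worker count and (without an instance pool) a node type.
assert {"newClusterNumOfWorker", "newClusterNodeType"} <= type_properties.keys()
```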

encrypted_credential
any

The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).

policy_id
any

The policy id for limiting the ability to configure clusters based on a user-defined set of rules. Type: string (or Expression with resultType string).

credential
CredentialReference

The credential reference containing authentication information.