Microsoft.Synapse workspaces/bigDataPools
The workspaces/bigDataPools resource type can be deployed with operations that target:
- Resource groups - See resource group deployment commands
For a list of changed properties in each API version, see change log.
To create a Microsoft.Synapse/workspaces/bigDataPools resource, add the following Bicep to your template.
resource symbolicname 'Microsoft.Synapse/workspaces/bigDataPools@2021-06-01' = {
  parent: resourceSymbolicName
  location: 'string'
  name: 'string'
  properties: {
    autoPause: {
      delayInMinutes: int
      enabled: bool
    }
    autoScale: {
      enabled: bool
      maxNodeCount: int
      minNodeCount: int
    }
    cacheSize: int
    customLibraries: [
      {
        containerName: 'string'
        name: 'string'
        path: 'string'
        type: 'string'
      }
    ]
    defaultSparkLogFolder: 'string'
    dynamicExecutorAllocation: {
      enabled: bool
      maxExecutors: int
      minExecutors: int
    }
    isAutotuneEnabled: bool
    isComputeIsolationEnabled: bool
    libraryRequirements: {
      content: 'string'
      filename: 'string'
    }
    nodeCount: int
    nodeSize: 'string'
    nodeSizeFamily: 'string'
    provisioningState: 'string'
    sessionLevelPackagesEnabled: bool
    sparkConfigProperties: {
      configurationType: 'string'
      content: 'string'
      filename: 'string'
    }
    sparkEventsFolder: 'string'
    sparkVersion: 'string'
  }
  tags: {
    {customized property}: 'string'
  }
}
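As an illustration, a small autoscaling pool that pauses after 15 idle minutes might be declared as follows. The workspace symbolic name `synapseWorkspace` and all property values are hypothetical; adjust them to your environment.

```bicep
// Hypothetical example: a small Spark pool under an existing workspace.
// When autoScale is enabled, the pool scales between minNodeCount and
// maxNodeCount, so a fixed nodeCount is not specified here.
resource sparkPool 'Microsoft.Synapse/workspaces/bigDataPools@2021-06-01' = {
  parent: synapseWorkspace // assumed existing Microsoft.Synapse/workspaces resource
  name: 'sparkpool01'
  location: resourceGroup().location
  properties: {
    autoPause: {
      enabled: true
      delayInMinutes: 15
    }
    autoScale: {
      enabled: true
      minNodeCount: 3
      maxNodeCount: 10
    }
    nodeSize: 'Medium'
    nodeSizeFamily: 'MemoryOptimized'
    sparkVersion: '3.4'
  }
}
```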
AutoPauseProperties
Name | Description | Value |
---|---|---|
delayInMinutes | Number of minutes of idle time before the Big Data pool is automatically paused. | int |
enabled | Whether auto-pausing is enabled for the Big Data pool. | bool |
AutoScaleProperties
Name | Description | Value |
---|---|---|
enabled | Whether automatic scaling is enabled for the Big Data pool. | bool |
maxNodeCount | The maximum number of nodes the Big Data pool can support. | int |
minNodeCount | The minimum number of nodes the Big Data pool can support. | int |
BigDataPoolResourceProperties
Name | Description | Value |
---|---|---|
autoPause | Auto-pausing properties | AutoPauseProperties |
autoScale | Auto-scaling properties | AutoScaleProperties |
cacheSize | The cache size | int |
customLibraries | List of custom libraries/packages associated with the spark pool. | LibraryInfo[] |
defaultSparkLogFolder | The default folder where Spark logs will be written. | string |
dynamicExecutorAllocation | Dynamic Executor Allocation | DynamicExecutorAllocation |
isAutotuneEnabled | Whether autotune is required or not. | bool |
isComputeIsolationEnabled | Whether compute isolation is required or not. | bool |
libraryRequirements | Library version requirements | LibraryRequirements |
nodeCount | The number of nodes in the Big Data pool. | int |
nodeSize | The level of compute power that each node in the Big Data pool has. | 'Large' 'Medium' 'None' 'Small' 'XLarge' 'XXLarge' 'XXXLarge' |
nodeSizeFamily | The kind of nodes that the Big Data pool provides. | 'HardwareAcceleratedFPGA' 'HardwareAcceleratedGPU' 'MemoryOptimized' 'None' |
provisioningState | The state of the Big Data pool. | string |
sessionLevelPackagesEnabled | Whether session-level packages are enabled. | bool |
sparkConfigProperties | Spark configuration file to specify additional properties | SparkConfigProperties |
sparkEventsFolder | The Spark events folder | string |
sparkVersion | The Apache Spark version. | string |
DynamicExecutorAllocation
Name | Description | Value |
---|---|---|
enabled | Indicates whether Dynamic Executor Allocation is enabled or not. | bool |
maxExecutors | The maximum number of executors allotted. | int |
minExecutors | The minimum number of executors allotted. | int |
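For instance, to let Spark scale executors within a session between a floor and a ceiling, the fragment below could be added to the pool's `properties` (the specific bounds are hypothetical):

```bicep
// Illustrative fragment: allow between 1 and 4 executors per session.
dynamicExecutorAllocation: {
  enabled: true
  minExecutors: 1
  maxExecutors: 4
}
```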
LibraryInfo
Name | Description | Value |
---|---|---|
containerName | Storage blob container name. | string |
name | Name of the library. | string |
path | Storage blob path of library. | string |
type | Type of the library. | string |
LibraryRequirements
Name | Description | Value |
---|---|---|
content | The library requirements. | string |
filename | The filename of the library requirements file. | string |
Microsoft.Synapse/workspaces/bigDataPools
Name | Description | Value |
---|---|---|
location | The geo-location where the resource lives | string (required) |
name | The resource name | string (required) |
parent | In Bicep, you can specify the parent resource for a child resource. You only need to add this property when the child resource is declared outside of the parent resource. For more information, see Child resource outside parent resource. | Symbolic name for resource of type: workspaces |
properties | Big Data pool properties | BigDataPoolResourceProperties |
tags | Resource tags | Dictionary of tag names and values. See Tags in templates |
SparkConfigProperties
Name | Description | Value |
---|---|---|
configurationType | The type of the spark config properties file. | 'Artifact' 'File' |
content | The spark config properties. | string |
filename | The filename of the spark config properties file. | string |
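A `sparkConfigProperties` block with inline file content might look like the fragment below; the filename and the Spark setting shown are purely illustrative.

```bicep
// Illustrative fragment: supply Spark configuration as an inline file.
sparkConfigProperties: {
  configurationType: 'File'
  filename: 'spark-defaults.conf'
  content: 'spark.shuffle.service.enabled true'
}
```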
The workspaces/bigDataPools resource type can be deployed with operations that target:
- Resource groups - See resource group deployment commands
For a list of changed properties in each API version, see change log.
To create a Microsoft.Synapse/workspaces/bigDataPools resource, add the following JSON to your template.
{
  "type": "Microsoft.Synapse/workspaces/bigDataPools",
  "apiVersion": "2021-06-01",
  "name": "string",
  "location": "string",
  "properties": {
    "autoPause": {
      "delayInMinutes": "int",
      "enabled": "bool"
    },
    "autoScale": {
      "enabled": "bool",
      "maxNodeCount": "int",
      "minNodeCount": "int"
    },
    "cacheSize": "int",
    "customLibraries": [
      {
        "containerName": "string",
        "name": "string",
        "path": "string",
        "type": "string"
      }
    ],
    "defaultSparkLogFolder": "string",
    "dynamicExecutorAllocation": {
      "enabled": "bool",
      "maxExecutors": "int",
      "minExecutors": "int"
    },
    "isAutotuneEnabled": "bool",
    "isComputeIsolationEnabled": "bool",
    "libraryRequirements": {
      "content": "string",
      "filename": "string"
    },
    "nodeCount": "int",
    "nodeSize": "string",
    "nodeSizeFamily": "string",
    "provisioningState": "string",
    "sessionLevelPackagesEnabled": "bool",
    "sparkConfigProperties": {
      "configurationType": "string",
      "content": "string",
      "filename": "string"
    },
    "sparkEventsFolder": "string",
    "sparkVersion": "string"
  },
  "tags": {
    "{customized property}": "string"
  }
}
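As an illustration, a concrete resource entry might look like the following. In JSON templates the child resource name takes the `workspaceName/poolName` form; the names and values below are hypothetical.

```json
{
  "type": "Microsoft.Synapse/workspaces/bigDataPools",
  "apiVersion": "2021-06-01",
  "name": "synapsews/sparkpool01",
  "location": "[resourceGroup().location]",
  "properties": {
    "autoPause": {
      "enabled": true,
      "delayInMinutes": 15
    },
    "autoScale": {
      "enabled": true,
      "minNodeCount": 3,
      "maxNodeCount": 10
    },
    "nodeSize": "Medium",
    "nodeSizeFamily": "MemoryOptimized",
    "sparkVersion": "3.4"
  }
}
```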
AutoPauseProperties
Name | Description | Value |
---|---|---|
delayInMinutes | Number of minutes of idle time before the Big Data pool is automatically paused. | int |
enabled | Whether auto-pausing is enabled for the Big Data pool. | bool |
AutoScaleProperties
Name | Description | Value |
---|---|---|
enabled | Whether automatic scaling is enabled for the Big Data pool. | bool |
maxNodeCount | The maximum number of nodes the Big Data pool can support. | int |
minNodeCount | The minimum number of nodes the Big Data pool can support. | int |
BigDataPoolResourceProperties
Name | Description | Value |
---|---|---|
autoPause | Auto-pausing properties | AutoPauseProperties |
autoScale | Auto-scaling properties | AutoScaleProperties |
cacheSize | The cache size | int |
customLibraries | List of custom libraries/packages associated with the spark pool. | LibraryInfo[] |
defaultSparkLogFolder | The default folder where Spark logs will be written. | string |
dynamicExecutorAllocation | Dynamic Executor Allocation | DynamicExecutorAllocation |
isAutotuneEnabled | Whether autotune is required or not. | bool |
isComputeIsolationEnabled | Whether compute isolation is required or not. | bool |
libraryRequirements | Library version requirements | LibraryRequirements |
nodeCount | The number of nodes in the Big Data pool. | int |
nodeSize | The level of compute power that each node in the Big Data pool has. | 'Large' 'Medium' 'None' 'Small' 'XLarge' 'XXLarge' 'XXXLarge' |
nodeSizeFamily | The kind of nodes that the Big Data pool provides. | 'HardwareAcceleratedFPGA' 'HardwareAcceleratedGPU' 'MemoryOptimized' 'None' |
provisioningState | The state of the Big Data pool. | string |
sessionLevelPackagesEnabled | Whether session-level packages are enabled. | bool |
sparkConfigProperties | Spark configuration file to specify additional properties | SparkConfigProperties |
sparkEventsFolder | The Spark events folder | string |
sparkVersion | The Apache Spark version. | string |
DynamicExecutorAllocation
Name | Description | Value |
---|---|---|
enabled | Indicates whether Dynamic Executor Allocation is enabled or not. | bool |
maxExecutors | The maximum number of executors allotted. | int |
minExecutors | The minimum number of executors allotted. | int |
LibraryInfo
Name | Description | Value |
---|---|---|
containerName | Storage blob container name. | string |
name | Name of the library. | string |
path | Storage blob path of library. | string |
type | Type of the library. | string |
LibraryRequirements
Name | Description | Value |
---|---|---|
content | The library requirements. | string |
filename | The filename of the library requirements file. | string |
Microsoft.Synapse/workspaces/bigDataPools
Name | Description | Value |
---|---|---|
apiVersion | The API version | '2021-06-01' |
location | The geo-location where the resource lives | string (required) |
name | The resource name | string (required) |
properties | Big Data pool properties | BigDataPoolResourceProperties |
tags | Resource tags | Dictionary of tag names and values. See Tags in templates |
type | The resource type | 'Microsoft.Synapse/workspaces/bigDataPools' |
SparkConfigProperties
Name | Description | Value |
---|---|---|
configurationType | The type of the spark config properties file. | 'Artifact' 'File' |
content | The spark config properties. | string |
filename | The filename of the spark config properties file. | string |
The following Azure Quickstart templates deploy this resource type.
Template | Description |
---|---|
Azure Synapse Proof-of-Concept | This template creates a proof-of-concept environment for Azure Synapse, including SQL Pools and optional Apache Spark Pools. |
The workspaces/bigDataPools resource type can be deployed with operations that target:
- Resource groups - See resource group deployment commands
For a list of changed properties in each API version, see change log.
To create a Microsoft.Synapse/workspaces/bigDataPools resource, add the following Terraform to your template.
resource "azapi_resource" "symbolicname" {
  type      = "Microsoft.Synapse/workspaces/bigDataPools@2021-06-01"
  name      = "string"
  location  = "string"
  parent_id = "string"
  tags = {
    {customized property} = "string"
  }
  body = {
    properties = {
      autoPause = {
        delayInMinutes = int
        enabled = bool
      }
      autoScale = {
        enabled = bool
        maxNodeCount = int
        minNodeCount = int
      }
      cacheSize = int
      customLibraries = [
        {
          containerName = "string"
          name = "string"
          path = "string"
          type = "string"
        }
      ]
      defaultSparkLogFolder = "string"
      dynamicExecutorAllocation = {
        enabled = bool
        maxExecutors = int
        minExecutors = int
      }
      isAutotuneEnabled = bool
      isComputeIsolationEnabled = bool
      libraryRequirements = {
        content = "string"
        filename = "string"
      }
      nodeCount = int
      nodeSize = "string"
      nodeSizeFamily = "string"
      provisioningState = "string"
      sessionLevelPackagesEnabled = bool
      sparkConfigProperties = {
        configurationType = "string"
        content = "string"
        filename = "string"
      }
      sparkEventsFolder = "string"
      sparkVersion = "string"
    }
  }
}
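As an illustration, the same small pool could be declared through the AzAPI provider as shown below. The referenced workspace resource `azapi_resource.workspace` and all values are hypothetical.

```hcl
# Hypothetical example: a small autoscaling Spark pool via the AzAPI provider.
resource "azapi_resource" "spark_pool" {
  type      = "Microsoft.Synapse/workspaces/bigDataPools@2021-06-01"
  name      = "sparkpool01"
  location  = "eastus"
  parent_id = azapi_resource.workspace.id # assumed existing workspaces resource

  body = {
    properties = {
      autoPause = {
        enabled        = true
        delayInMinutes = 15
      }
      autoScale = {
        enabled      = true
        minNodeCount = 3
        maxNodeCount = 10
      }
      nodeSize       = "Medium"
      nodeSizeFamily = "MemoryOptimized"
      sparkVersion   = "3.4"
    }
  }
}
```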
AutoPauseProperties
Name | Description | Value |
---|---|---|
delayInMinutes | Number of minutes of idle time before the Big Data pool is automatically paused. | int |
enabled | Whether auto-pausing is enabled for the Big Data pool. | bool |
AutoScaleProperties
Name | Description | Value |
---|---|---|
enabled | Whether automatic scaling is enabled for the Big Data pool. | bool |
maxNodeCount | The maximum number of nodes the Big Data pool can support. | int |
minNodeCount | The minimum number of nodes the Big Data pool can support. | int |
BigDataPoolResourceProperties
Name | Description | Value |
---|---|---|
autoPause | Auto-pausing properties | AutoPauseProperties |
autoScale | Auto-scaling properties | AutoScaleProperties |
cacheSize | The cache size | int |
customLibraries | List of custom libraries/packages associated with the spark pool. | LibraryInfo[] |
defaultSparkLogFolder | The default folder where Spark logs will be written. | string |
dynamicExecutorAllocation | Dynamic Executor Allocation | DynamicExecutorAllocation |
isAutotuneEnabled | Whether autotune is required or not. | bool |
isComputeIsolationEnabled | Whether compute isolation is required or not. | bool |
libraryRequirements | Library version requirements | LibraryRequirements |
nodeCount | The number of nodes in the Big Data pool. | int |
nodeSize | The level of compute power that each node in the Big Data pool has. | 'Large' 'Medium' 'None' 'Small' 'XLarge' 'XXLarge' 'XXXLarge' |
nodeSizeFamily | The kind of nodes that the Big Data pool provides. | 'HardwareAcceleratedFPGA' 'HardwareAcceleratedGPU' 'MemoryOptimized' 'None' |
provisioningState | The state of the Big Data pool. | string |
sessionLevelPackagesEnabled | Whether session-level packages are enabled. | bool |
sparkConfigProperties | Spark configuration file to specify additional properties | SparkConfigProperties |
sparkEventsFolder | The Spark events folder | string |
sparkVersion | The Apache Spark version. | string |
DynamicExecutorAllocation
Name | Description | Value |
---|---|---|
enabled | Indicates whether Dynamic Executor Allocation is enabled or not. | bool |
maxExecutors | The maximum number of executors alloted | int |
minExecutors | The minimum number of executors alloted | int |
LibraryInfo
Name | Description | Value |
---|---|---|
containerName | Storage blob container name. | string |
name | Name of the library. | string |
path | Storage blob path of library. | string |
type | Type of the library. | string |
LibraryRequirements
Name | Description | Value |
---|---|---|
content | The library requirements. | string |
filename | The filename of the library requirements file. | string |
Microsoft.Synapse/workspaces/bigDataPools
Name | Description | Value |
---|---|---|
location | The geo-location where the resource lives | string (required) |
name | The resource name | string (required) |
parent_id | The ID of the resource that is the parent for this resource. | ID for resource of type: workspaces |
properties | Big Data pool properties | BigDataPoolResourceProperties |
tags | Resource tags | Dictionary of tag names and values. |
type | The resource type | "Microsoft.Synapse/workspaces/bigDataPools@2021-06-01" |
SparkConfigProperties
Name | Description | Value |
---|---|---|
configurationType | The type of the spark config properties file. | 'Artifact' 'File' |
content | The spark config properties. | string |
filename | The filename of the spark config properties file. | string |