Create an Azure Machine Learning compute cluster
Learn how to create and manage a compute cluster in your Azure Machine Learning workspace.
You can use an Azure Machine Learning compute cluster to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud. For more information on the VM sizes that include GPUs, see GPU-optimized virtual machine sizes.
In this article, learn how to:
- Create a compute cluster
- Lower your compute cluster cost
- Set up a managed identity for the cluster
Prerequisites
- An Azure Machine Learning workspace. For more information, see Create an Azure Machine Learning workspace.
What is a compute cluster?
Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single- or multi-node compute. The compute is created within your workspace region as a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be placed in an Azure Virtual Network.
Compute clusters can run jobs securely in a virtual network environment, without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
Do not create multiple simultaneous attachments to the same compute from your workspace, for example by attaching one compute cluster to a workspace under two different names. Each new attachment breaks the previous attachment(s).
If you want to re-attach a compute target, for example to change cluster configuration settings, you must first remove the existing attachment.
Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see Manage and request quotas for Azure resources.
Azure allows you to place locks on resources, so that they cannot be deleted or are read only. Do not apply resource locks to the resource group that contains your workspace. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure ML compute clusters. For more information on locking resources, see Lock resources to prevent unexpected changes.
Clusters can generally scale up to 100 nodes, as long as you have enough quota for the number of cores required. By default, clusters are set up with inter-node communication enabled between the nodes of the cluster, to support MPI jobs, for example. However, you can scale your clusters to thousands of nodes by raising a support ticket and requesting to allowlist your subscription, workspace, or a specific cluster for disabling inter-node communication.
Time estimate: Approximately 5 minutes.
Azure Machine Learning Compute can be reused across runs. The compute can be shared with other users in the workspace and is retained between runs, automatically scaling nodes up or down based on the number of runs submitted, and the max_nodes set on your cluster. The min_nodes setting controls the minimum nodes available.
The dedicated cores per region per VM family quota and total regional quota, which applies to compute cluster creation, is unified and shared with Azure Machine Learning training compute instance quota.
To avoid charges when no jobs are running, set the minimum nodes to 0. This setting allows Azure Machine Learning to de-allocate the nodes when they aren't in use. Any value larger than 0 will keep that number of nodes running, even if they are not in use.
The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.
To create a persistent Azure Machine Learning Compute resource in Python, specify the vm_size and max_nodes properties. Azure Machine Learning then uses smart defaults for the other properties.
- vm_size: The VM family of the nodes created by Azure Machine Learning Compute.
- max_nodes: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
```python
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

# Choose a name for your CPU cluster
cpu_cluster_name = "cpucluster"

# Verify that cluster does not exist already
try:
    cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                           max_nodes=4)
    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)

cpu_cluster.wait_for_completion(show_output=True)
```
You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the AmlCompute class for details.
Lower your compute cluster cost
You may also choose to use low-priority VMs to run some or all of your workloads. These VMs do not have guaranteed availability and may be preempted while in use. A preempted job is restarted, not resumed.
Use any of these ways to specify a low-priority VM:
```python
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                       vm_priority='lowpriority',
                                                       max_nodes=4)
```
Set up managed identity
Azure Machine Learning compute clusters also support managed identities to authenticate access to Azure resources without including credentials in your code. There are two types of managed identities:
- A system-assigned managed identity is enabled directly on the Azure Machine Learning compute cluster. The life cycle of a system-assigned identity is directly tied to the compute cluster. If the compute cluster is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
- A user-assigned managed identity is a standalone Azure resource provided through Azure Managed Identity service. You can assign a user-assigned managed identity to multiple resources, and it persists for as long as you want.
Configure managed identity in your provisioning configuration:
System assigned managed identity:
```python
# configure cluster with a system-assigned managed identity
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                       max_nodes=5,
                                                       identity_type="SystemAssigned",
                                                       )
```
User-assigned managed identity:
```python
# configure cluster with a user-assigned managed identity
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
                                                       max_nodes=5,
                                                       identity_type="UserAssigned",
                                                       identity_id=['/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])

cpu_cluster_name = "cpu-cluster"
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
```
Add managed identity to an existing compute cluster
System-assigned managed identity:
```python
# add a system-assigned managed identity
cpu_cluster.add_identity(identity_type="SystemAssigned")
```
User-assigned managed identity:
```python
# add a user-assigned managed identity
cpu_cluster.add_identity(identity_type="UserAssigned",
                         identity_id=['/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user_assigned_identity>'])
```
Azure Machine Learning compute clusters support only one system-assigned identity or multiple user-assigned identities, not both concurrently.
Managed identity usage
The default managed identity is the system-assigned managed identity or the first user-assigned managed identity.
During a run, an identity is applied in two ways:
- The system uses an identity to set up the user's storage mounts, container registry, and datastores.
  - In this case, the system uses the default managed identity.
- The user applies an identity to access resources from within the code for a submitted run.
  - In this case, provide the client_id corresponding to the managed identity you want to use to retrieve a credential.
  - Alternatively, get the user-assigned identity's client ID through the DEFAULT_IDENTITY_CLIENT_ID environment variable.
For example, to retrieve a token for a datastore with the default managed identity:

```python
import os

from azure.identity import ManagedIdentityCredential

client_id = os.environ.get('DEFAULT_IDENTITY_CLIENT_ID')
credential = ManagedIdentityCredential(client_id=client_id)
token = credential.get_token('https://storage.azure.com/')
```
Some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create AmlCompute in that workspace. You can either raise a support request against the service, or create a new workspace through the portal or the SDK to unblock yourself immediately.
If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, this may be caused by Azure resource locks.
Azure allows you to place locks on resources, so that they cannot be deleted or are read only. Locking a resource can lead to unexpected results. Some operations that don't seem to modify the resource actually require actions that are blocked by the lock.
For example, applying a delete lock to the resource group for your workspace will prevent scaling operations for Azure ML compute clusters.
For more information on locking resources, see Lock resources to prevent unexpected changes.
Use your compute cluster to: