What are compute targets in Azure Machine Learning?

A compute target is a designated compute resource or environment where you run your training script or host your service deployment. This location may be your local machine or a cloud-based compute resource. Using compute targets makes it easy to change your compute environment later without having to change your code.

In a typical model development lifecycle, you might:

  1. Start by developing and experimenting on a small amount of data. At this stage, we recommend your local environment (local computer or cloud-based VM) as your compute target.
  2. Scale up to larger data, or do distributed training using one of these training compute targets.
  3. Once your model is ready, deploy it to a web hosting environment or IoT device with one of these deployment compute targets.

The compute resources you use for your compute targets are attached to a workspace. Compute resources other than the local machine are shared by users of the workspace.

Training compute targets

Azure Machine Learning has varying support across the different compute targets. You can also attach your own compute resource, although support for some scenarios may differ.

Compute targets can be reused from one training job to the next. For example, once you attach a remote VM to your workspace, you can reuse it for multiple jobs. For machine learning pipelines, use the appropriate pipeline step for each compute target.
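As a sketch of what attaching a remote VM looks like with the SDK (the VM address, credentials, and target name below are placeholders, and a workspace config file is assumed to exist locally):

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, RemoteCompute

# Load the workspace from a local azureml config file (assumed to exist).
ws = Workspace.from_config()

# Describe how to reach the existing VM over SSH (placeholder values).
attach_config = RemoteCompute.attach_configuration(
    address="<vm-fqdn-or-ip>",
    ssh_port=22,
    username="<username>",
    private_key_file="~/.ssh/id_rsa",
)

# Attach the VM to the workspace; once attached, it can be reused
# as the compute target for multiple training jobs.
vm_target = ComputeTarget.attach(
    ws, name="my-remote-vm", attach_configuration=attach_config
)
vm_target.wait_for_completion(show_output=True)
```

This sketch requires an Azure subscription and a reachable VM, so it can't run standalone.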

| Training targets | Automated ML | ML pipelines | Azure Machine Learning designer |
| ---- | ---- | ---- | ---- |
| Local computer | yes | | |
| Azure Machine Learning compute instance (preview) | yes | | |
| Azure Machine Learning compute cluster | yes & hyperparameter tuning | yes | yes |
| Remote VM | yes & hyperparameter tuning | yes | |
| Azure Databricks | yes (SDK local mode only) | yes | |
| Azure Data Lake Analytics | | yes | |
| Azure HDInsight | | yes | |
| Azure Batch | | yes | |

Learn more about setting up and using a compute target for model training.
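To illustrate how a compute target decouples your code from its environment, a training run in the SDK can point at a target by name (the script and target names here are illustrative), so moving from local experimentation to a cluster is a one-line change:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# Submit an existing train.py script; only the target name changes
# when you move from local runs to a remote compute cluster.
src = ScriptRunConfig(source_directory=".", script="train.py")
src.run_config.target = "cpu-cluster"  # or "local" while experimenting

run = Experiment(ws, name="my-experiment").submit(src)
run.wait_for_completion(show_output=True)
```

This is a sketch assuming a workspace config file and an attached target named "cpu-cluster"; it needs live Azure resources to run.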

Deployment targets

The following compute resources can be used to host your model deployment.

| Compute target | Used for | GPU support | FPGA support | Description |
| ---- | ---- | ---- | ---- | ---- |
| Local web service | Testing/debugging | | | Use for limited testing and troubleshooting. Hardware acceleration depends on use of libraries in the local system. |
| Azure Machine Learning compute instance web service | Testing/debugging | | | Use for limited testing and troubleshooting. |
| Azure Kubernetes Service (AKS) | Real-time inference | Yes (web service deployment) | Yes | Use for high-scale production deployments. Provides fast response time and autoscaling of the deployed service. Cluster autoscaling isn't supported through the Azure Machine Learning SDK. To change the nodes in the AKS cluster, use the UI for your AKS cluster in the Azure portal. AKS is the only option available for the designer. |
| Azure Container Instances | Testing or development | | | Use for low-scale CPU-based workloads that require less than 48 GB of RAM. |
| Azure Machine Learning compute clusters (Preview) | Batch inference | Yes (machine learning pipeline) | | Run batch scoring on serverless compute. Supports normal and low-priority VMs. |
| Azure Functions (Preview) | Real-time inference | | | |
| Azure IoT Edge (Preview) | IoT module | | | Deploy and serve ML models on IoT devices. |
| Azure Data Box Edge | Via IoT Edge | | Yes | Deploy and serve ML models on IoT devices. |

Note

Although compute targets like local, Azure Machine Learning compute instance, and Azure Machine Learning compute clusters support GPU for training and experimentation, using GPU for inference when deployed as a web service is supported only on Azure Kubernetes Service.

Using a GPU for inference when scoring with a machine learning pipeline is supported only on Azure Machine Learning Compute.

Learn where and how to deploy your model to a compute target.
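As a rough sketch of a deployment to one of the targets above, AKS for example, the SDK deploys a registered model behind a web service (the model name, scoring script, environment file, and AKS cluster name are all placeholders):

```python
from azureml.core import Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()

# A registered model and a scoring script are assumed to already exist.
model = Model(ws, name="my-model")
inference_config = InferenceConfig(
    entry_script="score.py", runtime="python", conda_file="env.yml"
)

# Resource settings for the AKS-hosted web service.
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# An AKS cluster previously attached to the workspace.
aks_target = ws.compute_targets["my-aks-cluster"]

service = Model.deploy(
    ws, "my-service", [model], inference_config, deployment_config, aks_target
)
service.wait_for_deployment(show_output=True)
```

Deploying to a different target is mostly a matter of swapping the deployment configuration and target; the model and inference configuration stay the same.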

Azure Machine Learning compute (managed)

A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters and compute instances are the only managed computes. Additional managed compute resources may be added in the future.

You can create Azure Machine Learning compute instances (preview) or compute clusters in:

| | Azure Machine Learning studio | Azure portal | SDK | Resource Manager template | CLI |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Compute instance | yes | yes | yes | yes | |
| Compute cluster | yes | yes | yes | yes | yes |

Unlike other kinds of compute targets, these compute resources are automatically part of your workspace when they're created.

Note

Compute instances are available only for workspaces with a region of North Central US or UK South. If your workspace is in any other region, you can continue to create and use a Notebook VM instead.

Compute clusters

You can use Azure Machine Learning compute clusters for training and for batch inferencing (preview). With this compute resource, you have:

  • A single- or multi-node cluster
  • Automatic scaling each time you submit a run
  • Automatic cluster management and job scheduling
  • Support for both CPU and GPU resources
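The capabilities above can be seen in a minimal provisioning sketch (the cluster name and VM size are illustrative; a workspace config file is assumed):

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# Cluster that autoscales between 0 and 4 nodes; with min_nodes=0,
# all nodes (and their charges) are released when no runs are queued.
# Choosing a GPU vm_size instead (for example an NC-series size)
# gives a GPU cluster with the same management behavior.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D2_V2",
    min_nodes=0,
    max_nodes=4,
)

cluster = ComputeTarget.create(
    ws, name="cpu-cluster", provisioning_configuration=compute_config
)
cluster.wait_for_completion(show_output=True)
```

This sketch requires an Azure subscription and workspace, so it can't run standalone.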

Unmanaged compute

An unmanaged compute target is not managed by Azure Machine Learning. You create this type of compute target outside Azure Machine Learning and then attach it to your workspace. Unmanaged compute resources can require additional steps on your part to maintain them or to improve their performance for machine learning workloads.

Next steps

Learn how to: