Use the CLI extension for Azure Machine Learning service

The Azure Machine Learning CLI is an extension to the Azure CLI, a cross-platform command-line interface for the Azure platform. This extension provides commands for working with the Azure Machine Learning service from the command line. It allows you to create scripts that automate your machine learning workflows. For example, you can create scripts that perform the following actions:

  • Run experiments to create machine learning models

  • Register machine learning models for customer usage

  • Package, deploy, and track the lifecycle of your machine learning models

The CLI is not a replacement for the Azure Machine Learning SDK. It is a complementary tool that is optimized to handle highly parameterized tasks such as:

  • Creating compute resources

  • Parameterized experiment submission

  • Model registration

  • Image creation

  • Service deployment


Install the extension

To install the Machine Learning CLI extension, use the following command:

az extension add -n azure-cli-ml

When prompted, select y to install the extension.

To verify that the extension has been installed, use the following command to display a list of ML-specific subcommands:

az ml -h


To update the extension, remove it and then install it again; reinstalling picks up the latest version.
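The remove-and-reinstall cycle can be scripted. In this minimal sketch the commands are composed and echoed rather than executed, since running them for real modifies your local Azure CLI install; drop the echoes to apply them.

```shell
#!/bin/sh
# Compose the update cycle: remove the extension, then reinstall it
# so the latest published version is fetched.
EXT="azure-cli-ml"
REMOVE="az extension remove -n $EXT"
ADD="az extension add -n $EXT"
echo "$REMOVE"
echo "$ADD"
# After reinstalling, 'az ml -h' confirms the subcommands are available.
```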

Remove the extension

To remove the CLI extension, use the following command:

az extension remove -n azure-cli-ml

Resource management

The following commands demonstrate how to use the CLI to manage resources used by Azure Machine Learning.

  • Create an Azure Machine Learning service workspace:

    az ml workspace create -n myworkspace -g myresourcegroup
  • Set a default workspace:

    az configure --defaults aml_workspace=myworkspace group=myresourcegroup
  • Attach an AKS cluster:

    az ml computetarget attach aks -n myaks -i myaksresourceid -g myrg -w myworkspace
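The resource-management steps above chain naturally into a setup script. This sketch composes the workspace-creation and default-setting commands with placeholder names (`myworkspace`, `myresourcegroup`) and echoes them instead of executing them, since the real calls need an Azure subscription.

```shell
#!/bin/sh
# Provision a workspace, then make it the default so later 'az ml'
# commands do not need -w/-g arguments. Echoed, not executed.
WS="myworkspace"
RG="myresourcegroup"
CMD_CREATE="az ml workspace create -n $WS -g $RG"
CMD_DEFAULTS="az configure --defaults aml_workspace=$WS group=$RG"
echo "$CMD_CREATE"
echo "$CMD_DEFAULTS"
```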


Experiments

The following commands demonstrate how to use the CLI to work with experiments:

  • Attach a project (run configuration) before submitting an experiment:

    az ml project attach --experiment-name myhistory
  • Start a run of your experiment. When using this command, specify the name of the runconfig file that contains the run configuration. The compute target uses the run configuration to create the training environment for the model. In this example, the run configuration is loaded from the ./aml_config/myrunconfig.runconfig file.

    az ml run submit -c myrunconfig

    For more information on the runconfig file, see the Runconfig file section.

  • View a list of submitted experiments:

    az ml history list
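Parameterized experiment submission is the kind of task the CLI is optimized for. This sketch loops over the two default run configurations (local and docker) that `az ml project attach` generates, composing one submit command per configuration; commands are echoed rather than executed, since a real run needs an attached project and a workspace.

```shell
#!/bin/sh
# Attach once, then submit the same experiment under each runconfig.
ATTACH="az ml project attach --experiment-name myhistory"
echo "$ATTACH"
for RC in local docker; do
  SUBMIT="az ml run submit -c $RC"
  echo "$SUBMIT"
done
LIST="az ml history list"
echo "$LIST"
```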

Model registration, image creation & deployment

The following commands demonstrate how to register a trained model, and then deploy it as a production service:

  • Register a model with Azure Machine Learning:

    az ml model register -n mymodel -m sklearn_regression_model.pkl
  • Create an image that contains your machine learning model and dependencies:

    az ml image create container -n myimage -r python -m mymodel:1 -f score.py -c myenv.yml
  • Deploy an image to a compute target:

    az ml service create aci -n myaciservice --image-id myimage:1
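The register, image-creation, and deployment steps above can be chained into one script. In this sketch, `score.py` is an assumed name for the scoring/driver file supplied with `-f`, and the other file names are placeholders; commands are composed and echoed rather than executed, since they require a workspace and a trained model file.

```shell
#!/bin/sh
# Register a model, bake it into a container image, deploy to ACI.
MODEL="mymodel"
REGISTER="az ml model register -n $MODEL -m sklearn_regression_model.pkl"
IMAGE="az ml image create container -n myimage -r python -m $MODEL:1 -f score.py -c myenv.yml"
DEPLOY="az ml service create aci -n myaciservice --image-id myimage:1"
echo "$REGISTER"
echo "$IMAGE"
echo "$DEPLOY"
```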

Runconfig file

A run configuration is used to configure the training environment used to train your model. This configuration can be created in-memory using the SDK or it can be loaded from a runconfig file.

The runconfig file is a text document that describes the configuration for the training environment. For example, it lists the name of the training script and the file that contains the conda dependencies needed to train the model.

The Azure Machine Learning CLI creates two default .runconfig files named docker.runconfig and local.runconfig when you attach a project using the az ml project attach command.

If you have code that creates a run configuration using the RunConfiguration class, you can use the save() method to persist it to a .runconfig file.

The following is an example of the contents of a .runconfig file:

# The script to run.
script:
# The arguments to the script file.
arguments: []
# The name of the compute target to use for this run.
target: local
# Framework to execute inside. Allowed values are "Python", "PySpark", "CNTK", "TensorFlow", and "PyTorch".
framework: PySpark
# Communicator for the given framework. Allowed values are "None", "ParameterServer", "OpenMpi", and "IntelMpi".
communicator: None
# Automatically prepare the run environment as part of the run itself.
autoPrepareEnvironment: true
# Maximum allowed duration for the run.
maxRunDurationSeconds:
# Number of nodes to use for running job.
nodeCount: 1
# Environment details.
environment:
# Environment variables set for the run.
  environmentVariables: {}
# Python details
  python:
# user_managed_dependencies=True indicates that the environment will be user managed. False indicates that AzureML will manage the user environment.
    userManagedDependencies: false
# The python interpreter path
    interpreterPath: python
# Path to the conda dependencies file to use for this run. If a project
# contains multiple programs with different sets of dependencies, it may be
# convenient to manage those environments with separate files.
    condaDependenciesFile: aml_config/conda_dependencies.yml
# Docker details
  docker:
# Set True to perform this run inside a Docker container.
    enabled: true
# Base image used for Docker-based runs.
    baseImage:
# Set False if necessary to work around shared volume bugs.
    sharedVolumes: true
# Run with NVidia Docker extension to support GPUs.
    gpuSupport: false
# Extra arguments to the Docker run command.
    arguments: []
# Image registry that contains the base image.
    baseImageRegistry:
# DNS name or IP address of azure container registry (ACR)
      address:
# The username for ACR
      username:
# The password for ACR
      password:
# Spark details
  spark:
# List of spark repositories.
    repositories:
    - https://mmlspark.azureedge.net/maven
    packages:
    - group: com.microsoft.ml.spark
      artifact: mmlspark_2.11
      version: '0.12'
    precachePackages: true
# Databricks details
  databricks:
# List of maven libraries.
    mavenLibraries: []
# List of PyPi libraries
    pypiLibraries: []
# List of RCran libraries
    rcranLibraries: []
# List of JAR libraries
    jarLibraries: []
# List of Egg libraries
    eggLibraries: []
# History details.
history:
# Enable history tracking -- this allows status, logs, metrics, and outputs
# to be collected for a run.
  outputCollection: true
# whether to take snapshots for history.
  snapshotProject: true
# Spark configuration details.
spark:
  configuration:
    spark.app.name: Azure ML Experiment
    spark.yarn.maxAppAttempts: 1
# HDI details.
hdi:
# Yarn deploy mode. Options are cluster and client.
  yarnDeployMode: cluster
# Tensorflow details.
tensorflow:
# The number of worker tasks.
  workerCount: 1
# The number of parameter server tasks.
  parameterServerCount: 1
# Mpi details.
mpi:
# When using MPI, number of processes per node.
  processCountPerNode: 1
# data reference configuration details
dataReferences: {}
# Project share datastore reference.
sourceDirectoryDataStore:
# AmlCompute details.
amlcompute:
# VM size of the cluster to be created. Allowed values are Azure VM sizes.
  vmSize:
# VM priority of the cluster to be created. Allowed values are "dedicated" and "lowpriority".
  vmPriority:
# A bool that indicates if the cluster has to be retained after job completion.
  retainCluster: false
# Name of the cluster to be created. If not specified, runId will be used as cluster name.
  name:
# Maximum number of nodes in the AmlCompute cluster to be created. Minimum number of nodes will always be set to 0.
  clusterMaxNodeCount: 1