az ml model

Note

This reference is part of the azure-cli-ml extension for the Azure CLI (version 2.0.28 or higher). The extension will automatically install the first time you run an az ml model command. Learn more about extensions.

Manage machine learning models.

Commands

Name                    Description                            Type       Status
az ml model delete      Delete a model from the workspace.     Extension  GA
az ml model deploy      Deploy model(s) from the workspace.    Extension  GA
az ml model download    Download a model from the workspace.   Extension  GA
az ml model list        List models in the workspace.          Extension  GA
az ml model package     Package a model in the workspace.      Extension  GA
az ml model profile     Profile model(s) in the workspace.     Extension  GA
az ml model register    Register a model to the workspace.     Extension  GA
az ml model show        Show a model in the workspace.         Extension  GA
az ml model update      Update a model in the workspace.       Extension  GA

az ml model delete

Delete a model from the workspace.

az ml model delete --model-id
                   [--path]
                   [--resource-group]
                   [--subscription-id]
                   [--workspace-name]
                   [-v]
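
A minimal usage sketch (the model ID, workspace, and resource group names below are hypothetical):

```shell
# Delete version 1 of the model "sklearn-model" from the workspace
# "myworkspace" in resource group "myresourcegroup" (placeholder names).
az ml model delete \
    --model-id sklearn-model:1 \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```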

Required Parameters

--model-id -i

ID of model to delete.

Optional Parameters

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--subscription-id

Specifies the subscription ID.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model deploy

Deploy model(s) from the workspace.

az ml model deploy --name
                   [--ae]
                   [--ai]
                   [--ar]
                   [--as]
                   [--at]
                   [--autoscale-max-replicas]
                   [--autoscale-min-replicas]
                   [--base-image]
                   [--base-image-registry]
                   [--cc]
                   [--ccl]
                   [--cf]
                   [--collect-model-data]
                   [--compute-target]
                   [--compute-type]
                   [--cuda-version]
                   [--dc]
                   [--description]
                   [--dn]
                   [--ds]
                   [--ed]
                   [--eg]
                   [--entry-script]
                   [--environment-name]
                   [--environment-version]
                   [--failure-threshold]
                   [--gb]
                   [--gbl]
                   [--gc]
                   [--ic]
                   [--id]
                   [--key-name]
                   [--key-version]
                   [--kp]
                   [--ks]
                   [--lo]
                   [--max-request-wait-time]
                   [--model]
                   [--model-metadata-file]
                   [--namespace]
                   [--no-wait]
                   [--nr]
                   [--overwrite]
                   [--path]
                   [--period-seconds]
                   [--pi]
                   [--po]
                   [--property]
                   [--replica-max-concurrent-requests]
                   [--resource-group]
                   [--rt]
                   [--sc]
                   [--scoring-timeout-ms]
                   [--sd]
                   [--se]
                   [--sk]
                   [--sp]
                   [--st]
                   [--subnet-name]
                   [--subscription-id]
                   [--tag]
                   [--timeout-seconds]
                   [--token-auth-enabled]
                   [--tp]
                   [--vault-base-url]
                   [--version-name]
                   [--vnet-name]
                   [--workspace-name]
                   [-v]
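
A typical flow deploys a registered model using an inference configuration (--ic) and a deployment configuration (--dc) file. The sketch below assumes a registered model `sklearn-model:1` and local `score.py`/`myenv.yml` files; all resource names are hypothetical, and the JSON keys follow the commonly documented inferenceconfig/deploymentconfig file schemas:

```shell
# inferenceconfig.json: entry script, runtime, and conda environment for the image
cat > inferenceconfig.json <<'EOF'
{
    "entryScript": "score.py",
    "runtime": "python",
    "condaFile": "myenv.yml"
}
EOF

# deploymentconfig.json: deploy to Azure Container Instances with 0.5 CPU / 1 GB
cat > deploymentconfig.json <<'EOF'
{
    "computeType": "ACI",
    "containerResourceRequirements": {
        "cpu": 0.5,
        "memoryInGB": 1.0
    }
}
EOF

# Deploy the registered model as a webservice named "myservice"
az ml model deploy \
    --name myservice \
    --model sklearn-model:1 \
    --ic inferenceconfig.json \
    --dc deploymentconfig.json \
    --workspace-name myworkspace \
    --resource-group myresourcegroup \
    --overwrite
```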

Required Parameters

--name -n

The name of the service deployed.

Optional Parameters

--ae --auth-enabled

Whether or not to enable key auth for this Webservice. Defaults to False.

--ai --enable-app-insights

Whether or not to enable AppInsights for this Webservice. Defaults to False.

--ar --autoscale-refresh-seconds

How often the autoscaler should attempt to scale this Webservice. Defaults to 1.

--as --autoscale-enabled

Whether or not to enable autoscaling for this Webservice. Defaults to True if num_replicas is None.

--at --autoscale-target-utilization

The target utilization (in percent out of 100) the autoscaler should attempt to maintain for this Webservice. Defaults to 70.

--autoscale-max-replicas --ma

The maximum number of containers to use when autoscaling this Webservice. Defaults to 10.

--autoscale-min-replicas --mi

The minimum number of containers to use when autoscaling this Webservice. Defaults to 1.

--base-image --bi

A custom image to be used as the base image. If no base image is given, the base image will be selected based on the given runtime parameter.

--base-image-registry --ir

Image registry that contains the base image.

--cc --cpu-cores

The number of CPU cores to allocate for this Webservice. Can be a decimal. Defaults to 0.1.

--ccl --cpu-cores-limit

The max number of CPU cores this Webservice is allowed to use. Can be a decimal.

--cf --conda-file

Path to local file containing a conda environment definition to use for the image.

--collect-model-data --md

Whether or not to enable model data collection for this Webservice. Defaults to False.

--compute-target --ct

Name of compute target. Only applicable when deploying to AKS.

--compute-type --cp

Compute type of service to deploy.

--cuda-version --cv

Version of CUDA to install for images that need GPU support. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service. Supported versions are 9.0, 9.1, and 10.0. If 'enable_gpu' is set, this defaults to '9.1'.

--dc --deploy-config-file

Path to a JSON or YAML file containing deployment metadata.

--description

Description of the service deployed.

--dn --dns-name-label

The DNS name label for this Webservice.

--ds --extra-docker-file-steps

Path to local file containing additional Docker steps to run when setting up image.

--ed --environment-directory

Directory for the Azure Machine Learning Environment for deployment. This is the same directory path as provided to the 'az ml environment scaffold' command.

--eg --enable-gpu

Whether or not to enable GPU support in the image. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service. Defaults to False.

--entry-script --es

Path to local file that contains the code to run for service (relative path from source_directory if one is provided).

--environment-name -e

Name of Azure Machine Learning Environment for deployment.

--environment-version --ev

Version of an existing Azure Machine Learning Environment for deployment.

--failure-threshold --ft

When a Pod starts and the liveness probe fails, Kubernetes will try --failure-threshold times before giving up. Defaults to 3. Minimum value is 1.

--gb --memory-gb

The amount of memory (in GB) to allocate for this Webservice. Can be a decimal.

--gbl --memory-gb-limit

The max amount of memory (in GB) this Webservice is allowed to use. Can be a decimal.

--gc --gpu-cores

The number of GPU cores to allocate for this Webservice. Default is 1.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--id --initial-delay-seconds

Number of seconds after the container has started before liveness probes are initiated. Defaults to 310.

--key-name

Key name for encryption properties in customer-managed keys (CMK) for ACI.

--key-version

Key version for encryption properties in customer-managed keys (CMK) for ACI.

--kp --primary-key

A primary auth key to use for this Webservice.

--ks --secondary-key

A secondary auth key to use for this Webservice.

--lo --location

The Azure region to deploy this Webservice to. If not specified the Workspace location will be used. More details on available regions can be found here: https://azure.microsoft.com/en-us/global-infrastructure/services/?regions=all&products=container-instances.

--max-request-wait-time --mr

The maximum amount of time a request will stay in the queue (in milliseconds) before returning a 503 error. Defaults to 500.

--model -m

The ID of the model to be deployed. Multiple models can be specified with additional -m arguments. Models need to be registered first.

default value: []
--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

default value: []
--namespace

Kubernetes namespace in which to deploy the service: up to 63 lowercase alphanumeric ('a'-'z', '0'-'9') and hyphen ('-') characters. The first and last characters cannot be hyphens. Only applicable when deploying to AKS.

--no-wait

Flag to not wait for asynchronous calls.

--nr --num-replicas

The number of containers to allocate for this Webservice. No default; if this parameter is not set, the autoscaler is enabled by default.

--overwrite

Overwrite the existing service if the name conflicts.

--path

Path to a project folder. Default: current directory.

--period-seconds --ps

How often (in seconds) to perform the liveness probe. Defaults to 10 seconds. Minimum value is 1.

--pi --profile-input

Path to a JSON file containing profiling results.

--po --port

The local port on which to expose the service's HTTP endpoint.

--property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --property options.

default value: []
--replica-max-concurrent-requests --rm

The number of maximum concurrent requests per node to allow for this Webservice. Defaults to 1.

--resource-group -g

Resource group corresponding to the provided workspace.

--rt --runtime

Which runtime to use for the image. Currently supported runtimes are 'spark-py' and 'python'.

accepted values: python, python-slim, spark-py

--sc --ssl-cname

The CNAME to use if SSL is enabled.

--scoring-timeout-ms --tm

A timeout to enforce for scoring calls to this Webservice. Defaults to 60000.

--sd --source-directory

Path to the folder that contains all files needed to create the image.

--se --ssl-enabled

Whether or not to enable SSL for this Webservice. Defaults to False.

--sk --ssl-key-pem-file

The key file needed if SSL is enabled.

--sp --ssl-cert-pem-file

The cert file needed if SSL is enabled.

--st --success-threshold

Minimum consecutive successes for the liveness probe to be considered successful after having failed. Defaults to 1. Minimum value is 1.

--subnet-name

Name of the subnet inside the vnet.

--subscription-id

Specifies the subscription ID.

--tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --tag options.

default value: []
--timeout-seconds --ts

Number of seconds after which the liveness probe times out. Defaults to 2 seconds. Minimum value is 1.

--token-auth-enabled

Whether or not to enable token auth for this Webservice. Ignored if not deploying to AKS. Defaults to False.

--tp --traffic-percentile

The amount of traffic the version takes in an endpoint. Can be a decimal. Defaults to 0.

--vault-base-url

Vault base url for encryption properties in customer-managed keys (CMK) for ACI.

--version-name --vn

The version name in an endpoint. Defaults to endpoint name for the first version.

--vnet-name

Name of the virtual network.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model download

Download a model from the workspace.

az ml model download --model-id
                     --target-dir
                     [--overwrite]
                     [--path]
                     [--resource-group]
                     [--subscription-id]
                     [--workspace-name]
                     [-v]
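
A minimal usage sketch (model ID, target directory, and workspace names are hypothetical):

```shell
# Download model "sklearn-model:1" into ./models, overwriting any existing file
az ml model download \
    --model-id sklearn-model:1 \
    --target-dir ./models \
    --overwrite \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```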

Required Parameters

--model-id -i

ID of model.

--target-dir -t

Target directory to download the model file to.

Optional Parameters

--overwrite

Overwrite the file if one with the same name already exists in the target directory.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--subscription-id

Specifies the subscription ID.

--workspace-name -w

Name of the workspace containing model to show.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model list

List models in the workspace.

az ml model list [--dataset-id]
                 [--latest]
                 [--model-name]
                 [--path]
                 [--property]
                 [--resource-group]
                 [--run-id]
                 [--subscription-id]
                 [--tag]
                 [--workspace-name]
                 [-v]
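
For example, the list can be filtered by name and restricted to latest versions, with table output for readability (workspace and model names are hypothetical):

```shell
# List only the latest version of each model named "sklearn-model", as a table
az ml model list \
    --model-name sklearn-model \
    --latest \
    --workspace-name myworkspace \
    --resource-group myresourcegroup \
    --output table
```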

Optional Parameters

--dataset-id

If provided, will only show models with the specified dataset ID.

--latest -l

If provided, will only return models with the latest version.

--model-name -n

An optional model name to filter the list by.

--path

Path to a project folder. Default: current directory.

--property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --property options.

default value: []
--resource-group -g

Resource group corresponding to the provided workspace.

--run-id

If provided, will only show models with the specified Run ID.

--subscription-id

Specifies the subscription ID.

--tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --tag options.

default value: []
--workspace-name -w

Name of the workspace containing models to list.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model package

Package a model in the workspace.

az ml model package [--cf]
                    [--ed]
                    [--entry-script]
                    [--environment-name]
                    [--environment-version]
                    [--ic]
                    [--il]
                    [--image-name]
                    [--model]
                    [--model-metadata-file]
                    [--no-wait]
                    [--output-path]
                    [--path]
                    [--resource-group]
                    [--rt]
                    [--sd]
                    [--subscription-id]
                    [--workspace-name]
                    [-v]
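
A minimal usage sketch, assuming a registered model and an existing inferenceconfig.json (all names are hypothetical). Passing --output-path writes the Dockerfile and build context locally instead of building in the workspace ACR:

```shell
# Package a registered model, writing the Docker build context to ./docker-context
az ml model package \
    --model sklearn-model:1 \
    --ic inferenceconfig.json \
    --image-name sklearn-model-image \
    --output-path ./docker-context \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```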

Optional Parameters

--cf --conda-file

Path to local file containing a conda environment definition to use for the package.

--ed --environment-directory

Directory for the Azure Machine Learning Environment for packaging. This is the same directory path as provided to the 'az ml environment scaffold' command.

--entry-script --es

Path to local file that contains the code to run for service (relative path from source_directory if one is provided).

--environment-name -e

Name of Azure Machine Learning Environment for packaging.

--environment-version --ev

Version of an existing Azure Machine Learning Environment for packaging.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--il --image-label

Label to give the built package image.

--image-name --in

Name to give the built package image.

--model -m

The ID of the model to be packaged. Multiple models can be specified with additional -m arguments. Models need to be registered first.

default value: []
--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

default value: []
--no-wait

Flag to not wait for asynchronous calls.

--output-path

Output path for the Docker context. If an output path is passed, a Dockerfile and the necessary build context will be written to that path instead of building an image in the workspace ACR.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--rt --runtime

Which runtime to use for the package. Currently supported runtimes are 'spark-py' and 'python'.

accepted values: python, python-slim, spark-py

--sd --source-directory

Path to the folder that contains all files needed to create the image.

--subscription-id

Specifies the subscription ID.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model profile

Profile model(s) in the workspace.

az ml model profile --name
                    [--base-image]
                    [--base-image-registry]
                    [--cc]
                    [--cf]
                    [--description]
                    [--ed]
                    [--entry-script]
                    [--environment-name]
                    [--environment-version]
                    [--gb]
                    [--ic]
                    [--idi]
                    [--model]
                    [--model-metadata-file]
                    [--output-metadata-file]
                    [--resource-group]
                    [--sd]
                    [--subscription-id]
                    [--workspace-name]
                    [-v]
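
A minimal usage sketch, assuming a registered model, an inferenceconfig.json, and a registered Tabular Dataset whose ID is held in an environment variable (all names are hypothetical):

```shell
# Profile the model against a Tabular Dataset and write the results metadata
# to profile.json for later use with "az ml model deploy"
az ml model profile \
    --name sklearn-model-profile \
    --model sklearn-model:1 \
    --ic inferenceconfig.json \
    --idi "$INPUT_DATASET_ID" \
    --output-metadata-file profile.json \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```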

Required Parameters

--name -n

The name of the model profile.

Optional Parameters

--base-image --bi

A custom image to be used as the base image. If no base image is given, the base image will be selected based on the given runtime parameter.

--base-image-registry --ir

Image registry that contains the base image.

--cc --cpu-cores

Double value for the maximum CPU to use when profiling.

--cf --conda-file

Path to local file containing a conda environment definition to use for the image.

--description

Description of the model profile.

--ed --environment-directory

Directory for the Azure Machine Learning Environment for deployment. This is the same directory path as provided to the 'az ml environment scaffold' command.

--entry-script --es

Path to local file that contains the code to run for service (relative path from source_directory if one is provided).

--environment-name -e

Name of Azure Machine Learning Environment for deployment.

--environment-version --ev

Version of an existing Azure Machine Learning Environment for deployment.

--gb --memory-in-gb

Double value for the maximum memory (in GB) to use when profiling.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--idi --input-dataset-id

ID of the Tabular Dataset to be used as input for the profile.

--model -m

The ID of the model to be deployed. Multiple models can be specified with additional -m arguments. Models need to be registered first.

default value: []
--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

default value: []
--output-metadata-file -t

Path to a JSON file where profile results metadata will be written. Used as input for model deployment.

--resource-group -g

Resource group corresponding to the provided workspace.

--sd --source-directory

Path to the folder that contains all files needed to create the image.

--subscription-id

Specifies the subscription ID.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model register

Register a model to the workspace.

az ml model register --name
                     [--asset-path]
                     [--cc]
                     [--description]
                     [--experiment-name]
                     [--gb]
                     [--gc]
                     [--model-framework]
                     [--model-framework-version]
                     [--model-path]
                     [--output-metadata-file]
                     [--path]
                     [--property]
                     [--resource-group]
                     [--run-id]
                     [--run-metadata-file]
                     [--sample-input-dataset-id]
                     [--sample-output-dataset-id]
                     [--subscription-id]
                     [--tag]
                     [--workspace-name]
                     [-v]
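
A minimal usage sketch registering a local model file (file, tag, and workspace names are hypothetical):

```shell
# Register a local scikit-learn model file and write the registration metadata
# to registration.json (usable later as -f input to "az ml model deploy")
az ml model register \
    --name sklearn-model \
    --model-path ./model.pkl \
    --model-framework ScikitLearn \
    --tag stage=dev \
    --output-metadata-file registration.json \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```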

Required Parameters

--name -n

Name of model to register.

Optional Parameters

--asset-path

The cloud path where the experiment run stores the model file.

--cc --cpu-cores

The default number of CPU cores to allocate for this model. Can be a decimal.

--description -d

Description of the model.

--experiment-name

The name of the experiment.

--gb --memory-gb

The default amount of memory (in GB) to allocate for this model. Can be a decimal.

--gc --gpu-cores

The default number of GPUs to allocate for this model.

--model-framework

Framework of the model to register. Currently supported frameworks: TensorFlow, ScikitLearn, Onnx, Custom, Multi.

--model-framework-version

Framework version of the model to register (e.g. 1.0.0, 2.4.1).

--model-path -p

Full path of the model file to register.

--output-metadata-file -t

Path to a JSON file where model registration metadata will be written. Used as input for model deployment.

--path

Path to a project folder. Default: current directory.

--property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --property options.

default value: []
--resource-group -g

Resource group corresponding to the provided workspace.

--run-id -r

The ID for the experiment run where model is registered from.

--run-metadata-file -f

Path to a JSON file containing experiment run metadata.

--sample-input-dataset-id

The ID for the sample input dataset.

--sample-output-dataset-id

The ID for the sample output dataset.

--subscription-id

Specifies the subscription ID.

--tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --tag options.

default value: []
--workspace-name -w

Name of the workspace to register this model with.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model show

Show a model in the workspace.

az ml model show [--model-id]
                 [--model-name]
                 [--path]
                 [--resource-group]
                 [--run-id]
                 [--subscription-id]
                 [--version]
                 [--workspace-name]
                 [-v]
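
A minimal usage sketch looking up a specific model version by name (all names are hypothetical):

```shell
# Show version 1 of the model "sklearn-model"
az ml model show \
    --model-name sklearn-model \
    --version 1 \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```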

Optional Parameters

--model-id -i

ID of model to show.

--model-name -n

Name of model to show.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--run-id

If provided, will only show models with the specified Run ID.

--subscription-id

Specifies the subscription ID.

--version

If provided, will only show models with the specified name and version.

--workspace-name -w

Name of the workspace containing model to show.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.

az ml model update

Update a model in the workspace.

az ml model update --model-id
                   [--add-property]
                   [--add-tag]
                   [--cc]
                   [--description]
                   [--gb]
                   [--gc]
                   [--path]
                   [--remove-tag]
                   [--resource-group]
                   [--sample-input-dataset-id]
                   [--sample-output-dataset-id]
                   [--subscription-id]
                   [--workspace-name]
                   [-v]
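
A minimal usage sketch (model ID, tag, description, and workspace names are hypothetical):

```shell
# Tag a model version for production and replace its description
az ml model update \
    --model-id sklearn-model:1 \
    --add-tag stage=prod \
    --description "Gradient boosted tree classifier, v1" \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```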

Required Parameters

--model-id -i

ID of model.

Optional Parameters

--add-property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --add-property options.

default value: []
--add-tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --add-tag options.

default value: []
--cc --cpu-cores

The default number of CPU cores to allocate for this model. Can be a decimal.

--description

Description to update the model with. Will replace the current description.

--gb --memory-gb

The default amount of memory (in GB) to allocate for this model. Can be a decimal.

--gc --gpu-cores

The default number of GPUs to allocate for this model.

--path

Path to a project folder. Default: current directory.

--remove-tag

Key of tag to remove. Multiple tags can be specified with multiple --remove-tag options.

default value: []
--resource-group -g

Resource group corresponding to the provided workspace.

--sample-input-dataset-id

The ID for the sample input dataset.

--sample-output-dataset-id

The ID for the sample output dataset.

--subscription-id

Specifies the subscription ID.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.

Global Parameters
--debug

Increase logging verbosity to show all debug logs.

--help -h

Show this help message and exit.

--only-show-errors

Only show errors, suppressing warnings.

--output -o

Output format.

accepted values: json, jsonc, none, table, tsv, yaml, yamlc
default value: json
--query

JMESPath query string. See http://jmespath.org/ for more information and examples.

--subscription

Name or ID of subscription. You can configure the default subscription using az account set -s NAME_OR_ID.

--verbose

Increase logging verbosity. Use --debug for full debug logs.