az ml model

Manage machine learning models.

Commands

az ml model delete Delete a model from the workspace.
az ml model deploy Deploy model(s) from the workspace.
az ml model download Download a model from the workspace.
az ml model list List models in the workspace.
az ml model package Package a model in the workspace.
az ml model profile Profile model(s) in the workspace.
az ml model register Register a model to the workspace.
az ml model show Show a model in the workspace.
az ml model update Update a model in the workspace.

az ml model delete

Delete a model from the workspace.

az ml model delete --model-id
[--path]
[--resource-group]
[--workspace-name]
[-v]

Required Parameters

--model-id -i

ID of model to delete.

Optional Parameters

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.
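
A typical invocation might look like the following; the model ID, workspace, and resource group are placeholder values.

```shell
# Delete a registered model by its ID (name:version)
az ml model delete \
    --model-id mymodel:1 \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```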

az ml model deploy

Deploy model(s) from the workspace.

az ml model deploy --name
[--ae]
[--ai]
[--ar]
[--as]
[--at]
[--autoscale-max-replicas]
[--autoscale-min-replicas]
[--base-image]
[--base-image-registry]
[--cc]
[--cf]
[--collect-model-data]
[--compute-target]
[--compute-type]
[--cuda-version]
[--dc]
[--description]
[--dn]
[--ds]
[--ed]
[--eg]
[--entry-script]
[--environment-name]
[--environment-version]
[--failure-threshold]
[--gb]
[--gc]
[--ic]
[--id]
[--kp]
[--ks]
[--lo]
[--max-request-wait-time]
[--model]
[--model-metadata-file]
[--namespace]
[--no-wait]
[--nr]
[--overwrite]
[--path]
[--period-seconds]
[--pi]
[--po]
[--property]
[--replica-max-concurrent-requests]
[--resource-group]
[--rt]
[--sc]
[--scoring-timeout-ms]
[--sd]
[--se]
[--sk]
[--sp]
[--st]
[--tag]
[--timeout-seconds]
[--token-auth-enabled]
[--workspace-name]
[-v]

Required Parameters

--name -n

The name of the service deployed.

Optional Parameters

--ae --auth-enabled

Whether or not to enable key auth for this Webservice. Defaults to False.

--ai --enable-app-insights

Whether or not to enable AppInsights for this Webservice. Defaults to False.

--ar --autoscale-refresh-seconds

How often (in seconds) the autoscaler should attempt to scale this Webservice. Defaults to 1.

--as --autoscale-enabled

Whether or not to enable autoscaling for this Webservice. Defaults to True if num_replicas is None.

--at --autoscale-target-utilization

The target utilization (in percent out of 100) the autoscaler should attempt to maintain for this Webservice. Defaults to 70.

--autoscale-max-replicas --ma

The maximum number of containers to use when autoscaling this Webservice. Defaults to 10.

--autoscale-min-replicas --mi

The minimum number of containers to use when autoscaling this Webservice. Defaults to 1.

--base-image --bi

A custom image to be used as the base image. If no base image is given, the base image is chosen based on the given runtime parameter.

--base-image-registry --ir

Image registry that contains the base image.

--cc --cpu-cores

The number of CPU cores to allocate for this Webservice. Can be a decimal. Defaults to 0.1.

--cf --conda-file

Path to local file containing a conda environment definition to use for the image.

--collect-model-data --md

Whether or not to enable model data collection for this Webservice. Defaults to False.

--compute-target --ct

Name of compute target. Only applicable when deploying to AKS.

--compute-type --cp

Compute type of service to deploy.

--cuda-version --cv

Version of CUDA to install for images that need GPU support. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service. Supported versions are 9.0, 9.1, and 10.0. If 'enable_gpu' is set, this defaults to '9.1'.

--dc --deploy-config-file

Path to a JSON or YAML file containing deployment metadata.

--description

Description of the service deployed.

--dn --dns-name-label

The DNS name label for this Webservice.

--ds --extra-docker-file-steps

Path to local file containing additional Docker steps to run when setting up image.

--ed --environment-directory

Directory for the Azure Machine Learning Environment for deployment. This is the same directory path as provided to the 'az ml environment scaffold' command.

--eg --enable-gpu

Whether or not to enable GPU support in the image. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service. Defaults to False.

--entry-script --es

Path to local file that contains the code to run for service (relative path from source_directory if one is provided).

--environment-name -e

Name of Azure Machine Learning Environment for deployment.

--environment-version --ev

Version of an existing Azure Machine Learning Environment for deployment.

--failure-threshold --ft

When a Pod starts and the liveness probe fails, Kubernetes will try --failure-threshold times before giving up. Defaults to 3. Minimum value is 1.

--gb --memory-gb

The amount of memory (in GB) to allocate for this Webservice. Can be a decimal.

--gc --gpu-cores

The number of GPU cores to allocate for this Webservice. Defaults to 1.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--id --initial-delay-seconds

Number of seconds after the container has started before liveness probes are initiated. Defaults to 310.

--kp --primary-key

A primary auth key to use for this Webservice.

--ks --secondary-key

A secondary auth key to use for this Webservice.

--lo --location

The Azure region to deploy this Webservice to. If not specified the Workspace location will be used. More details on available regions can be found here: https://azure.microsoft.com/en-us/global-infrastructure/services/?regions=all&products=container-instances.

--max-request-wait-time --mr

The maximum amount of time a request will stay in the queue (in milliseconds) before returning a 503 error. Defaults to 500.

--model -m

The ID of the model to be deployed. Multiple models can be specified with additional -m arguments. Models need to be registered first.

--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

--namespace

Kubernetes namespace in which to deploy the service: up to 63 lowercase alphanumeric ('a'-'z', '0'-'9') and hyphen ('-') characters. The first and last characters cannot be hyphens. Only applicable when deploying to AKS.

--no-wait

Flag to not wait for asynchronous calls.

--nr --num-replicas

The number of containers to allocate for this Webservice. No default; if this parameter is not set, the autoscaler is enabled by default.

--overwrite

Overwrite the existing service if name conflicts.

--path

Path to a project folder. Default: current directory.

--period-seconds --ps

How often (in seconds) to perform the liveness probe. Default to 10 seconds. Minimum value is 1.

--pi --profile-input

Path to a JSON file containing profiling results.

--po --port

The local port on which to expose the service's HTTP endpoint.

--property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --property options.

--replica-max-concurrent-requests --rm

The number of maximum concurrent requests per node to allow for this Webservice. Defaults to 1.

--resource-group -g

Resource group corresponding to the provided workspace.

--rt --runtime

Which runtime to use for the image. Accepted values: spark-py, python, python-slim.

--sc --ssl-cname

The CNAME to use if SSL is enabled.

--scoring-timeout-ms --tm

A timeout to enforce for scoring calls to this Webservice. Defaults to 60000.

--sd --source-directory

Path to the folder that contains all files needed to create the image.

--se --ssl-enabled

Whether or not to enable SSL for this Webservice. Defaults to False.

--sk --ssl-key-pem-file

The key file needed if SSL is enabled.

--sp --ssl-cert-pem-file

The cert file needed if SSL is enabled.

--st --success-threshold

Minimum consecutive successes for the liveness probe to be considered successful after having failed. Defaults to 1. Minimum value is 1.

--tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --tag options.

--timeout-seconds --ts

Number of seconds after which the liveness probe times out. Defaults to 2 seconds. Minimum value is 1.

--token-auth-enabled

Whether or not to enable token auth for this Webservice. Ignored if not deploying to AKS. Defaults to False.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.
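
A common pattern is to supply the entry script and environment through an inference configuration file and the compute settings through a deployment configuration file. The file names, model ID, workspace, and resource group below are placeholders.

```shell
# Deploy a registered model as a web service using config files
az ml model deploy \
    --name myservice \
    --model mymodel:1 \
    --inference-config-file inferenceconfig.json \
    --deploy-config-file deploymentconfig.json \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```

Pass additional -m arguments to deploy multiple models behind the same service, and --overwrite to replace an existing service with the same name.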

az ml model download

Download a model from the workspace.

az ml model download --model-id
--target-dir
[--overwrite]
[--path]
[--resource-group]
[--workspace-name]
[-v]

Required Parameters

--model-id -i

ID of model.

--target-dir -t

Target directory to download the model file to.

Optional Parameters

--overwrite

Overwrite the file if one with the same name exists in the target directory.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--workspace-name -w

Name of the workspace containing model to show.

-v

Verbosity flag.
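
For example, downloading a model's files to a local directory might look like the following; the model ID, target directory, workspace, and resource group are placeholders.

```shell
# Download a registered model's files to a local directory
az ml model download \
    --model-id mymodel:1 \
    --target-dir ./downloaded_model \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```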

az ml model list

List models in the workspace.

az ml model list [--model-name]
[--path]
[--property]
[--resource-group]
[--run-id]
[--tag]
[--workspace-name]
[-v]

Optional Parameters

--model-name -n

An optional model name to filter the list by.

--path

Path to a project folder. Default: current directory.

--property

Key/value property to filter by (e.g. key=value). Multiple properties can be specified with multiple --property options.

--resource-group -g

Resource group corresponding to the provided workspace.

--run-id

If provided, will only show models with the specified Run ID.

--tag

Key/value tag to filter by (e.g. key=value). Multiple tags can be specified with multiple --tag options.

--workspace-name -w

Name of the workspace containing models to list.

-v

Verbosity flag.
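
For example, listing all versions of a model by name; the model name, workspace, and resource group are placeholders. The global --output flag of the Azure CLI can be used to format the result as a table.

```shell
# List registered models filtered by name, rendered as a table
az ml model list \
    --model-name mymodel \
    --workspace-name myworkspace \
    --resource-group myresourcegroup \
    --output table
```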

az ml model package

Package a model in the workspace.

az ml model package [--cf]
[--ed]
[--entry-script]
[--environment-name]
[--environment-version]
[--ic]
[--model]
[--model-metadata-file]
[--no-wait]
[--output-path]
[--path]
[--resource-group]
[--rt]
[--sd]
[--workspace-name]
[-v]

Optional Parameters

--cf --conda-file

Path to local file containing a conda environment definition to use for the package.

--ed --environment-directory

Directory for the Azure Machine Learning Environment for packaging. This is the same directory path as provided to the 'az ml environment scaffold' command.

--entry-script --es

Path to local file that contains the code to run for service (relative path from source_directory if one is provided).

--environment-name -e

Name of Azure Machine Learning Environment for packaging.

--environment-version --ev

Version of an existing Azure Machine Learning Environment for packaging.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--model -m

The ID of the model to be packaged. Multiple models can be specified with additional -m arguments. Models need to be registered first.

--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

--no-wait

Flag to not wait for asynchronous calls.

--output-path

Output path for the Docker context. If an output path is passed, instead of building an image in the workspace ACR, a Dockerfile and the necessary build context will be written to that path.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--rt --runtime

Which runtime to use for the package. Accepted values: spark-py, python, python-slim.

--sd --source-directory

Path to the folder that contains all files needed to create the image.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.
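
For example, generating a local Docker build context for a model instead of building the image in the workspace ACR; the model ID, file names, workspace, and resource group are placeholders.

```shell
# Write a Dockerfile and build context for a model package to a local path
az ml model package \
    --model mymodel:1 \
    --inference-config-file inferenceconfig.json \
    --output-path ./build_context \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```

Omitting --output-path builds the image in the workspace Azure Container Registry instead.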

az ml model profile

Profile model(s) in the workspace.

az ml model profile --name
[--description]
[--ic]
[--input-data]
[--model]
[--model-metadata-file]
[--output-metadata-file]
[--resource-group]
[--workspace-name]
[-v]

Required Parameters

--name -n

The name of the model profile.

Optional Parameters

--description

Description of the model profile.

--ic --inference-config-file

Path to a JSON or YAML file containing inference configuration.

--input-data -d

The data to use for calling the web service.

--model -m

The ID of the model to be deployed. Multiple models can be specified with additional -m arguments. Models need to be registered first.

--model-metadata-file -f

Path to a JSON file containing model registration metadata. Multiple models can be provided using multiple -f parameters.

--output-metadata-file -t

Path to a JSON file where profile results metadata will be written. Used as input for model deployment.

--resource-group -g

Resource group corresponding to the provided workspace.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.
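
A profiling run might be invoked as follows; the profile name, model ID, file names, workspace, resource group, and the shape of the input payload are all placeholders, and the exact JSON format of --input-data depends on what the entry script expects.

```shell
# Profile a model to estimate resource requirements before deployment
az ml model profile \
    --name myprofile \
    --model mymodel:1 \
    --inference-config-file inferenceconfig.json \
    --input-data '{"data": [[1, 2, 3, 4]]}' \
    --output-metadata-file profileresults.json \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```

The file written via --output-metadata-file can then be passed to 'az ml model deploy' through its --profile-input parameter.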

az ml model register

Register a model to the workspace.

az ml model register --name
[--asset-path]
[--cc]
[--description]
[--experiment-name]
[--gb]
[--gc]
[--model-framework]
[--model-framework-version]
[--model-path]
[--output-metadata-file]
[--path]
[--property]
[--resource-group]
[--run-id]
[--run-metadata-file]
[--sample-input-dataset-id]
[--sample-output-dataset-id]
[--tag]
[--workspace-name]
[-v]

Required Parameters

--name -n

Name of model to register.

Optional Parameters

--asset-path

The cloud path where the experiment run stores the model file.

--cc --cpu-cores

The default number of CPU cores to allocate for this model. Can be a decimal.

--description -d

Description of the model.

--experiment-name

The name of the experiment.

--gb --memory-gb

The default amount of memory (in GB) to allocate for this model. Can be a decimal.

--gc --gpu-cores

The default number of GPUs to allocate for this model.

--model-framework

Framework of the model to register. Currently supported frameworks: TensorFlow, ScikitLearn, Onnx, Custom.

--model-framework-version

Framework version of the model to register (e.g. 1.0.0, 2.4.1).

--model-path -p

Full path of the model file to register.

--output-metadata-file -t

Path to a JSON file where model registration metadata will be written. Used as input for model deployment.

--path

Path to a project folder. Default: current directory.

--property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --property options.

--resource-group -g

Resource group corresponding to the provided workspace.

--run-id -r

The ID for the experiment run where model is registered from.

--run-metadata-file -f

Path to a JSON file containing experiment run metadata.

--sample-input-dataset-id

The ID for the sample input dataset.

--sample-output-dataset-id

The ID for the sample output dataset.

--tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --tag options.

--workspace-name -w

Name of the workspace to register this model with.

-v

Verbosity flag.
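
For example, registering a local model file with a tag; the model name, file path, tag, workspace, and resource group are placeholders.

```shell
# Register a local model file in the workspace
az ml model register \
    --name mymodel \
    --model-path ./model.pkl \
    --tag stage=dev \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```

To register a model produced by an experiment run instead, supply --run-id together with --asset-path rather than --model-path.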

az ml model show

Show a model in the workspace.

az ml model show [--model-id]
[--model-name]
[--path]
[--resource-group]
[--run-id]
[--version]
[--workspace-name]
[-v]

Optional Parameters

--model-id -i

ID of model to show.

--model-name -n

Name of model to show.

--path

Path to a project folder. Default: current directory.

--resource-group -g

Resource group corresponding to the provided workspace.

--run-id

If provided, will only show models with the specified Run ID.

--version

If provided, will only show models with the specified name and version.

--workspace-name -w

Name of the workspace containing model to show.

-v

Verbosity flag.
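
For example, showing a specific model version; the model name, version, workspace, and resource group are placeholders. A model can be addressed either by --model-id or by --model-name with an optional --version.

```shell
# Show details of a specific registered model version
az ml model show \
    --model-name mymodel \
    --version 1 \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```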

az ml model update

Update a model in the workspace.

az ml model update --model-id
[--add-property]
[--add-tag]
[--cc]
[--description]
[--gb]
[--gc]
[--path]
[--remove-tag]
[--resource-group]
[--sample-input-dataset-id]
[--sample-output-dataset-id]
[--workspace-name]
[-v]

Required Parameters

--model-id -i

ID of model.

Optional Parameters

--add-property

Key/value property to add (e.g. key=value). Multiple properties can be specified with multiple --add-property options.

--add-tag

Key/value tag to add (e.g. key=value). Multiple tags can be specified with multiple --add-tag options.

--cc --cpu-cores

The default number of CPU cores to allocate for this model. Can be a decimal.

--description

Description to update the model with. Will replace the current description.

--gb --memory-gb

The default amount of memory (in GB) to allocate for this model. Can be a decimal.

--gc --gpu-cores

The default number of GPUs to allocate for this model.

--path

Path to a project folder. Default: current directory.

--remove-tag

Key of tag to remove. Multiple tags can be specified with multiple --remove-tag options.

--resource-group -g

Resource group corresponding to the provided workspace.

--sample-input-dataset-id

The ID for the sample input dataset.

--sample-output-dataset-id

The ID for the sample output dataset.

--workspace-name -w

Name of the workspace.

-v

Verbosity flag.
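
For example, retagging a model and replacing its description; the model ID, tag, description, workspace, and resource group are placeholders.

```shell
# Add a tag to a registered model and replace its description
az ml model update \
    --model-id mymodel:1 \
    --add-tag stage=production \
    --description "Model promoted to production" \
    --workspace-name myworkspace \
    --resource-group myresourcegroup
```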