Deploy a model using a custom Docker base image

Learn how to use a custom Docker base image when deploying trained models with Azure Machine Learning.

Azure Machine Learning will use a default base Docker image if none is specified. You can find the specific Docker image used with azureml.core.runconfig.DEFAULT_CPU_IMAGE. You can also use Azure Machine Learning environments to select a specific base image, or use a custom one that you provide.

A base image is used as the starting point when an image is created for a deployment. It provides the underlying operating system and components. The deployment process then adds additional components, such as your model, conda environment, and other assets, to the image.

Typically, you create a custom base image when you want to use Docker to manage your dependencies, maintain tighter control over component versions, or save time during deployment. You might also want to install software required by your model when the installation process takes a long time. Installing the software when creating the base image means that you don't have to install it for each deployment.


When you deploy a model, you cannot override core components such as the web server or IoT Edge components. These components provide a known working environment that is tested and supported by Microsoft.


Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.

This document is broken into two sections:

  • Create a custom base image: Provides information to admins and DevOps on creating a custom image and configuring authentication to an Azure Container Registry using the Azure CLI and Machine Learning CLI.
  • Deploy a model using a custom base image: Provides information to Data Scientists and DevOps / ML Engineers on using custom images when deploying a trained model from the Python SDK or ML CLI.


Create a custom base image

The information in this section assumes that you are using an Azure Container Registry to store Docker images. Use the following checklist when planning to create custom images for Azure Machine Learning:

  • Will you use the Azure Container Registry created for the Azure Machine Learning workspace, or a standalone Azure Container Registry?

    When using images stored in the container registry for the workspace, you do not need to authenticate to the registry. Authentication is handled by the workspace.


    The Azure Container Registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

    When using images stored in a standalone container registry, you will need to configure a service principal that has at least read access. You then provide the service principal ID (username) and password to anyone that uses images from the registry. The exception is if you make the container registry publicly accessible.

    For information on creating a private Azure Container Registry, see Create a private container registry.

    For information on using service principals with Azure Container Registry, see Azure Container Registry authentication with service principals.

  • Azure Container Registry and image information: Provide the image name to anyone that needs to use it. For example, an image named myimage, stored in a registry named myregistry, is referenced as myregistry.azurecr.io/myimage when using the image for model deployment.
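The reference format above can be sketched as a small helper. This is a hypothetical illustration, not part of the Azure ML SDK; it assumes the default `<registry>.azurecr.io` login server suffix used by Azure Container Registry:

```python
def acr_image_reference(registry: str, image: str, tag: str = "latest") -> str:
    """Build the fully qualified reference for an image in an Azure Container Registry."""
    # <registry>.azurecr.io is the default login server name for an ACR instance.
    return f"{registry}.azurecr.io/{image}:{tag}"

print(acr_image_reference("myregistry", "myimage", "v1"))  # myregistry.azurecr.io/myimage:v1
```

If the registry uses a custom domain, substitute that domain for the `.azurecr.io` suffix.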

Image requirements

Azure Machine Learning only supports Docker images that provide the following software:

  • Ubuntu 16.04 or greater.
  • Conda 4.5.# or greater.
  • Python 3.6+.

To use Datasets, install the libfuse-dev package. Also make sure to install any user-space packages you may need.

Azure ML maintains a set of CPU and GPU base images published to Microsoft Container Registry that you can optionally leverage (or reference) instead of creating your own custom image. To see the Dockerfiles for those images, refer to the Azure/AzureML-Containers GitHub repository.

For GPU images, Azure ML currently offers both cuda9 and cuda10 base images. The major dependencies installed in these base images are:

Dependencies | IntelMPI CPU         | OpenMPI CPU    | IntelMPI GPU         | OpenMPI GPU
------------ | -------------------- | -------------- | -------------------- | --------------
miniconda    | ==4.5.11             | ==4.5.11       | ==4.5.11             | ==4.5.11
mpi          | intelmpi==2018.3.222 | openmpi==3.1.2 | intelmpi==2018.3.222 | openmpi==3.1.2
cuda         | -                    | -              | 9.0/10.0             | 9.0/10.0/10.1
cudnn        | -                    | -              | 7.4/7.5              | 7.4/7.5
nccl         | -                    | -              | 2.4                  | 2.4
git          | 2.7.4                | 2.7.4          | 2.7.4                | 2.7.4

The CPU images are built from ubuntu16.04. The GPU images for cuda9 are built from nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04. The GPU images for cuda10 are built from nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04.


When using custom Docker images, it is recommended that you pin package versions in order to better ensure reproducibility.

Get container registry information

In this section, learn how to get the name of the Azure Container Registry for your Azure Machine Learning workspace.


The Azure Container Registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

If you've already trained or deployed models using Azure Machine Learning, a container registry was created for your workspace. To find the name of this container registry, use the following steps:

  1. Open a new shell or command-prompt and use the following command to authenticate to your Azure subscription:

    az login

    Follow the prompts to authenticate to the subscription.


    After logging in, you see a list of subscriptions associated with your Azure account. The subscription information with isDefault: true is the currently activated subscription for Azure CLI commands. This subscription must be the same one that contains your Azure Machine Learning workspace. You can find the subscription ID from the Azure portal by visiting the overview page for your workspace. You can also use the SDK to get the subscription ID from the workspace object. For example, Workspace.from_config().subscription_id.

    To select another subscription, use the az account set -s <subscription name or ID> command and specify the subscription name or ID to switch to. For more information about subscription selection, see Use multiple Azure Subscriptions.

  2. Use the following command to list the container registry for the workspace. Replace <myworkspace> with your Azure Machine Learning workspace name. Replace <resourcegroup> with the Azure resource group that contains your workspace:

    az ml workspace show -w <myworkspace> -g <resourcegroup> --query containerRegistry


    If you get an error message stating that the ml extension isn't installed, use the following command to install it:

    az extension add -n azure-cli-ml

    The information returned is similar to the following text:

        "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ContainerRegistry/registries/<registry_name>"

    The <registry_name> value is the name of the Azure Container Registry for your workspace.
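The value returned by the --query containerRegistry command is the full Azure resource ID of the registry, and the registry name is its last path segment. A small helper can extract it; the resource ID below is a hypothetical example:

```python
def registry_name_from_resource_id(resource_id: str) -> str:
    """Extract the registry name from an Azure Container Registry resource ID."""
    # Resource IDs are slash-delimited; the registry name is the final segment.
    # strip('"') handles output that the CLI wraps in quotes.
    return resource_id.rstrip("/").rsplit("/", 1)[-1].strip('"')

resource_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
               "/resourceGroups/myresourcegroup"
               "/providers/Microsoft.ContainerRegistry/registries/myregistry")
print(registry_name_from_resource_id(resource_id))  # myregistry
```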

Build a custom base image

The steps in this section walk through creating a custom Docker image in your Azure Container Registry. For sample Dockerfiles, see the Azure/AzureML-Containers GitHub repo.

  1. Create a new text file named Dockerfile, and use the following text as the contents:

    FROM ubuntu:16.04
    # The version values below are examples; pin them to the versions you need.
    ARG CONDA_VERSION=4.5.12
    ARG PYTHON_VERSION=3.6
    ARG AZUREML_SDK_VERSION=1.0.45
    ARG INFERENCE_SCHEMA_VERSION=1.0.1
    ENV PATH /opt/miniconda/bin:$PATH
    ENV DEBIAN_FRONTEND=noninteractive
    RUN apt-get update --fix-missing && \
        apt-get install -y wget bzip2 && \
        apt-get install -y fuse && \
        apt-get clean -y && \
        rm -rf /var/lib/apt/lists/*
    RUN useradd --create-home dockeruser
    WORKDIR /home/dockeruser
    USER dockeruser
    RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -O ~/miniconda.sh && \
        /bin/bash ~/miniconda.sh -b -p ~/miniconda && \
        rm ~/miniconda.sh && \
        ~/miniconda/bin/conda clean -tipsy
    ENV PATH="/home/dockeruser/miniconda/bin/:${PATH}"
    RUN conda install -y conda=${CONDA_VERSION} python=${PYTHON_VERSION} && \
        pip install azureml-defaults==${AZUREML_SDK_VERSION} inference-schema==${INFERENCE_SCHEMA_VERSION} && \
        conda clean -aqy && \
        rm -rf ~/miniconda/pkgs && \
        find ~/miniconda/ -type d -name __pycache__ -prune -exec rm -rf {} \;
  2. From a shell or command-prompt, use the following to authenticate to the Azure Container Registry. Replace the <registry_name> with the name of the container registry you want to store the image in:

    az acr login --name <registry_name>
  3. To upload the Dockerfile, and build it, use the following command. Replace <registry_name> with the name of the container registry you want to store the image in:

    az acr build --image myimage:v1 --registry <registry_name> --file Dockerfile .


    In this example, a tag of :v1 is applied to the image. If no tag is provided, a tag of :latest is applied.

    During the build process, information is streamed back to the command line. If the build is successful, you receive a message similar to the following text:

    Run ID: cda was successful after 2m56s
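If your Dockerfile declares ARG values, as in the example earlier, you can supply them at build time with --build-arg flags on az acr build. The following sketch assembles such a command; the ARG names and version values are illustrative:

```python
def acr_build_command(registry: str, image: str, build_args: dict) -> list:
    """Assemble an 'az acr build' command with --build-arg flags for Dockerfile ARGs."""
    cmd = ["az", "acr", "build", "--image", image,
           "--registry", registry, "--file", "Dockerfile", "."]
    for name, value in build_args.items():
        # Each Dockerfile ARG gets its own --build-arg NAME=VALUE pair.
        cmd += ["--build-arg", f"{name}={value}"]
    return cmd

cmd = acr_build_command("myregistry", "myimage:v1",
                        {"CONDA_VERSION": "4.5.12", "AZUREML_SDK_VERSION": "1.0.45"})
print(" ".join(cmd))
```

The list form can be passed directly to subprocess.run, which avoids shell-quoting issues.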

For more information on building images with an Azure Container Registry, see Build and run a container image using Azure Container Registry Tasks.

For more information on uploading existing images to an Azure Container Registry, see Push your first image to a private Docker container registry.

Use a custom base image

To use a custom image, you need the following information:

  • The image name, including the registry address. For example, an image named myimage stored in a registry named myregistry is referenced as myregistry.azurecr.io/myimage.


    For custom images that you've created, be sure to include any tag that was used when the image was created, such as :v1. If you did not use a specific tag when creating the image, a tag of :latest was applied.

  • If the image is in a private repository, you need the following information:

    • The registry address. For example, myregistry.azurecr.io.
    • A service principal username and password that has read access to the registry.

    If you do not have this information, speak to the administrator for the Azure Container Registry that contains your image.
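The tag behavior described above can be expressed as a small normalization step: if a reference carries no explicit tag, :latest is what was applied. A hypothetical helper, not part of the SDK:

```python
def normalize_image_tag(image_ref: str) -> str:
    """Append ':latest' when an image reference carries no explicit tag."""
    # A tag is a ':' after the last '/'; a ':' before that is a registry port.
    last_segment = image_ref.rsplit("/", 1)[-1]
    return image_ref if ":" in last_segment else image_ref + ":latest"

print(normalize_image_tag("myregistry.azurecr.io/myimage"))     # myregistry.azurecr.io/myimage:latest
print(normalize_image_tag("myregistry.azurecr.io/myimage:v1"))  # myregistry.azurecr.io/myimage:v1
```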

Publicly available base images

Microsoft provides several Docker images in a publicly accessible repository that you can use with the steps in this section:

  • Core image for Azure Machine Learning
  • Image containing ONNX Runtime for CPU inferencing
  • Image containing ONNX Runtime and CUDA for GPU
  • Image containing ONNX Runtime and TensorRT for GPU
  • Image containing ONNX Runtime and OpenVINO for the Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs
  • Image containing ONNX Runtime and OpenVINO for Intel Movidius™ USB sticks

For more information about the ONNX Runtime base images, see the ONNX Runtime dockerfile section in the GitHub repo.


Since these images are publicly available, you do not need to provide an address, username or password when using them.

For more information, see Azure Machine Learning containers repository on GitHub.

Use an image with the Azure Machine Learning SDK

To use an image stored in the Azure Container Registry for your workspace, or a container registry that is publicly accessible, set the following Environment attributes:

  • docker.enabled=True
  • docker.base_image: Set to the registry and path to the image.
from azureml.core.environment import Environment
# Create the environment
myenv = Environment(name="myenv")
# Enable Docker and reference an image
myenv.docker.enabled = True
myenv.docker.base_image = ""  # set to the registry path of your image, for example myregistry.azurecr.io/myimage:v1

To use an image from a private container registry that is not in your workspace, you must use docker.base_image_registry to specify the address of the repository and a user name and password:

# Set the container registry information
myenv.docker.base_image_registry.address = ""  # for example, myregistry.azurecr.io
myenv.docker.base_image_registry.username = "username"
myenv.docker.base_image_registry.password = "password"

myenv.inferencing_stack_version = "latest"  # This will install the inference specific apt packages.

# Define the packages needed by the model and scripts
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
# You must list azureml-defaults as a pip dependency
conda_dep.add_pip_package("azureml-defaults>=1.0.45")
myenv.python.conda_dependencies = conda_dep

You must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. You must also set the inferencing_stack_version property on the environment to "latest"; this installs specific apt packages needed by the web service.
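The >= 1.0.45 floor is an ordinary dotted-version comparison, which can be checked without the SDK by comparing numeric tuples. A minimal sketch (no pre-release handling):

```python
def version_at_least(version: str, floor: str) -> bool:
    """Compare dotted version strings numerically, component by component."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(floor)

print(version_at_least("1.0.45", "1.0.45"))  # True
print(version_at_least("1.0.44", "1.0.45"))  # False
```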

After defining the environment, use it with an InferenceConfig object to define the inference environment in which the model and web service will run.

from azureml.core.model import InferenceConfig
# Use environment in InferenceConfig
inference_config = InferenceConfig(entry_script="",  # path to your scoring script
                                   environment=myenv)

At this point, you can continue with deployment. For example, the following code snippet would deploy a web service locally using the inference configuration and custom image:

from azureml.core.model import Model
from azureml.core.webservice import LocalWebservice, Webservice

deployment_config = LocalWebservice.deploy_configuration(port=8890)
service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

For more information on deployment, see Deploy models with Azure Machine Learning.

For more information on customizing your Python environment, see Create and manage environments for training and deployment.

Use an image with the Machine Learning CLI


Currently the Machine Learning CLI can use images from the Azure Container Registry for your workspace or publicly accessible repositories. It cannot use images from standalone private registries.

Before deploying a model using the Machine Learning CLI, create an environment that uses the custom image. Then create an inference configuration file that references the environment. You can also define the environment directly in the inference configuration file. The following JSON document demonstrates how to reference an image in a public container registry. In this example, the environment is defined inline:

    "entryScript": "",
    "environment": {
        "docker": {
            "arguments": [],
            "baseDockerfile": null,
            "baseImage": "",
            "enabled": false,
            "sharedVolumes": true,
            "shmSize": null
        "environmentVariables": {
        "name": "my-deploy-env",
        "python": {
            "baseCondaEnvironment": null,
            "condaDependencies": {
                "channels": [
                "dependencies": [
                        "pip": [
                "name": "project_environment"
            "condaDependenciesFile": null,
            "interpreterPath": "python",
            "userManagedDependencies": false
        "version": "1"

This file is used with the az ml model deploy command. The --ic parameter is used to specify the inference configuration file.

az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget

For more information on deploying a model using the ML CLI, see the "model registration, profiling, and deployment" section of the CLI extension for Azure Machine Learning article.

Next steps