Deploy a model using a custom Docker base image

APPLIES TO: Basic edition, Enterprise edition

Learn how to use a custom Docker base image when deploying trained models with Azure Machine Learning.

When you deploy a trained model to a web service or IoT Edge device, a package is created that contains a web server to handle incoming requests.

Azure Machine Learning provides a default Docker base image so you don't have to worry about creating one. You can also use Azure Machine Learning environments to select a specific base image, or use a custom one that you provide.

A base image is used as the starting point when an image is created for a deployment. It provides the underlying operating system and components. The deployment process then adds additional components, such as your model, conda environment, and other assets, to the image before deploying it.

Typically, you create a custom base image when you want to use Docker to manage your dependencies, maintain tighter control over component versions, or save time during deployment. For example, you might want to standardize on a specific version of Python, Conda, or other components. You might also want to install software required by your model, where the installation process takes a long time. Installing the software when creating the base image means that you don't have to install it for each deployment.

Important

When you deploy a model, you cannot override core components such as the web server or IoT Edge components. These components provide a known working environment that is tested and supported by Microsoft.

Warning

Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.

This document is broken into two sections:

  • Create a custom base image: Provides information to admins and DevOps on creating a custom image and configuring authentication to an Azure Container Registry using the Azure CLI and Machine Learning CLI.
  • Deploy a model using a custom base image: Provides information to Data Scientists and DevOps / ML Engineers on using custom images when deploying a trained model from the Python SDK or ML CLI.

Prerequisites

Create a custom base image

The information in this section assumes that you are using an Azure Container Registry to store Docker images. Use the following checklist when planning to create custom images for Azure Machine Learning:

  • Will you use the Azure Container Registry created for the Azure Machine Learning workspace, or a standalone Azure Container Registry?

    When using images stored in the container registry for the workspace, you do not need to authenticate to the registry. Authentication is handled by the workspace.

    Warning

    The Azure Container Registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

    For information on retrieving the name of the Azure Container Registry for your workspace, see the Get container registry name section of this article.

    When using images stored in a standalone container registry, you will need to configure a service principal that has at least read access. You then provide the service principal ID (username) and password to anyone that uses images from the registry. The exception is if you make the container registry publicly accessible.

    For information on creating a private Azure Container Registry, see Create a private container registry.

    For information on using service principals with Azure Container Registry, see Azure Container Registry authentication with service principals.

  • Azure Container Registry and image information: Provide the image name to anyone that needs to use it. For example, an image named myimage, stored in a registry named myregistry, is referenced as myregistry.azurecr.io/myimage when using the image for model deployment.

  • Image requirements: Azure Machine Learning only supports Docker images that provide the following software:

    • Ubuntu 16.04 or greater.
    • Conda 4.5.# or greater.
    • Python 3.5.# or 3.6.#.
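The image reference format described above can be sketched as a small helper. This is purely illustrative; image_reference is a hypothetical name, not part of the Azure Machine Learning SDK:

```python
# Hypothetical helper showing the reference format
# <registry_name>.azurecr.io/<image_name>[:<tag>].
def image_reference(registry_name, image_name, tag=None):
    """Build a fully qualified Azure Container Registry image reference."""
    ref = f"{registry_name}.azurecr.io/{image_name}"
    if tag:
        ref += f":{tag}"
    return ref

print(image_reference("myregistry", "myimage"))        # myregistry.azurecr.io/myimage
print(image_reference("myregistry", "myimage", "v1"))  # myregistry.azurecr.io/myimage:v1
```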

Get container registry information

In this section, learn how to get the name of the Azure Container Registry for your Azure Machine Learning workspace.

Warning

The Azure Container Registry for your workspace is created the first time you train or deploy a model using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.

If you've already trained or deployed models using Azure Machine Learning, a container registry was created for your workspace. To find the name of this container registry, use the following steps:

  1. Open a new shell or command-prompt and use the following command to authenticate to your Azure subscription:

    az login
    

    Follow the prompts to authenticate to the subscription.

    After logging in, you see a list of subscriptions associated with your Azure account. The subscription information with isDefault: true is the currently activated subscription for Azure CLI commands. This subscription must be the same one that contains your Azure Machine Learning workspace. You can find the subscription ID from the Azure portal by visiting the overview page for your workspace. You can also use the SDK to get the subscription ID from the workspace object. For example, Workspace.from_config().subscription_id.

    To select another subscription, use the az account set command with the subscription ID to switch to. For more information about subscription selection, see Use multiple Azure Subscriptions.

  2. Use the following command to list the container registry for the workspace. Replace <myworkspace> with your Azure Machine Learning workspace name. Replace <resourcegroup> with the Azure resource group that contains your workspace:

    az ml workspace show -w <myworkspace> -g <resourcegroup> --query containerRegistry
    

    Tip

    If you get an error message stating that the ml extension isn't installed, use the following command to install it:

    az extension add -n azure-cli-ml
    

    The information returned is similar to the following text:

    /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ContainerRegistry/registries/<registry_name>
    

    The <registry_name> value is the name of the Azure Container Registry for your workspace.
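Since the registry name is the last path segment of the returned resource ID, extracting it can be sketched as follows. registry_name_from_id is a hypothetical helper, not an Azure CLI or SDK function:

```python
# Sketch: pull the registry name out of the resource ID returned by
# `az ml workspace show ... --query containerRegistry`.
def registry_name_from_id(resource_id):
    """Return the final path segment of an ACR resource ID."""
    return resource_id.rstrip("/").split("/")[-1]

resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/myresourcegroup"
    "/providers/Microsoft.ContainerRegistry/registries/myregistry"
)
print(registry_name_from_id(resource_id))  # myregistry
```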

Build a custom base image

The steps in this section walk through creating a custom Docker image in your Azure Container Registry.

  1. Create a new text file named Dockerfile, and use the following text as the contents:

    FROM ubuntu:16.04
    
    ARG CONDA_VERSION=4.5.12
    ARG PYTHON_VERSION=3.6
    
    ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
    ENV PATH /opt/miniconda/bin:$PATH
    
    RUN apt-get update --fix-missing && \
        apt-get install -y wget bzip2 && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
    
    RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -O ~/miniconda.sh && \
        /bin/bash ~/miniconda.sh -b -p /opt/miniconda && \
        rm ~/miniconda.sh && \
        /opt/miniconda/bin/conda clean -tipsy
    
    RUN conda install -y conda=${CONDA_VERSION} python=${PYTHON_VERSION} && \
        conda clean -aqy && \
        rm -rf /opt/miniconda/pkgs && \
        find / -type d -name __pycache__ -prune -exec rm -rf {} \;
    
  2. From a shell or command-prompt, use the following to authenticate to the Azure Container Registry. Replace the <registry_name> with the name of the container registry you want to store the image in:

    az acr login --name <registry_name>
    
  3. To upload the Dockerfile, and build it, use the following command. Replace <registry_name> with the name of the container registry you want to store the image in:

    az acr build --image myimage:v1 --registry <registry_name> --file Dockerfile .
    

    Tip

    In this example, a tag of :v1 is applied to the image. If no tag is provided, a tag of :latest is applied.

    During the build process, information is streamed back to the command line. If the build is successful, you receive a message similar to the following text:

    Run ID: cda was successful after 2m56s
    

For more information on building images with an Azure Container Registry, see Build and run a container image using Azure Container Registry Tasks.

For more information on uploading existing images to an Azure Container Registry, see Push your first image to a private Docker container registry.

Use a custom base image

To use a custom image, you need the following information:

  • The image name. For example, mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda is the path to a basic Docker image provided by Microsoft.

    Important

    For custom images that you've created, be sure to include any tag that was used with the image, such as :v1. If you did not specify a tag when creating the image, a tag of :latest was applied.

  • If the image is in a private repository, you need the following information:

    • The registry address. For example, myregistry.azurecr.io.
    • A service principal username and password that has read access to the registry.

    If you do not have this information, speak to the administrator for the Azure Container Registry that contains your image.
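The tagging rule noted above (an image reference without an explicit tag defaults to :latest) can be sketched as a small helper. normalize_image is a hypothetical name used only for illustration:

```python
# Hypothetical helper: if an image reference carries no explicit tag,
# Docker resolves it as :latest.
def normalize_image(image):
    """Append :latest when an image reference has no tag."""
    # A ':' after the last '/' indicates an explicit tag.
    name = image.rsplit("/", 1)[-1]
    return image if ":" in name else image + ":latest"

print(normalize_image("myregistry.azurecr.io/myimage"))     # myregistry.azurecr.io/myimage:latest
print(normalize_image("myregistry.azurecr.io/myimage:v1"))  # myregistry.azurecr.io/myimage:v1
```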

Publicly available base images

Microsoft provides several Docker images in a publicly accessible repository that can be used with the steps in this section:

  • mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda: Basic image for Azure Machine Learning.
  • mcr.microsoft.com/azureml/onnxruntime:latest: ONNX Runtime for CPU inferencing.
  • mcr.microsoft.com/azureml/onnxruntime:latest-cuda: ONNX Runtime and CUDA for GPU inferencing.
  • mcr.microsoft.com/azureml/onnxruntime:latest-tensorrt: ONNX Runtime and TensorRT for GPU inferencing.
  • mcr.microsoft.com/azureml/onnxruntime:latest-openvino-vadm: ONNX Runtime and OpenVINO for Intel Vision Accelerator Design based on Movidius MyriadX VPUs.
  • mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad: ONNX Runtime and OpenVINO for Intel Movidius USB sticks.

For more information about the ONNX Runtime base images, see the ONNX Runtime dockerfile section in the GitHub repo.

Tip

Since these images are publicly available, you do not need to provide an address, username, or password when using them.

For more information, see Azure Machine Learning containers.

Tip

If your model is trained on Azure Machine Learning Compute, using version 1.0.22 or greater of the Azure Machine Learning SDK, an image is created during training. To discover the name of this image, use run.properties["AzureML.DerivedImageName"]. The following example demonstrates how to use this image:

# Use an image built during training with SDK 1.0.22 or greater
image_config.base_image = run.properties["AzureML.DerivedImageName"]

Use an image with the Azure Machine Learning SDK

To use an image stored in the Azure Container Registry for your workspace, or a container registry that is publicly accessible, set the following Environment attributes:

  • docker.enabled=True
  • docker.base_image: Set to the registry and path to the image.
from azureml.core.environment import Environment
# Create the environment
myenv = Environment(name="myenv")
# Enable Docker and reference an image
myenv.docker.enabled = True
myenv.docker.base_image = "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda"

To use an image from a private container registry that is not in your workspace, you must use docker.base_image_registry to specify the address of the repository and a user name and password:

# Set the container registry information
myenv.docker.base_image_registry.address = "myregistry.azurecr.io"
myenv.docker.base_image_registry.username = "username"
myenv.docker.base_image_registry.password = "password"

myenv.inferencing_stack_version = "latest"  # This will install the inference specific apt packages.

# Define the packages needed by the model and scripts
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
# you must list azureml-defaults as a pip dependency
conda_dep.add_pip_package("azureml-defaults")
myenv.python.conda_dependencies=conda_dep

You must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. You must also set the inferencing_stack_version property on the environment to "latest"; this installs the specific apt packages needed by the web service.

After defining the environment, use it with an InferenceConfig object to define the inference environment in which the model and web service will run.

from azureml.core.model import InferenceConfig
# Use environment in InferenceConfig
inference_config = InferenceConfig(entry_script="score.py",
                                   environment=myenv)

At this point, you can continue with deployment. For example, the following code snippet would deploy a web service locally using the inference configuration and custom image:

from azureml.core.webservice import LocalWebservice, Webservice

deployment_config = LocalWebservice.deploy_configuration(port=8890)
service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output = True)
print(service.state)

For more information on deployment, see Deploy models with Azure Machine Learning.

For more information on customizing your Python environment, see Create and manage environments for training and deployment.

Use an image with the Machine Learning CLI

Important

Currently, the Machine Learning CLI can use images from the Azure Container Registry for your workspace or from publicly accessible repositories. It cannot use images from standalone private registries.

Before deploying a model using the Machine Learning CLI, create an environment that uses the custom image. Then create an inference configuration file that references the environment. You can also define the environment directly in the inference configuration file. The following JSON document demonstrates how to reference an image in a public container registry. In this example, the environment is defined inline:

{
    "entryScript": "score.py",
    "environment": {
        "docker": {
            "arguments": [],
            "baseDockerfile": null,
            "baseImage": "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda",
            "enabled": false,
            "sharedVolumes": true,
            "shmSize": null
        },
        "environmentVariables": {
            "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
        },
        "name": "my-deploy-env",
        "python": {
            "baseCondaEnvironment": null,
            "condaDependencies": {
                "channels": [
                    "conda-forge"
                ],
                "dependencies": [
                    "python=3.6.2",
                    {
                        "pip": [
                            "azureml-defaults",
                            "azureml-telemetry",
                            "scikit-learn",
                            "inference-schema[numpy-support]"
                        ]
                    }
                ],
                "name": "project_environment"
            },
            "condaDependenciesFile": null,
            "interpreterPath": "python",
            "userManagedDependencies": false
        },
        "version": "1"
    }
}

This file is used with the az ml model deploy command. The --ic parameter is used to specify the inference configuration file.

az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget

For more information on deploying a model using the ML CLI, see the "model registration, profiling, and deployment" section of the CLI extension for Azure Machine Learning article.

Next steps