Deploy a machine learning model to Azure App Service (preview)

APPLIES TO: Basic edition, Enterprise edition                    (Upgrade to Enterprise edition)

Learn how to deploy a model from Azure Machine Learning as a web app in Azure App Service.


While both Azure Machine Learning and Azure App Service are generally available, the ability to deploy a model from the Machine Learning service to App Service is in preview.

With Azure Machine Learning, you can create a Docker image from a trained machine learning model. The image contains a web service that receives data, submits it to the model, and then returns the response. Azure App Service can be used to deploy the image, and provides the following features:

  • Advanced authentication for enhanced security. Authentication methods include both Azure Active Directory and multi-factor authentication.
  • Autoscale without having to redeploy.
  • SSL support for secure communications between clients and the service.

For more information on features provided by Azure App Service, see the App Service overview.


If you need the ability to log the scoring data used with your deployed model, or the results of scoring, you should instead deploy to Azure Kubernetes Service. For more information, see Collect data on your production models.


Prerequisites

  • An Azure Machine Learning workspace. For more information, see the Create a workspace article.

  • The Azure CLI.

  • A trained machine learning model registered in your workspace. If you do not have a model, use the Image classification tutorial: train model to train and register one.


    The code snippets in this article assume that you have set the following variables:

    • ws - Your Azure Machine Learning workspace.
    • model - The registered model that will be deployed.
    • inference_config - The inference configuration for the model.

    For more information on setting these variables, see Deploy models with Azure Machine Learning.

Prepare for deployment

Before deploying, you must define what is needed to run the model as a web service. The following list describes the basic items needed for a deployment:

  • An entry script. This script accepts requests, scores the request using the model, and returns the results.


    The entry script is specific to your model; it must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients.

    If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It may also transform the response before returning it to the client.


    The Azure Machine Learning SDK does not provide a way for the web service to access your datastore or datasets. If you need the deployed model to access data stored outside the deployment, such as in an Azure Storage account, you must develop a custom code solution using the relevant SDK, such as the Azure Storage SDK for Python.

    Another alternative that may work for your scenario is batch prediction, which does provide access to datastores when scoring.

    For more information on entry scripts, see Deploy models with Azure Machine Learning.

  • Dependencies, such as helper scripts or Python/Conda packages, required to run the entry script or model.
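An entry script generally implements two functions: init(), which runs once when the service starts and loads the model, and run(), which is called for each request. The following is a minimal sketch; the model file name model.pkl and the JSON layout of the request are assumptions, so adapt both to your model:

```python
import json
import os
import pickle


def init():
    # Called once when the service starts; load the registered model.
    # AZUREML_MODEL_DIR points at the folder containing the model files.
    # The file name model.pkl is an assumption; use your model's file name.
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR', '.'), 'model.pkl')
    with open(model_path, 'rb') as f:
        model = pickle.load(f)


def run(raw_data):
    # Called once per request; raw_data is the request body as a string.
    try:
        data = json.loads(raw_data)['data']
        result = model.predict(data)
        # Convert to a JSON-serializable type before returning
        return result.tolist()
    except Exception as e:
        # Return the error so the caller can see what went wrong
        return {'error': str(e)}
```

The run() function returns a JSON-serializable value on success and an error dictionary on failure, so clients always receive a usable response.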

These entities are encapsulated into an inference configuration. The inference configuration references the entry script and other dependencies.


When creating an inference configuration for use with Azure App Service, you must use an Environment object. If you define a custom environment, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. The following example demonstrates creating an environment object and using it with an inference configuration:

from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig

# Create an environment and add conda dependencies to it
myenv = Environment(name="myenv")
# Enable Docker based environment
myenv.docker.enabled = True
# Build conda dependencies
myenv.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],
                                                           pip_packages=['azureml-defaults'])
# Create the inference configuration. Replace score.py with the path to your entry script.
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

For more information on environments, see Create and manage environments for training and deployment.

For more information on inference configuration, see Deploy models with Azure Machine Learning.


When deploying to Azure App Service, you do not need to create a deployment configuration.

Create the image

To create the Docker image that is deployed to Azure App Service, use Model.package. The following code snippet demonstrates how to build a new image from the model and inference configuration:


The code snippet assumes that model contains a registered model, and that inference_config contains the configuration for the inference environment. For more information, see Deploy models with Azure Machine Learning.

from azureml.core import Model

package = Model.package(ws, [model], inference_config)
package.wait_for_creation(show_output=True)
# Display the package location/ACR path
print(package.location)

When show_output=True, the output of the Docker build process is shown. Once the process finishes, the image has been created in the Azure Container Registry for your workspace and its location in the registry is displayed. The location returned is in the format <acrinstance><imagename>.


Save the location information, as it is used when deploying the image.
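The deployment commands below use the registry instance and the image name separately. Assuming the location has the form <acrinstance>.azurecr.io/<imagename> (the value below is illustrative, not output from a real workspace), the two parts can be split off in Python:

```python
# Illustrative package location; substitute the value printed by package.location.
# Assumed format: <acrinstance>.azurecr.io/<imagename>
location = "myml08024f78fd10.azurecr.io/package:20190827151241"

# The ACR instance is everything before the first '/'; the image name is the rest
acr_instance, image_name = location.split("/", 1)
print(acr_instance)  # myml08024f78fd10.azurecr.io
print(image_name)    # package:20190827151241
```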

Deploy image as a web app

  1. Use the following command to get the login credentials for the Azure Container Registry that contains the image. Replace <acrinstance> with the value returned previously from package.location:

    az acr credential show --name <acrinstance>

    The output of this command is similar to the following JSON document:

    {
        "passwords": [
            {
                "name": "password",
                "value": "Iv0lRZQ9762LUJrFiffo3P4sWgk4q+nW"
            },
            {
                "name": "password2",
                "value": "=pKCxHatX96jeoYBWZLsPR6opszr==mg"
            }
        ],
        "username": "myml08024f78fd10"
    }

    Save the value for username and one of the passwords.
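If you are scripting the deployment, you can parse the credential JSON instead of copying the values by hand. The following sketch assumes the command output has been saved to a string; the credential values shown are illustrative, not real:

```python
import json

# Illustrative output from `az acr credential show`; real values will differ.
cred_json = '''
{
    "passwords": [
        {"name": "password", "value": "examplePassword1"},
        {"name": "password2", "value": "examplePassword2"}
    ],
    "username": "myml08024f78fd10"
}
'''

# Extract the username and the first password from the JSON document
creds = json.loads(cred_json)
username = creds['username']
password = creds['passwords'][0]['value']
print(username)  # myml08024f78fd10
```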

  2. If you do not already have a resource group or app service plan to deploy the service, the following commands demonstrate how to create both:

    az group create --name myresourcegroup --location "West Europe"
    az appservice plan create --name myplanname --resource-group myresourcegroup --sku B1 --is-linux

    In this example, a Basic pricing tier (--sku B1) is used.


    Images created by Azure Machine Learning use Linux, so you must use the --is-linux parameter.

  3. To create the web app, use the following command. Replace <app-name> with the name you want to use. Replace <acrinstance> and <imagename> with the values returned from package.location earlier:

    az webapp create --resource-group myresourcegroup --plan myplanname --name <app-name> --deployment-container-image-name <acrinstance><imagename>

    This command returns information similar to the following JSON document:

    {
        "adminSiteName": null,
        "appServicePlanName": "myplanname",
        "geoRegion": "West Europe",
        "hostingEnvironmentProfile": null,
        "id": "/subscriptions/0000-0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myplanname",
        "kind": "linux",
        "location": "West Europe",
        "maximumNumberOfWorkers": 1,
        "name": "myplanname",
        < JSON data removed for brevity. >
        "targetWorkerSizeId": 0,
        "type": "Microsoft.Web/serverfarms",
        "workerTierName": null
    }


    At this point, the web app has been created. However, since you haven't provided the credentials to the Azure Container Registry that contains the image, the web app is not active. In the next step, you provide the authentication information for the container registry.

  4. To provide the web app with the credentials needed to access the container registry, use the following command. Replace <app-name> with the name used previously. Replace <acrinstance> and <imagename> with the values returned from package.location earlier. Replace <username> and <password> with the ACR login information retrieved earlier:

    az webapp config container set --name <app-name> --resource-group myresourcegroup --docker-custom-image-name <acrinstance><imagename> --docker-registry-server-url https://<acrinstance> --docker-registry-server-user <username> --docker-registry-server-password <password>

    This command returns information similar to the following JSON document:

    [
        {
            "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
            "slotSetting": false,
            "value": "false"
        },
        {
            "name": "DOCKER_REGISTRY_SERVER_URL",
            "slotSetting": false,
            "value": ""
        },
        {
            "name": "DOCKER_REGISTRY_SERVER_USERNAME",
            "slotSetting": false,
            "value": "myml08024f78fd10"
        },
        {
            "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
            "slotSetting": false,
            "value": null
        },
        {
            "name": "DOCKER_CUSTOM_IMAGE_NAME",
            "value": "DOCKER|"
        }
    ]

At this point, the web app begins loading the image.


It may take several minutes before the image has loaded. To monitor progress, use the following command:

az webapp log tail --name <app-name> --resource-group myresourcegroup

Once the image has been loaded and the site is active, the log displays a message that states Container <container name> for site <app-name> initialized successfully and is ready to serve requests.

Once the image is deployed, you can find the hostname by using the following command:

az webapp show --name <app-name> --resource-group myresourcegroup

This command returns the hostname of the service, in the format <app-name>.azurewebsites.net. Use this value as part of the base URL for the service.

Use the Web App

The web service that passes requests to the model is located at {baseurl}/score. For example, https://<app-name>.azurewebsites.net/score. The following Python code demonstrates how to submit data to the URL and display the response:

import requests
import json

# URL for the web service. Replace <app-name> with the name of your web app.
scoring_uri = "https://<app-name>.azurewebsites.net/score"

# Set the content type
headers = {'Content-Type': 'application/json'}

# Example data to score. Replace with data in the format expected by your model.
test_sample = json.dumps({'data': [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
]})

# Make the request and display the response
response = requests.post(scoring_uri, data=test_sample, headers=headers)
print(response.text)

Next steps