Create a function on Linux using a custom image

Azure Functions lets you host your functions on Linux in your own custom container. You can also host on a default Azure App Service container. This functionality requires the Functions 2.x runtime.

In this tutorial, you learn how to deploy your functions to Azure as a custom Docker image. This pattern is useful when you need to customize the built-in container image, such as when your functions require a specific language version, dependency, or configuration that isn't provided in the built-in image. Supported base images for Azure Functions are found in the Azure Functions base images repo.

This tutorial walks you through how to use Azure Functions Core Tools to create a function in a custom Linux image. You publish this image to a function app in Azure, which you create by using the Azure CLI. Later, you update your function to connect to Azure Queue storage. You also enable continuous deployment and SSH connections to the container.

In this tutorial, you learn how to:

  • Create a function app and Dockerfile using Core Tools.
  • Build a custom image using Docker.
  • Publish a custom image to a container registry.
  • Create an Azure Storage account.
  • Create a Premium hosting plan.
  • Deploy a function app from Docker Hub.
  • Add application settings to the function app.
  • Enable continuous deployment.
  • Enable SSH connections to the container.
  • Add a Queue storage output binding.
  • Add Application Insights monitoring.

The following steps are supported on a Mac, Windows, or Linux computer.

Prerequisites

Before running this sample, you must have the following:

  • Azure Functions Core Tools (provides the func command used in this tutorial).
  • The Azure CLI.
  • Docker, and a Docker Hub account.

If you don't have an Azure subscription, create a free account before you begin.

Note

Azure CLI commands in this article work in Bash and are verified to run in Azure Cloud Shell. You must modify them to run in a local Windows command prompt.

Create the local project

Run the following command from the command line to create a function app project in the MyFunctionProj folder of the current local directory. For a Python project, you must be running in a virtual environment.

func init MyFunctionProj --docker

When you include the --docker option, a Dockerfile is generated for the project. This file is used to create a custom container in which to run the project. The base image used depends on the worker runtime language that you choose.

When prompted, choose a worker runtime from the following languages:

  • dotnet: creates a .NET Core class library project (.csproj).
  • node: creates a JavaScript project.
  • python: creates a Python project.

Use the following command to navigate to the new MyFunctionProj project folder.

cd MyFunctionProj

Create a function

The following command creates an HTTP-triggered function named MyHttpTrigger.

func new --name MyHttpTrigger --template "HttpTrigger"

When the command executes, you see something like the following output:

The function "MyHttpTrigger" was created successfully from the "HttpTrigger" template.

Run the function locally

The following command starts the function app. The app runs using the same Azure Functions runtime that is in Azure. The start command varies, depending on your project language.

C#

func start --build

JavaScript

func start

TypeScript

npm install
npm start     

When the Functions host starts, it writes something like the following output, which has been truncated for readability:


                  %%%%%%
                 %%%%%%
            @   %%%%%%    @
          @@   %%%%%%      @@
       @@@    %%%%%%%%%%%    @@@
     @@      %%%%%%%%%%        @@
       @@         %%%%       @@
         @@      %%%       @@
           @@    %%      @@
                %%
                %

...

Content root path: C:\functions\MyFunctionProj
Now listening on: http://0.0.0.0:7071
Application started. Press Ctrl+C to shut down.

...

Http Functions:

        HttpTrigger: http://localhost:7071/api/MyHttpTrigger

[8/27/2018 10:38:27 PM] Host started (29486ms)
[8/27/2018 10:38:27 PM] Job host started

Copy the URL of your HttpTrigger function from the runtime output and paste it into your browser's address bar. Append the query string ?name=<yourname> to this URL and execute the request. The browser displays the response that the local function returns to the GET request:

Test locally in the browser
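You can send the same GET request from the command line with curl; the name value here is only an example:

curl "http://localhost:7071/api/MyHttpTrigger?name=Azure"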

Now that you have run your function locally, you can create the function app and other required resources in Azure.

Build from the Dockerfile

Take a look at the Dockerfile in the root folder of the project. This file describes the environment that is required to run the function app on Linux. The following example is a Dockerfile that creates a container that runs a function app on the JavaScript (Node.js) worker runtime:

FROM mcr.microsoft.com/azure-functions/node:2.0

ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY . /home/site/wwwroot

Note

The complete list of supported base images for Azure Functions can be found in the Azure Functions base images repo.

Run the build command

In the root folder, run the docker build command, and provide a name, mydockerimage, and tag, v1.0.0. Replace <docker-id> with your Docker Hub account ID. This command builds the Docker image for the container.

docker build --tag <docker-id>/mydockerimage:v1.0.0 .

When the command completes, you can run the new container locally.
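Before you do, you can optionally confirm that the image was created by listing your local images:

docker images <docker-id>/mydockerimage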

Run the image locally

Verify that the image you built works by running the Docker image in a local container. Issue the docker run command and pass the name and tag of the image to it. Be sure to specify the port using the -p argument.

docker run -p 8080:80 -it <docker-id>/mydockerimage:v1.0.0

With the custom image running in a local Docker container, verify the function app and container are functioning correctly by browsing to http://localhost:8080.

Run the function app locally.

Note

At this point, when you try to call your specific HTTP function, you get an HTTP 401 error response. This is because your function runs in the local container as it would in Azure, which means that the function key is required. Because the container hasn't yet been published to a function app, there is no function key available. Later, after the container is deployed to Azure, you get the function URL and key from the Azure portal. If you want to test your function running in the local container, you can change the HTTP trigger's authorization level (authLevel in function.json) to anonymous and rebuild the image.
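For example, calling the function in the local container without a key returns the authorization error; the -i flag shows the status line, and the name value is only an example:

curl -i "http://localhost:8080/api/MyHttpTrigger?name=Azure"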

After you have verified the function app in the container, stop the execution. Now, you can push the custom image to your Docker Hub account.
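To stop the container, press Ctrl+C in the terminal where it's running. If it's running elsewhere, you can find and stop it by container ID by using standard Docker commands:

docker ps
docker stop <container-id>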

Push to Docker Hub

A registry is an application that hosts images and provides image and container services. To share your image, you must push it to a registry. Docker Hub is a registry for Docker images that lets you host your own repositories, either public or private.

Before you can push an image, you must sign in to Docker Hub by using the docker login command. Replace <docker-id> with your account name, and enter your password at the prompt. For other Docker Hub password options, see the docker login command documentation.

docker login --username <docker-id>

A "login succeeded" message confirms that you're logged in. After you have signed in, you push the image to Docker Hub by using the docker push command.

docker push <docker-id>/mydockerimage:v1.0.0

After the push succeeds, you can use the image as the deployment source for a new function app in Azure.

Create a resource group

Create a resource group with the az group create command. An Azure resource group is a logical container into which Azure resources like function apps, databases, and storage accounts are deployed and managed.

The following example creates a resource group named myResourceGroup.
If you aren't using Cloud Shell, sign in first using az login.

az group create --name myResourceGroup --location westeurope

You generally create your resource group and the resources in a region near you.
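If you want to see which regions are available to your subscription, you can list them with the Azure CLI:

az account list-locations --output table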

Create an Azure Storage account

Functions uses a general-purpose account in Azure Storage to maintain state and other information about your functions. Create a general-purpose storage account in the resource group you created by using the az storage account create command.
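Optionally, you can check whether a storage account name is available before you try to create it, replacing <storage_name> with the name you want to use:

az storage account check-name --name <storage_name>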

In the following command, substitute a globally unique storage account name where you see the <storage_name> placeholder. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.

az storage account create --name <storage_name> --location westeurope --resource-group myResourceGroup --sku Standard_LRS

Create a Premium plan

Linux hosting for custom Functions containers is supported on Dedicated (App Service) plans and Premium plans. This tutorial uses a Premium plan, which can scale as needed. To learn more about hosting, see Azure Functions hosting plans comparison.

The following example creates a Premium plan named myPremiumPlan in the Elastic Premium 1 pricing tier (--sku EP1), in the West US region (--location WestUS), running Linux (--is-linux).

az functionapp plan create --resource-group myResourceGroup --name myPremiumPlan \
--location WestUS --number-of-workers 1 --sku EP1 --is-linux
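If your version of the Azure CLI includes it, you can confirm the plan and its pricing tier with the show command; this check is optional and not required for the tutorial:

az functionapp plan show --name myPremiumPlan --resource-group myResourceGroup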

Create an app from the image

The function app manages the execution of your functions in your hosting plan. Create a function app from a Docker Hub image by using the az functionapp create command.

In the following command, substitute a unique function app name where you see the <app_name> placeholder and the storage account name for <storage_name>. The <app_name> is used as the default DNS domain for the function app, and so the name needs to be unique across all apps in Azure. As before, <docker-id> is your Docker account name.

az functionapp create --name <app_name> --storage-account <storage_name> --resource-group myResourceGroup \
--plan myPremiumPlan --deployment-container-image-name <docker-id>/mydockerimage:v1.0.0

The deployment-container-image-name parameter indicates the image hosted on Docker Hub to use to create the function app. Use the az functionapp config container show command to view information about the image used for deployment. Use the az functionapp config container set command to deploy from a different image.
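For example, to view the container settings for the new function app:

az functionapp config container show --name <app_name> --resource-group myResourceGroup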

Configure the function app

The function app needs the connection string to connect to the default storage account. When you publish your custom image to a private container registry, you should instead set these application settings as environment variables in the Dockerfile by using the ENV instruction, or something similar.

In this case, <storage_name> is the name of the storage account you created. Get the connection string with the az storage account show-connection-string command. Add these application settings in the function app with the az functionapp config appsettings set command.

storageConnectionString=$(az storage account show-connection-string \
--resource-group myResourceGroup --name <storage_name> \
--query connectionString --output tsv)

az functionapp config appsettings set --name <app_name> \
--resource-group myResourceGroup \
--settings AzureWebJobsDashboard=$storageConnectionString \
AzureWebJobsStorage=$storageConnectionString
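You can verify that the settings were applied by listing the application settings for the function app:

az functionapp config appsettings list --name <app_name> --resource-group myResourceGroup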

Note

If your container is private, you also have to set the following application settings:

  • DOCKER_REGISTRY_SERVER_USERNAME
  • DOCKER_REGISTRY_SERVER_PASSWORD

You have to stop and then start your function app for these values to be picked up.
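For example, you can restart the app from the Azure CLI:

az functionapp stop --name <app_name> --resource-group myResourceGroup
az functionapp start --name <app_name> --resource-group myResourceGroup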

Verify your functions

The HTTP-triggered function you created requires a function key when calling the endpoint. At this time, the easiest way to get your function URL, including the key, is from the Azure portal.

Tip

You can also obtain your function keys by using the Key management APIs, which requires you to present a bearer token for authentication.

Locate your new function app in the Azure portal by typing your function app name in the Search box at the top of the page and selecting the App Service resource.

Select the MyHttpTrigger function, select </> Get function URL > default (Function key) > Copy.

Copy the function URL from the Azure portal

In this URL, the function key is the code query parameter.

Note

Because your function app is deployed as a container, you can't make changes to your function code in the portal. You must instead update the project locally, rebuild the container image, and republish it to Azure.

Paste the function URL into your browser's address bar. Add the query string value &name=<yourname> to the end of this URL and press the Enter key on your keyboard to execute the request. You should see the response returned by the function displayed in the browser.

The following example shows the response in the browser:

Function response in the browser.

The request URL includes a key that is required, by default, to access your function over HTTP.
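If you prefer the command line, you can call the deployed function with curl. Substitute your app name and the function key you copied; the URL shape shown is the default for HTTP-triggered functions, and the name value is only an example:

curl "https://<app_name>.azurewebsites.net/api/MyHttpTrigger?code=<function_key>&name=Azure"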

Enable continuous deployment

One of the benefits of using containers is support for continuous deployment. Functions lets you automatically deploy updates when your container is updated in the registry. Enable continuous deployment with the az functionapp deployment container config command.

az functionapp deployment container config --enable-cd \
--query CI_CD_URL --output tsv \
--name <app_name> --resource-group myResourceGroup

This command returns the deployment webhook URL after continuous deployment is enabled. You can also use the az functionapp deployment container show-cd-url command to return this URL.
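For example:

az functionapp deployment container show-cd-url --name <app_name> --resource-group myResourceGroup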

Copy the deployment URL and browse to your Docker Hub repo, choose the Webhooks tab, type a name for the webhook, paste your URL in Webhook URL, and then choose the plus sign (+).

Add the webhook in your Docker Hub repo

With the webhook set, any updates to the linked image in Docker Hub result in the function app downloading and installing the latest image.

Enable SSH connections

SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container by using App Service Advanced Tools (Kudu). To make it easy to connect to your container by using SSH, Functions provides a base image that has SSH already enabled.

Change the base image

In your Dockerfile, append the string -appservice to the base image in your FROM instruction, which for a JavaScript project looks like the following example.

FROM mcr.microsoft.com/azure-functions/node:2.0-appservice

The differences between the two base images enable SSH connections into your container. These differences are detailed in this App Service tutorial.

Rebuild and redeploy the image

In the root folder, run the docker build command again. As before, replace <docker-id> with your Docker Hub account ID.

docker build --tag <docker-id>/mydockerimage:v1.0.0 .

Push the updated image back to Docker Hub.

docker push <docker-id>/mydockerimage:v1.0.0

The updated image is redeployed to your function app.

Connect to your container in Azure

In the browser, navigate to the Advanced Tools (Kudu) endpoint for your function app container, replacing <app_name> with the name of your function app.

https://<app_name>.scm.azurewebsites.net/

Sign in to your Azure account, and then select the SSH tab to create an SSH connection into your container.

After the connection is established, run the top command to view the currently running processes.

Linux top command running in an SSH session.

Write to Queue storage

Functions lets you connect Azure services and other resources to functions without having to write your own integration code. These bindings, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more, see Azure Functions triggers and bindings concepts.

This section shows you how to integrate your function with an Azure Storage queue. The output binding that you add to this function writes data from an HTTP request to a message in the queue.

Download the function app settings

You've already created a function app in Azure, along with the required Storage account. The connection string for this account is stored securely in app settings in Azure. In this article, you write messages to a Storage queue in the same account. To connect to your Storage account when running the function locally, you must download app settings to the local.settings.json file.

From the root of the project, run the following Azure Functions Core Tools command to download settings to local.settings.json, replacing <APP_NAME> with the name of your function app from the previous article:

func azure functionapp fetch-app-settings <APP_NAME>

You might need to sign in to your Azure account.

Important

This command overwrites any existing settings with values from your function app in Azure.

Because it contains secrets, the local.settings.json file never gets published, and it should be excluded from source control.

You need the value AzureWebJobsStorage, which is the Storage account connection string. You use this connection to verify that the output binding works as expected.

Enable extension bundles

Because you are using a Queue storage output binding, you must have the Storage bindings extension installed before you run the project.

The easiest way to install binding extensions is to enable extension bundles. When you enable bundles, a predefined set of extension packages is automatically installed.

To enable extension bundles, open the host.json file and update its contents to match the following code:

{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[1.*, 2.0.0)"
    }
}

Now, you can add a Storage output binding to your project.

Add an output binding

In Functions, each type of binding requires a direction, type, and a unique name to be defined in the function.json file. The way you define these attributes depends on the language of your function app.

Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The queue output configuration describes the fields required for an Azure Storage queue binding. The Azure Functions extension for Visual Studio Code makes it easy to add bindings to the function.json file.

To create a binding, right-click (Ctrl+click on macOS) the function.json file in your HttpTrigger folder and choose Add binding.... Follow the prompts to define the following binding properties for the new binding:

  • Select binding direction: out. The binding is an output binding.
  • Select binding with direction...: Azure Queue Storage. The binding is an Azure Storage queue binding.
  • The name used to identify this binding in your code: msg. This name identifies the binding parameter referenced in your code.
  • The queue to which the message will be sent: outqueue. The name of the queue that the binding writes to. When the queueName doesn't exist, the binding creates it on first use.
  • Select setting from "local.settings.json": AzureWebJobsStorage. The name of the application setting that contains the connection string for the Storage account. The AzureWebJobsStorage setting contains the connection string for the Storage account you created with the function app.

A binding is added to the bindings array in your function.json file, which should now look like the following example:

{
   ...

  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

Add code that uses the output binding

After the binding is defined, you can use the name of the binding to access it as an attribute in the function signature. By using an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.

Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.

// Add a message to the Storage queue.
context.bindings.msg = "Name passed to the function: " + 
(req.query.name || req.body.name);

At this point, your function should look as follows:

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    if (req.query.name || (req.body && req.body.name)) {
        // Add a message to the Storage queue.
        context.bindings.msg = "Name passed to the function: " + 
        (req.query.name || req.body.name);
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

Update the hosted container

In the root folder, run the docker build command again, and this time update the version in the tag to v1.0.2. As before, replace <docker-id> with your Docker Hub account ID.

docker build --tag <docker-id>/mydockerimage:v1.0.2 .

Push the updated image back to the repository.

docker push <docker-id>/mydockerimage:v1.0.2
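The push triggers the Docker Hub webhook you configured earlier. Because the function app was created from the v1.0.0 tag, you might also need to point it at the new tag by using the az functionapp config container set command mentioned earlier; the following is a sketch, so verify the parameter name against your CLI version:

az functionapp config container set --name <app_name> --resource-group myResourceGroup \
--docker-custom-image-name <docker-id>/mydockerimage:v1.0.2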

Verify the updates in Azure

Use the same URL as before from the browser to trigger your function. You should see the same response. However, this time the string that you pass as the name parameter is written to the outqueue storage queue.

Set the Storage account connection

Open the local.settings.json file and copy the value of AzureWebJobsStorage, which is the Storage account connection string. Set the AZURE_STORAGE_CONNECTION_STRING environment variable to the connection string by using this Bash command:

export AZURE_STORAGE_CONNECTION_STRING="<STORAGE_CONNECTION_STRING>"

When you set the connection string in the AZURE_STORAGE_CONNECTION_STRING environment variable, you can access your Storage account without having to provide authentication each time.

Query the Storage queue

You can use the az storage queue list command to view the Storage queues in your account, as in the following example:

az storage queue list --output tsv

The output from this command includes a queue named outqueue, which is the queue that was created when the function ran.

Next, use the az storage message peek command to view the messages in this queue, as in this example:

echo `echo $(az storage message peek --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`

The string returned should be the same as the message you sent to test the function.

Note

The previous example decodes the returned string from base64. This is because the Queue storage bindings write to and read from Azure Storage as base64 strings.

Clean up resources

Other tutorials in this collection build on this one. If you plan to continue with subsequent tutorials, don't clean up the resources created here. If you don't plan to continue, use the following command to delete all resources created in this tutorial:

az group delete --name myResourceGroup

Select y when prompted.

Next steps

Now that you have successfully deployed your custom container to a function app in Azure, consider reading more about the following topics: