Create a function on Linux using a custom container

In this tutorial, you create and deploy your code to Azure Functions as a custom Docker container using a Linux base image. You typically use a custom image when your functions require a specific language version or have a specific dependency or configuration that isn't provided by the built-in image.

You can also use a default Azure App Service container, as described in Create your first function hosted on Linux. Supported base images for Azure Functions are found in the Azure Functions base images repo.

In this tutorial, you learn how to:

  • Create a function app and Dockerfile using the Azure Functions Core Tools.
  • Build a custom image using Docker.
  • Publish a custom image to a container registry.
  • Create supporting resources in Azure for the function app.
  • Deploy a function app from Docker Hub.
  • Add application settings to the function app.
  • Enable continuous deployment.
  • Enable SSH connections to the container.
  • Add a Queue storage output binding.

You can follow this tutorial on any computer running Windows, macOS, or Linux. Completing the tutorial will incur costs of a few US dollars in your Azure account.

Configure your local environment

Before you begin, you must have the following:

  • Node.js, Active LTS and Maintenance LTS versions (8.11.1 and 10.14.1 recommended).
  • The Java Developer Kit, version 8.

    Important

    • Functions support for Java 11 is currently in preview, and the Maven archetype creates a Java 8 deployment by default. If you want to instead run your function app on Java 11, you must manually update the pom.xml file with Java 11 values. To learn more, see Java versions.
    • The JAVA_HOME environment variable must be set to the install location of the correct version of the JDK to complete this quickstart.
  • Apache Maven, version 3.0 or above.

Prerequisite check

  • In a terminal or command window, run func --version to check that the Azure Functions Core Tools are version 2.7.1846 or later.

  • Run az --version to check that the Azure CLI version is 2.0.76 or later.

  • Run az login to sign in to Azure and verify an active subscription.

  • Run python --version (Linux/macOS) or py --version (Windows) to check that your Python version reports 3.8.x, 3.7.x, or 3.6.x.
  • Run docker login to sign in to Docker. This command fails if Docker isn't running, in which case start Docker and retry the command.

Create and activate a virtual environment

In a suitable folder, run the following commands to create and activate a virtual environment named .venv. Be sure to use Python 3.8, 3.7 or 3.6, which are supported by Azure Functions.

python -m venv .venv
source .venv/bin/activate

If Python didn't install the venv package on your Linux distribution, run the following command:

sudo apt-get install python3-venv

You run all subsequent commands in this activated virtual environment.
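If you're not sure whether the virtual environment is active, a quick standard-library check (a sketch, nothing Azure-specific) is:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the .venv folder while
    # sys.base_prefix still points at the base interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```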

Create and test the local functions project

In a terminal or command prompt, run the following command for your chosen language to create a function app project in a folder named LocalFunctionsProject.

func init LocalFunctionsProject --worker-runtime dotnet --docker
func init LocalFunctionsProject --worker-runtime node --language javascript --docker
func init LocalFunctionsProject --worker-runtime powershell --docker
func init LocalFunctionsProject --worker-runtime python --docker
func init LocalFunctionsProject --worker-runtime node --language typescript --docker

In an empty folder, run the following command to generate the Functions project from a Maven archetype.

mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -Ddocker

Maven asks you for values needed to finish generating the project on deployment.
Provide the following values when prompted:

Prompt Value Description
groupId com.fabrikam A value that uniquely identifies your project across all projects, following the package naming rules for Java.
artifactId fabrikam-functions A value that is the name of the jar, without a version number.
version 1.0-SNAPSHOT Choose the default value.
package com.fabrikam.functions A value that is the Java package for the generated function code. Use the default.

Type Y or press Enter to confirm.

Maven creates the project files in a new folder with a name of artifactId, which in this example is fabrikam-functions.

To run on Java 11 in Azure, you must modify the values in the pom.xml file. To learn more, see Java versions.

The --docker option generates a Dockerfile for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.

Navigate into the project folder:

cd LocalFunctionsProject
cd fabrikam-functions

Add a function to your project by using the following command, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named function.json.

func new --name HttpExample --template "HTTP trigger"

To test the function locally, start the local Azure Functions runtime host in the root of the project folder:

func start --build  
func start  
npm install
npm start
mvn clean package  
mvn azure-functions:run

Once you see the HttpExample endpoint appear in the output, navigate to http://localhost:7071/api/HttpExample?name=Functions. The browser should display a "hello" message that echoes back Functions, the value supplied to the name query parameter.
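If you'd rather exercise the endpoint from a script than a browser, the request URL can be assembled like this (a sketch; the host and port are the defaults that func start reports):

```python
from urllib.parse import urlencode

# Default local host and port reported by func start.
base_url = "http://localhost:7071/api/HttpExample"
request_url = f"{base_url}?{urlencode({'name': 'Functions'})}"
print(request_url)
```

With the host running, opening request_url (for example with urllib.request.urlopen) returns the same "hello" message that the browser shows.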

Use Ctrl-C to stop the host.

Build the container image and test locally

(Optional) Examine the Dockerfile in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. The complete list of supported base images for Azure Functions can be found in the Azure Functions base image page.

If you are running on Java 11 (preview), change the JAVA_VERSION build argument in the generated Dockerfile to the following:

ARG JAVA_VERSION=11

In the root project folder, run the docker build command, and provide a name, azurefunctionsimage, and tag, v1.0.0. Replace <docker_id> with your Docker Hub account ID. This command builds the Docker image for the container.

docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 .

When the command completes, you can run the new container locally.

To test the build, run the image in a local container using the docker run command, again replacing <docker_id> with your Docker ID and adding the ports argument, -p 8080:80:

docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0

Once the image is running in a local container, open a browser to http://localhost:8080, which should display the placeholder image shown below. The placeholder appears because your function is running in the local container just as it would in Azure, which means it's protected by an access key as defined in function.json with the "authLevel": "function" property. Because the container hasn't yet been published to a function app in Azure, however, the key isn't yet available. If you want to test against the local container, stop the container, change the authorization property to "authLevel": "anonymous", rebuild the image, and run it again. When you're done, reset "authLevel": "function" in function.json. For more information, see authorization keys.

Placeholder image indicating that the container is running locally
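If you find yourself toggling that setting often, a small script can flip it for you (a sketch; set_auth_level is a hypothetical helper, and the path to function.json depends on where you created the HttpExample function):

```python
import json
from pathlib import Path

def set_auth_level(function_json: Path, level: str) -> None:
    """Set authLevel on every HTTP trigger binding in a function.json file."""
    config = json.loads(function_json.read_text())
    for binding in config["bindings"]:
        if binding.get("type") == "httpTrigger":
            binding["authLevel"] = level
    function_json.write_text(json.dumps(config, indent=2))

# Example: allow anonymous calls while testing the local container,
# then set the level back to "function" when you're done.
# set_auth_level(Path("HttpExample/function.json"), "anonymous")
```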

Once the image is running in a local container, browse to http://localhost:8080/api/HttpExample?name=Functions, which should display the same "hello" message as before. Because the Maven archetype generates an HTTP triggered function that uses anonymous authorization, you can still call the function even though it's running in the container.

After you've verified the function app in the container, stop the running container with Ctrl+C.

Push the image to Docker Hub

Docker Hub is a container registry that hosts images and provides image and container services. To share your image, which includes deploying to Azure, you must push it to a registry.

  1. If you haven't already signed in to Docker, do so with the docker login command, replacing <docker_id> with your Docker ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.

    docker login
    
  2. After you've signed in, push the image to Docker Hub by using the docker push command, again replacing <docker_id> with your Docker ID.

    docker push <docker_id>/azurefunctionsimage:v1.0.0
    
  3. Depending on your network speed, pushing the image the first time might take a few minutes (pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section and create Azure resources in another terminal.

Create supporting Azure resources for your function

To deploy your function code to Azure, you need to create three resources:

  • A resource group, which is a logical container for related resources.
  • An Azure Storage account, which maintains state and other information about your projects.
  • A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.

You use Azure CLI commands to create these items. Each command provides JSON output upon completion.

  1. Sign in to Azure with the az login command:

    az login
    
  2. Create a resource group with the az group create command. The following example creates a resource group named AzureFunctionsContainers-rg in the westeurope region. (You generally create your resource group and resources in a region near you, using an available region from the az account list-locations command.)

    az group create --name AzureFunctionsContainers-rg --location westeurope
    

    Note

    You can't host Linux and Windows apps in the same resource group. If you have an existing resource group named AzureFunctionsContainers-rg with a Windows function app or web app, you must use a different resource group.

  3. Create a general-purpose storage account in your resource group and region by using the az storage account create command. In the following example, replace <storage_name> with a globally unique name appropriate to you. Names must be between 3 and 24 characters long and can contain numbers and lowercase letters only. Standard_LRS specifies a typical general-purpose account.

    az storage account create --name <storage_name> --location westeurope --resource-group AzureFunctionsContainers-rg --sku Standard_LRS
    

    The storage account incurs only a few USD cents for this tutorial.

  4. Use the az functionapp plan create command to create a Premium plan for Azure Functions named myPremiumPlan in the Elastic Premium 1 pricing tier (--sku EP1), in the West Europe region (--location westeurope, or use a suitable region near you), and in a Linux container (--is-linux).

    az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location westeurope --number-of-workers 1 --sku EP1 --is-linux
    

    Linux hosting for custom Functions containers is supported on Dedicated (App Service) plans and Premium plans. We use the Premium plan here, which can scale as needed. To learn more about hosting, see Azure Functions hosting plans comparison. To calculate costs, see the Functions pricing page.

    The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see Monitor Azure Functions. The instance incurs no costs until you activate it.

Create and configure a function app on Azure with the image

A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage.

  1. Create the Functions app using the az functionapp create command. In the following example, replace <storage_name> with the name you used in the previous section for the storage account. Also replace <app_name> with a globally unique name appropriate to you, and <docker_id> with your Docker ID.

    az functionapp create --name <app_name> --storage-account <storage_name> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <docker_id>/azurefunctionsimage:v1.0.0
    

    The deployment-container-image-name parameter specifies the image to use for the function app. You can use the az functionapp config container show command to view information about the image used for deployment. You can also use the az functionapp config container set command to deploy from a different image.

  2. Retrieve the connection string for the storage account you created by using the az storage account show-connection-string command:

    az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <storage_name> --query connectionString --output tsv
    
  3. Add this setting to the function app by using the az functionapp config appsettings set command. In the following command, replace <app_name> with the name of your function app, and replace <connection_string> with the connection string from the previous step (a long encoded string that begins with "DefaultEndpointsProtocol="):

    az functionapp config appsettings set --name <app_name> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<connection_string>
    
  4. The function can now use this connection string to access the storage account.

    Tip

    In bash, you can use a shell variable to capture the connection string instead of using the clipboard. First, use the following command to create a variable with the connection string:

    storageConnectionString=$(az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <storage_name> --query connectionString --output tsv)
    

    Then refer to the variable in the second command:

    az functionapp config appsettings set --name <app_name> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=$storageConnectionString
    

Note

If you publish your custom image to a private container registry, you should instead use environment variables in the Dockerfile for the connection string. For more information, see the ENV instruction. You should also set the DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD variables. To apply the values, you must rebuild the image, push it to the registry, and then restart the function app on Azure.
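Inside the function app, an app setting such as AzureWebJobsStorage surfaces as an environment variable, so code reads it the same way in Azure and locally. The sketch below uses an obviously fake connection string; a real one comes from the storage account:

```python
import os

# Fake value for illustration only; a real connection string is supplied
# through the AzureWebJobsStorage app setting.
os.environ.setdefault(
    "AzureWebJobsStorage",
    "DefaultEndpointsProtocol=https;AccountName=example;AccountKey=fake;EndpointSuffix=core.windows.net",
)

connection_string = os.environ["AzureWebJobsStorage"]
print(connection_string.split(";")[0])
```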

Verify your functions on Azure

With the image deployed to the function app on Azure, you can now invoke the function through HTTP requests. Because the function.json definition includes the property "authLevel": "function", you must first obtain the access key (also called the "function key") and include it as a URL parameter in any requests to the endpoint.

  1. Retrieve the function URL with the access (function) key by using the Azure portal, or by using the Azure CLI with the az rest command.

    1. Sign in to the Azure portal, then search for and select Function App.

    2. Select the function you want to verify.

    3. In the left navigation panel, select Functions, and then select the function you want to verify.

      The Get function URL command on the Azure portal

    4. Select Get Function Url.

    5. In the pop-up window, select default (function key) and then copy the URL to the clipboard. The key is the string of characters following ?code=.

    Note

    Because your function app is deployed as a container, you can't make changes to your function code in the portal. You must instead update the project in the local image, push the image to the registry again, and then redeploy to Azure. You can set up continuous deployment in a later section.

  2. Paste the function URL into your browser's address bar, adding the parameter &name=Azure to the end of this URL. Text like "Hello, Azure" should appear in the browser.

    Function response in the browser.

  3. To test authorization, remove the code= parameter from the URL and verify that you get no response from the function.
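The steps above can also be scripted. The sketch below only assembles the key-protected URL; app_name and function_key are hypothetical placeholders for the values you copied from the portal:

```python
from urllib.parse import urlencode

app_name = "myfunctionapp"   # hypothetical; use your function app's name
function_key = "abc123"      # hypothetical; paste the key from Get Function Url

params = urlencode({"code": function_key, "name": "Azure"})
url = f"https://{app_name}.azurewebsites.net/api/HttpExample?{params}"
print(url)
# Dropping the "code" parameter should make the call fail, as in step 3.
```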

Enable continuous deployment to Azure

You can enable Azure Functions to automatically update your deployment of an image whenever you update the image in the registry.

  1. Enable continuous deployment by using the az functionapp deployment container config command, replacing <app_name> with the name of your function app:

    az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <app_name> --resource-group AzureFunctionsContainers-rg
    

    This command enables continuous deployment and returns the deployment webhook URL. (You can retrieve this URL at any later time by using the az functionapp deployment container show-cd-url command.)

  2. Copy the deployment webhook URL to the clipboard.

  3. Open Docker Hub, sign in, and select Repositories on the nav bar. Locate and select the image, select the Webhooks tab, specify a webhook name, paste your URL in Webhook URL, and then select Create:

    Add the webhook in your DockerHub repo

  4. With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub.

Enable SSH connections

SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container using App Service Advanced Tools (Kudu). To make it easy to connect to your container using SSH, Azure Functions provides a base image that has SSH already enabled. You need only edit your Dockerfile, then rebuild and redeploy the image, after which you can connect to the container through Advanced Tools (Kudu).

  1. In your Dockerfile, append the string -appservice to the base image in your FROM instruction:

    FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
    
    FROM mcr.microsoft.com/azure-functions/node:2.0-appservice
    
    FROM mcr.microsoft.com/azure-functions/powershell:2.0-appservice
    
    FROM mcr.microsoft.com/azure-functions/python:2.0-python3.7-appservice
    
    FROM mcr.microsoft.com/azure-functions/node:2.0-appservice
    
  2. Rebuild the image by using the docker build command again, replacing <docker_id> with your Docker ID:

    docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 .
    
  3. Push the updated image to Docker Hub, which should take considerably less time than the first push because only the updated segments of the image need to be uploaded.

    docker push <docker_id>/azurefunctionsimage:v1.0.0
    
  4. Azure Functions automatically redeploys the image to your function app; the process takes place in less than a minute.

  5. In a browser, open https://<app_name>.scm.azurewebsites.net/, replacing <app_name> with your unique name. This URL is the Advanced Tools (Kudu) endpoint for your function app container.

  6. Sign in to your Azure account, and then select SSH to establish a connection with the container. Connecting may take a few moments if Azure is still updating the container image.

  7. After a connection is established with your container, run the top command to view the currently running processes.

    Linux top command running in an SSH session

Write to an Azure Storage queue

Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These bindings, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more, see Azure Functions triggers and bindings concepts.

This section shows you how to integrate your function with an Azure Storage queue. The output binding that you add to this function writes data from an HTTP request to a message in the queue.

Retrieve the Azure Storage connection string

Earlier, you created an Azure Storage account for use by the function app. The connection string for this account is stored securely in app settings in Azure. By downloading the setting into the local.settings.json file, you can use that connection to write to a Storage queue in the same account when running the function locally.

  1. From the root of the project, run the following command, replacing <app_name> with the name of your function app from the previous quickstart. This command will overwrite any existing values in the file.

    func azure functionapp fetch-app-settings <app_name>
    
  2. Open local.settings.json and locate the value named AzureWebJobsStorage, which is the Storage account connection string. You use the name AzureWebJobsStorage and the connection string in other sections of this article.

Important

Because local.settings.json contains secrets downloaded from Azure, always exclude this file from source control. The .gitignore file created with a local functions project excludes the file by default.
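As a point of reference, the exclusion in the generated .gitignore is just the file name (a minimal fragment; the generated file also excludes other local build artifacts):

```
local.settings.json
```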

Register binding extensions

With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet add package command in the Terminal window to add the Storage extension package to your project.

dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 3.0.4

Now, you can add the storage output binding to your project.

Add an output binding definition to the function

Although a function can have only one trigger, it can have multiple input and output bindings, which let you connect to other Azure services and resources without writing custom integration code.

You declare these bindings in the function.json file in your function folder. From the previous quickstart, your function.json file in the HttpExample folder contains two bindings in the bindings collection:

"bindings": [
    {
        "authLevel": "function",
        "type": "httpTrigger",
        "direction": "in",
        "name": "req",
        "methods": [
            "get",
            "post"
        ]
    },
    {
        "type": "http",
        "direction": "out",
        "name": "res"
    }
]
"scriptFile": "__init__.py",
"bindings": [
    {
        "authLevel": "function",
        "type": "httpTrigger",
        "direction": "in",
        "name": "req",
        "methods": [
            "get",
            "post"
        ]
    },
    {
        "type": "http",
        "direction": "out",
        "name": "$return"
    }
]
"bindings": [
  {
    "authLevel": "function",
    "type": "httpTrigger",
    "direction": "in",
    "name": "Request",
    "methods": [
      "get",
      "post"
    ]
  },
  {
    "type": "http",
    "direction": "out",
    "name": "Response"
  }
]

Each binding has at least a type, a direction, and a name. In the example above, the first binding is of type httpTrigger with the direction in. For the in direction, name specifies the name of an input parameter that's sent to the function when invoked by the trigger.
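That rule (at least a type, a direction, and a name) is easy to check mechanically when you hand-edit function.json. The following sketch, in which validate_bindings is a hypothetical helper, does just that:

```python
# The three keys every binding must declare.
REQUIRED_KEYS = {"type", "direction", "name"}

def validate_bindings(bindings):
    """Return a list of problems; an empty list means all bindings are well formed."""
    problems = []
    for i, binding in enumerate(bindings):
        missing = REQUIRED_KEYS - binding.keys()
        if missing:
            problems.append(f"binding {i} is missing {sorted(missing)}")
    return problems

bindings = [
    {"authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req"},
    {"type": "http", "direction": "out", "name": "res"},
]
print(validate_bindings(bindings))
```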

The second binding in the collection is named res. This http binding is an output binding (out) that is used to write the HTTP response.

To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

The second binding in the collection is of type http with the direction out, in which case the special name of $return indicates that this binding uses the function's return value rather than providing an input parameter.

To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:

"bindings": [
  {
    "authLevel": "anonymous",
    "type": "httpTrigger",
    "direction": "in",
    "name": "req",
    "methods": [
      "get",
      "post"
    ]
  },
  {
    "type": "http",
    "direction": "out",
    "name": "$return"
  },
  {
    "type": "queue",
    "direction": "out",
    "name": "msg",
    "queueName": "outqueue",
    "connection": "AzureWebJobsStorage"
  }
]

The second binding in the collection is named Response. This http binding is an output binding (out) that is used to write the HTTP response.

To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}

In this case, msg is given to the function as an output argument. For a queue type, you must also specify the name of the queue in queueName and provide the name of the Azure Storage connection (from local.settings.json) in connection.

In a C# class library project, the bindings are defined as binding attributes on the function method. The function.json file required by Functions is then auto-generated based on these attributes.

Open the HttpExample.cs project file and add the following parameter to the Run method definition:

[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,

The msg parameter is an ICollector<T> type, which represents a collection of messages that are written to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The connection string for the Storage account is set by the StorageAccountAttribute. This attribute indicates the setting that contains the Storage account connection string and can be applied at the class, method, or parameter level. In this case, you could omit StorageAccountAttribute because you are already using the default storage account.
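To make the ICollector<T> semantics concrete, here's a rough stand-in in Python (purely illustrative; the real type comes from the Functions runtime and flushes the messages to the queue for you):

```python
class FakeCollector:
    """Illustrative stand-in for ICollector<string>: items added during the
    function run are written to the queue when the function completes."""

    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

collector = FakeCollector()
collector.add("Name passed to the function: Azure")
print(collector.items)
```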

The Run method definition should now look like the following:

[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, 
    [Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg, 
    ILogger log)

In a Java project, the bindings are defined as binding annotations on the function method. The function.json file is then autogenerated based on these annotations.

Browse to the location of your function code under src/main/java, open the Function.java project file, and add the following parameter to the run method definition:

@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg

The msg parameter is an OutputBinding<T> type, which represents a collection of strings that are written as messages to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The connection string for the Storage account is set by the connection method. Rather than the connection string itself, you pass the application setting that contains the Storage account connection string.

The run method definition should now look like the following example:

@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION)  
        HttpRequestMessage<Optional<String>> request, 
        @QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") 
        OutputBinding<String> msg, final ExecutionContext context) {
    ...
}

Add code to use the output binding

With the queue binding defined, you can now update your function to receive the msg output parameter and write messages to the queue.

Update HttpExample\__init__.py to match the following code, adding the msg parameter to the function definition and msg.set(name) under the if name: statement.

import logging

import azure.functions as func


def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> str:

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        msg.set(name)
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )

The msg parameter is an instance of the azure.functions.Out class. Its set method writes a string message to the queue, in this case the name passed to the function in the URL query string.
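Because msg only needs a set method here, the handler's core logic can be exercised without the Azure runtime by using a lightweight stub (a sketch; FakeOut and handle are hypothetical test doubles that mirror main, not part of azure.functions):

```python
class FakeOut:
    """Stand-in for func.Out[func.QueueMessage] with just a set method."""

    def __init__(self):
        self.value = None

    def set(self, value):
        self.value = value

def handle(name, msg):
    # Mirrors the core of main(): write the name to the queue binding,
    # then return the greeting.
    if name:
        msg.set(name)
        return f"Hello {name}!"
    return "Please pass a name on the query string or in the request body"

msg = FakeOut()
print(handle("Functions", msg), msg.value)
```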

Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.

context.bindings.msg = (req.query.name || req.body.name);

At this point, your function should look as follows:

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    if (req.query.name || (req.body && req.body.name)) {
        // Add a message to the Storage queue,
        // which is the name passed to the function.
        context.bindings.msg = (req.query.name || req.body.name);
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.

context.bindings.msg = name;

At this point, your function should look as follows:

import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        // Add a message to the storage queue, 
        // which is the name passed to the function.
        context.bindings.msg = name; 
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

export default httpTrigger;

Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add this code before you set the OK status in the if statement.

$outputMsg = $name
Push-OutputBinding -name msg -Value $outputMsg

At this point, your PowerShell function should look as follows:

using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) {
    $name = $Request.Body.Name
}

if ($name) {
    # Write the $name value to the queue, 
    # which is the name passed to the function.
    $outputMsg = $name
    Push-OutputBinding -Name msg -Value $outputMsg

    $status = [HttpStatusCode]::OK
    $body = "Hello $name"
}
else {
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass a name on the query string or in the request body."
}

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = $status
    Body = $body
})

Add code that uses the msg output binding object to create a queue message. Add this code before the method returns.

if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(string.Format("Name passed to the function: {0}", name));
}

At this point, your C# function should look as follows:

[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, 
    [Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg, 
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    if (!string.IsNullOrEmpty(name))
    {
        // Add a message to the output collection.
        msg.Add(string.Format("Name passed to the function: {0}", name));
    }
    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}

Now, you can use the new msg parameter to write to the output binding from your function code. Add the following line of code before the success response to add the value of name to the msg output binding.

msg.setValue(name);

When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.

Your run method should now look like the following example:

public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) 
        HttpRequestMessage<Optional<String>> request, 
        @QueueOutput(name = "msg", queueName = "outqueue", 
        connection = "AzureWebJobsStorage") OutputBinding<String> msg, 
        final ExecutionContext context) {
    context.getLogger().info("Java HTTP trigger processed a request.");

    // Parse query parameter
    String query = request.getQueryParameters().get("name");
    String name = request.getBody().orElse(query);

    if (name == null) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
        .body("Please pass a name on the query string or in the request body").build();
    } else {
        // Write the name to the message queue. 
        msg.setValue(name);

        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
}

Update the tests

Because the archetype also creates a set of tests, you need to update these tests to handle the new msg parameter in the run method signature.

Browse to the location of your test code under src/test/java, open the FunctionTest.java project file, and replace the line of code under //Invoke with the following code.

@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);

Update the image in the registry

  1. In the root folder, run docker build again, and this time update the version in the tag to v1.0.1. As before, replace <docker_id> with your Docker Hub account ID:

    docker build --tag <docker_id>/azurefunctionsimage:v1.0.1 .
    
  2. Push the updated image back to the repository with docker push:

    docker push <docker_id>/azurefunctionsimage:v1.0.1
    
  3. Because you configured continuous deployment, pushing the updated image to the registry automatically updates your function app in Azure.
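An image reference such as the one used above has the form <repository>:<tag>; pushing v1.0.1 adds a new version to the repository rather than replacing the v1.0.0 image. As a small illustrative sketch (keeping the tutorial's <docker_id> placeholder as-is), standard shell parameter expansion shows the two parts:

```shell
# Split an image reference of the form <repository>:<tag>.
image='<docker_id>/azurefunctionsimage:v1.0.1'
echo "${image%:*}"    # repository: <docker_id>/azurefunctionsimage
echo "${image##*:}"   # tag: v1.0.1
```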

View the message in the Azure Storage queue

In a browser, use the same URL as before to invoke your function. The browser should display the same response as before, because you didn't modify that part of the function code. The added code, however, wrote a message using the name URL parameter to the outqueue storage queue.

You can view the queue in the Azure portal or in the Microsoft Azure Storage Explorer. You can also view the queue by using the Azure CLI, as described in the following steps:

  1. Open the function project's local.settings.json file and copy the connection string value. In a terminal or command window, run the following command to create an environment variable named AZURE_STORAGE_CONNECTION_STRING, pasting your specific connection string in place of <MY_CONNECTION_STRING>. (This environment variable means you don't need to supply the connection string to each subsequent command using the --connection-string argument.)

    export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
    
  2. (Optional) Use the az storage queue list command to view the Storage queues in your account. The output from this command should include a queue named outqueue, which was created when the function wrote its first message to that queue.

    az storage queue list --output tsv
    
  3. Use the az storage message get command to read the message from this queue, which should be the first name you used when testing the function earlier. The command reads and removes the first message from the queue.

    az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}' | base64 --decode
    

    Because the message body is stored base64 encoded, the message must be decoded before it's displayed. After you run az storage message get, the message is removed from the queue. If there was only one message in outqueue, running the command a second time returns an error instead of a message.
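Queue storage holds message bodies base64 encoded, which is why the command above pipes the result through base64 --decode. A quick local sketch of that round trip (the sample text Azure is just an illustration, not a value from the tutorial):

```shell
# Encode a sample message body the way Queue storage stores it,
# then decode it back, mirroring the pipeline in the step above.
encoded=$(printf 'Azure' | base64)
echo "$encoded"                          # prints QXp1cmU=
printf '%s' "$encoded" | base64 --decode # prints Azure
```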

Clean up resources

If you want to continue working with Azure Functions using the resources you created in this tutorial, you can leave all those resources in place. Because you created a Premium plan for Azure Functions, you'll incur ongoing costs of one or two US dollars per day.

To avoid ongoing costs, delete the AzureFunctionsContainer-rg resource group to clean up all the resources in that group:

az group delete --name AzureFunctionsContainer-rg

Next steps