Deploy a Docker container app to Azure Kubernetes Service
We'll show you how to set up continuous deployment of your containerized application to Azure Kubernetes Service (AKS) using Azure Pipelines.
After you commit and push a code change, it will be automatically built and deployed to the target Kubernetes cluster.
Get the code
If you want sample code that works with this guidance, import the following repository into Azure DevOps, or fork it into GitHub, based on your desired runtime.
If you already have an app in GitHub that you want to deploy, you can create a pipeline for that code.
If you are a new user, fork this repo in GitHub:
Define your CI build process
You'll need an Azure subscription. You can get one free through Visual Studio Dev Essentials.
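If you prefer YAML pipelines, the CI stage can be sketched roughly as follows. This minimal example only builds and pushes the Docker image; the full pipeline for this guidance also packages and publishes a Helm chart. The service connection name `my-acr-connection` and repository `myapp` are placeholders, not values from this article.

```yaml
# azure-pipelines.yml (sketch): build and push the image on each commit.
# 'my-acr-connection' and 'myapp' are placeholder names; replace with your own.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      containerRegistry: my-acr-connection   # Docker registry service connection
      repository: myapp
      command: buildAndPush
      tags: |
        $(Build.BuildId)
```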
Create an AKS cluster to host your app
Sign in to Azure at https://portal.azure.com.
In the Azure portal, choose Create a resource, then Containers, and then choose Kubernetes Service.
Select or create a new resource group, then enter a name for your new Kubernetes Service cluster and a DNS name prefix.
Choose Review + Create and then, after validation, choose Create.
Wait until the new AKS cluster has been created.
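The portal steps above can also be sketched as Azure CLI commands. The resource group, cluster name, DNS prefix, and region below are assumptions; substitute your own. The commands are echoed for review; remove `echo` to run them in a signed-in `az` session.

```shell
# Placeholder names; replace with your own values.
RESOURCE_GROUP="myResourceGroup"
CLUSTER_NAME="myAKSCluster"
DNS_PREFIX="myaksdns"

# Echoed so the block is safe to run anywhere; drop 'echo' to execute.
echo az group create --name "$RESOURCE_GROUP" --location eastus
echo az aks create --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" \
  --dns-name-prefix "$DNS_PREFIX" --node-count 2 --generate-ssh-keys
```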
When you use Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), you must establish an authentication mechanism. You can do this by granting AKS access to ACR. See Authenticate with Azure Container Registry from Azure Kubernetes Service.
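One way to grant that access is to attach the registry to the cluster with the Azure CLI. The names below are placeholders; the command is echoed for review, so remove `echo` to run it for real.

```shell
# Placeholder names; replace with your own values.
RESOURCE_GROUP="myResourceGroup"
CLUSTER_NAME="myAKSCluster"
ACR_NAME="myContainerRegistry"

# Attach the registry so the cluster can pull images from it.
echo az aks update --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" --attach-acr "$ACR_NAME"
```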
Create a release pipeline
The build pipeline used to set up CI has already built a Docker image and pushed it to an Azure Container Registry. It also packaged and published a Helm chart as an artifact. In the release pipeline, we'll deploy the container image as a Helm application to the AKS cluster.
In Azure Pipelines, or the Build & Release hub in TFS, open the summary for your build.
In the build summary, choose the Release icon to start a new release pipeline.
If you have previously created a release pipeline that uses these build artifacts, you will be prompted to create a new release instead. In that case, go to the Releases page and start a new release pipeline from there by choosing the + icon.
Select the Empty job template.
Open the Tasks page and select Agent job.
Choose + to add a new task and add a Helm tool installer task. This ensures the agent that runs the subsequent tasks has Helm and Kubectl installed on it.
Choose + again and add a Package and deploy Helm charts task. Configure the settings for this task as follows:
Connection Type: Select Azure Resource Manager to connect to an AKS cluster by using an Azure service connection. Alternatively, if you want to connect to any Kubernetes cluster by using kubeconfig or a service account, you can select Kubernetes Service Connection. In this case, you will need to create and select a Kubernetes service connection instead of an Azure subscription for the following setting.
Azure subscription: Select a connection from the list under Available Azure Service Connections or create a more restricted permissions connection to your Azure subscription. If you see an Authorize button next to the input, use it to authorize the connection to your Azure subscription. If you do not see the required Azure subscription in the list of subscriptions, see Create an Azure service connection to manually set up the connection.
Resource group: Enter or select the resource group containing your AKS cluster.
Kubernetes cluster: Enter or select the AKS cluster you created.
Command: Select init as the Helm command. This will install Tiller to your running Kubernetes cluster. It will also set up any necessary local configuration. Tick Use canary image version to install the latest pre-release version of Tiller. You could also choose to upgrade Tiller if it is pre-installed by ticking Upgrade Tiller. If these options are enabled, the task will run
helm init --canary-image --upgrade
Choose + in the Agent job and add another Package and deploy Helm charts task. Configure the settings for this task as follows:
Kubernetes cluster: Enter or select the AKS cluster you created.
Namespace: Enter your Kubernetes cluster namespace where you want to deploy your application. Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. You can use namespaces to create different environments such as dev, test, and staging in the same cluster.
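For example, separate environment namespaces can be declared as plain manifests (the names dev and staging are illustrative) and created with kubectl apply -f namespaces.yaml:

```yaml
# namespaces.yaml: one namespace per environment in the same cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```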
Command: Select upgrade as the Helm command. You can run any Helm command using this task and pass in command options as arguments. When you select upgrade, the task shows some additional fields:
Chart Type: Select File Path. Alternatively, you can select Chart Name if you want to specify a URL or a chart name. For example, if the chart name is stable/mysql, the task will execute
helm upgrade stable/mysql
Chart Path: This can be a path to a packaged chart or a path to an unpacked chart directory. In this example, you are publishing the chart using a CI build, so select the chart file package using the file picker.
Release Name: Enter a name for your release.
Recreate Pods: Tick this checkbox if there is a configuration change during the release and you want to replace a running pod with the new configuration.
Reset Values: Tick this checkbox if you want the values built into the chart to override all values provided by the task.
Force: Tick this checkbox if, when conflicts occur, you want the upgrade or rollback to delete and recreate the resource and reinstall the full release. This is useful in scenarios where applying a patch can fail (for example, for a service, because the cluster IP address is immutable).
Arguments: Enter the Helm command arguments and their values; for this example
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId)
See Arguments used in the Helm upgrade task below for a description of why we are using these arguments.
Enable TLS: Tick this checkbox to enable strong TLS-based connections between Helm and Tiller.
CA certificate: Specify a CA certificate to be uploaded and used to issue certificates for Tiller and Helm client.
Certificate: Specify the Tiller certificate or Helm client certificate.
Key: Specify the Tiller key or Helm client key.
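If you enable TLS, you need a CA certificate and a client certificate and key to supply in the fields above. One way to generate a self-signed set locally is with openssl; the file names and subject names below are assumptions.

```shell
# Generate a self-signed CA (maps to the task's CA certificate field).
openssl req -x509 -newkey rsa:2048 -keyout ca.key.pem -out ca.cert.pem \
  -days 365 -nodes -subj "/CN=tiller-ca"

# Create a client key and certificate signing request.
openssl req -newkey rsa:2048 -keyout helm.key.pem -out helm.csr.pem \
  -nodes -subj "/CN=helm-client"

# Sign the client CSR with the CA to produce the client certificate
# (maps to the task's Certificate and Key fields).
openssl x509 -req -in helm.csr.pem -CA ca.cert.pem -CAkey ca.key.pem \
  -CAcreateserial -out helm.cert.pem -days 365
```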
In the Variables page of the pipeline, add a variable named imageRepoName and set its value to the name of your Helm image repository.
Save the release pipeline.
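Taken together, the settings above amount to the task assembling a Helm invocation along these lines. The release name, chart package path, image repository, and build ID below are placeholders; in the pipeline, $(imageRepoName) and $(Build.BuildId) are substituted by Azure Pipelines.

```shell
# Placeholder values standing in for the task settings and variables.
RELEASE_NAME="myrelease"
CHART_PACKAGE="drop/mychart-1.0.0.tgz"
IMAGE_REPO="myregistry.azurecr.io/myapp"
BUILD_ID="20231104.1"

# Compose the effective 'helm upgrade' command from the settings.
HELM_CMD="helm upgrade --install $RELEASE_NAME $CHART_PACKAGE"
HELM_CMD="$HELM_CMD --set image.repository=$IMAGE_REPO --set image.tag=$BUILD_ID"
echo "$HELM_CMD"
```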
Arguments used in the Helm upgrade task
In the build pipeline, the container image is tagged with $(Build.BuildId) and pushed to an Azure Container Registry. In a Helm chart, you can parameterize container image details such as the name and tag, because the same chart can be used to deploy to different environments. These values can also be specified in the values.yaml file or be overridden by a user-supplied values file, which can in turn be overridden by --set parameters during the Helm install or upgrade.
In this example, we pass the following arguments:
--set image.repository=$(imageRepoName) --set image.tag=$(Build.BuildId)
The value of $(imageRepoName) was set in the Variables page (or the variables section of your YAML file). Alternatively, you can directly replace it with your image repository name in the --set arguments value or in the values.yaml file.
image:
  repository: VALUE_TO_BE_OVERRIDDEN
  tag: latest
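For context, here is a minimal sketch of how a chart template might consume these values; the template path and container name are assumptions, not part of the sample chart.

```yaml
# templates/deployment.yaml (excerpt): the image reference is assembled
# from the values above, so --set image.repository and --set image.tag
# take effect here at deploy time.
spec:
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```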
Another alternative is to set the Set Values option of the task to specify the argument values as comma separated key-value pairs.
Create a release to deploy your app
You're now ready to create a release, which means to start the process of running the release pipeline with the artifacts produced by a specific build. This will result in deploying the build:
Choose + Release and select Create a release.
In the Create a new release panel, check that the artifact version you want to use is selected and choose Create.
Choose the release link in the information bar message. For example: "Release Release-1 has been created".
In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output.
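Once the release succeeds, you can verify the deployment from a terminal whose kubeconfig points at your cluster. A sketch follows; the namespace and release name are placeholders, and the commands are echoed for review, so remove `echo` to run them.

```shell
# Placeholder names; replace with the namespace and release you deployed.
NAMESPACE="dev"
RELEASE_NAME="myrelease"

# Inspect the Helm release and the workloads it created.
echo helm status "$RELEASE_NAME"
echo kubectl get pods --namespace "$NAMESPACE"
echo kubectl get services --namespace "$NAMESPACE"
```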