Create an Application Gateway ingress controller in Azure Kubernetes Service

Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment. AKS makes it quick and easy to deploy and manage containerized applications without container orchestration expertise. AKS also eliminates the burden of taking applications offline for operational and maintenance tasks. With AKS, these tasks, including provisioning, upgrading, and scaling resources, can be accomplished on demand.

An ingress controller provides various features for Kubernetes services, including reverse proxy, configurable traffic routing, and TLS termination. Kubernetes ingress resources are used to configure the ingress rules for individual Kubernetes services. Using an ingress controller and ingress rules, a single IP address can route traffic to multiple services in a Kubernetes cluster. Azure Application Gateway provides all of this functionality, making it an ideal ingress controller for Kubernetes on Azure.
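To preview how this fits together, an ingress rule targeting Application Gateway is declared like any other Kubernetes Ingress resource and selected via an annotation. The following is a minimal sketch; the resource and service names are hypothetical:

```yaml
# Minimal Ingress sketch for the Application Gateway ingress controller (AGIC).
# "sample-ingress" and "sample-service" are hypothetical names.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: sample-service   # hypothetical backend service
              servicePort: 80
```

AGIC watches Ingress resources carrying this annotation and translates them into Application Gateway listeners, backend pools, and routing rules.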

In this article, you learn how to do the following tasks:

  • Create a Kubernetes cluster using AKS with Application Gateway as the ingress controller.
  • Use HCL (HashiCorp Configuration Language) to define a Kubernetes cluster.
  • Use Terraform to create an Application Gateway resource.
  • Use Terraform and AKS to create a Kubernetes cluster.
  • Use the kubectl tool to test the availability of a Kubernetes cluster.


Prerequisites

  • Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
  • Configure Terraform: Follow the directions in the article Terraform and configure access to Azure.

  • Azure resource group: If you don't have an Azure resource group to use for the demo, create an Azure resource group. Take note of the resource group name and location as those values are used in the demo.

  • Azure service principal: Follow the directions in the Create the service principal section of the article Create an Azure service principal with Azure CLI. Take note of the values for appId, displayName, and password.

  • Obtain the service principal object ID: Run the following command in Cloud Shell:

    az ad sp list --display-name <displayName>

Create the directory structure

The first step is to create the directory that holds your Terraform configuration files for the exercise.

  1. Browse to the Azure portal.

  2. Open Azure Cloud Shell.

  3. Change directories to the clouddrive directory.

    cd clouddrive
  4. Create a directory named terraform-aks-appgw-ingress.

    mkdir terraform-aks-appgw-ingress
  5. Change directories to the new directory:

    cd terraform-aks-appgw-ingress

Declare the Azure provider

Create the Terraform configuration file that declares the Azure provider.

  1. In Cloud Shell, create a file (for example, main.tf) to hold the provider configuration:

    code main.tf

  2. Paste the following code into the editor:

    provider "azurerm" {
      # The "features" block is required for AzureRM provider 2.x.
      # If you are using version 1.x, the "features" block is not allowed.
      version = "~>2.0"
      features {}
    }

    terraform {
      backend "azurerm" {}
    }
  3. Save the file (<Ctrl>S) and exit the editor (<Ctrl>Q).

Define input variables

Create the Terraform configuration file that lists all the variables required for this deployment.

  1. In Cloud Shell, create a file (for example, variables.tf) to hold the input variables:

    code variables.tf

  2. Paste the following code into the editor:

    variable "resource_group_name" {
      description = "Name of the resource group."
    }

    variable "location" {
      description = "Location of the cluster."
    }

    variable "aks_service_principal_app_id" {
      description = "Application ID/Client ID of the service principal. Used by AKS to manage AKS-related resources on Azure like VMs, subnets."
    }

    variable "aks_service_principal_client_secret" {
      description = "Secret of the service principal. Used by AKS to manage Azure."
    }

    variable "aks_service_principal_object_id" {
      description = "Object ID of the service principal."
    }

    variable "virtual_network_name" {
      description = "Virtual network name"
      default     = "aksVirtualNetwork"
    }

    variable "virtual_network_address_prefix" {
      description = "VNET address prefix"
      default     = ""
    }

    variable "aks_subnet_name" {
      description = "Subnet name."
      default     = "kubesubnet"
    }

    variable "aks_subnet_address_prefix" {
      description = "Subnet address prefix."
      default     = ""
    }

    variable "app_gateway_subnet_address_prefix" {
      description = "Address prefix of the Application Gateway subnet."
      default     = ""
    }

    variable "app_gateway_name" {
      description = "Name of the Application Gateway"
      default     = "ApplicationGateway1"
    }

    variable "app_gateway_sku" {
      description = "Name of the Application Gateway SKU"
      default     = "Standard_v2"
    }

    variable "app_gateway_tier" {
      description = "Tier of the Application Gateway"
      default     = "Standard_v2"
    }

    variable "aks_name" {
      description = "AKS cluster name"
      default     = "aks-cluster1"
    }

    variable "aks_dns_prefix" {
      description = "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
      default     = "aks"
    }

    variable "aks_agent_os_disk_size" {
      description = "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 applies the default disk size for that agentVMSize."
      default     = 40
    }

    variable "aks_agent_count" {
      description = "The number of agent nodes for the cluster."
      default     = 3
    }

    variable "aks_agent_vm_size" {
      description = "VM size"
      default     = "Standard_D3_v2"
    }

    variable "kubernetes_version" {
      description = "Kubernetes version"
      default     = "1.11.5"
    }

    variable "aks_service_cidr" {
      description = "CIDR notation IP range from which to assign service cluster IPs"
      default     = ""
    }

    variable "aks_dns_service_ip" {
      description = "DNS server IP address"
      default     = ""
    }

    variable "aks_docker_bridge_cidr" {
      description = "CIDR notation IP for Docker bridge."
      default     = ""
    }

    variable "aks_enable_rbac" {
      description = "Enable RBAC on the AKS cluster. Defaults to false."
      default     = "false"
    }

    variable "vm_user_name" {
      description = "User name for the VM"
      default     = "vmuser1"
    }

    variable "public_ssh_key_path" {
      description = "Public key path for SSH."
      default     = "~/.ssh/id_rsa.pub"
    }

    variable "tags" {
      type = map(string)

      default = {
        source = "terraform"
      }
    }
  3. Save the file (<Ctrl>S) and exit the editor (<Ctrl>Q).

Define the resources

Create the Terraform configuration file that defines all the resources.

  1. In Cloud Shell, create a file (for example, resources.tf) to hold the resource definitions:

    code resources.tf

  2. Paste the following code block to create a locals block for computed variables to reuse:

    # Locals block for names computed from the virtual network name.
    locals {
      backend_address_pool_name      = "${azurerm_virtual_network.test.name}-beap"
      frontend_port_name             = "${azurerm_virtual_network.test.name}-feport"
      frontend_ip_configuration_name = "${azurerm_virtual_network.test.name}-feip"
      http_setting_name              = "${azurerm_virtual_network.test.name}-be-htst"
      listener_name                  = "${azurerm_virtual_network.test.name}-httplstn"
      request_routing_rule_name      = "${azurerm_virtual_network.test.name}-rqrt"
      app_gateway_subnet_name        = "appgwsubnet"
    }
  3. Paste the following code block to create a data source for the resource group and a new user-assigned identity:

    data "azurerm_resource_group" "rg" {
      name = var.resource_group_name
    }

    # User Assigned Identities
    resource "azurerm_user_assigned_identity" "testIdentity" {
      resource_group_name = data.azurerm_resource_group.rg.name
      location            = data.azurerm_resource_group.rg.location
      name                = "identity1"

      tags = var.tags
    }
  4. Paste the following code block to create base networking resources:

    resource "azurerm_virtual_network" "test" {
      name                = var.virtual_network_name
      location            = data.azurerm_resource_group.rg.location
      resource_group_name = data.azurerm_resource_group.rg.name
      address_space       = [var.virtual_network_address_prefix]

      subnet {
        name           = var.aks_subnet_name
        address_prefix = var.aks_subnet_address_prefix
      }

      subnet {
        name           = "appgwsubnet"
        address_prefix = var.app_gateway_subnet_address_prefix
      }

      tags = var.tags
    }

    data "azurerm_subnet" "kubesubnet" {
      name                 = var.aks_subnet_name
      virtual_network_name = azurerm_virtual_network.test.name
      resource_group_name  = data.azurerm_resource_group.rg.name
      depends_on           = [azurerm_virtual_network.test]
    }

    data "azurerm_subnet" "appgwsubnet" {
      name                 = "appgwsubnet"
      virtual_network_name = azurerm_virtual_network.test.name
      resource_group_name  = data.azurerm_resource_group.rg.name
      depends_on           = [azurerm_virtual_network.test]
    }

    # Public Ip
    resource "azurerm_public_ip" "test" {
      name                = "publicIp1"
      location            = data.azurerm_resource_group.rg.location
      resource_group_name = data.azurerm_resource_group.rg.name
      allocation_method   = "Static"
      sku                 = "Standard"

      tags = var.tags
    }
  5. Paste the following code block to create the Application Gateway resource:

    resource "azurerm_application_gateway" "network" {
      name                = var.app_gateway_name
      resource_group_name = data.azurerm_resource_group.rg.name
      location            = data.azurerm_resource_group.rg.location

      sku {
        name     = var.app_gateway_sku
        tier     = var.app_gateway_tier
        capacity = 2
      }

      gateway_ip_configuration {
        name      = "appGatewayIpConfig"
        subnet_id = data.azurerm_subnet.appgwsubnet.id
      }

      frontend_port {
        name = local.frontend_port_name
        port = 80
      }

      frontend_port {
        name = "httpsPort"
        port = 443
      }

      frontend_ip_configuration {
        name                 = local.frontend_ip_configuration_name
        public_ip_address_id = azurerm_public_ip.test.id
      }

      backend_address_pool {
        name = local.backend_address_pool_name
      }

      backend_http_settings {
        name                  = local.http_setting_name
        cookie_based_affinity = "Disabled"
        port                  = 80
        protocol              = "Http"
        request_timeout       = 1
      }

      http_listener {
        name                           = local.listener_name
        frontend_ip_configuration_name = local.frontend_ip_configuration_name
        frontend_port_name             = local.frontend_port_name
        protocol                       = "Http"
      }

      request_routing_rule {
        name                       = local.request_routing_rule_name
        rule_type                  = "Basic"
        http_listener_name         = local.listener_name
        backend_address_pool_name  = local.backend_address_pool_name
        backend_http_settings_name = local.http_setting_name
      }

      tags = var.tags

      depends_on = [azurerm_virtual_network.test, azurerm_public_ip.test]
    }
  6. Paste the following code block to create role assignments:

    resource "azurerm_role_assignment" "ra1" {
      scope                = data.azurerm_subnet.kubesubnet.id
      role_definition_name = "Network Contributor"
      principal_id         = var.aks_service_principal_object_id

      depends_on = [azurerm_virtual_network.test]
    }

    resource "azurerm_role_assignment" "ra2" {
      scope                = azurerm_user_assigned_identity.testIdentity.id
      role_definition_name = "Managed Identity Operator"
      principal_id         = var.aks_service_principal_object_id
      depends_on           = [azurerm_user_assigned_identity.testIdentity]
    }

    resource "azurerm_role_assignment" "ra3" {
      scope                = azurerm_application_gateway.network.id
      role_definition_name = "Contributor"
      principal_id         = azurerm_user_assigned_identity.testIdentity.principal_id
      depends_on           = [azurerm_user_assigned_identity.testIdentity]
    }

    resource "azurerm_role_assignment" "ra4" {
      scope                = data.azurerm_resource_group.rg.id
      role_definition_name = "Reader"
      principal_id         = azurerm_user_assigned_identity.testIdentity.principal_id
      depends_on           = [azurerm_user_assigned_identity.testIdentity]
    }
  7. Paste the following code block to create the Kubernetes cluster:

    resource "azurerm_kubernetes_cluster" "k8s" {
      name       = var.aks_name
      location   = data.azurerm_resource_group.rg.location
      dns_prefix = var.aks_dns_prefix

      resource_group_name = data.azurerm_resource_group.rg.name

      linux_profile {
        admin_username = var.vm_user_name

        ssh_key {
          key_data = file(var.public_ssh_key_path)
        }
      }

      addon_profile {
        http_application_routing {
          enabled = false
        }
      }

      default_node_pool {
        name            = "agentpool"
        node_count      = var.aks_agent_count
        vm_size         = var.aks_agent_vm_size
        os_disk_size_gb = var.aks_agent_os_disk_size
        vnet_subnet_id  = data.azurerm_subnet.kubesubnet.id
      }

      service_principal {
        client_id     = var.aks_service_principal_app_id
        client_secret = var.aks_service_principal_client_secret
      }

      network_profile {
        network_plugin     = "azure"
        dns_service_ip     = var.aks_dns_service_ip
        docker_bridge_cidr = var.aks_docker_bridge_cidr
        service_cidr       = var.aks_service_cidr
      }

      role_based_access_control {
        enabled = var.aks_enable_rbac
      }

      depends_on = [azurerm_virtual_network.test]
      tags       = var.tags
    }
  8. Save the file (<Ctrl>S) and exit the editor (<Ctrl>Q).

The code presented in this section sets the name of the cluster, the location, and the resource_group_name. The dns_prefix value, which forms part of the fully qualified domain name (FQDN) used to access the cluster, is also set.

The linux_profile block allows you to configure the settings that enable signing in to the worker nodes using SSH.

With AKS, you pay only for the worker nodes. The default_node_pool block configures the details for these worker nodes, including the number of worker nodes to create and their VM size. If you need to scale the cluster up or down later, you modify the node_count value in this block.
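Because the node count is driven by the aks_agent_count input variable, a later scale operation can be a one-line variable override rather than an edit to the resource block. For example, in terraform.tfvars (a sketch; 5 is an arbitrary target size):

```hcl
# Scale the node pool from the default of 3 to 5 worker nodes
aks_agent_count = 5
```

After changing the value, rerun terraform plan and terraform apply to resize the node pool.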

Create a Terraform output file

Terraform outputs allow you to define values that are highlighted to the user when Terraform applies a plan, and can be queried using the terraform output command. In this section, you create an output file that allows access to the cluster with kubectl.

  1. In Cloud Shell, create a file (for example, output.tf) to hold the output definitions:

    code output.tf

  2. Paste the following code into the editor:

    output "client_key" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.client_key
    }

    output "client_certificate" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate
    }

    output "cluster_ca_certificate" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate
    }

    output "cluster_username" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.username
    }

    output "cluster_password" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.password
    }

    output "kube_config" {
      value = azurerm_kubernetes_cluster.k8s.kube_config_raw
    }

    output "host" {
      value = azurerm_kubernetes_cluster.k8s.kube_config.0.host
    }

    output "identity_resource_id" {
      value = azurerm_user_assigned_identity.testIdentity.id
    }

    output "identity_client_id" {
      value = azurerm_user_assigned_identity.testIdentity.client_id
    }
  3. Save the file (<Ctrl>S) and exit the editor (<Ctrl>Q).

Configure Azure storage to store Terraform state

Terraform tracks state locally via the terraform.tfstate file. This pattern works well in a single-person environment. However, in a more practical multi-person environment, you need to store state remotely, in this case using Azure storage. In this section, you learn to retrieve the necessary storage account information and create a storage container. The Terraform state information is then stored in that container.
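As an alternative to passing backend settings on the terraform init command line, the partial backend "azurerm" block can be filled in directly. The following is a sketch with placeholder values; the key names the state blob and can be any string:

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "<YourAzureStorageAccountName>"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"  # name of the state blob in the container
    access_key           = "<YourStorageAccountAccessKey>"
  }
}
```

Values supplied this way are stored in plain text, so the command-line approach used in this article is preferable for secrets such as the access key.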

  1. In the Azure portal, under Azure services, select Storage accounts. (If the Storage accounts option isn't visible on the main page, select More services and then locate and select it.)

  2. On the Storage accounts page, select the name of the storage account into which Terraform is to store state. For example, you can use the storage account created when you opened Cloud Shell the first time. The storage account name created by Cloud Shell typically starts with cs followed by a random string of numbers and letters.

    Take note of the storage account you select, as you need it later.

  3. On the storage account page, select Access keys.


  4. Make note of the key1 key value. (Selecting the icon to the right of the key copies the value to the clipboard.)


  5. In Cloud Shell, create a container in your Azure storage account. Replace the placeholders with the appropriate values for your Azure storage account.

    az storage container create -n tfstate --account-name <YourAzureStorageAccountName> --account-key <YourAzureStorageAccountKey>

Create the Kubernetes cluster

In this section, you use Terraform commands to create the resources defined in the configuration files you created in the previous sections.

  1. In Cloud Shell, initialize Terraform. Replace the placeholders with the appropriate values for your Azure storage account.

    terraform init -backend-config="storage_account_name=<YourAzureStorageAccountName>" -backend-config="container_name=tfstate" -backend-config="access_key=<YourStorageAccountAccessKey>" -backend-config="" 

    The terraform init command displays the success of initializing the backend and provider plug-in:


  2. In Cloud Shell, create a file named terraform.tfvars:

    code terraform.tfvars
  3. Paste the following code into the editor, replacing the placeholders with the values you noted earlier. To get the location value for your environment, use az account list-locations.

    resource_group_name = "<Name of the Resource Group already created>"
    location = "<Location of the Resource Group>"
    aks_service_principal_app_id = "<Service Principal AppId>"
    aks_service_principal_client_secret = "<Service Principal Client Secret>"
    aks_service_principal_object_id = "<Service Principal Object Id>"
  4. Save the file (<Ctrl>S) and exit the editor (<Ctrl>Q).

  5. Run the terraform plan command to create the Terraform plan that defines the infrastructure elements.

    terraform plan -out out.plan

    The terraform plan command displays the resources that are created when you run the terraform apply command:


  6. Run the terraform apply command to apply the plan and create the Kubernetes cluster. The process can take several minutes, which may cause the Cloud Shell session to time out. If that happens, follow the steps in the "Recover from a Cloud Shell timeout" section to complete the process.

    terraform apply out.plan

    The terraform apply command displays the results of creating the resources defined in your configuration files:


  7. In the Azure portal, select Resource Groups in the left menu to see the resources created for your new Kubernetes cluster in the selected resource group.


Recover from a Cloud Shell timeout

If the Cloud Shell session times out, you can use the following steps to recover:

  1. Start a Cloud Shell session.

  2. Change to the directory containing your Terraform configuration files.

    cd clouddrive/terraform-aks-appgw-ingress
  3. Run the following command:

    export KUBECONFIG=./azurek8s

Test the Kubernetes cluster

The Kubernetes tools can be used to verify the newly created cluster.

  1. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read.

    echo "$(terraform output kube_config)" > ./azurek8s
  2. Set an environment variable so that kubectl picks up the correct config.

    export KUBECONFIG=./azurek8s
  3. Verify the health of the cluster.

    kubectl get nodes

    You should see the details of your worker nodes, and they should all have a status of Ready.

Install Azure AD Pod Identity

Azure Active Directory Pod Identity provides token-based access to Azure Resource Manager.

Azure AD Pod Identity adds the following components to your Kubernetes cluster:

  • The Managed Identity Controller (MIC) deployment
  • The Node Managed Identity (NMI) daemon set

If RBAC is enabled, run the following command to install Azure AD Pod Identity to your cluster:

kubectl create -f

If RBAC is disabled, run the following command to install Azure AD Pod Identity to your cluster:

kubectl create -f

Install Helm

The code in this section uses Helm, the Kubernetes package manager, to install the application-gateway-kubernetes-ingress package:

  1. If RBAC is enabled, run the following set of commands to install and configure Helm:

    kubectl create serviceaccount --namespace kube-system tiller-sa
    kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-sa
    helm init --tiller-namespace kube-system --service-account tiller-sa
  2. If RBAC is disabled, run the following command to install and configure Helm:

    helm init
  3. Add the AGIC Helm repository:

    helm repo add application-gateway-kubernetes-ingress
    helm repo update

Install Ingress Controller Helm Chart

  1. Download helm-config.yaml to configure AGIC:

    wget -O helm-config.yaml
  2. Edit helm-config.yaml and enter appropriate values in the appgw and armAuth sections:

    code helm-config.yaml

    The values are described as follows:

    • verbosityLevel: Sets the verbosity level of the AGIC logging infrastructure. See Logging Levels for possible values.
    • appgw.subscriptionId: The Azure Subscription ID for the App Gateway. Example: a123b234-a3b4-557d-b2df-a0bc12de1234
    • appgw.resourceGroup: Name of the Azure Resource Group in which App Gateway was created.
    • Name of the Application Gateway. Example: applicationgateway1.
    • appgw.shared: This boolean flag defaults to false. Set it to true if you need a shared App Gateway.
    • kubernetes.watchNamespace: Specify the namespace that AGIC should watch. The value can be a single namespace or a comma-separated list of namespaces. Leaving this variable commented out, or setting it to a blank or empty string, results in the ingress controller observing all accessible namespaces.
    • armAuth.type: A value of either aadPodIdentity or servicePrincipal.
    • armAuth.identityResourceID: Resource ID of the managed identity.
    • armAuth.identityClientId: The Client ID of the Identity.
    • armAuth.secretJSON: Only needed when Service Principal Secret type is chosen (when armAuth.type has been set to servicePrincipal).

    Key notes:

    • The identityResourceID value is created in the terraform script and can be found by running: echo "$(terraform output identity_resource_id)".
    • The identityClientID value is created in the terraform script and can be found by running: echo "$(terraform output identity_client_id)".
    • The <resource-group> value is the resource group of your App Gateway.
    • The <identity-name> value is the name of the created identity.
    • All identities for a given subscription can be listed using: az identity list.
  3. Install the Application Gateway ingress controller package:

    helm install -f helm-config.yaml application-gateway-kubernetes-ingress/ingress-azure
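For reference, a filled-in helm-config.yaml might look like the following sketch. Every value shown is a placeholder or assumption; the identity values come from the Terraform outputs created earlier:

```yaml
# Sample helm-config.yaml (all values are placeholders)
verbosityLevel: 3
appgw:
  subscriptionId: <subscription-id>
  resourceGroup: <resource-group>
  name: ApplicationGateway1
  shared: false
kubernetes:
  watchNamespace: ""            # empty string = watch all accessible namespaces
armAuth:
  type: aadPodIdentity
  identityResourceID: <identity-resource-id>   # from: terraform output identity_resource_id
  identityClientId: <identity-client-id>       # from: terraform output identity_client_id
rbac:
  enabled: false                # set to true if RBAC is enabled on the cluster
```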

Install a sample app

Once you have the App Gateway, AKS, and AGIC installed, you can install a sample app via Azure Cloud Shell:

  1. Use the curl command to download the YAML file:

    curl -o aspnetapp.yaml
  2. Apply the YAML file:

    kubectl apply -f aspnetapp.yaml

Clean up resources

When no longer needed, delete the resources created in this article.

Replace the placeholder with the appropriate value. All resources within the specified resource group will be deleted.

az group delete -n <resource-group>


For Terraform-specific support, use one of HashiCorp's community support channels for Terraform.

Next steps