Use TLS with an ingress controller on Azure Kubernetes Service (AKS)

Transport Layer Security (TLS) is a protocol that provides encryption, authentication, and integrity for communications by using certificates. Using TLS with an ingress controller on AKS allows you to secure communication between your applications while also getting the benefits of an ingress controller.

You can bring your own certificates and integrate them with the Secrets Store CSI Driver. Alternatively, you can use cert-manager, which automatically generates and configures Let's Encrypt certificates. In this article, two applications run in the AKS cluster, each of which is accessible over a single IP address.

Note

There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community (kubernetes/ingress-nginx), and one is maintained by NGINX, Inc. (nginxinc/kubernetes-ingress). This article will be using the Kubernetes community ingress controller.

Before you begin

This article assumes that you have an ingress controller and applications set up. If you need an ingress controller or example applications, see Create an ingress controller.

This article uses Helm 3 to install the NGINX ingress controller on a supported version of Kubernetes. Make sure that you're using the latest release of Helm and have access to the ingress-nginx and jetstack Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
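
If you haven't registered those repositories yet, the following commands are one way to add them and refresh your local chart cache. The repository URLs shown are the standard public ones; adjust them if you mirror charts elsewhere.

# Add the community ingress-nginx and Jetstack (cert-manager) chart repositories
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io

# Refresh the local chart cache
helm repo update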

For more information on configuring and using Helm, see Install applications with Helm in Azure Kubernetes Service (AKS). For upgrade instructions, see the Helm install docs.

In addition, this article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see Authenticate with Azure Container Registry from Azure Kubernetes Service.

This article also requires that you're running the Azure CLI version 2.0.64 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
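
As a quick preflight check, you can confirm the client tool versions from your shell. This assumes az, helm, and kubectl are already installed and on your PATH.

az --version
helm version
kubectl version --client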

Use TLS with your own certificates with Secrets Store CSI Driver

To use TLS with your own certificates with Secrets Store CSI Driver, you'll need an AKS cluster with the Secrets Store CSI Driver configured, and an Azure Key Vault instance. For more information, see Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS.
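
If the add-on isn't enabled on your cluster yet, the following is a minimal sketch of that setup. The cluster name, resource group, <KEYVAULT_NAME>, and <LOCATION> values are placeholders for your own.

# Enable the Azure Key Vault provider for Secrets Store CSI Driver on an existing cluster
az aks enable-addons --addons azure-keyvault-secrets-provider --resource-group myResourceGroup --name myAKSCluster

# Create a key vault to hold the TLS certificate
az keyvault create --resource-group myResourceGroup --name <KEYVAULT_NAME> --location <LOCATION>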

Use TLS with Let's Encrypt certificates

To use TLS with Let's Encrypt certificates, you'll deploy cert-manager, which is used to automatically generate and configure Let's Encrypt certificates.

Import the cert-manager images used by the Helm chart into your ACR

Use az acr import to import the cert-manager images into your ACR.

REGISTRY_NAME=<REGISTRY_NAME>
CERT_MANAGER_REGISTRY=quay.io
CERT_MANAGER_TAG=v1.8.0
CERT_MANAGER_IMAGE_CONTROLLER=jetstack/cert-manager-controller
CERT_MANAGER_IMAGE_WEBHOOK=jetstack/cert-manager-webhook
CERT_MANAGER_IMAGE_CAINJECTOR=jetstack/cert-manager-cainjector

az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CONTROLLER:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_WEBHOOK:$CERT_MANAGER_TAG
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG
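
To confirm the import succeeded, you can list the repositories in your registry. This verification step isn't part of the original walkthrough, but it's a quick sanity check.

az acr repository list --name $REGISTRY_NAME --output table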

Note

In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see Push and pull Helm charts to an Azure Container Registry.
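
As a sketch of that approach, assuming Helm 3.8 or later, that the jetstack repository is added (it's added later in this article), and that you've authenticated to the registry (for example with helm registry login), you could pull the cert-manager chart locally and push it to ACR as an OCI artifact:

# Download the chart locally, then push it to the registry
helm pull jetstack/cert-manager --version $CERT_MANAGER_TAG
helm push cert-manager-$CERT_MANAGER_TAG.tgz oci://$REGISTRY_NAME.azurecr.io/helm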

Ingress controller configuration options

By default, an NGINX ingress controller is created with a new public IP address assignment. This public IP address is only static for the life-span of the ingress controller, and is lost if the controller is deleted and re-created.

You have the option of choosing one of the following methods:

  • Using a dynamic public IP address.
  • Using a static public IP address.

Use a static public IP address

A common configuration requirement is to provide the NGINX ingress controller with an existing static public IP address. The static public IP address remains if the ingress controller is deleted.

The commands below create an IP address that will be deleted if you delete your AKS cluster.

First, get the resource group name of the AKS cluster with the az aks show command:

az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv

Next, create a public IP address with the static allocation method using the az network public-ip create command. The following example creates a public IP address named myAKSPublicIP in the AKS cluster resource group obtained in the previous step:

az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv

Alternatively, you can create an IP address in a different resource group, which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:

  • The cluster identity used by the AKS cluster has delegated permissions to the resource group that contains the IP address, such as the Network Contributor role (see the example that follows this list).
  • You add the --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>" parameter when you deploy or upgrade the ingress controller. Replace <RESOURCE_GROUP> with the name of the resource group where the IP address resides.
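
The following is a hypothetical example of granting that permission, assuming myAKSCluster uses a system-assigned managed identity and that <RESOURCE_GROUP> is the resource group that contains the IP address:

# Look up the cluster identity and the scope of the resource group that contains the IP address
CLIENT_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query identity.principalId -o tsv)
RESOURCE_GROUP_SCOPE=$(az group show --name <RESOURCE_GROUP> --query id -o tsv)

# Delegate the Network Contributor role on that resource group to the cluster identity
az role assignment create --assignee $CLIENT_ID --role "Network Contributor" --scope $RESOURCE_GROUP_SCOPE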

When you update the ingress controller, you must pass a parameter to the Helm release so that the ingress controller service is assigned the static IP address of the load balancer. For the HTTPS certificates to work correctly, you use a DNS name label to configure an FQDN for the ingress controller IP address.

  1. Add the --set controller.service.loadBalancerIP="<EXTERNAL_IP>" parameter. Specify your own public IP address that was created in the previous step.
  2. Add the --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>" parameter. The DNS label can be set either when the ingress controller is first deployed, or it can be configured later.
DNS_LABEL="demo-aks-ingress"
NAMESPACE="ingress-basic"
STATIC_IP=<STATIC_IP>

helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
  --set controller.service.loadBalancerIP=$STATIC_IP

For more information, see Use a static public IP address and DNS label with the AKS load balancer.

Use a dynamic IP address

When the ingress controller is created, an Azure public IP address is created for the ingress controller. This public IP address is static for the life-span of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create another ingress controller, a new public IP address is assigned.

To get the public IP address, use the kubectl get service command.

kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller

The example output shows the details about the ingress controller:

NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE   SELECTOR
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.0.74.133   EXTERNAL_IP     80:32486/TCP,443:30953/TCP   44s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

If you're using a custom domain, you'll need to add an A record to your DNS zone. Otherwise, you'll need to configure the public IP address with a fully qualified domain name (FQDN).

Add an A record to your DNS zone

Add an A record to your DNS zone with the external IP address of the NGINX service using az network dns record-set a add-record.

az network dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name MY_CUSTOM_DOMAIN \
    --record-set-name "*" \
    --ipv4-address MY_EXTERNAL_IP

Configure an FQDN for the ingress controller

Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain. Your FQDN will be of the form <CUSTOM LABEL>.<AZURE REGION NAME>.cloudapp.azure.com.

Method 1: Set the DNS label using the Azure CLI

This sample is for a Bash shell.

# Public IP address of your ingress controller
IP="MY_EXTERNAL_IP"

# Name to associate with public IP address
DNSNAME="demo-aks-ingress"

# Get the resource-id of the public ip
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)

# Update public ip address with DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

# Display the FQDN
az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv

Method 2: Set the DNS label using Helm chart settings

You can pass an annotation setting to your Helm chart configuration by using the --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name" parameter. You can set this parameter either when the ingress controller is first deployed, or you can configure it later. The following example shows how to update this setting after the controller has been deployed.

DNS_LABEL="demo-aks-ingress"
NAMESPACE="ingress-basic"

helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL

Install cert-manager

The NGINX ingress controller supports TLS termination. There are several ways to retrieve and configure certificates for HTTPS. This article demonstrates using cert-manager, which provides automatic Let's Encrypt certificate generation and management functionality.

To install the cert-manager controller:

# Set variable for ACR location to use for pulling images
ACR_URL=<REGISTRY_URL>

# Label the ingress-basic namespace to disable resource validation
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace ingress-basic \
  --version $CERT_MANAGER_TAG \
  --set installCRDs=true \
  --set nodeSelector."kubernetes\.io/os"=linux \
  --set image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CONTROLLER \
  --set image.tag=$CERT_MANAGER_TAG \
  --set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \
  --set webhook.image.tag=$CERT_MANAGER_TAG \
  --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
  --set cainjector.image.tag=$CERT_MANAGER_TAG

For more information on cert-manager configuration, see the cert-manager project.

Create a CA cluster issuer

Before certificates can be issued, cert-manager requires an Issuer or ClusterIssuer resource. These Kubernetes resources are identical in functionality; however, an Issuer works in a single namespace, while a ClusterIssuer works across all namespaces. For more information, see the cert-manager issuer documentation.

Create a cluster issuer, such as cluster-issuer.yaml, using the following example manifest. Update the email address with a valid address from your organization:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux

To create the issuer, use the kubectl apply command.

kubectl apply -f cluster-issuer.yaml
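
Optionally, you can confirm that the issuer has registered with the ACME server before requesting certificates. The READY column should report True.

kubectl get clusterissuer letsencrypt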

Update your ingress routes

You'll need to update your ingress routes to handle traffic to your FQDN or custom domain.

In the following example, traffic to the address hello-world-ingress.MY_CUSTOM_DOMAIN is routed to the aks-helloworld-one service. Traffic to the address hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two is routed to the aks-helloworld-two service. Traffic to hello-world-ingress.MY_CUSTOM_DOMAIN/static is routed to the service named aks-helloworld-one for static assets.

Note

If you configured an FQDN for the ingress controller IP address instead of a custom domain, use the FQDN instead of hello-world-ingress.MY_CUSTOM_DOMAIN. For example, if your FQDN is demo-aks-ingress.eastus.cloudapp.azure.com, replace hello-world-ingress.MY_CUSTOM_DOMAIN with demo-aks-ingress.eastus.cloudapp.azure.com in hello-world-ingress.yaml.
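
If you'd rather script that substitution, something like the following works with GNU sed (the FQDN shown is hypothetical; on macOS, use sed -i '' instead of sed -i):

sed -i 's/hello-world-ingress.MY_CUSTOM_DOMAIN/demo-aks-ingress.eastus.cloudapp.azure.com/g' hello-world-ingress.yaml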

Create or update the hello-world-ingress.yaml file using the following example YAML. Update spec.tls.hosts and spec.rules.host to the DNS name you created in a previous step.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port: 
              number: 80

Update the ingress resource using the kubectl apply command.

kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic

Verify a certificate object has been created

Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see cert-manager certificates. cert-manager automatically creates a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the ingress-shim documentation.

To verify that the certificate was created successfully, use the kubectl get certificate --namespace ingress-basic command and verify that READY is True. This may take several minutes.

kubectl get certificate --namespace ingress-basic

The example output below shows the certificate's status:

NAME         READY   SECRET       AGE
tls-secret   True    tls-secret   11m
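
If READY stays False for more than a few minutes, the following optional commands are a useful way to inspect the issuance chain that cert-manager creates:

# Inspect the certificate and any in-flight ACME resources
kubectl describe certificate tls-secret --namespace ingress-basic
kubectl get certificaterequest,order,challenge --namespace ingress-basic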

Test the ingress configuration

Open a web browser to hello-world-ingress.MY_CUSTOM_DOMAIN or the FQDN of your Kubernetes ingress controller. Notice that you're redirected to HTTPS, the certificate is trusted, and the demo application is shown in the web browser. Add the /hello-world-two path and notice that the second demo application with the custom title is shown.
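
You can also test from the command line. For example, the following requests (using the hypothetical host from earlier) should show the redirect from HTTP to HTTPS and a successful response over the trusted certificate:

# Follow the redirect from HTTP to HTTPS and print the response headers
curl -IL http://hello-world-ingress.MY_CUSTOM_DOMAIN

# Request the second demo application directly over HTTPS
curl -IL https://hello-world-ingress.MY_CUSTOM_DOMAIN/hello-world-two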

Clean up resources

This article used Helm to install the ingress components, certificates, and sample apps. When you deploy a Helm chart, many Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, you can either delete the entire sample namespace or delete the individual resources.

Delete the sample namespace and all resources

To delete the entire sample namespace, use the kubectl delete command and specify your namespace name. All the resources in the namespace are deleted.

kubectl delete namespace ingress-basic

Delete resources individually

Alternatively, a more granular approach is to delete the individual resources created. First, remove the cluster issuer resources:

kubectl delete -f cluster-issuer.yaml --namespace ingress-basic

List the Helm releases with the helm list command. Look for releases named nginx-ingress and cert-manager, as shown in the following example output:

$ helm list --namespace ingress-basic

NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
cert-manager            ingress-basic   1               2020-01-15 10:23:36.515514 -0600 CST    deployed        cert-manager-v1.8.0     v1.8.0
nginx-ingress           ingress-basic   1               2020-01-15 10:09:45.982693 -0600 CST    deployed        ingress-nginx-4.1.0     1.2.0

Uninstall the releases with the helm uninstall command. The following example uninstalls the NGINX ingress and cert-manager deployments.

$ helm uninstall cert-manager nginx-ingress --namespace ingress-basic

release "cert-manager" uninstalled
release "nginx-ingress" uninstalled

Next, remove the two sample applications:

kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic
kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic

Remove the ingress route that directed traffic to the sample apps:

kubectl delete -f hello-world-ingress.yaml --namespace ingress-basic

Finally, you can delete the namespace itself. Use the kubectl delete command and specify your namespace name:

kubectl delete namespace ingress-basic

Next steps

This article used several components that are external to AKS. To learn more about these components, see the following project pages:

You can also: