Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting

Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. The AKS nodes are Linux VMs, so you can access them using SSH. For security purposes, the AKS nodes are not exposed to the internet.

This article shows you how to create an SSH connection with an AKS node using its private IP address.

Before you begin

This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart using the Azure CLI or using the Azure portal.

You also need the Azure CLI version 2.0.59 or later installed and configured. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

Add your public SSH key

By default, SSH keys are generated when you create an AKS cluster. If you did not specify your own SSH keys when you created your AKS cluster, add your public SSH keys to the AKS nodes.

To add your SSH key to an AKS node, complete the following steps:

  1. Get the resource group name for your AKS cluster resources using the az aks show command. Provide your own resource group and AKS cluster name:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
  2. List the VMs in the AKS cluster resource group using the az vm list command. These VMs are your AKS nodes:

    az vm list --resource-group MC_myResourceGroup_myAKSCluster_eastus -o table

    The following example output shows the AKS nodes:

    Name                      ResourceGroup                           Location
    ------------------------  --------------------------------------  ----------
    aks-nodepool1-79590246-0  MC_myResourceGroup_myAKSCluster_eastus  eastus
  3. To add your SSH keys to the node, use the az vm user update command. Provide the resource group name and then one of the AKS nodes obtained in the previous step. By default, the username for the AKS nodes is azureuser. Provide the location of your own SSH public key, such as ~/.ssh/id_rsa.pub, or paste the contents of your SSH public key:

    az vm user update \
      --resource-group MC_myResourceGroup_myAKSCluster_eastus \
      --name aks-nodepool1-79590246-0 \
      --username azureuser \
      --ssh-key-value ~/.ssh/id_rsa.pub
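The steps above can also be combined into one script. The following sketch is a hypothetical helper (the function name and default key path are illustrative, not part of AKS) that resolves the node resource group, lists the node VMs, and pushes your public key to each one. It assumes the Azure CLI is installed and you are logged in:

```shell
#!/bin/sh
# Hypothetical helper: distribute a public SSH key to every node in an AKS
# cluster. The function name and default key path are assumptions for
# illustration; the az commands are the ones shown in the steps above.
add_ssh_key_to_nodes() {
    resource_group=$1                  # e.g. myResourceGroup
    cluster_name=$2                    # e.g. myAKSCluster
    pubkey=${3:-$HOME/.ssh/id_rsa.pub} # public key to install

    # Resolve the MC_* node resource group from the cluster.
    node_rg=$(az aks show --resource-group "$resource_group" \
        --name "$cluster_name" --query nodeResourceGroup -o tsv)

    # Add the key to each node VM under the default azureuser account.
    for node in $(az vm list --resource-group "$node_rg" --query "[].name" -o tsv); do
        az vm user update \
            --resource-group "$node_rg" \
            --name "$node" \
            --username azureuser \
            --ssh-key-value "$pubkey"
    done
}
```

For example, `add_ssh_key_to_nodes myResourceGroup myAKSCluster` installs ~/.ssh/id_rsa.pub on every node, so you can later connect to whichever node needs attention.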

Get the AKS node address

The AKS nodes are not publicly exposed to the internet. To SSH to the AKS nodes, you use the private IP address. In the next step, you create a helper pod in your AKS cluster that lets you SSH to this private IP address of the node.

View the private IP address of an AKS cluster node using the az vm list-ip-addresses command. Provide your own AKS cluster resource group name obtained in the previous az aks show step:

az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_eastus -o table

The following example output shows the private IP addresses of the AKS nodes:

VirtualMachine            PrivateIPAddresses
------------------------  --------------------
aks-nodepool1-79590246-0  10.240.0.4
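To reuse the address in later commands, you can capture it in a variable. The sketch below is a hypothetical helper (the function name is an assumption; the JMESPath query follows the JSON shape returned by az vm list-ip-addresses) that prints the first private IP address of a given node:

```shell
#!/bin/sh
# Hypothetical helper: print the first private IP address of a node VM.
# The function name is illustrative; the query matches the JSON returned
# by `az vm list-ip-addresses`.
node_private_ip() {
    node_rg=$1    # e.g. MC_myResourceGroup_myAKSCluster_eastus
    node_name=$2  # e.g. aks-nodepool1-79590246-0

    az vm list-ip-addresses \
        --resource-group "$node_rg" \
        --name "$node_name" \
        --query "[0].virtualMachine.network.privateIpAddresses[0]" -o tsv
}
```

You could then run `NODE_IP=$(node_private_ip MC_myResourceGroup_myAKSCluster_eastus aks-nodepool1-79590246-0)` and use $NODE_IP when creating the SSH connection in the next section.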

Create the SSH connection

To create an SSH connection to an AKS node, you run a helper pod in your AKS cluster. This helper pod provides you with SSH access into the cluster and then additional SSH node access. To create and use this helper pod, complete the following steps:

  1. Run a Debian container image and attach a terminal session to it. You can use this container to create an SSH session with any node in the AKS cluster:

    kubectl run -it --rm aks-ssh --image=debian
  2. The base Debian image doesn't include SSH components. Once the terminal session is connected to the container, install an SSH client using apt-get as follows:

    apt-get update && apt-get install openssh-client -y
  3. In a new terminal window, not connected to your container, list the pods on your AKS cluster using the kubectl get pods command. The pod created in the previous step starts with the name aks-ssh, as shown in the following example:

    $ kubectl get pods
    NAME                       READY     STATUS    RESTARTS   AGE
    aks-ssh-554b746bcf-kbwvf   1/1       Running   0          1m
  4. In the first step of this article, you added your public SSH key to the AKS node. Now, copy your private SSH key into the pod. This private key is used to create the SSH connection to the AKS nodes.

    Provide your own aks-ssh pod name obtained in the previous step. If needed, change ~/.ssh/id_rsa to the location of your private SSH key:

    kubectl cp ~/.ssh/id_rsa aks-ssh-554b746bcf-kbwvf:/id_rsa
  5. Back in the terminal session to your container, update the permissions on the copied id_rsa private SSH key so that it is user read-only:

    chmod 0600 id_rsa
  6. Now create an SSH connection to your AKS node. Again, the default username for AKS nodes is azureuser. Accept the prompt to continue with the connection, as the SSH key is trusted for the first time. You are then provided with the bash prompt of your AKS node:

    $ ssh -i id_rsa azureuser@
    ECDSA key fingerprint is SHA256:A6rnRkfpG21TaZ8XmQCCgdi9G/MYIMc+gFAuY9RUY70.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.15.0-1018-azure x86_64)
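The helper-pod steps can also be scripted. The sketch below is a hypothetical wrapper (the function name is an assumption) that copies your private key into the pod and then uses kubectl exec, rather than the attached terminal session, to open the SSH connection from inside the cluster. It assumes the pod is already running with an SSH client installed, as in steps 1 and 2 above:

```shell
#!/bin/sh
# Hypothetical wrapper around the helper-pod steps. Assumes the aks-ssh
# pod is already running and has openssh-client installed; the function
# name is illustrative, not an AKS or kubectl feature.
ssh_via_helper_pod() {
    pod=$1                    # e.g. aks-ssh-554b746bcf-kbwvf
    node_ip=$2                # private IP from az vm list-ip-addresses
    key=${3:-$HOME/.ssh/id_rsa}

    # Copy the private key into the pod, lock down its permissions,
    # and open an interactive SSH session to the node.
    kubectl cp "$key" "$pod:/id_rsa"
    kubectl exec -it "$pod" -- \
        sh -c "chmod 0600 /id_rsa && ssh -i /id_rsa azureuser@$node_ip"
}
```

For example, `ssh_via_helper_pod aks-ssh-554b746bcf-kbwvf 10.240.0.4` drops you at the node's bash prompt in one step.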

Remove SSH access

When done, exit the SSH session and then exit the interactive container session. When this container session closes, the pod used for SSH access from within the AKS cluster is deleted.
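Because the pod was started with --rm, this cleanup normally happens automatically. If the terminal was disconnected rather than exited cleanly, the workload can linger. The following sketch (the function name is an assumption) removes it by the aks-ssh name used earlier; the deployment form covers older kubectl versions, where kubectl run created a Deployment rather than a bare pod:

```shell
#!/bin/sh
# Hypothetical cleanup helper for a leftover aks-ssh workload. On current
# kubectl versions `kubectl run` creates a bare pod; on older versions it
# created a Deployment, so both forms are attempted.
cleanup_helper_pod() {
    kubectl delete pod aks-ssh --ignore-not-found
    kubectl delete deployment aks-ssh --ignore-not-found
}
```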

Next steps

If you need additional troubleshooting data, you can view the kubelet logs or view the Kubernetes master node logs.