Use Cluster Connect to connect to Azure Arc-enabled Kubernetes clusters

With Cluster Connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall.

Access to the apiserver of the Azure Arc-enabled Kubernetes cluster enables the following scenarios:

  • Interactive debugging and troubleshooting.
  • Cluster access for Azure services, enabling custom locations and other resources created on top of the cluster.

A conceptual overview of this feature is available in Cluster connect - Azure Arc-enabled Kubernetes.

Prerequisites

  • An Azure account with an active subscription. Create an account for free.

  • Install or update Azure CLI to version >= 2.16.0.

  • Install the connectedk8s Azure CLI extension of version >= 1.2.5:

    az extension add --name connectedk8s
    

    If you've already installed the connectedk8s extension, update the extension to the latest version:

    az extension update --name connectedk8s
    
  • An existing Azure Arc-enabled Kubernetes connected cluster.

  • In addition to the endpoints required for connecting a Kubernetes cluster to Azure Arc, enable outbound access to the following endpoints:

    Endpoint                                                                    Port
    *.servicebus.windows.net                                                    443
    guestnotificationservice.azure.com, *.guestnotificationservice.azure.com    443
  • Replace the placeholders and run the following commands to set the environment variables used in this document:

    CLUSTER_NAME=<cluster-name>
    RESOURCE_GROUP=<resource-group-name>
    ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
    

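As a quick sanity check, you can probe the outbound endpoints above from the machine or node in question. This is a minimal sketch, not part of the official tooling; a wildcard entry such as *.servicebus.windows.net can only be tested against a concrete hostname, so substitute one for your region.

```shell
#!/usr/bin/env bash
# Probe TCP reachability of an endpoint on a given port.
# Prints "reachable" or "BLOCKED" for the host/port pair.
check_endpoint() {
  local host=$1 port=$2
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable: ${host}:${port}"
  else
    echo "BLOCKED:   ${host}:${port}"
  fi
}

# Endpoint from the table above; wildcard entries need a concrete hostname.
check_endpoint guestnotificationservice.azure.com 443
```

If an endpoint reports BLOCKED, review the firewall or proxy rules before continuing.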
Enable Cluster Connect feature

You can enable Cluster Connect on any Azure Arc-enabled Kubernetes cluster. Run the following command on a machine where the kubeconfig file points to the cluster of interest:

az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $RESOURCE_GROUP

Azure Active Directory authentication option

  1. Get the objectId associated with your Azure AD entity.

    • For an Azure AD user account:

      AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query objectId -o tsv)
      
    • For an Azure AD application:

      AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query objectId -o tsv)
      
  2. Authorize the entity with appropriate permissions.

    • If you are using Kubernetes native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, you can create a binding mapped to the Azure AD entity (service principal or user) that needs to access this cluster. Run the following with the kubeconfig file pointing to the apiserver of your cluster for direct access. Example:

      kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
      
    • If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:

      az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
      

Service account token authentication option

  1. With the kubeconfig file pointing to the apiserver of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):

    kubectl create serviceaccount admin-user
    
  2. Create ClusterRoleBinding or RoleBinding to grant this service account the appropriate permissions on the cluster. Example:

    kubectl create clusterrolebinding admin-user-binding --clusterrole cluster-admin --serviceaccount default:admin-user
    
  3. Get the service account's token using the following commands:

    SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')
    
    TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
    
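On Kubernetes 1.24 and later, a token secret is no longer created automatically for new service accounts, so the secrets[0].name lookup above may return nothing. As a hedged alternative on such clusters, assuming the TokenRequest API is available, you can request a short-lived token directly:

```shell
# Kubernetes 1.24+: request a time-bound token for the service account
# directly, instead of reading an auto-created secret. The 24h duration
# shown here is illustrative; pick one that fits your needs.
TOKEN=$(kubectl create token admin-user --duration 24h)
```

The resulting TOKEN value can be used in the proxy command below exactly like the secret-based token.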

Access your cluster

  1. Set up the Cluster Connect kubeconfig needed to access your cluster, according to the authentication option used:

    • If you're using the Azure Active Directory authentication option, first sign in to Azure CLI as the Azure AD entity of interest. Then get the Cluster Connect kubeconfig needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):

      az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
      
    • If using the service account authentication option, get the Cluster Connect kubeconfig needed to communicate with the cluster from anywhere:

      az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN
      
  2. Use kubectl to send requests to the cluster:

    kubectl get pods
    

You should now see a response from the cluster containing the list of all pods under the default namespace.

Known limitations

When making requests to the Kubernetes cluster, if the Azure AD entity used is a member of more than 200 groups, the following error is observed. This is a known limitation:

You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.

To get past this error:

  1. Create a service principal, which is less likely to be a member of more than 200 groups.
  2. Sign in to Azure CLI with the service principal before running the az connectedk8s proxy command.
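The steps above can be sketched as follows. The service principal name arc-proxy-sp and the role chosen are illustrative assumptions; substitute values appropriate for your environment, and replace the <appId>, <password>, and <tenant> placeholders with the values returned when the service principal is created.

```shell
# Create a service principal (the name "arc-proxy-sp" is an assumption).
az ad sp create-for-rbac --name arc-proxy-sp

# Grant the service principal access to the connected cluster; the role
# shown is one example, as in the Azure RBAC section above.
az role assignment create --role "Azure Arc Kubernetes Viewer" \
  --assignee <appId> --scope $ARM_ID_CLUSTER

# Sign in as the service principal, then start the proxy.
az login --service-principal -u <appId> -p <password> --tenant <tenant>
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
```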

Next steps