Integrate Azure Active Directory with Azure Arc-enabled Kubernetes clusters

Kubernetes ClusterRoleBinding and RoleBinding object types natively define authorization in Kubernetes. With this feature, you can use Azure Active Directory (Azure AD) and Azure role assignments to control authorization checks on the cluster. This means you can use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects such as deployments, pods, and services.

A conceptual overview of this feature is available in the Azure RBAC on Azure Arc-enabled Kubernetes article.

Important

Azure Arc-enabled Kubernetes preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Arc-enabled Kubernetes previews are partially covered by customer support on a best-effort basis.

Prerequisites

  • Install or upgrade the Azure CLI to version 2.16.0 or later.

  • Install the connectedk8s Azure CLI extension, version 1.1.0 or later:

    az extension add --name connectedk8s
    

    If the connectedk8s extension is already installed, you can update it to the latest version by using the following command:

    az extension update --name connectedk8s
    
  • Connect an existing Azure Arc-enabled Kubernetes cluster. (If you haven't connected one yet, see the example command after the following note.)

Note

You can't set up this feature for managed Kubernetes offerings from cloud providers, such as Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE), where the user doesn't have access to the cluster's API server. For Azure Kubernetes Service (AKS) clusters, this feature is available natively and doesn't require the AKS cluster to be connected to Azure Arc.
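
If you haven't connected a cluster yet, a minimal sketch of the connect command (assuming the resource group already exists and your kubeconfig points at the target cluster):

az connectedk8s connect --name <clusterName> --resource-group <resourceGroupName>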

Set up Azure AD applications

Create a server application

  1. Create a new Azure AD application and get its appId value. This value is used in later steps as serverApplicationId.

    CLUSTERNAME="<clusterName>"
    SERVER_APP_ID=$(az ad app create --display-name "${CLUSTERNAME}Server" --identifier-uris "https://${CLUSTERNAME}Server" --query appId -o tsv)
    echo $SERVER_APP_ID
    
  2. Update the application's group membership claims:

    az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
    
  3. Create a service principal and get its password field value. This value is required later as serverApplicationSecret when you're enabling this feature on the cluster.

    az ad sp create --id "${SERVER_APP_ID}"
    SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv)
    
  4. Grant "Sign in and read user profile" API permissions to the application:

    az ad app permission add --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000 --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
    az ad app permission grant --id "${SERVER_APP_ID}" --api 00000003-0000-0000-c000-000000000000
    

    Note

    An Azure tenant administrator has to run this step.

    For production use of this feature, we recommend creating a separate server application for every cluster.

Create a client application

  1. Create a new Azure AD application and get its appId value. This value is used in later steps as clientApplicationId.

    CLIENT_APP_ID=$(az ad app create --display-name "${CLUSTERNAME}Client" --native-app --reply-urls "https://${CLUSTERNAME}Client" --query appId -o tsv)
    echo $CLIENT_APP_ID
    
  2. Create a service principal for this client application:

    az ad sp create --id "${CLIENT_APP_ID}"
    
  3. Get the oAuthPermissionId value for the server application:

    az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv
    
  4. Grant the required permissions for the client application:

    az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions <oAuthPermissionId>=Scope
    az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"
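
To avoid copying the oAuthPermissionId value by hand, a sketch that captures it in a shell variable and reuses it (the OAUTH_PERMISSION_ID name is illustrative):

OAUTH_PERMISSION_ID=$(az ad app show --id "${SERVER_APP_ID}" --query "oauth2Permissions[0].id" -o tsv)
az ad app permission add --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}" --api-permissions "${OAUTH_PERMISSION_ID}=Scope"
az ad app permission grant --id "${CLIENT_APP_ID}" --api "${SERVER_APP_ID}"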
    

Create a role assignment for the server application

The server application needs the Microsoft.Authorization/*/read permission to check whether the user making the request is authorized for the Kubernetes objects that are part of the request.

  1. Create a file named accessCheck.json with the following contents:

    {
      "Name": "Read authorization",
      "IsCustom": true,
      "Description": "Read authorization",
      "Actions": ["Microsoft.Authorization/*/read"],
      "NotActions": [],
      "DataActions": [],
      "NotDataActions": [],
      "AssignableScopes": [
        "/subscriptions/<subscription-id>"
      ]
    }
    

    Replace <subscription-id> with the actual subscription ID.

  2. Run the following command to create the new custom role:

    ROLE_ID=$(az role definition create --role-definition ./accessCheck.json --query id -o tsv)
    
  3. Create a role assignment that uses the custom role you created and sets the server application as the assignee:

    az role assignment create --role "${ROLE_ID}" --assignee "${SERVER_APP_ID}" --scope /subscriptions/<subscription-id>
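
If you'd rather not paste the subscription ID by hand, a sketch that derives it from the current Azure CLI context (the SUBSCRIPTION_ID name is illustrative):

SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az role assignment create --role "${ROLE_ID}" --assignee "${SERVER_APP_ID}" --scope "/subscriptions/${SUBSCRIPTION_ID}"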
    

Enable Azure RBAC on the cluster

Enable Azure role-based access control (RBAC) on your Azure Arc-enabled Kubernetes cluster by running the following command:

az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}"

Note

Before you run the preceding command, ensure that the kubeconfig file on the machine points to the cluster on which you'll enable the Azure RBAC feature.
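
For example, a quick way to check which context is active:

kubectl config current-context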

Use the --skip-azure-rbac-list parameter with the preceding command to specify a comma-separated list of usernames, emails, and OpenID connections that should undergo authorization checks through native Kubernetes ClusterRoleBinding and RoleBinding objects instead of Azure RBAC.
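
For example, a sketch of the command with a skip list (the user addresses are placeholders):

az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}" --skip-azure-rbac-list "user1@domain.com,user2@domain.com"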

Generic cluster where no reconciler is running on the apiserver specification

  1. SSH into every master node of the cluster and take the following steps:

    1. Open the apiserver manifest in edit mode:

      sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
      
    2. Add the following specification under volumes:

      - name: azure-rbac
        secret:
          secretName: azure-arc-guard-manifests
      
    3. Add the following specification under volumeMounts:

      - mountPath: /etc/guard
        name: azure-rbac
        readOnly: true
      
    4. Add the following apiserver arguments:

      - --authentication-token-webhook-config-file=/etc/guard/guard-authn-webhook.yaml
      - --authentication-token-webhook-cache-ttl=5m0s
      - --authorization-webhook-cache-authorized-ttl=5m0s
      - --authorization-webhook-config-file=/etc/guard/guard-authz-webhook.yaml
      - --authorization-webhook-version=v1
      - --authorization-mode=Node,Webhook,RBAC
      

      If the Kubernetes cluster is version 1.19.0 or later, you also need to set the following apiserver argument:

      - --authentication-token-webhook-version=v1
      
    5. Save and close the editor to update the apiserver pod.
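
To confirm that the apiserver pod picked up the new configuration, a quick check (assuming a kubeadm-style static pod with the standard component label):

kubectl get pods -n kube-system -l component=kube-apiserver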

Cluster created by using Cluster API

  1. Copy the guard secret that contains authentication and authorization webhook configuration files from the workload cluster onto your machine:

    kubectl get secret azure-arc-guard-manifests -n kube-system -o yaml > azure-arc-guard-manifests.yaml
    
  2. Change the namespace field in the azure-arc-guard-manifests.yaml file to the namespace within the management cluster where you're applying the custom resources for creation of workload clusters.

  3. Apply this manifest:

    kubectl apply -f azure-arc-guard-manifests.yaml
    
  4. Edit the KubeadmControlPlane object by running kubectl edit kcp <clustername>-control-plane:

    1. Add the following snippet under files:

      - contentFrom:
          secret:
            key: guard-authn-webhook.yaml
            name: azure-arc-guard-manifests
        owner: root:root
        path: /etc/kubernetes/guard-authn-webhook.yaml
        permissions: "0644"
      - contentFrom:
          secret:
            key: guard-authz-webhook.yaml
            name: azure-arc-guard-manifests
        owner: root:root
        path: /etc/kubernetes/guard-authz-webhook.yaml
        permissions: "0644"
      
    2. Add the following snippet under apiServer > extraVolumes:

      - hostPath: /etc/kubernetes/guard-authn-webhook.yaml
        mountPath: /etc/guard/guard-authn-webhook.yaml
        name: guard-authn
        readOnly: true
      - hostPath: /etc/kubernetes/guard-authz-webhook.yaml
        mountPath: /etc/guard/guard-authz-webhook.yaml
        name: guard-authz
        readOnly: true
      
    3. Add the following snippet under apiServer > extraArgs:

      authentication-token-webhook-cache-ttl: 5m0s
      authentication-token-webhook-config-file: /etc/guard/guard-authn-webhook.yaml
      authentication-token-webhook-version: v1
      authorization-mode: Node,Webhook,RBAC
      authorization-webhook-cache-authorized-ttl: 5m0s
      authorization-webhook-config-file: /etc/guard/guard-authz-webhook.yaml
      authorization-webhook-version: v1
      
    4. Save and close the editor to update the KubeadmControlPlane object. Wait for these changes to appear on the workload cluster.
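
To watch the control-plane machines roll out with the new configuration, a sketch to run against the management cluster (assuming Cluster API's usual resource names):

kubectl get kubeadmcontrolplane,machines -A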

Create role assignments for users to access the cluster

Owners of the Azure Arc-enabled Kubernetes resource can use either built-in roles or custom roles to grant other users access to the Kubernetes cluster.

Built-in roles

  • Azure Arc Kubernetes Viewer: Allows read-only access to see most objects in a namespace. This role doesn't allow viewing secrets, because read permission on secrets would enable access to ServiceAccount credentials in the namespace. Those credentials would in turn allow API access through that ServiceAccount (a form of privilege escalation).
  • Azure Arc Kubernetes Writer: Allows read/write access to most objects in a namespace. This role doesn't allow viewing or modifying roles or role bindings. However, it allows accessing secrets and running pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace.
  • Azure Arc Kubernetes Admin: Allows admin access. It's intended to be granted within a namespace through RoleBinding. Used in a RoleBinding, it allows read/write access to most resources in the namespace, including the ability to create roles and role bindings within the namespace. This role doesn't allow write access to resource quota or to the namespace itself.
  • Azure Arc Kubernetes Cluster Admin: Allows superuser access to perform any action on any resource. Used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. Used in a RoleBinding, it gives full control over every resource in the role binding's namespace, including the namespace itself.

You can create role assignments scoped to the Azure Arc-enabled Kubernetes cluster in the Azure portal, on the Access control (IAM) pane of the cluster resource. You can also use Azure CLI commands, as shown here:
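
The following commands assume that $ARM_ID holds the Azure Resource Manager ID of the connected cluster. A sketch of setting it first (using the connectedk8s extension from the prerequisites):

ARM_ID=$(az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv)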

az role assignment create --role "Azure Arc Kubernetes Cluster Admin" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID

In these commands, <AZURE-AD-ENTITY-ID> can be a username (for example, testuser@mytenant.onmicrosoft.com) or the appId value of a service principal.

Here's another example of creating a role assignment scoped to a specific namespace within the cluster:

az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>

Note

You can create role assignments scoped to the cluster by using either the Azure portal or the Azure CLI, but you can create role assignments scoped to namespaces only by using the CLI.

Custom roles

You can choose to create your own role definition for use in role assignments.

Walk through the following example of a role definition that allows a user to only read deployments. For more information, see the full list of data actions that you can use to construct a role definition.

Copy the following JSON object into a file called custom-role.json. Replace the <subscription-id> placeholder with the actual subscription ID. The custom role uses one of the data actions and lets you view all deployments in the scope (cluster or namespace) where the role assignment is created.

{
    "Name": "Arc Deployment Viewer",
    "Description": "Lets you view all deployments in cluster/namespace.",
    "Actions": [],
    "NotActions": [],
    "DataActions": [
        "Microsoft.Kubernetes/connectedClusters/apps/deployments/read"
    ],
    "NotDataActions": [],
    "assignableScopes": [
        "/subscriptions/<subscription-id>"
    ]
}

  1. Create the role definition by running the following command from the folder where you saved custom-role.json:

    az role definition create --role-definition @custom-role.json
    
  2. Create a role assignment by using this custom role definition:

    az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name>
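
To confirm that the definition and assignment took effect, a sketch:

az role definition list --custom-role-only true --query "[?roleName=='Arc Deployment Viewer'].roleName" -o tsv
az role assignment list --scope $ARM_ID -o table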
    

Configure kubectl with user credentials

There are two ways to get the kubeconfig file that you need to access the cluster:

  • Use the Cluster Connect feature (az connectedk8s proxy) of the Azure Arc-enabled Kubernetes cluster.
  • Have the cluster admin share the kubeconfig file with each user.

If you're accessing the cluster by using the Cluster Connect feature

Run the following command to start the proxy process:

az connectedk8s proxy -n <clusterName> -g <resourceGroupName>

After the proxy process is running, you can open another tab in your console to start sending your requests to the cluster.
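
For example, in the new console tab:

kubectl get pods -A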

If the cluster admin shared the kubeconfig file with you

  1. Run the following command to set the credentials for the user:

    kubectl config set-credentials <testuser>@<mytenant.onmicrosoft.com> \
    --auth-provider=azure \
    --auth-provider-arg=environment=AzurePublicCloud \
    --auth-provider-arg=client-id=<clientApplicationId> \
    --auth-provider-arg=tenant-id=<tenantId> \
    --auth-provider-arg=apiserver-id=<serverApplicationId>
    
  2. Open the kubeconfig file that the admin shared with you. Under contexts, verify that the context associated with the cluster points to the user credentials that you created in the previous step.

  3. Add the config-mode setting under user > config:

    name: testuser@mytenant.onmicrosoft.com
    user:
        auth-provider:
            config:
                apiserver-id: $SERVER_APP_ID
                client-id: $CLIENT_APP_ID
                environment: AzurePublicCloud
                tenant-id: $TENANT_ID
                config-mode: "1"
            name: azure
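
If the context doesn't point at the new user credentials, a sketch of wiring it up manually (the context name is a placeholder):

kubectl config set-context <contextName> --cluster=<clusterName> --user=testuser@mytenant.onmicrosoft.com
kubectl config use-context <contextName>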
    

Send requests to the cluster

  1. Run any kubectl command. For example:

    • kubectl get nodes
    • kubectl get pods
  2. When you're prompted for browser-based authentication, open the device login URL (https://microsoft.com/devicelogin) in your web browser.

  3. Enter the code shown on your console: copy it from your terminal and paste it into the device authentication prompt.

  4. Enter the username (testuser@mytenant.onmicrosoft.com) and the associated password.

  5. If you see an error message like the following, it means you aren't authorized to access the requested resource:

    Error from server (Forbidden): nodes is forbidden: User "testuser@mytenant.onmicrosoft.com" cannot list resource "nodes" in API group "" at the cluster scope: User doesn't have access to the resource in Azure. Update role assignment to allow access.
    

    An administrator needs to create a new role assignment that grants this user access to the resource.
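
    For example, an administrator could grant a built-in role at cluster scope (reusing $ARM_ID from earlier; which role suffices depends on the resources the user needs):

    az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee testuser@mytenant.onmicrosoft.com --scope $ARM_ID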

Use Conditional Access with Azure AD

When you're integrating Azure AD with your Azure Arc-enabled Kubernetes cluster, you can also use Conditional Access to control access to your cluster.

Note

Azure AD Conditional Access is an Azure AD Premium capability.

To create an example Conditional Access policy to use with the cluster, complete the following steps:

  1. At the top of the Azure portal, search for and select Azure Active Directory.

  2. On the menu for Azure Active Directory on the left side, select Enterprise applications.

  3. On the menu for enterprise applications on the left side, select Conditional Access.

  4. On the menu for Conditional Access on the left side, select Policies > New policy.

    Screenshot that shows the button for adding a conditional access policy.

  5. Enter a name for the policy, such as arc-k8s-policy.

  6. Select Users and groups. Under Include, choose Select users and groups. Then choose the users and groups where you want to apply the policy. For this example, choose the same Azure AD group that has administrative access to your cluster.

    Screenshot that shows selecting users or groups to apply the Conditional Access policy.

  7. Select Cloud apps or actions. Under Include, choose Select apps. Then search for and select the server application that you created earlier.

    Screenshot that shows selecting a server application for applying the Conditional Access policy.

  8. Under Access controls, select Grant. Select Grant access > Require device to be marked as compliant.

    Screenshot that shows selecting to only allow compliant devices for the Conditional Access policy.

  9. Under Enable policy, select On > Create.

    Screenshot that shows enabling the Conditional Access policy.

Access the cluster again. For example, run the kubectl get nodes command to view nodes in the cluster:

kubectl get nodes

Follow the instructions to sign in again. An error message states that you're signed in successfully, but your admin requires the device requesting access to be managed by your Azure AD before you can access the resource. To find the Conditional Access policy that blocked access, follow these steps:

  1. In the Azure portal, go to Azure Active Directory.

  2. Select Enterprise applications. Then under Activity, select Sign-ins.

  3. An entry at the top shows Failed for Status and Success for Conditional Access. Select the entry, and then select Conditional Access in Details. Notice that your Conditional Access policy is listed.

    Screenshot that shows a failed sign-in entry due to the Conditional Access policy.

Configure just-in-time cluster access with Azure AD

Another option for cluster access control is to use Privileged Identity Management (PIM) for just-in-time requests.

Note

PIM is an Azure AD Premium capability that requires a Premium P2 SKU. For more on Azure AD SKUs, see the pricing guide.

To configure just-in-time access requests for your cluster, complete the following steps:

  1. At the top of the Azure portal, search for and select Azure Active Directory.

  2. Take note of the tenant ID. For the rest of these instructions, we'll refer to that ID as <tenant-id>.

    Screenshot that shows Azure Active Directory tenant details.

  3. On the menu for Azure Active Directory on the left side, under Manage, select Groups > New group.

    Screenshot that shows selections for creating a new group.

  4. Make sure that Security is selected for Group type. Enter a group name, such as myJITGroup. Under Azure AD Roles can be assigned to this group (Preview), select Yes. Finally, select Create.

    Screenshot that shows details for the new group.

  5. You're brought back to the Groups page. Select your newly created group and take note of the object ID. For the rest of these instructions, we'll refer to this ID as <object-id>.

    Screenshot that shows the object identifier for the created group.

  6. Back on the page for the group, on the menu under Activity on the left side, select Privileged Access (Preview). Then select Enable Privileged Access.

    Screenshot that shows selections for enabling privileged access.

  7. Select Add assignments to begin granting access.

    Screenshot that shows the button for adding active assignments.

  8. Select a role of Member, and select the users and groups to whom you want to grant cluster access. A group admin can modify these assignments at any time. When you're ready to move on, select Next.

    Screenshot that shows adding assignments.

  9. Choose an assignment type of Active, choose the desired duration, and provide a justification. When you're ready to proceed, select Assign. For more on assignment types, see Assign eligibility for a privileged access group (preview) in Privileged Identity Management.

    Screenshot that shows choosing properties for an assignment.

After you've made the assignments, verify that just-in-time access is working by accessing the cluster. For example, use the kubectl get nodes command to view nodes in the cluster:

kubectl get nodes

Note the authentication requirement and follow the steps to authenticate. If authentication is successful, you should see output similar to the following:

To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.

NAME      STATUS   ROLES    AGE      VERSION
node-1    Ready    agent    6m36s    v1.18.14
node-2    Ready    agent    6m42s    v1.18.14
node-3    Ready    agent    6m33s    v1.18.14

Next steps

Securely connect to the cluster by using Cluster Connect.