Quickstart: Connect an existing Kubernetes cluster to Azure Arc

In this quickstart, you'll learn the benefits of Azure Arc-enabled Kubernetes and how to connect an existing Kubernetes cluster to Azure Arc. For a conceptual look at connecting clusters to Azure Arc, see the Azure Arc-enabled Kubernetes Agent Architecture article.

If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites

  • Install or upgrade Azure CLI to version >= 2.16.0 and <= 2.29.0

  • Install the connectedk8s Azure CLI extension, version >= 1.2.0:

    az extension add --name connectedk8s
    
  • Log in to Azure CLI using the identity (user or service principal) that you want to use for connecting your cluster to Azure Arc.

    • The identity used must have at least 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (Microsoft.Kubernetes/connectedClusters).
    • The Kubernetes Cluster - Azure Arc Onboarding built-in role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources. (A sign-in and role-assignment sketch follows this list.)
  • An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:

    • Kubernetes in Docker (KIND)

    • Create a Kubernetes cluster using Docker for Mac or Windows

    • Self-managed Kubernetes cluster using Cluster API

    • If you want to connect an OpenShift cluster to Azure Arc, run the following command just once on your cluster before running az connectedk8s connect:

      oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
      

    Note

    The cluster needs to have at least one node of operating system and architecture type linux/amd64. Clusters with only linux/arm64 nodes aren't yet supported.

  • A kubeconfig file and context pointing to your cluster.
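
A minimal sketch of satisfying the identity and kubeconfig prerequisites above. The subscription, identity, and resource group values are placeholders, and the role assignment is only needed if your identity doesn't already have the required permissions:

    # Log in with the identity (user or service principal) that will onboard the cluster.
    az login
    az account set --subscription "<subscription-id>"

    # Optionally, grant that identity the least-privilege onboarding role.
    az role assignment create \
      --assignee "<user-or-service-principal-object-id>" \
      --role "Kubernetes Cluster - Azure Arc Onboarding" \
      --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

    # Confirm kubectl points at the cluster you intend to connect,
    # and that it has at least one linux/amd64 node.
    kubectl config current-context
    kubectl get nodes -o wide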

Meet network requirements

Important

Azure Arc agents require outbound connectivity to the following URLs over HTTPS (port 443) to function. For *.servicebus.windows.net, websockets must be enabled for outbound access through your firewall and proxy.

Endpoint (DNS) and purpose:

  • https://management.azure.com (for Azure Cloud), https://management.usgovcloudapi.net (for Azure US Government): Required for the agent to connect to Azure and register the cluster.
  • https://<region>.dp.kubernetesconfiguration.azure.com (for Azure Cloud), https://<region>.dp.kubernetesconfiguration.azure.us (for Azure US Government): Data plane endpoint for the agent to push status and fetch configuration information.
  • https://login.microsoftonline.com and login.windows.net (for Azure Cloud), https://login.microsoftonline.us (for Azure US Government): Required to fetch and update Azure Resource Manager tokens.
  • https://mcr.microsoft.com, https://*.data.mcr.microsoft.com: Required to pull container images for Azure Arc agents.
  • https://gbl.his.arc.azure.com (for Azure Cloud), https://gbl.his.arc.azure.us (for Azure US Government): Required to get the regional endpoint for pulling system-assigned Managed Identity certificates.
  • https://*.his.arc.azure.com (for Azure Cloud), https://usgv.his.arc.azure.us (for Azure US Government): Required to pull system-assigned Managed Identity certificates.
  • *.servicebus.windows.net, guestnotificationservice.azure.com, *.guestnotificationservice.azure.com, sts.windows.net: For Cluster Connect and for Custom Location based scenarios.
  • https://k8connecthelm.azureedge.net: az connectedk8s connect uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed to download the Helm client used to deploy the agent helm chart.
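
Before onboarding, you can spot-check a few of these endpoints from a machine on the same network as the cluster. A minimal sketch; the endpoint subset and timeout are illustrative:

    for url in \
      https://management.azure.com \
      https://login.microsoftonline.com \
      https://mcr.microsoft.com \
      https://gbl.his.arc.azure.com \
      https://k8connecthelm.azureedge.net; do
      # Any HTTP status code means the endpoint was reachable; 000 means the
      # connection itself failed (likely blocked by a firewall or proxy).
      code=$(curl -s -o /dev/null -w '%{http_code}' --connect-timeout 10 "$url")
      echo "$url -> $code"
    done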

1. Register providers for Azure Arc-enabled Kubernetes

  1. Enter the following commands:

    az provider register --namespace Microsoft.Kubernetes
    az provider register --namespace Microsoft.KubernetesConfiguration
    az provider register --namespace Microsoft.ExtendedLocation
    
  2. Monitor the registration process. Registration may take up to 10 minutes.

    az provider show -n Microsoft.Kubernetes -o table
    az provider show -n Microsoft.KubernetesConfiguration -o table
    az provider show -n Microsoft.ExtendedLocation -o table
    

    Once registration completes, you should see the RegistrationState for these namespaces change to Registered.
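
    If you prefer to wait in a script rather than re-running the show commands, a minimal polling sketch for one namespace (repeat for the other namespaces):

    # Poll until the Microsoft.Kubernetes provider reports Registered.
    while [ "$(az provider show -n Microsoft.Kubernetes --query registrationState -o tsv)" != "Registered" ]; do
      echo "Waiting for Microsoft.Kubernetes registration..."
      sleep 30
    done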

2. Create a resource group

Run the following command:

az group create --name AzureArcTest --location EastUS --output table

Output:

Location    Name
----------  ------------
eastus      AzureArcTest

3. Connect an existing Kubernetes cluster

Run the following command:

az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest

Note

If you are logged into Azure CLI using a service principal, an additional parameter must be set to enable the custom location feature on the cluster (a sketch follows the tip below).

Output:

Helm release deployment succeeded

    {
      "aadProfile": {
        "clientAppId": "",
        "serverAppId": "",
        "tenantId": ""
      },
      "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx",
      "agentVersion": null,
      "connectivityStatus": "Connecting",
      "distribution": "gke",
      "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1",
      "identity": {
        "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "type": "SystemAssigned"
      },
      "infrastructure": "gcp",
      "kubernetesVersion": null,
      "lastConnectivityTime": null,
      "location": "eastus",
      "managedIdentityCertificateExpirationTime": null,
      "name": "AzureArcTest1",
      "offering": null,
      "provisioningState": "Succeeded",
      "resourceGroup": "AzureArcTest",
      "tags": {},
      "totalCoreCount": null,
      "totalNodeCount": null,
      "type": "Microsoft.Kubernetes/connectedClusters"
    }

Tip

If you don't specify the location parameter, the command above creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either --location <region> or -l <region> when running the az connectedk8s connect command.
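
If you're connecting with a service principal and want the custom location feature mentioned in the note above, the connect command also needs the object ID of the Custom Locations app in your Azure AD tenant. A minimal sketch, assuming the --custom-locations-oid parameter available in recent versions of the connectedk8s extension; <custom-locations-oid> is a placeholder for that object ID:

az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --custom-locations-oid <custom-locations-oid>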

4a. Connect using an outbound proxy server

If your cluster is behind an outbound proxy server, Azure CLI and the Azure Arc-enabled Kubernetes agents need to route their requests via the outbound proxy server.

  1. Set the environment variables needed for Azure CLI to use the outbound proxy server:

    export HTTP_PROXY=<proxy-server-ip-address>:<port>
    export HTTPS_PROXY=<proxy-server-ip-address>:<port>
    export NO_PROXY=<cluster-apiserver-ip-address>:<port>
    
  2. Run the connect command with proxy parameters specified:

    az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file>
    

    Note

    • Some network requests, such as those involving in-cluster service-to-service communication, need to be separated from the traffic that is routed via the proxy server for outbound communication. Use the --proxy-skip-range parameter to specify CIDR ranges and endpoints as a comma-separated list so that communication from the agents to these endpoints doesn't go via the outbound proxy. At a minimum, specify the CIDR range of the services in the cluster. For example, if kubectl get svc -A returns services whose ClusterIP values all fall within the range 10.0.0.0/16, specify 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc as the value for --proxy-skip-range. (A sketch for finding the service CIDR follows these notes.)
    • --proxy-http, --proxy-https, and --proxy-skip-range are expected for most outbound proxy environments. --proxy-cert is only required if you need to inject trusted certificates expected by the proxy into the trusted certificate store of the agent pods.
    • The outbound proxy must be configured to allow websocket connections.
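
To help build the --proxy-skip-range value, you can list the ClusterIP addresses in use and infer the covering CIDR. A minimal sketch; the 10.0.0.0/16 range shown in the comment is only an example:

    # List ClusterIP values across all namespaces to infer the service CIDR.
    kubectl get svc -A -o jsonpath='{range .items[*]}{.spec.clusterIP}{"\n"}{end}' | sort -u

    # Then pass the covering CIDR plus the in-cluster DNS names, for example:
    #   --proxy-skip-range 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc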

5. Verify cluster connection

Run the following command:

az connectedk8s list --resource-group AzureArcTest --output table

Output:

Name           Location    ResourceGroup
-------------  ----------  ---------------
AzureArcTest1  eastus      AzureArcTest

Note

After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal.
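
While the metadata surfaces, you can check the connection state from the CLI. A small sketch using the connectivityStatus field shown in the connect output earlier; the value typically changes from Connecting to Connected once onboarding completes:

az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --query connectivityStatus --output tsv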

6. View Azure Arc agents for Kubernetes

Azure Arc-enabled Kubernetes deploys a few operators into the azure-arc namespace.

  1. View these deployments and pods using:

    kubectl get deployments,pods -n azure-arc
    
  2. Verify all pods are in a Running state. A quick check is sketched after the output.

    Output:

    
     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/cluster-metadata-operator   1/1     1            1           13d
     deployment.apps/clusterconnect-agent        1/1     1            1           13d
     deployment.apps/clusteridentityoperator     1/1     1            1           13d
     deployment.apps/config-agent                1/1     1            1           13d
     deployment.apps/controller-manager          1/1     1            1           13d
     deployment.apps/extension-manager           1/1     1            1           13d
     deployment.apps/flux-logs-agent             1/1     1            1           13d
     deployment.apps/kube-aad-proxy              1/1     1            1           13d
     deployment.apps/metrics-agent               1/1     1            1           13d
     deployment.apps/resource-sync-agent         1/1     1            1           13d
    
     NAME                                            READY   STATUS    RESTARTS   AGE
     pod/cluster-metadata-operator-9568b899c-2stjn   2/2     Running   0          13d
     pod/clusterconnect-agent-576758886d-vggmv       3/3     Running   0          13d
     pod/clusteridentityoperator-6f59466c87-mm96j    2/2     Running   0          13d
     pod/config-agent-7cbd6cb89f-9fdnt               2/2     Running   0          13d
     pod/controller-manager-df6d56db5-kxmfj          2/2     Running   0          13d
     pod/extension-manager-58c94c5b89-c6q72          2/2     Running   0          13d
     pod/flux-logs-agent-6db9687fcb-rmxww            1/1     Running   0          13d
     pod/kube-aad-proxy-67b87b9f55-bthqv             2/2     Running   0          13d
     pod/metrics-agent-575c565fd9-k5j2t              2/2     Running   0          13d
     pod/resource-sync-agent-6bbd8bcd86-x5bk5        2/2     Running   0          13d
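
Instead of scanning the list by hand, a minimal check that nothing in the namespace is stuck outside the Running phase:

    # Prints only pods in the azure-arc namespace that are not Running;
    # an empty result ("No resources found") means everything is healthy.
    kubectl get pods -n azure-arc --field-selector=status.phase!=Running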
     

For a conceptual overview of these agents, see the Azure Arc-enabled Kubernetes Agent Architecture article.

7. Clean up resources

You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, and any agents running on the cluster by using the following Azure CLI command:

az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest

Note

Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but does not remove any agents running on the cluster. Best practice is to delete the Azure Arc-enabled Kubernetes resource using az connectedk8s delete rather than the Azure portal.

Next steps

Advance to the next article to learn how to deploy configurations to your connected Kubernetes cluster using GitOps.