Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)

In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) are defined when you create an AKS cluster, which creates a system node pool. To support applications that have different compute or storage demands, you can create additional user node pools. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront. User node pools serve the primary purpose of hosting your application pods, although application pods can be scheduled on system node pools if you wish to have only one pool in your AKS cluster. For example, use additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.

Note

This feature enables finer control over how to create and manage multiple node pools. As a result, separate commands are required for create, update, and delete operations. Previously, cluster operations through az aks create or az aks update used the managedCluster API and were the only option to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the az aks nodepool command set to execute operations on an individual node pool.

This article shows you how to create and manage multiple node pools in an AKS cluster.

Before you begin

You need the Azure CLI version 2.2.0 or later installed and configured. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

Limitations

The following limitations apply when you create and manage AKS clusters that support multiple node pools:

  • See Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS).
  • You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster.
  • System pools must contain at least one node, and user node pools may contain zero or more nodes.
  • The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature is not supported with Basic SKU load balancers.
  • The AKS cluster must use virtual machine scale sets for the nodes.
  • The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, the length must be between 1 and 6 characters.
  • All node pools must reside in the same virtual network.
  • When creating multiple node pools at cluster create time, all Kubernetes versions used by node pools must match the version set for the control plane. This can be updated after the cluster has been provisioned by using per node pool operations.

Create an AKS cluster

Important

If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool.

To get started, create an AKS cluster with a single node pool. The following example uses the az group create command to create a resource group named myResourceGroup in the eastus region. An AKS cluster named myAKSCluster is then created using the az aks create command.

Note

The Basic load balancer SKU is not supported when using multiple node pools. By default, AKS clusters are created with the Standard load balancer SKU from the Azure CLI and Azure portal.

# Create a resource group in East US
az group create --name myResourceGroup --location eastus

# Create a basic AKS cluster with a two-node default node pool
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --vm-set-type VirtualMachineScaleSets \
    --node-count 2 \
    --generate-ssh-keys \
    --load-balancer-sku standard

It takes a few minutes to create the cluster.

Note

To ensure your cluster operates reliably, you should run at least two nodes in the default node pool, as essential system services run across this node pool.

When the cluster is ready, use the az aks get-credentials command to get the cluster credentials for use with kubectl:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
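
If you want to confirm that kubectl can reach the cluster before continuing, you can optionally list the nodes in the default node pool (the node names and versions in your output will differ from any shown later in this article):

kubectl get nodes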

Add a node pool

The cluster created in the previous step has a single node pool. Let's add a second node pool using the az aks nodepool add command. The following example creates a node pool named mynodepool that runs 3 nodes:

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3

Note

The name of a node pool must start with a lowercase letter and can only contain lowercase alphanumeric characters. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, the length must be between 1 and 6 characters.

To see the status of your node pools, use the az aks nodepool list command and specify your resource group and cluster name:

az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster

The following example output shows that mynodepool has been successfully created with three nodes in the node pool. When the AKS cluster was created in the previous step, a default nodepool1 was created with a node count of 2.

[
  {
    ...
    "count": 3,
    ...
    "name": "mynodepool",
    "orchestratorVersion": "1.15.7",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  },
  {
    ...
    "count": 2,
    ...
    "name": "nodepool1",
    "orchestratorVersion": "1.15.7",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  }
]

Tip

If no VmSize is specified when you add a node pool, the default size is Standard_D2s_v3 for Windows node pools and Standard_DS2_v2 for Linux node pools. If no OrchestratorVersion is specified, it defaults to the same version as the control plane.
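
If you prefer to set these values explicitly rather than rely on the defaults, both can be passed when the pool is added. The following sketch is illustrative only; the pool name mynodepool2, the VM size, and the Kubernetes version are example values and must be valid for your region and cluster:

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool2 \
    --node-count 3 \
    --node-vm-size Standard_DS3_v2 \
    --kubernetes-version 1.15.7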

Add a node pool with a unique subnet (preview)

A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.

Limitations

  • All subnets assigned to nodepools must belong to the same virtual network.
  • System pods must have access to all nodes in the cluster to provide critical functionality such as DNS resolution via coreDNS.
  • Assignment of a unique subnet per node pool is limited to Azure CNI during preview.
  • Using network policies with a unique subnet per node pool is not supported during preview.

To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --vnet-subnet-id <YOUR_SUBNET_RESOURCE_ID>

Upgrade a node pool

Note

Upgrade and scale operations on a cluster or node pool cannot occur simultaneously; if attempted, an error is returned. Instead, each operation type must complete on the target resource before the next request on that same resource. Read more about this in our troubleshooting guide.

The commands in this section explain how to upgrade a single specific node pool. The relationship between upgrading the Kubernetes version of the control plane and the node pools is explained in the next section.

Note

The node pool OS image version is tied to the Kubernetes version of the cluster. You only get OS image upgrades following a cluster upgrade.

Because there are two node pools in this example, we must use az aks nodepool upgrade to upgrade an individual node pool. To see the available upgrades, use the az aks get-upgrades command:

az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster

Let's upgrade mynodepool. Use the az aks nodepool upgrade command to upgrade the node pool, as shown in the following example:

az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubernetes-version KUBERNETES_VERSION \
    --no-wait

List the status of your node pools again using the az aks nodepool list command. The following example shows that mynodepool is in the Upgrading state to KUBERNETES_VERSION:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
[
  {
    ...
    "count": 3,
    ...
    "name": "mynodepool",
    "orchestratorVersion": "KUBERNETES_VERSION",
    ...
    "provisioningState": "Upgrading",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  },
  {
    ...
    "count": 2,
    ...
    "name": "nodepool1",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Succeeded",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  }
]

It takes a few minutes to upgrade the nodes to the specified version.

As a best practice, you should upgrade all node pools in an AKS cluster to the same Kubernetes version. The default behavior of az aks upgrade is to upgrade all node pools together with the control plane to achieve this alignment. The ability to upgrade individual node pools lets you perform a rolling upgrade and schedule pods between node pools to maintain application uptime within the constraints mentioned above.
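
For reference, upgrading the control plane and all node pools together is a single command. The following sketch assumes KUBERNETES_VERSION is a version returned by az aks get-upgrades:

az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version KUBERNETES_VERSION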

Upgrade a cluster control plane with multiple node pools

Note

Kubernetes uses the standard Semantic Versioning scheme. The version number is expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version. For example, in version 1.12.6, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation. All additional node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane.

An AKS cluster has two cluster resource objects with Kubernetes versions associated.

  1. A cluster control plane Kubernetes version.
  2. A node pool with a Kubernetes version.

A control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command is used.

Upgrading an AKS control plane requires using az aks upgrade. This command upgrades the control plane version and all node pools in the cluster.

Issuing the az aks upgrade command with the --control-plane-only flag upgrades only the cluster control plane. None of the associated node pools in the cluster are changed.

Upgrading individual node pools requires using az aks nodepool upgrade. This command upgrades only the target node pool with the specified Kubernetes version.
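
As a sketch of the control-plane-only variant described above, add the --control-plane-only flag to az aks upgrade (KUBERNETES_VERSION is again a placeholder):

az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version KUBERNETES_VERSION \
    --control-plane-only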

Validation rules for upgrades

The valid Kubernetes upgrades for a cluster's control plane and node pools are validated by the following sets of rules. A worked example follows the list.

  • Rules for valid versions to upgrade node pools:

    • The node pool version must have the same major version as the control plane.
    • The node pool minor version must be within two minor versions of the control plane version.
    • The node pool version cannot be greater than the control plane major.minor.patch version.
  • Rules for submitting an upgrade operation:

    • You cannot downgrade the control plane or a node pool Kubernetes version.
    • If a node pool Kubernetes version is not specified, behavior depends on the client being used. Declaration in Resource Manager templates falls back to the existing version defined for the node pool; if none is set, the control plane version is used as the fallback.
    • You can either upgrade or scale a control plane or a node pool at a given time; you cannot submit multiple operations on a single control plane or node pool resource simultaneously.
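
As a worked example of the version rules, assume a control plane running Kubernetes 1.17.3 (the versions here are illustrative only):

  • Node pools at 1.17.x, 1.16.x, or 1.15.x are valid, because they share the major version and are within two minor versions of the control plane.
  • A node pool at 1.14.x is not valid, because it is more than two minor versions behind the control plane.
  • A node pool at 1.17.4 is not valid while the control plane is at 1.17.3, because a node pool version cannot be greater than the control plane major.minor.patch version.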

Scale a node pool manually

As your application workload demands change, you may need to scale the number of nodes in a node pool. The number of nodes can be scaled up or down.

To scale the number of nodes in a node pool, use the az aks nodepool scale command. The following example scales the number of nodes in mynodepool to 5:

az aks nodepool scale \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 5 \
    --no-wait

List the status of your node pools again using the az aks nodepool list command. The following example shows that mynodepool is in the Scaling state with a new count of 5 nodes:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
[
  {
    ...
    "count": 5,
    ...
    "name": "mynodepool",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Scaling",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  },
  {
    ...
    "count": 2,
    ...
    "name": "nodepool1",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Succeeded",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  }
]

It takes a few minutes for the scale operation to complete.

Scale a specific node pool automatically by enabling the cluster autoscaler

AKS offers a separate feature, the cluster autoscaler, to automatically scale node pools. This feature can be enabled per node pool with unique minimum and maximum scale counts for each node pool. Learn how to use the cluster autoscaler per node pool.
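
As a minimal sketch, the cluster autoscaler can typically be enabled on an existing node pool with az aks nodepool update; the minimum and maximum counts below are example values only:

az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5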

Delete a node pool

If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the az aks nodepool delete command and specify the node pool name. The following example deletes the mynodepool node pool created in the previous steps:

Caution

There are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications are unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster.

az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait

The following example output from the az aks nodepool list command shows that mynodepool is in the Deleting state:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
[
  {
    ...
    "count": 5,
    ...
    "name": "mynodepool",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Deleting",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  },
  {
    ...
    "count": 2,
    ...
    "name": "nodepool1",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Succeeded",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  }
]

It takes a few minutes to delete the nodes and the node pool.

Specify a VM size for a node pool

In the previous examples to create a node pool, a default VM size was used for the nodes created in the cluster. A more common scenario is for you to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support. In the next section, you use taints and tolerations to tell the Kubernetes scheduler how to limit which pods can run on these nodes.

In the following example, create a GPU-based node pool that uses the Standard_NC6 VM size. These VMs are powered by the NVIDIA Tesla K80 card. For information on available VM sizes, see Sizes for Linux virtual machines in Azure.

Create a node pool using the az aks nodepool add command again. This time, specify the name gpunodepool, and use the --node-vm-size parameter to specify the Standard_NC6 size:

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunodepool \
    --node-count 1 \
    --node-vm-size Standard_NC6 \
    --no-wait

The following example output from the az aks nodepool list command shows that gpunodepool is Creating nodes with the specified VmSize:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
[
  {
    ...
    "count": 1,
    ...
    "name": "gpunodepool",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "vmSize": "Standard_NC6",
    ...
  },
  {
    ...
    "count": 2,
    ...
    "name": "nodepool1",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Succeeded",
    ...
    "vmSize": "Standard_DS2_v2",
    ...
  }
]

It takes a few minutes for the gpunodepool to be successfully created.

Specify a taint, label, or tag for a node pool

Setting nodepool taints

When creating a node pool, you can add taints, labels, or tags to that node pool. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag.

To create a node pool with a taint, use az aks nodepool add. Specify the name taintnp and use the --node-taints parameter to specify sku=gpu:NoSchedule for the taint.

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name taintnp \
    --node-count 1 \
    --node-taints sku=gpu:NoSchedule \
    --no-wait

Note

A taint can only be set for node pools during node pool creation.

The following example output from the az aks nodepool list command shows that taintnp is Creating nodes with the specified nodeTaints:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster

[
  {
    ...
    "count": 1,
    ...
    "name": "taintnp",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "nodeTaints":  [
      "sku=gpu:NoSchedule"
    ],
    ...
  },
 ...
]

The taint information is visible in Kubernetes for handling scheduling rules for nodes. The Kubernetes scheduler can use taints and tolerations to restrict what workloads can run on nodes.

  • A taint is applied to a node to indicate that only specific pods can be scheduled on it.
  • A toleration is then applied to a pod to allow that pod to tolerate a node's taint.

For more information on how to use advanced Kubernetes scheduler features, see Best practices for advanced scheduler features in AKS.

In the previous step, you applied the sku=gpu:NoSchedule taint when you created your node pool. The following basic example YAML manifest uses a toleration to allow the Kubernetes scheduler to run an NGINX pod on a node in that node pool.

Create a file named nginx-toleration.yaml and copy in the following example YAML:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine
    name: mypod
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 1
        memory: 2G
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"

Schedule the pod using the kubectl apply -f nginx-toleration.yaml command:

kubectl apply -f nginx-toleration.yaml

It takes a few seconds to schedule the pod and pull the NGINX image. Use the kubectl describe pod command to view the pod status. The following condensed example output shows the sku=gpu:NoSchedule toleration is applied. In the events section, the scheduler has assigned the pod to the aks-taintnp-28993262-vmss000000 node:

kubectl describe pod mypod
[...]
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 sku=gpu:NoSchedule
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  4m48s  default-scheduler   Successfully assigned default/mypod to aks-taintnp-28993262-vmss000000
  Normal  Pulling    4m47s  kubelet             pulling image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
  Normal  Pulled     4m43s  kubelet             Successfully pulled image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
  Normal  Created    4m40s  kubelet             Created container
  Normal  Started    4m40s  kubelet             Started container

Only pods that have this toleration applied can be scheduled on nodes in taintnp. Any other pod would be scheduled in the nodepool1 node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources.
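
If you want to confirm the taint on the node itself, kubectl describe node shows a Taints field; the node name below is taken from the earlier output and will differ in your cluster. Look for sku=gpu:NoSchedule in the Taints field of the output:

kubectl describe node aks-taintnp-28993262-vmss000000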

Setting nodepool labels

You can also add labels to a node pool during node pool creation. Labels set at the node pool are added to each node in the node pool. These labels are visible in Kubernetes for handling scheduling rules for nodes.

To create a node pool with a label, use az aks nodepool add. Specify the name labelnp and use the --labels parameter to specify dept=IT and costcenter=9999 for labels.

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name labelnp \
    --node-count 1 \
    --labels dept=IT costcenter=9999 \
    --no-wait

Note

Labels can only be set for node pools during node pool creation. Labels must also be key/value pairs and have a valid syntax.

The following example output from the az aks nodepool list command shows that labelnp is Creating nodes with the specified nodeLabels:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster

[
  {
    ...
    "count": 1,
    ...
    "name": "labelnp",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "nodeLabels":  {
      "dept": "IT",
      "costcenter": "9999"
    },
    ...
  },
 ...
]
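
To see how these labels can be used for scheduling, the following minimal example manifest (hypothetical, not part of the original walkthrough) uses a nodeSelector so the pod is only placed on nodes carrying the dept=IT label, such as the nodes in labelnp:

apiVersion: v1
kind: Pod
metadata:
  name: labelpod
spec:
  containers:
  - image: mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine
    name: labelpod
  nodeSelector:
    dept: IT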

Setting nodepool Azure tags

You can apply an Azure tag to node pools in your AKS cluster. Tags applied to a node pool are applied to each node within the node pool and are persisted through upgrades. Tags are also applied to new nodes added to a node pool during scale-out operations. Adding a tag can help with tasks such as policy tracking or cost estimation.

Azure tags have keys that are case-insensitive for operations, such as when retrieving a tag by searching for the key. In this case, a tag with the given key is updated or retrieved regardless of casing. Tag values are case-sensitive.

In AKS, if multiple tags are set with identical keys but different casing, the tag used is the first in alphabetical order. For example, {"Key1": "val1", "kEy1": "val2", "key1": "val3"} results in Key1 and val1 being set.

Create a node pool using the az aks nodepool add command. Specify the name tagnodepool and use the --tags parameter to specify dept=IT and costcenter=9999 for tags.

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name tagnodepool \
    --node-count 1 \
    --tags dept=IT costcenter=9999 \
    --no-wait

Note

You can also use the --tags parameter with the az aks nodepool update command as well as during cluster creation. During cluster creation, the --tags parameter applies the tag to the initial node pool created with the cluster. All tag names must adhere to the limitations in Use tags to organize your Azure resources. Updating a node pool with the --tags parameter updates any existing tag values and appends any new tags. For example, if your node pool had dept=IT and costcenter=9999 for tags and you updated it with team=dev and costcenter=111 for tags, your node pool would have dept=IT, costcenter=111, and team=dev for tags.

The following example output from the az aks nodepool list command shows that tagnodepool is Creating nodes with the specified tag:

az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
[
  {
    ...
    "count": 1,
    ...
    "name": "tagnodepool",
    "orchestratorVersion": "1.15.7",
    ...
    "provisioningState": "Creating",
    ...
    "tags": {
      "dept": "IT",
      "costcenter": "9999"
    },
    ...
  },
 ...
]
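
As noted above, tags can also be changed on an existing node pool with az aks nodepool update. The following minimal sketch reuses the values from the note; it adds the team tag, overwrites costcenter, and leaves dept in place:

az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name tagnodepool \
    --tags team=dev costcenter=111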

Manage node pools using a Resource Manager template

When you use an Azure Resource Manager template to create and manage resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the node pools for an existing AKS cluster.

Create a template such as aks-agentpools.json and paste the following example manifest. This example template configures the following settings:

  • Updates the Linux node pool named myagentpool to run three nodes.
  • Sets the nodes in the node pool to run Kubernetes version 1.15.7.
  • Defines the node size as Standard_DS2_v2.

Edit these values as needed to update, add, or delete node pools:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "clusterName": {
            "type": "string",
            "metadata": {
                "description": "The name of your existing AKS cluster."
            }
        },
        "location": {
            "type": "string",
            "metadata": {
                "description": "The location of your existing AKS cluster."
            }
        },
        "agentPoolName": {
            "type": "string",
            "defaultValue": "myagentpool",
            "metadata": {
                "description": "The name of the agent pool to create or update."
            }
        },
        "vnetSubnetId": {
            "type": "string",
            "defaultValue": "",
            "metadata": {
                "description": "The Vnet subnet resource ID for your existing AKS cluster."
            }
        }
    },
    "variables": {
        "apiVersion": {
            "aks": "2020-01-01"
        },
        "agentPoolProfiles": {
            "maxPods": 30,
            "osDiskSizeGB": 0,
            "agentCount": 3,
            "agentVmSize": "Standard_DS2_v2",
            "osType": "Linux",
            "vnetSubnetId": "[parameters('vnetSubnetId')]"
        }
    },
    "resources": [
        {
            "apiVersion": "2020-01-01",
            "type": "Microsoft.ContainerService/managedClusters/agentPools",
            "name": "[concat(parameters('clusterName'),'/', parameters('agentPoolName'))]",
            "location": "[parameters('location')]",
            "properties": {
                "maxPods": "[variables('agentPoolProfiles').maxPods]",
                "osDiskSizeGB": "[variables('agentPoolProfiles').osDiskSizeGB]",
                "count": "[variables('agentPoolProfiles').agentCount]",
                "vmSize": "[variables('agentPoolProfiles').agentVmSize]",
                "osType": "[variables('agentPoolProfiles').osType]",
                "storageProfile": "ManagedDisks",
                "type": "VirtualMachineScaleSets",
                "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]",
                "orchestratorVersion": "1.15.7"
            }
        }
    ]
}

Deploy this template using the az group deployment create command, as shown in the following example. You are prompted for the existing AKS cluster name and location:

az group deployment create \
    --resource-group myResourceGroup \
    --template-file aks-agentpools.json

Tip

You can add a tag to your node pool by adding the tags property in the template, as shown in the following example.

...
"resources": [
{
  ...
  "properties": {
    ...
    "tags": {
      "name1": "val1"
    },
    ...
  }
}
...

It may take a few minutes to update your AKS cluster depending on the node pool settings and operations you define in your Resource Manager template.

Assign a public IP per node for your node pools (preview)

Warning

You must install the CLI preview extension 0.4.43 or greater to use the public IP per node feature.

AKS nodes do not require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by registering for a preview feature, Node Public IP (preview).

To install and update the latest aks-preview extension, use the following Azure CLI commands:

az extension add --name aks-preview
az extension update --name aks-preview
az extension list

Register for the Node Public IP feature with the following Azure CLI command:

az feature register --name NodePublicIPPreview --namespace Microsoft.ContainerService

It may take several minutes for the feature to register. You can check the status with the following command:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/NodePublicIPPreview')].{Name:name,State:properties.state}"
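
When the feature shows as Registered, you may also need to refresh the registration of the Microsoft.ContainerService resource provider so the change propagates:

az provider register --namespace Microsoft.ContainerService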

After successful registration, create a new resource group.

az group create --name myResourceGroup2 --location eastus

Create a new AKS cluster and attach a public IP for your nodes. Each of the nodes in the node pool receives a unique public IP. You can verify this by looking at the Virtual Machine Scale Set instances.

az aks create -g MyResourceGroup2 -n MyManagedCluster -l eastus --enable-node-public-ip

For existing AKS clusters, you can also add a new node pool, and attach a public IP for your nodes.

az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodepool2 --enable-node-public-ip

Important

During preview, the Azure Instance Metadata Service doesn't currently support retrieval of public IP addresses for the standard tier VM SKU. Due to this limitation, you can't use kubectl commands to display the public IPs assigned to the nodes. However, the IPs are assigned and function as intended. The public IPs for your nodes are attached to the instances in your Virtual Machine Scale Set.

You can locate the public IPs for your nodes in various ways:

Important

The node resource group contains the nodes and their public IPs. Use the node resource group when executing commands to find the public IPs for your nodes.

az vmss list-instance-public-ips -g MC_MyResourceGroup2_MyManagedCluster_eastus -n YourVirtualMachineScaleSetName

Clean up resources

In this article, you created an AKS cluster that includes GPU-based nodes. To reduce unnecessary cost, you may want to delete the gpunodepool, or the whole AKS cluster.

To delete the GPU-based node pool, use the az aks nodepool delete command as shown in following example:

az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name gpunodepool

To delete the cluster itself, use the az group delete command to delete the AKS resource group:

az group delete --name myResourceGroup --yes --no-wait

You can also delete the additional cluster you created for the public IP for node pools scenario.

az group delete --name myResourceGroup2 --yes --no-wait

Next steps

Learn more about system node pools.

In this article, you learned how to create and manage multiple node pools in an AKS cluster. For more information about how to control pods across node pools, see Best practices for advanced scheduler features in AKS.

To create and use Windows Server container node pools, see Create a Windows Server container in AKS.

Use proximity placement groups to reduce latency for your AKS applications.