Use the Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS) (preview)

The Azure disk Container Storage Interface (CSI) driver is a CSI specification-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure disks.

The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS can write, deploy, and iterate plug-ins to expose new storage systems, or improve existing ones, in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.

To create an AKS cluster with CSI driver support, see Enable CSI drivers for Azure disks and Azure Files on AKS.


In-tree drivers refer to the current storage drivers that are part of the core Kubernetes code, as opposed to the new CSI drivers, which are plug-ins.

Use CSI persistent volumes with Azure disks

A persistent volume (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disks for use by a single pod in an AKS cluster. For static provisioning, see Manually create and use a volume with Azure disks.


AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. AKS preview features aren't available in Azure Government or Azure China 21Vianet clouds. For more information, see the following support articles:

For more information on Kubernetes volumes, see Storage options for applications in AKS.

Dynamically create Azure disk PVs by using the built-in storage classes

A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see Kubernetes storage classes. When you use storage CSI drivers on AKS, there are two additional built-in StorageClasses that use the Azure disk CSI storage drivers. The additional CSI storage classes are created with the cluster alongside the in-tree default storage classes.

  • managed-csi: Uses Azure Standard SSD locally redundant storage (LRS) to create a managed disk.
  • managed-csi-premium: Uses Azure Premium LRS to create a managed disk.

The reclaim policy in both storage classes ensures that the underlying Azure disk is deleted when the respective PV is deleted. The storage classes also configure the PVs to be expandable. You just need to edit the persistent volume claim (PVC) with the new size.

To leverage these storage classes, create a PVC and respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure-managed disk for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.

Create an example pod and respective PVC with the kubectl apply command:

$ kubectl apply -f
$ kubectl apply -f

persistentvolumeclaim/pvc-azuredisk created
pod/nginx-azuredisk created
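
The manifest URLs are omitted above, but a PVC and pod manifest matching the created resources might look like the following sketch (the nginx image and 10-Gi size are assumptions for illustration; the resource names match the output above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi   # built-in CSI storage class (Standard SSD LRS)
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-azuredisk
spec:
  containers:
    - name: nginx-azuredisk
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine  # example image
      volumeMounts:
        - name: azuredisk01
          mountPath: /mnt/azuredisk   # where the disk is mounted in the container
  volumes:
    - name: azuredisk01
      persistentVolumeClaim:
        claimName: pvc-azuredisk      # binds the pod to the PVC above
```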

After the pod is in the running state, create a new file called test.txt.

$ kubectl exec nginx-azuredisk -- touch /mnt/azuredisk/test.txt

You can now validate that the disk is correctly mounted by running the following command and verifying you see the test.txt file in the output:

$ kubectl exec nginx-azuredisk -- ls /mnt/azuredisk


Create a custom storage class

The default storage classes suit the most common scenarios, but not all. In some cases, you might want to have your own storage class customized with your own parameters. For example, you might want to change the volumeBindingMode setting.

The volumeBindingMode: Immediate setting guarantees that volume binding and dynamic provisioning occur as soon as the PVC is created. In cases where your node pools are topology constrained, for example when using availability zones, PVs would be bound or provisioned without knowledge of the pod's scheduling requirements (in this case, to be in a specific zone).

To address this scenario, you can use volumeBindingMode: WaitForFirstConsumer, which delays the binding and provisioning of a PV until a pod that uses the PVC is created. In this way, the PV conforms to, and is provisioned in, the availability zone (or other topology) specified by the pod's scheduling constraints. The default storage classes use the volumeBindingMode: WaitForFirstConsumer setting.

Create a file named sc-azuredisk-csi-waitforfirstconsumer.yaml, and paste the following manifest. The storage class is the same as our managed-csi storage class but with a different volumeBindingMode setting.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-csi-waitforfirstconsumer
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardSSD_LRS
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Create the storage class with the kubectl apply command, and specify your sc-azuredisk-csi-waitforfirstconsumer.yaml file:

$ kubectl apply -f sc-azuredisk-csi-waitforfirstconsumer.yaml

storageclass.storage.k8s.io/azuredisk-csi-waitforfirstconsumer created

Volume snapshots

The Azure disk CSI driver supports creating snapshots of persistent volumes. As part of this capability, the driver can perform either full or incremental snapshots depending on the value set in the incremental parameter (by default, it's true).

For details on all the parameters, see volume snapshot class parameters.

Create a volume snapshot

For an example of this capability, create a volume snapshot class with the kubectl apply command:

$ kubectl apply -f

volumesnapshotclass.snapshot.storage.k8s.io/csi-azuredisk-vsc created

Now let's create a volume snapshot from the PVC that we dynamically created at the beginning of this tutorial, pvc-azuredisk.

$ kubectl apply -f

volumesnapshot.snapshot.storage.k8s.io/azuredisk-volume-snapshot created
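
The manifests behind the two commands above aren't shown, but a volume snapshot class and volume snapshot along these lines would produce the resources used below (a sketch; the snapshot.storage.k8s.io/v1beta1 API version is an assumption reflecting the API current at the time of writing, and the resource names match the describe output that follows):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "true"   # set to "false" for full snapshots
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: azuredisk-volume-snapshot
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc
  source:
    persistentVolumeClaimName: pvc-azuredisk   # the PVC created earlier
```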

Check that the snapshot was created correctly:

$ kubectl describe volumesnapshot azuredisk-volume-snapshot

Name:         azuredisk-volume-snapshot
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1beta1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2020-08-27T05:27:58Z
  Generation:          1
  Resource Version:    714582
  Self Link:           /apis/
  UID:                 dd953ab5-6c24-42d4-ad4a-f33180e0ef87
Spec:
  Source:
    Persistent Volume Claim Name:  pvc-azuredisk
  Volume Snapshot Class Name:      csi-azuredisk-vsc
Status:
  Bound Volume Snapshot Content Name:  snapcontent-dd953ab5-6c24-42d4-ad4a-f33180e0ef87
  Creation Time:                       2020-08-31T05:27:59Z
  Ready To Use:                        true
  Restore Size:                        10Gi
Events:                                <none>

Create a new PVC based on a volume snapshot

You can create a new PVC based on a volume snapshot. Use the snapshot created in the previous step, and create a new PVC and a new pod to consume it.

$ kubectl apply -f

$ kubectl apply -f

persistentvolumeclaim/pvc-azuredisk-snapshot-restored created
pod/nginx-restored created
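
The restored PVC references the snapshot through a dataSource field. A sketch of what the manifest behind the command above might look like (the 10-Gi size is an assumption matching the snapshot's restore size):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-snapshot-restored
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi
  dataSource:
    name: azuredisk-volume-snapshot   # the snapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```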

Finally, let's verify that the restored volume contains the same data as the original PVC by checking its contents.

$ kubectl exec nginx-restored -- ls /mnt/azuredisk


As expected, we can still see our previously created test.txt file.

Clone volumes

A cloned volume is defined as a duplicate of an existing Kubernetes volume. For more information on cloning volumes in Kubernetes, see the conceptual documentation for volume cloning.

The CSI driver for Azure disks supports volume cloning. To demonstrate, create a cloned volume of the previously created pvc-azuredisk and a new pod to consume it.

$ kubectl apply -f

$ kubectl apply -f

persistentvolumeclaim/pvc-azuredisk-cloning created
pod/nginx-restored-cloning created
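
A cloned PVC uses a dataSource that points at an existing PVC rather than a snapshot. A sketch of what the manifest behind the command above might look like (the 10-Gi size is an assumption; a clone must be at least as large as its source):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-cloning
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-csi
  dataSource:
    name: pvc-azuredisk          # the source PVC to clone
    kind: PersistentVolumeClaim
```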

We can now check the contents of the cloned volume by running the following command and confirming we still see the test.txt file we created earlier.

$ kubectl exec nginx-restored-cloning -- ls /mnt/azuredisk


Resize a persistent volume

You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.


A new PV is never created to satisfy the claim. Instead, an existing volume is resized.

In AKS, the built-in managed-csi storage class already allows for expansion, so use the PVC created earlier with this storage class. The PVC requested a 10-Gi persistent volume. We can confirm that by running:

$ kubectl exec -it nginx-azuredisk -- df -h /mnt/azuredisk

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc        9.8G   42M  9.8G   1% /mnt/azuredisk


Currently, the Azure disk CSI driver only supports resizing PVCs with no pods associated (and the volume not mounted to a specific node).

As such, let's delete the pod we created earlier:

$ kubectl delete -f

pod "nginx-azuredisk" deleted

Let's expand the PVC by increasing the spec.resources.requests.storage field:

$ kubectl patch pvc pvc-azuredisk --type merge --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'

persistentvolumeclaim/pvc-azuredisk patched

Let's confirm the volume is now larger:

$ kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                     STORAGECLASS   REASON   AGE
pvc-391ea1a6-0191-4022-b915-c8dc4216174a   15Gi       RWO            Delete           Bound    default/pvc-azuredisk                     managed-csi             2d2h


The PVC won't reflect the new size until it has a pod associated to it again.

Let's create a new pod:

$ kubectl apply -f

pod/nginx-azuredisk created

And, finally, confirm the size of the PVC and inside the pod:

$ kubectl get pvc pvc-azuredisk
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-azuredisk   Bound    pvc-391ea1a6-0191-4022-b915-c8dc4216174a   15Gi       RWO            managed-csi    2d2h

$ kubectl exec -it nginx-azuredisk -- df -h /mnt/azuredisk
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc         15G   46M   15G   1% /mnt/azuredisk

Shared disk

Azure shared disks is an Azure managed disks feature that enables attaching an Azure disk to multiple agent nodes simultaneously. Attaching a managed disk to multiple agent nodes allows you, for example, to deploy new or migrate existing clustered applications to Azure.


Currently, only raw block device (volumeMode: Block) is supported by the Azure disk CSI driver. Applications should manage the coordination and control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device.

Create a file called shared-disk.yaml containing the following shared disk storage class and PVC:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-shared
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_LRS  # Currently shared disk is only available with premium SSD
  maxShares: "2"
  cachingMode: None  # ReadOnly cache is not available for premium SSD with maxShares>1
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-shared
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi  # minimum size of shared disk is 256GB (P15)
  volumeMode: Block
  storageClassName: managed-csi-shared

Create the storage class with the kubectl apply command, and specify your shared-disk.yaml file:

$ kubectl apply -f shared-disk.yaml

storageclass.storage.k8s.io/managed-csi-shared created
persistentvolumeclaim/pvc-azuredisk-shared created

Now create a file called deployment-shared.yml containing the following deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: deployment-azuredisk
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: deployment-azuredisk
    spec:
      containers:
        - name: deployment-azuredisk
          image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine  # example image
          volumeDevices:
            - name: azuredisk
              devicePath: /dev/sdx  # the shared disk is exposed as a raw block device
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk-shared

Create the deployment with the kubectl apply command, and specify your deployment-shared.yml file:

$ kubectl apply -f deployment-shared.yml

deployment.apps/deployment-azuredisk created

Finally, let's check the block device inside the pod:

$ kubectl exec -it deployment-sharedisk-7454978bc6-xh7jp -- sh
/ # dd if=/dev/zero of=/dev/sdx bs=1024k count=100
100+0 records in
100+0 records out

Windows containers

The Azure disk CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the Windows containers tutorial to add a Windows node pool.

After you have a Windows node pool, you can use the built-in storage classes like managed-csi. You can deploy an example Windows-based stateful set that saves timestamps into the file data.txt by running the following kubectl apply command:

$ kubectl apply -f

statefulset.apps/busybox-azuredisk created
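
The stateful set manifest isn't shown above; a sketch of what it might look like (the Windows image, PowerShell write loop, and 10-Gi size are all assumptions for illustration, not the exact example manifest):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-azuredisk
spec:
  serviceName: busybox-azuredisk
  replicas: 1
  selector:
    matchLabels:
      app: busybox-azuredisk
  template:
    metadata:
      labels:
        app: busybox-azuredisk
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # schedule onto the Windows node pool
      containers:
        - name: busybox-azuredisk
          image: mcr.microsoft.com/windows/servercore:ltsc2019  # example Windows image
          command:
            - powershell.exe
            - -Command
            - "while ($true) { Get-Date -Format u | Out-File -Append C:\\mnt\\azuredisk\\data.txt; Start-Sleep 1 }"
          volumeMounts:
            - name: persistent-storage
              mountPath: /mnt/azuredisk
  volumeClaimTemplates:
    - metadata:
        name: persistent-storage
      spec:
        storageClassName: managed-csi   # built-in CSI storage class
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```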

You can now validate the contents of the volume by running:

$ kubectl exec -it busybox-azuredisk-0 -- cat c:\\mnt\\azuredisk\\data.txt # on Linux/MacOS Bash
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Windows PowerShell/CMD

2020-08-27 08:13:41Z
2020-08-27 08:13:42Z
2020-08-27 08:13:44Z

Next steps