Deploy a SQL Server container in Kubernetes with Azure Kubernetes Services (AKS)

Applies to: SQL Server (all supported versions) - Linux

Learn how to configure a SQL Server instance on Kubernetes in Azure Kubernetes Service (AKS), with persistent storage for high availability (HA). The solution provides resiliency. If the SQL Server instance fails, Kubernetes automatically re-creates it in a new pod. Kubernetes also provides resiliency against a node failure.

This tutorial demonstrates how to configure a highly available SQL Server instance in a container on AKS.

  • Create an SA password
  • Create storage
  • Create the deployment
  • Connect with SQL Server Management Studio (SSMS)
  • Verify failure and recovery

HA solution on Kubernetes running in Azure Kubernetes Service

Kubernetes 1.6 and later has support for storage classes, persistent volume claims, and the Azure disk volume type. You can create and manage your SQL Server instances natively in Kubernetes. The example in this article shows how to create a deployment to achieve a high availability configuration similar to a shared disk failover cluster instance. In this configuration, Kubernetes plays the role of the cluster orchestrator. When a SQL Server instance in a container fails, the orchestrator bootstraps another instance of the container that attaches to the same persistent storage.

SQL Server container on Kubernetes cluster

In the preceding diagram, mssql-server is a container in a pod. Kubernetes orchestrates the resources in the cluster. A replica set ensures that the pod is automatically recovered after a node failure. Applications connect to the service. In this case, the service represents a load balancer that hosts an IP address that stays the same after failure of the mssql-server.

In the following diagram, the mssql-server container has failed. As the orchestrator, Kubernetes guarantees the correct count of healthy instances in the replica set, and starts a new container according to the configuration. The orchestrator starts a new pod on the same node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server.

SQL Server pod fail on Kubernetes cluster

In the following diagram, the node hosting the mssql-server container has failed. The orchestrator starts the new pod on a different node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server.

SQL Server pod recover on Kubernetes cluster


Prerequisites

  • Kubernetes cluster

    To protect against node failure, a Kubernetes cluster requires more than one node.

  • Azure CLI

    The instructions in this tutorial have been validated against Azure CLI 2.0.23.

Create an SA password

Create an SA password in the Kubernetes cluster. Kubernetes can manage sensitive configuration information, such as passwords, as secrets.

The following command creates a password for the SA account:

kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"

Replace MyC0m9l&xP@ssw0rd with a complex password of your own.

This command creates a secret in Kubernetes named mssql, which holds the value MyC0m9l&xP@ssw0rd for SA_PASSWORD.
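The SA password must satisfy the SQL Server default password policy: at least 8 characters, with characters from at least three of four categories (uppercase letters, lowercase letters, digits, symbols). As a quick sanity check before creating the secret, a candidate password can be validated with a small helper (a hypothetical sketch, not part of the tutorial):

```python
import string

def meets_sql_policy(password: str) -> bool:
    """Check a password against the SQL Server default policy:
    at least 8 characters, and characters from at least three of
    four categories (upper, lower, digit, symbol)."""
    if len(password) < 8:
        return False
    categories = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(categories) >= 3

print(meets_sql_policy("MyC0m9l&xP@ssw0rd"))  # True: all four categories present
print(meets_sql_policy("password"))           # False: lowercase only
```

If the password does not meet the policy, the SQL Server container exits shortly after startup, so checking up front saves a failed deployment.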

Create storage

Configure a persistent volume and persistent volume claim in the Kubernetes cluster. Complete the following steps:

  1. Create a manifest to define the storage class and the persistent volume claim. The manifest specifies the storage provisioner, parameters, and reclaim policy. The Kubernetes cluster uses this manifest to create the persistent storage.

    The following yaml example defines a storage class and persistent volume claim. The storage class provisioner is azure-disk, because this Kubernetes cluster is in Azure. The storage account type is Standard_LRS. The persistent volume claim is named mssql-data. The persistent volume claim metadata includes an annotation connecting it back to the storage class.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: azure-disk
    provisioner: kubernetes.io/azure-disk
    parameters:
      storageaccounttype: Standard_LRS
      kind: Managed
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: mssql-data
      annotations:
        volume.beta.kubernetes.io/storage-class: azure-disk
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi

    Save the file (for example, pvc.yaml).

  2. Create the persistent volume claim in Kubernetes.

    kubectl apply -f <Path to pvc.yaml file>

    <Path to pvc.yaml file> is the location where you saved the file.

    The persistent volume is automatically created as an Azure managed disk, and bound to the persistent volume claim.

    Screenshot of persistent volume claim command

  3. Verify the persistent volume claim.

    kubectl describe pvc <PersistentVolumeClaim>

    <PersistentVolumeClaim> is the name of the persistent volume claim.

    In the preceding step, the persistent volume claim is named mssql-data. To see the metadata about the persistent volume claim, run the following command:

    kubectl describe pvc mssql-data

    The returned metadata includes a value called Volume. This value maps to the name of the underlying Azure disk.

    Screenshot of returned metadata, including Volume

    The value for Volume matches part of the disk name shown in the following image from the Azure portal:

    Screenshot of the disk name in the Azure portal

  4. Verify the persistent volume.

    kubectl describe pv

    kubectl returns metadata about the persistent volume that was automatically created and bound to the persistent volume claim.

Create the deployment

In this example, the container hosting the SQL Server instance is described as a Kubernetes deployment object. The deployment creates a replica set. The replica set creates the pod.

In this step, create a manifest to describe the container, based on the SQL Server container image. The manifest references the mssql-data persistent volume claim and the mssql secret that you already applied to the Kubernetes cluster. The manifest also describes a service. This service is a load balancer, which guarantees that the IP address persists after the SQL Server instance is recovered.

  1. Create a manifest (a YAML file) to describe the deployment. The following example describes a deployment, including a container based on the SQL Server container image.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mssql-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mssql
      template:
        metadata:
          labels:
            app: mssql
        spec:
          terminationGracePeriodSeconds: 30
          hostname: mssqlinst
          securityContext:
            fsGroup: 10001
          containers:
          - name: mssql
            image: mcr.microsoft.com/mssql/server:2019-latest
            ports:
            - containerPort: 1433
            env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
            volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
          volumes:
          - name: mssqldb
            persistentVolumeClaim:
              claimName: mssql-data
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mssql-deployment
    spec:
      selector:
        app: mssql
      ports:
      - protocol: TCP
        port: 1433
        targetPort: 1433
      type: LoadBalancer

    Copy the preceding code into a new file, named sqldeployment.yaml. Update the following values:

    • MSSQL_PID value: "Developer": Sets the container to run SQL Server Developer edition. Developer edition is not licensed for production data. If the deployment is for production use, set the appropriate edition (Enterprise, Standard, or Express).


      For more information, see How to license SQL Server.

    • persistentVolumeClaim: This value requires an entry for claimName: that maps to the name used for the persistent volume claim. This tutorial uses mssql-data.

    • name: SA_PASSWORD: Configures the container image to set the SA password, as defined in this section.

          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD

    When Kubernetes deploys the container, it refers to the secret named mssql to get the value for the password.

    • securityContext: A security context defines privilege and access control settings for a pod or container. Here it is specified at the pod level, so all containers in the pod (in this case, only one) adhere to it. The fsGroup value 10001 is the group ID (GID) of the mssql group, so all processes in the container also run with supplementary group ID 10001. The group owner of the /var/opt/mssql volume, and of any files created on it, is GID 10001 (the mssql group).


    By using the LoadBalancer service type, the SQL Server instance is accessible remotely (via the internet) at port 1433.

    Save the file (for example, sqldeployment.yaml).

  2. Create the deployment.

    kubectl apply -f <Path to sqldeployment.yaml file>

    <Path to sqldeployment.yaml file> is the location where you saved the file.

    Screenshot of deployment command

    The deployment and service are created. The SQL Server instance is in a container, connected to persistent storage.

    To view the status of the pod, type kubectl get pod.

    Screenshot of get pod command

    In the preceding image, the pod has a status of Running. This status indicates that the container is ready. This may take several minutes.


    After the deployment is created, it can take a few minutes before the pod is visible, because the cluster pulls the SQL Server container image from the registry. After the image is pulled the first time, subsequent deployments might be faster if the deployment targets a node that already has the image cached.

  3. Verify the services are running. Run the following command:

    kubectl get services 

    This command returns services that are running, as well as the internal and external IP addresses for the services. Note the external IP address for the mssql-deployment service. Use this IP address to connect to SQL Server.

    Screenshot of get service command

    For more information about the status of the objects in the Kubernetes cluster, run:

    az aks browse --resource-group <MyResourceGroup> --name <MyKubernetesClustername>
  4. You can also verify that the container is running as a non-root user:

    kubectl exec <name of SQL pod> -it -- /bin/bash

    Then run whoami inside the container. The username returned is mssql, which is a non-root user.
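The LoadBalancer service exposes port 1433 externally, so a plain TCP probe from a client machine is a quick way to confirm the endpoint is reachable before pointing any tools at it. The helper below is an illustrative sketch (the IP address shown is a placeholder, not a value from this tutorial):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the external IP address of the mssql-deployment service:
# print(is_port_open("203.0.113.10", 1433))
```

A successful probe only confirms network reachability; authentication is still verified by connecting with a SQL client.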


Connect to the SQL Server instance

If you configured the container as described, you can connect with an application from outside the Azure virtual network. Use the sa account and the external IP address for the service. Use the password that you configured as the Kubernetes secret.

You can use the following applications to connect to the SQL Server instance.

  • SSMS

  • SSDT

  • sqlcmd

    To connect with sqlcmd, run the following command:

    sqlcmd -S <External IP Address> -U sa -P "MyC0m9l&xP@ssw0rd"

    Replace the following values:

    • <External IP Address> with the IP address for the mssql-deployment service
    • MyC0m9l&xP@ssw0rd with your password
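From application code, the same endpoint and credentials can be assembled into an ODBC-style connection string. The sketch below only builds the string (the driver name and IP address are assumptions; actually opening the connection, for example with pyodbc, requires a reachable instance):

```python
def build_connection_string(server_ip: str, password: str,
                            database: str = "master") -> str:
    """Assemble an ODBC connection string for the SQL Server service."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server_ip},1433;"      # external IP of the mssql-deployment service
        f"DATABASE={database};"
        "UID=sa;"
        f"PWD={password}"
    )

conn_str = build_connection_string("203.0.113.10", "MyC0m9l&xP@ssw0rd")
# With pyodbc installed: pyodbc.connect(conn_str)
```

Because the service IP address is stable across pod recovery, the connection string does not change when the SQL Server pod is re-created.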

Verify failure and recovery

To verify failure and recovery, delete the pod. Follow these steps:

  1. List the pod running SQL Server.

    kubectl get pods

    Note the name of the pod running SQL Server.

  2. Delete the pod.

    kubectl delete pod mssql-deployment-0

    mssql-deployment-0 is the pod name returned in the previous step; replace it with the name of your pod.

Kubernetes automatically re-creates the pod to recover the SQL Server instance, and reconnects it to the persistent storage. Use kubectl get pods to verify that a new pod is deployed. Use kubectl get services to verify that the IP address for the new container is the same.
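Because the service IP address survives pod recovery, client applications only need transient-fault handling: retry the connection with a short backoff while Kubernetes re-creates the pod. A generic sketch of that pattern (the connect callable is a placeholder for your driver's connect function):

```python
import time

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 1.0):
    """Call connect(); on failure, retry with exponential backoff.

    Re-raises the last error if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
```

During the recovery window in this tutorial, the first attempts fail while the new pod starts, and a later attempt succeeds once SQL Server is accepting connections again.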


In this tutorial, you learned how to deploy SQL Server containers to a Kubernetes cluster for high availability.

  • Create an SA password
  • Create storage
  • Create the deployment
  • Connect with SQL Server Management Studio (SSMS)
  • Verify failure and recovery

Next steps