Create a Kubernetes cluster with Azure Kubernetes Service and Terraform

Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline.

In this tutorial, you learn how to perform the following tasks to create a Kubernetes cluster with Terraform and AKS:

  • Use HCL (HashiCorp Configuration Language) to define a Kubernetes cluster
  • Use Terraform and AKS to create a Kubernetes cluster
  • Use the kubectl tool to test the availability of a Kubernetes cluster

Prerequisites

  • Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
  • Terraform: Configure Terraform to access your Azure subscription. (Azure Cloud Shell, used in this tutorial, has Terraform installed by default.)
  • Azure service principal: Create an Azure service principal and note its appId and password values; they're used as the client_id and client_secret variables later in this tutorial.
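
If you don't already have a service principal, you can create one in Cloud Shell with the Azure CLI. The name below is only an example; the command prints the appId and password values you need later:

    az ad sp create-for-rbac --name "terraform-aks-sp"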

Create the directory structure

The first step is to create the directory that holds your Terraform configuration files for the exercise.

  1. Browse to the Azure portal.

  2. Open Azure Cloud Shell. If you didn't select an environment previously, select Bash as your environment.

  3. Change directories to the clouddrive directory.

    cd clouddrive
    
  4. Create a directory named terraform-aks-k8s.

    mkdir terraform-aks-k8s
    
  5. Change directories to the new directory:

    cd terraform-aks-k8s
    

Declare the Azure provider

Create the Terraform configuration file that declares the Azure provider.

  1. In Cloud Shell, create a file named main.tf.

    vi main.tf
    
  2. Enter insert mode by selecting the I key.

  3. Paste the following code into the editor:

    provider "azurerm" {
        version = "=1.5.0"
    }
    
    terraform {
        backend "azurerm" {}
    }
    
  4. Exit insert mode by selecting the Esc key.

  5. Save the file and exit the vi editor by entering the following command:

    :wq
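
Optionally, you can check that the file uses Terraform's canonical formatting. The terraform fmt command rewrites the configuration files in the current directory in place; this step isn't required for the tutorial:

    terraform fmt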
    

Define a Kubernetes cluster

Create the Terraform configuration file that declares the resources for the Kubernetes cluster.

  1. In Cloud Shell, create a file named k8s.tf.

    vi k8s.tf
    
  2. Enter insert mode by selecting the I key.

  3. Paste the following code into the editor:

    resource "azurerm_resource_group" "k8s" {
        name     = "${var.resource_group_name}"
        location = "${var.location}"
    }
    
    resource "azurerm_kubernetes_cluster" "k8s" {
        name                = "${var.cluster_name}"
        location            = "${azurerm_resource_group.k8s.location}"
        resource_group_name = "${azurerm_resource_group.k8s.name}"
        dns_prefix          = "${var.dns_prefix}"
    
        linux_profile {
            admin_username = "ubuntu"
    
            ssh_key {
                key_data = "${file("${var.ssh_public_key}")}"
            }
        }
    
        agent_pool_profile {
            name            = "default"
            count           = "${var.agent_count}"
            vm_size         = "Standard_D2"
            os_type         = "Linux"
            os_disk_size_gb = 30
        }
    
        service_principal {
            client_id     = "${var.client_id}"
            client_secret = "${var.client_secret}"
        }
    
        tags {
            Environment = "Development"
        }
    }
    

    The preceding code sets the name of the cluster, its location, and the resource group name. It also sets the dns_prefix value, which forms part of the fully qualified domain name (FQDN) used to access the cluster.

    The linux_profile record allows you to configure the settings that enable signing into the worker nodes using SSH.

    With AKS, you pay only for the worker nodes. The agent_pool_profile record configures the details for these worker nodes, including how many to create and their VM size. If you need to scale the cluster up or down later, you modify the count value in this record; an example of overriding the count at plan time follows these steps.

  4. Exit insert mode by selecting the Esc key.

  5. Save the file and exit the vi editor by entering the following command:

    :wq
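
If you later want to try a different node count without editing the files, you can override the agent_count variable (declared in the next section) when you create the plan in the "Create the Kubernetes cluster" section. The value 5 here is only illustrative:

    terraform plan -out out.plan -var 'agent_count=5'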
    

Declare the variables

  1. In Cloud Shell, create a file named variables.tf.

    vi variables.tf
    
  2. Enter insert mode by selecting the I key.

  3. Paste the following code into the editor:

    variable "client_id" {}
    variable "client_secret" {}
    
    variable "agent_count" {
        default = 3
    }
    
    variable "ssh_public_key" {
        default = "~/.ssh/id_rsa.pub"
    }
    
    variable "dns_prefix" {
        default = "k8stest"
    }
    
    variable "cluster_name" {
        default = "k8stest"
    }

    variable "resource_group_name" {
        default = "azure-k8stest"
    }

    variable "location" {
        default = "Central US"
    }
    
  4. Exit insert mode by selecting the Esc key.

  5. Save the file and exit the vi editor by entering the following command:

    :wq
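
Because client_id and client_secret have no default values, Terraform prompts for them every time you create a plan. If you'd rather not enter them interactively, one option is a terraform.tfvars file in the same directory, which Terraform loads automatically. The values shown are placeholders, and the file holds a secret, so keep it out of source control:

    client_id     = "<YourServicePrincipalAppId>"
    client_secret = "<YourServicePrincipalPassword>"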
    

Create a Terraform output file

Terraform outputs allow you to define values that will be highlighted to the user when Terraform applies a plan, and can be queried using the terraform output command. In this section, you create an output file that allows access to the cluster with kubectl.
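
For example, after the cluster is created later in this tutorial, you could print any single value defined below, such as the cluster's API server address:

    terraform output host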

  1. In Cloud Shell, create a file named output.tf.

    vi output.tf
    
  2. Enter insert mode by selecting the I key.

  3. Paste the following code into the editor:

    output "client_key" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.client_key}"
    }
    
    output "client_certificate" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate}"
    }
    
    output "cluster_ca_certificate" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate}"
    }
    
    output "cluster_username" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.username}"
    }
    
    output "cluster_password" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.password}"
    }
    
    output "kube_config" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
    }
    
    output "host" {
        value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
    }
    
  4. Exit insert mode by selecting the Esc key.

  5. Save the file and exit the vi editor by entering the following command:

    :wq
    

Set up Azure storage to store Terraform state

Terraform tracks state locally via the terraform.tfstate file. This pattern works well in a single-person environment. However, when more than one person works with the same infrastructure, you should store state remotely; in this tutorial, you use Azure storage. In this section, you retrieve the necessary storage account information (account name and account key), and create a storage container in which the Terraform state information is stored.
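
The following steps use the Azure portal. If you prefer the CLI, you could instead retrieve the account key with a command like this one, where both placeholders are values from your own subscription:

    az storage account keys list --account-name <YourAzureStorageAccountName> --resource-group <YourResourceGroupName> --query "[0].value" --output tsv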

  1. In the Azure portal, select All services in the left menu.

  2. Select Storage accounts.

  3. On the Storage accounts tab, select the name of the storage account into which Terraform is to store state. For example, you can use the storage account created when you opened Cloud Shell the first time. The storage account name created by Cloud Shell typically starts with cs followed by a random string of numbers and letters. Remember the name of the storage account you select, as it is needed later.

  4. On the storage account tab, select Access keys.

  5. Make note of the key1 key value. (Selecting the icon to the right of the key copies the value to the clipboard.)

  6. In Cloud Shell, create a container in your Azure storage account (replace the <YourAzureStorageAccountName> and <YourAzureStorageAccountAccessKey> placeholders with the appropriate values for your Azure storage account).

    az storage container create -n tfstate --account-name <YourAzureStorageAccountName> --account-key <YourAzureStorageAccountAccessKey>
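
To confirm the container exists, you can list the containers in the account, using the same placeholder values:

    az storage container list --account-name <YourAzureStorageAccountName> --account-key <YourAzureStorageAccountAccessKey> --output table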
    

Create the Kubernetes cluster

In this section, you use the terraform init, terraform plan, and terraform apply commands to create the resources defined in the configuration files you created in the previous sections.

  1. In Cloud Shell, initialize Terraform (replace the <YourAzureStorageAccountName> and <YourAzureStorageAccountAccessKey> placeholders with the appropriate values for your Azure storage account).

    terraform init -backend-config="storage_account_name=<YourAzureStorageAccountName>" -backend-config="container_name=tfstate" -backend-config="access_key=<YourAzureStorageAccountAccessKey>" -backend-config="key=codelab.microsoft.tfstate"
    

    The terraform init command reports that the backend and the provider plugin were successfully initialized.

  2. Run the terraform plan command to create the Terraform plan that defines the infrastructure elements. The command will request two values: var.client_id and var.client_secret. For the var.client_id variable, enter the appId value associated with your service principal. For the var.client_secret variable, enter the password value associated with your service principal.

    terraform plan -out out.plan
    

    The terraform plan command displays the resources that will be created when you run the terraform apply command.

  3. Run the terraform apply command to apply the plan and create the Kubernetes cluster. Creating a Kubernetes cluster can take several minutes, which can cause the Cloud Shell session to time out. If the session times out, follow the steps in the "Recover from a Cloud Shell timeout" section to complete the tutorial.

    terraform apply out.plan
    

    The terraform apply command displays the results of creating the resources defined in your configuration files.

  4. In the Azure portal, select All services in the left menu to see the resources created for your new Kubernetes cluster.

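If you prefer the command line, you can also list the cluster with the Azure CLI, using this tutorial's default resource group name:

    az aks list --resource-group azure-k8stest --output table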

Recover from a Cloud Shell timeout

If the Cloud Shell session times out, you can perform the following steps to recover:

  1. Start a Cloud Shell session.

  2. Change to the directory containing your Terraform configuration files.

    cd ~/clouddrive/terraform-aks-k8s
    
  3. Run the following command:

    export KUBECONFIG=./azurek8s
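
If the apply itself was interrupted by the timeout, you may need to create and apply a fresh plan to finish provisioning, using the same commands as in the previous section:

    terraform plan -out out.plan
    terraform apply out.plan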
    

Test the Kubernetes cluster

The Kubernetes tools can be used to verify the newly created cluster.

  1. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read.

    echo "$(terraform output kube_config)" > ./azurek8s
    
  2. Set an environment variable so that kubectl picks up the correct config.

    export KUBECONFIG=./azurek8s
    
  3. Verify the health of the cluster.

    kubectl get nodes
    

    You should see the details of your worker nodes, and they should all have a status of Ready.

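To look a bit further, you can also list the system pods that AKS runs in the cluster; the exact pod names vary by cluster:

    kubectl get pods --namespace kube-system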

Next steps

In this article, you learned how to use Terraform and AKS to create a Kubernetes cluster. Here are some additional resources to help you learn more about Terraform on Azure:

Terraform Hub on Microsoft.com
Terraform Azure provider documentation
Terraform Azure provider source
Terraform Azure modules