Create a Kubernetes cluster with Azure Kubernetes Service using Terraform
Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment. AKS allows you to deploy and manage containerized applications without container orchestration expertise. AKS also enables you to do many common maintenance operations without taking your app offline. These operations include provisioning, upgrading, and scaling resources on demand.
In this article, you learn how to do the following tasks:
- Use HCL (HashiCorp Configuration Language) to define a Kubernetes cluster
- Use Terraform and AKS to create a Kubernetes cluster
- Use the kubectl tool to test the availability of a Kubernetes cluster
Prerequisites
- Azure subscription: If you don't have an Azure subscription, create a free account before you begin.
- Configure Terraform: Follow the directions in the article Terraform and configure access to Azure.
- Azure service principal: Follow the directions in the Create the service principal section of the article Create an Azure service principal with Azure CLI. Take note of the values for `appId`, `displayName`, `password`, and `tenant`.
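If you still need to create the service principal, a minimal sketch with the Azure CLI might look like the following (the name `terraform-aks-sp` is illustrative, not required by this article):

```bash
# Create a service principal; the command prints the appId,
# displayName, password, and tenant values noted above.
az ad sp create-for-rbac --name terraform-aks-sp
```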
Create the directory structure
The first step is to create the directory that holds your Terraform configuration files for the exercise.
1. Browse to the Azure portal.

1. Open Azure Cloud Shell. If you didn't select an environment previously, select Bash as your environment.

1. Change directories to the `clouddrive` directory:

    ```bash
    cd clouddrive
    ```

1. Create a directory named `terraform-aks-k8s`:

    ```bash
    mkdir terraform-aks-k8s
    ```

1. Change directories to the new directory:

    ```bash
    cd terraform-aks-k8s
    ```
Declare the Azure provider
Create the Terraform configuration file that declares the Azure provider.
1. In Cloud Shell, create a file named `main.tf`:

    ```bash
    code main.tf
    ```

1. Paste the following code into the editor:
provider "azurerm" { # The "feature" block is required for AzureRM provider 2.x. # If you are using version 1.x, the "features" block is not allowed. version = "~>2.0" features {} } terraform { backend "azurerm" {} }
1. Save the file (&lt;Ctrl&gt;S) and exit the editor (&lt;Ctrl&gt;Q).
Define a Kubernetes cluster
Create the Terraform configuration file that declares the resources for the Kubernetes cluster.
1. In Cloud Shell, create a file named `k8s.tf`:

    ```bash
    code k8s.tf
    ```

1. Paste the following code into the editor:
resource "azurerm_resource_group" "k8s" { name = var.resource_group_name location = var.location } resource "random_id" "log_analytics_workspace_name_suffix" { byte_length = 8 } resource "azurerm_log_analytics_workspace" "test" { # The WorkSpace name has to be unique across the whole of azure, not just the current subscription/tenant. name = "${var.log_analytics_workspace_name}-${random_id.log_analytics_workspace_name_suffix.dec}" location = var.log_analytics_workspace_location resource_group_name = azurerm_resource_group.k8s.name sku = var.log_analytics_workspace_sku } resource "azurerm_log_analytics_solution" "test" { solution_name = "ContainerInsights" location = azurerm_log_analytics_workspace.test.location resource_group_name = azurerm_resource_group.k8s.name workspace_resource_id = azurerm_log_analytics_workspace.test.id workspace_name = azurerm_log_analytics_workspace.test.name plan { publisher = "Microsoft" product = "OMSGallery/ContainerInsights" } } resource "azurerm_kubernetes_cluster" "k8s" { name = var.cluster_name location = azurerm_resource_group.k8s.location resource_group_name = azurerm_resource_group.k8s.name dns_prefix = var.dns_prefix linux_profile { admin_username = "ubuntu" ssh_key { key_data = file(var.ssh_public_key) } } default_node_pool { name = "agentpool" node_count = var.agent_count vm_size = "Standard_D2_v2" } service_principal { client_id = var.client_id client_secret = var.client_secret } addon_profile { oms_agent { enabled = true log_analytics_workspace_id = azurerm_log_analytics_workspace.test.id } } network_profile { load_balancer_sku = "Standard" network_plugin = "kubenet" } tags = { Environment = "Development" } }
The preceding code sets the cluster name, location, and resource group name. It also sets the prefix for the fully qualified domain name (FQDN), which is used to access the cluster.
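If you want the FQDN surfaced after deployment, one option is to expose it as a Terraform output (outputs are created in a later section). This is a sketch, assuming the `fqdn` attribute exported by the `azurerm_kubernetes_cluster` resource in AzureRM 2.x:

```hcl
# Hypothetical addition to output.tf: expose the cluster's FQDN.
output "cluster_fqdn" {
    value = azurerm_kubernetes_cluster.k8s.fqdn
}
```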
The `linux_profile` record allows you to configure the settings that enable signing in to the worker nodes using SSH.

With AKS, you pay only for the worker nodes. The `default_node_pool` record configures the details for these worker nodes, including the number of worker nodes to create and their VM size. If you need to scale the cluster up or down in the future, you modify the `node_count` value in this record (see the sketch after the next step).

1. Save the file (&lt;Ctrl&gt;S) and exit the editor (&lt;Ctrl&gt;Q).
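For example, once the cluster exists, one way to scale it is to plan and apply with a different `agent_count`, using Terraform's standard variable override flag. The value 5 below is illustrative:

```bash
# Override the agent_count variable to request five worker nodes,
# then apply the updated plan.
terraform plan -out out.plan -var "agent_count=5"
terraform apply out.plan
```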
Declare the variables
1. In Cloud Shell, create a file named `variables.tf`:

    ```bash
    code variables.tf
    ```

1. Paste the following code into the editor:
variable "client_id" {} variable "client_secret" {} variable "agent_count" { default = 3 } variable "ssh_public_key" { default = "~/.ssh/id_rsa.pub" } variable "dns_prefix" { default = "k8stest" } variable cluster_name { default = "k8stest" } variable resource_group_name { default = "azure-k8stest" } variable location { default = "Central US" } variable log_analytics_workspace_name { default = "testLogAnalyticsWorkspaceName" } # refer https://azure.microsoft.com/global-infrastructure/services/?products=monitor for log analytics available regions variable log_analytics_workspace_location { default = "eastus" } # refer https://azure.microsoft.com/pricing/details/monitor/ for log analytics pricing variable log_analytics_workspace_sku { default = "PerGB2018" }
1. Save the file (&lt;Ctrl&gt;S) and exit the editor (&lt;Ctrl&gt;Q).
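If you'd rather not edit `variables.tf` to change defaults, Terraform also automatically reads a `terraform.tfvars` file in the working directory. A minimal sketch (the values shown are illustrative):

```hcl
# terraform.tfvars (hypothetical): values here override the defaults
# declared in variables.tf.
agent_count         = 2
location            = "East US"
resource_group_name = "my-aks-rg"
```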
Create a Terraform output file
Terraform outputs allow you to define values that are highlighted to the user when Terraform applies a plan and that can be queried using the `terraform output` command. In this section, you create an output file that allows access to the cluster with kubectl.
1. In Cloud Shell, create a file named `output.tf`:

    ```bash
    code output.tf
    ```

1. Paste the following code into the editor:
output "client_key" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.client_key } output "client_certificate" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate } output "cluster_ca_certificate" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate } output "cluster_username" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.username } output "cluster_password" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.password } output "kube_config" { value = azurerm_kubernetes_cluster.k8s.kube_config_raw } output "host" { value = azurerm_kubernetes_cluster.k8s.kube_config.0.host }
1. Save the file (&lt;Ctrl&gt;S) and exit the editor (&lt;Ctrl&gt;Q).
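Once the cluster has been created later in this article, you can read any of these values back from the Terraform state. For example:

```bash
# Query a single output value from the Terraform state.
terraform output cluster_username
```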
Set up Azure storage to store Terraform state
Terraform tracks state locally via the `terraform.tfstate` file. This pattern works well in a single-person environment. In a multi-person environment, Azure storage is used to track state.
In this section, you see how to do the following tasks:
- Retrieve storage account information (account name and account key)
- Create a storage container into which Terraform state information will be stored.
1. In the Azure portal, select All services in the left menu.

1. Select Storage accounts.

1. On the Storage accounts tab, select the name of the storage account into which Terraform is to store state. For example, you can use the storage account created when you opened Cloud Shell the first time. The storage account name created by Cloud Shell typically starts with `cs` followed by a random string of numbers and letters. Take note of the storage account you select. This value is needed later.

1. On the storage account tab, select Access keys.

1. Make note of the key1 key value. (Selecting the icon to the right of the key copies the value to the clipboard.)
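Alternatively, you can retrieve the key from Cloud Shell instead of the portal. This is a sketch, assuming you know the storage account's name and resource group (the placeholders are yours to fill in):

```bash
# List the account's keys and print the first one. Replace the
# placeholders with your storage account's resource group and name.
az storage account keys list \
    --resource-group <YourResourceGroupName> \
    --account-name <YourAzureStorageAccountName> \
    --query '[0].value' --output tsv
```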
1. In Cloud Shell, create a container in your Azure storage account. Replace the placeholders with appropriate values for your environment.

    ```bash
    az storage container create -n tfstate --account-name <YourAzureStorageAccountName> --account-key <YourAzureStorageAccountKey>
    ```
Create the Kubernetes cluster
In this section, you see how to use the `terraform init`, `terraform plan`, and `terraform apply` commands to create the resources defined in the configuration files you created in the previous sections.
1. In Cloud Shell, initialize Terraform. Replace the placeholders with appropriate values for your environment.

    ```bash
    terraform init -backend-config="storage_account_name=<YourAzureStorageAccountName>" -backend-config="container_name=tfstate" -backend-config="access_key=<YourStorageAccountAccessKey>" -backend-config="key=codelab.microsoft.tfstate"
    ```
    The `terraform init` command displays a message confirming that the backend and provider plug-in were successfully initialized.

1. Export your service principal credentials. Replace the placeholders with appropriate values from your service principal.

    ```bash
    export TF_VAR_client_id=<service-principal-appid>
    export TF_VAR_client_secret=<service-principal-password>
    ```
1. Run the `terraform plan` command to create the Terraform plan that defines the infrastructure elements:

    ```bash
    terraform plan -out out.plan
    ```
    The `terraform plan` command displays the resources that will be created when you run the `terraform apply` command.

1. Run the `terraform apply` command to apply the plan and create the Kubernetes cluster. The process can take several minutes, which may cause the Cloud Shell session to time out. If the session times out, you can follow the steps in the section "Recover from a Cloud Shell timeout" to complete the process.

    ```bash
    terraform apply out.plan
    ```
    The `terraform apply` command displays the results of creating the resources defined in your configuration files.

1. In the Azure portal, select All resources in the left menu to see the resources created for your new Kubernetes cluster.
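If you prefer the CLI to the portal, a quick way to list what was created is to query the cluster's resource group (assuming the default name `azure-k8stest` from `variables.tf`):

```bash
# List every resource in the cluster's resource group.
az resource list --resource-group azure-k8stest --output table
```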
Recover from a Cloud Shell timeout
If the Cloud Shell session times out, you can do the following steps to recover:
1. Start a Cloud Shell session.

1. Change to the directory containing your Terraform configuration files:

    ```bash
    cd /clouddrive/terraform-aks-k8s
    ```

1. Run the following command:

    ```bash
    export KUBECONFIG=./azurek8s
    ```
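If the `./azurek8s` file doesn't exist yet (for example, the session timed out before you completed the next section), you can regenerate it from the Terraform state first, using the same command shown in the next section:

```bash
# Recreate the kubeconfig file from the kube_config Terraform output.
echo "$(terraform output kube_config)" > ./azurek8s
```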
Test the Kubernetes cluster
The Kubernetes tools can be used to verify the newly created cluster.
1. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read:

    ```bash
    echo "$(terraform output kube_config)" > ./azurek8s
    ```

1. Set an environment variable so that kubectl picks up the correct config:

    ```bash
    export KUBECONFIG=./azurek8s
    ```

1. Verify the health of the cluster:

    ```bash
    kubectl get nodes
    ```
    You should see the details of your worker nodes, and they should all have a status of Ready.
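As an additional spot check, you can list the pods in every namespace; the system pods that AKS manages should all reach the Running state:

```bash
# Show all pods across namespaces, including the kube-system pods
# that AKS manages for you.
kubectl get pods --all-namespaces
```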
Monitor health and logs
When the AKS cluster was created, monitoring was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. For more information on container health monitoring, see Monitor Azure Kubernetes Service health.
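You can also confirm from the CLI that the cluster finished provisioning. This is a sketch, assuming the default cluster and resource group names (`k8stest` and `azure-k8stest`) from `variables.tf`:

```bash
# Check that the cluster finished provisioning successfully;
# the expected output is "Succeeded".
az aks show --resource-group azure-k8stest --name k8stest \
    --query provisioningState --output tsv
```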
Troubleshooting
For Terraform-specific support, use one of HashiCorp's community support channels for Terraform:
- Questions, use-cases, and useful patterns: Terraform section of the HashiCorp community portal
- Provider-related questions: Terraform Providers section of the HashiCorp community portal