Create a VM cluster with Terraform and HCL

This tutorial demonstrates how to create a small compute cluster using the HashiCorp Configuration Language (HCL). The configuration creates a load balancer, two Linux VMs in an availability set, and all necessary networking resources.

In this tutorial, you:

  • Set up Azure authentication
  • Create a Terraform configuration file
  • Initialize Terraform
  • Create a Terraform execution plan
  • Apply the Terraform execution plan

1. Set up Azure authentication

Note

If you use Terraform environment variables, or if you run this tutorial in the Azure Cloud Shell, skip this section.

In this section, you generate an Azure service principal and two Terraform configuration files containing the credentials from the service principal.

  1. Set up an Azure AD service principal to enable Terraform to provision resources into Azure. When creating the principal, make note of the values for the subscription ID, tenant, appId, and password.
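
     The service principal step above can be sketched with the Azure CLI. The subscription ID is a placeholder you replace with your own, and the scope shown here is one common choice, not the only one:

     ```shell
     # Select the subscription Terraform should provision into.
     az account set --subscription="<azure-subscription-id>"

     # Create a service principal with the Contributor role on that subscription.
     # The command prints appId, password, and tenant; record these values for
     # the Terraform variable files created in the following steps.
     az ad sp create-for-rbac --role="Contributor" \
       --scopes="/subscriptions/<azure-subscription-id>"
     ```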

  2. Open a command prompt.

  3. Create an empty directory in which to store your Terraform files.

  4. Create a new file that holds your variable declarations. You can give this file any name you like, with a .tf extension.

  5. Copy the following code into your variable declaration file:

    variable subscription_id {}
    variable tenant_id {}
    variable client_id {}
    variable client_secret {}
    
    provider "azurerm" {
       subscription_id = "${var.subscription_id}"
       tenant_id = "${var.tenant_id}"
       client_id = "${var.client_id}"
       client_secret = "${var.client_secret}"
    }
    
  6. Create a new file that contains the values for your Terraform variables. It is common to name the Terraform variable file terraform.tfvars, because Terraform automatically loads any file named terraform.tfvars (or matching the pattern *.auto.tfvars) if it is present in the current directory.

  7. Copy the following code into your variables file, replacing the placeholders as follows: for subscription_id, use the Azure subscription ID you specified when running az account set; for tenant_id, use the tenant value returned by az ad sp create-for-rbac; for client_id, use the appId value returned by az ad sp create-for-rbac; for client_secret, use the password value returned by az ad sp create-for-rbac.

    subscription_id = "<azure-subscription-id>"
    tenant_id = "<tenant-returned-from-creating-a-service-principal>"
    client_id = "<appId-returned-from-creating-a-service-principal>"
    client_secret = "<password-returned-from-creating-a-service-principal>"
    

2. Create a Terraform configuration file

In this section, you create a file that contains the resource definitions for your infrastructure.

  1. Create a new file named main.tf.

  2. Copy the following sample resource definitions into the newly created main.tf file:

    resource "azurerm_resource_group" "test" {
     name     = "acctestrg"
     location = "West US 2"
    }
    
    resource "azurerm_virtual_network" "test" {
     name                = "acctvn"
     address_space       = ["10.0.0.0/16"]
     location            = "${azurerm_resource_group.test.location}"
     resource_group_name = "${azurerm_resource_group.test.name}"
    }
    
    resource "azurerm_subnet" "test" {
     name                 = "acctsub"
     resource_group_name  = "${azurerm_resource_group.test.name}"
     virtual_network_name = "${azurerm_virtual_network.test.name}"
     address_prefix       = "10.0.2.0/24"
    }
    
    resource "azurerm_public_ip" "test" {
     name                         = "publicIPForLB"
     location                     = "${azurerm_resource_group.test.location}"
     resource_group_name          = "${azurerm_resource_group.test.name}"
     public_ip_address_allocation = "static"
    }
    
    resource "azurerm_lb" "test" {
     name                = "loadBalancer"
     location            = "${azurerm_resource_group.test.location}"
     resource_group_name = "${azurerm_resource_group.test.name}"
    
     frontend_ip_configuration {
       name                 = "publicIPAddress"
       public_ip_address_id = "${azurerm_public_ip.test.id}"
     }
    }
    
    resource "azurerm_lb_backend_address_pool" "test" {
     resource_group_name = "${azurerm_resource_group.test.name}"
     loadbalancer_id     = "${azurerm_lb.test.id}"
     name                = "BackEndAddressPool"
    }
    
    resource "azurerm_network_interface" "test" {
     count               = 2
     name                = "acctni${count.index}"
     location            = "${azurerm_resource_group.test.location}"
     resource_group_name = "${azurerm_resource_group.test.name}"
    
     ip_configuration {
       name                          = "testConfiguration"
       subnet_id                     = "${azurerm_subnet.test.id}"
       private_ip_address_allocation = "dynamic"
       load_balancer_backend_address_pools_ids = ["${azurerm_lb_backend_address_pool.test.id}"]
     }
    }
    
    resource "azurerm_managed_disk" "test" {
     count                = 2
     name                 = "datadisk_existing_${count.index}"
     location             = "${azurerm_resource_group.test.location}"
     resource_group_name  = "${azurerm_resource_group.test.name}"
     storage_account_type = "Standard_LRS"
     create_option        = "Empty"
     disk_size_gb         = "1023"
    }
    
    resource "azurerm_availability_set" "avset" {
     name                         = "avset"
     location                     = "${azurerm_resource_group.test.location}"
     resource_group_name          = "${azurerm_resource_group.test.name}"
     platform_fault_domain_count  = 2
     platform_update_domain_count = 2
     managed                      = true
    }
    
    resource "azurerm_virtual_machine" "test" {
     count                 = 2
     name                  = "acctvm${count.index}"
     location              = "${azurerm_resource_group.test.location}"
     availability_set_id   = "${azurerm_availability_set.avset.id}"
     resource_group_name   = "${azurerm_resource_group.test.name}"
     network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
     vm_size               = "Standard_DS1_v2"
    
     # Uncomment this line to delete the OS disk automatically when deleting the VM
     # delete_os_disk_on_termination = true
    
     # Uncomment this line to delete the data disks automatically when deleting the VM
     # delete_data_disks_on_termination = true
    
     storage_image_reference {
       publisher = "Canonical"
       offer     = "UbuntuServer"
       sku       = "16.04-LTS"
       version   = "latest"
     }
    
     storage_os_disk {
       name              = "myosdisk${count.index}"
       caching           = "ReadWrite"
       create_option     = "FromImage"
       managed_disk_type = "Standard_LRS"
     }
    
     # Optional data disks
     storage_data_disk {
       name              = "datadisk_new_${count.index}"
       managed_disk_type = "Standard_LRS"
       create_option     = "Empty"
       lun               = 0
       disk_size_gb      = "1023"
     }
    
     storage_data_disk {
       name            = "${element(azurerm_managed_disk.test.*.name, count.index)}"
       managed_disk_id = "${element(azurerm_managed_disk.test.*.id, count.index)}"
       create_option   = "Attach"
       lun             = 1
       disk_size_gb    = "${element(azurerm_managed_disk.test.*.disk_size_gb, count.index)}"
     }
    
     os_profile {
       computer_name  = "hostname"
       admin_username = "testadmin"
       admin_password = "Password1234!"
     }
    
     os_profile_linux_config {
       disable_password_authentication = false
     }
    
     tags {
       environment = "staging"
     }
    }
    
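After the cluster is applied, the load balancer's address is the natural entry point. As a sketch, you could add an output block to main.tf to surface it; this assumes the resource names and interpolation syntax used in the configuration above:

```hcl
# Expose the load balancer's public IP after `terraform apply`.
output "lb_public_ip" {
  value = "${azurerm_public_ip.test.ip_address}"
}
```

Terraform prints output values at the end of an apply, and you can retrieve them later with the terraform output command.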

3. Initialize Terraform

The terraform init command initializes a directory that contains Terraform configuration files - the files you created in the preceding sections. It is a good practice to always run terraform init after writing a new Terraform configuration.

Tip

The terraform init command is idempotent, meaning it can be called repeatedly and produce the same result. Therefore, if you are working in a collaborative environment and you think the configuration files might have been changed, it is a good idea to call terraform init before executing or applying a plan.
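In collaborative environments, terraform init is also where a remote backend is typically configured, so that state is shared rather than kept in a local file. A minimal sketch using the azurerm backend, assuming a pre-existing Azure storage account - all names here are hypothetical placeholders:

```hcl
# Hypothetical remote-state configuration; run `terraform init` after adding it.
terraform {
  backend "azurerm" {
    storage_account_name = "tfstateaccount"            # hypothetical account
    container_name       = "tfstate"                   # hypothetical container
    key                  = "cluster.terraform.tfstate" # state blob name
  }
}
```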

To initialize Terraform, run the following command:

terraform init


4. Create a Terraform execution plan

The terraform plan command creates an execution plan. To generate the plan, Terraform aggregates all the .tf files in the current directory.

If you are working in a collaborative environment where the configuration might change between the time you create the execution plan and the time you apply it, use the terraform plan command's -out parameter to save the execution plan to a file. If you are working alone, you can omit the -out parameter.

If your Terraform variables file is not named terraform.tfvars and does not follow the *.auto.tfvars pattern, specify its name with the terraform plan command's -var-file parameter.
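For example, if your variable values were saved in a file named cluster.tfvars (a hypothetical name), the plan command would look like this:

```shell
# Point Terraform at a variables file it would not load automatically.
terraform plan -var-file=cluster.tfvars
```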

When processing the terraform plan command, Terraform performs a refresh and then determines what actions are necessary to achieve the desired state specified in your configuration files.

If you do not need to save your execution plan, run the following command:

terraform plan

If you need to save your execution plan, run the following command (replacing the <path> placeholder with the desired output path):

terraform plan -out=<path>


5. Apply the Terraform execution plan

The final step of this tutorial is to use the terraform apply command to apply the set of actions generated by the terraform plan command.

To apply the latest execution plan, run the following command:

terraform apply

To apply a previously saved execution plan, run the following command (replacing the <path> placeholder with the path to the saved execution plan):

terraform apply <path>


Next steps