Use Ansible to scale onboarding Amazon Web Services Amazon Elastic Compute Cloud instances to Azure Arc
This article provides guidance for using Ansible to scale onboarding Amazon Web Services (AWS) Amazon Elastic Compute Cloud (EC2) instances to Azure Arc.
This guide assumes a basic understanding of Ansible. A basic Ansible playbook and configuration are provided that use the amazon.aws.aws_ec2 plug-in to dynamically load the EC2 server inventory.
You can use this guide even if you don't already have an Ansible test environment: it includes a Terraform plan that creates a sample AWS EC2 server inventory composed of four Windows Server 2019 servers and four Ubuntu servers, along with a basic CentOS 7 Ansible control server with a simple configuration.
Warning
The provided Ansible sample playbook uses WinRM with password authentication over HTTP to configure Windows-based servers. This is not advisable for production environments. If you plan to use Ansible with Windows hosts in a production environment, use WinRM over HTTPS with a certificate.
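For reference, a hardened Windows host group might use connection variables like the following. This is a sketch using Ansible's standard WinRM connection variables, not an excerpt from this guide's sample playbook; it assumes a valid certificate is already installed on the Windows hosts.

```yaml
# Standard Ansible WinRM connection variables for HTTPS (sketch, not from
# this guide's sample configuration):
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_scheme: https
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: validate
```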
Prerequisites
Clone the Azure Arc Jumpstart repository.
git clone https://github.com/microsoft/azure_arc.git

Install or update Azure CLI to version 2.7 or later. Use the following command to check your currently installed version:

az --version

Generate an SSH key pair (or use an existing SSH key).
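If you need to generate a new key pair, a minimal sketch looks like the following. KEY_DIR defaults to a scratch directory here so the commands can be tried safely; for the real run, the Terraform plan later in this guide expects the keys in ~/.ssh.

```shell
# Generate an RSA key pair with no passphrase (sketch; use ~/.ssh for the
# real run so the Terraform plan can find id_rsa and id_rsa.pub).
KEY_DIR="${KEY_DIR:-$(mktemp -d)}"
ssh-keygen -t rsa -b 4096 -f "$KEY_DIR/id_rsa" -N "" -q
ls "$KEY_DIR"
```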
Create an Azure service principal.
To connect the AWS virtual machine to Azure Arc, an Azure service principal assigned the Contributor role is required. To create it, sign in to your Azure account and run the following command. You can also run this command in Azure Cloud Shell.
az login
az ad sp create-for-rbac -n "<Unique SP Name>" --role contributor

For example:

az ad sp create-for-rbac -n "http://AzureArcAWS" --role contributor

Output should look like this:

{
  "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "displayName": "AzureArcAWS",
  "name": "http://AzureArcAWS",
  "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

Note
We highly recommend that you scope the service principal to a specific Azure subscription and resource group.
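To scope the service principal, the az CLI's create-for-rbac command accepts a --scopes parameter. The sketch below builds the scope string from hypothetical subscription and resource group values; substitute your own, and note that the final command is echoed for review rather than executed.

```shell
# Hypothetical IDs - replace with your own subscription ID and resource group.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="arc-aws-demo"
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}"
# Echoed rather than run, so the command can be inspected before use.
echo az ad sp create-for-rbac -n "http://AzureArcAWS" --role contributor --scopes "$SCOPE"
```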
Create an AWS identity
In order for Terraform to create resources in AWS, we'll need to create a new AWS IAM user with appropriate permissions and configure Terraform to use it.
Sign in to the AWS management console
After signing in, select the Services dropdown list in the top left. Under Security, Identity, and Compliance, select IAM to access the Identity and Access Management page.


Select Users from the left menu, and then select Add user to create a new IAM user.

On the Add user page, name the user Terraform, select the Programmatic access checkbox, and then select Next.

On the next page, Set permissions, select Attach existing policies directly, select the box next to AmazonEC2FullAccess as shown in the screenshot, and then select Next.

On the Tags page, assign a tag with a key of azure-arc-demo and select Next to proceed to the Review page.
Verify that everything is correct and select Create user.

After the user is created, you will see the user's access key ID and secret access key. Copy these values before selecting Close; the following screenshot shows an example. Once you have these keys, you can use them with Terraform to create AWS resources.

Option 1: Create a sample AWS server inventory and Ansible control server using Terraform and onboard the servers to Azure Arc
Note
If you already have an existing AWS server inventory and Ansible server, skip to option 2.
Configure Terraform
Before executing the Terraform plan, you must export the environment variables which will be used by the plan. These variables are based on your Azure subscription and tenant, the Azure service principal, and the AWS IAM user and keys you just created.
Retrieve your Azure subscription ID and tenant ID using the az account list command.

The Terraform plan creates resources in both Microsoft Azure and AWS. It then executes a script on an AWS EC2 virtual machine to install Ansible and all necessary artifacts. The plan requires certain information about your AWS and Azure environments, which it accesses using environment variables. Edit scripts/vars.sh and update each of the variables with the appropriate values:

TF_VAR_subscription_id = your Azure subscription ID
TF_VAR_client_id = your Azure service principal application ID
TF_VAR_client_secret = your Azure service principal password
TF_VAR_tenant_id = your Azure tenant ID
AWS_ACCESS_KEY_ID = AWS access key
AWS_SECRET_ACCESS_KEY = AWS secret key
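A filled-in scripts/vars.sh might look like the sketch below, with placeholder values standing in for the IDs and keys you gathered in the previous steps. Here the file is written to a temporary path purely for illustration; in the real flow you edit the scripts/vars.sh file in the repository.

```shell
# Sketch of a filled-in vars.sh (placeholder values - substitute your own).
VARS_FILE="$(mktemp)"
cat > "$VARS_FILE" <<'EOF'
export TF_VAR_subscription_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_client_id="11111111-1111-1111-1111-111111111111"
export TF_VAR_client_secret="<service principal password>"
export TF_VAR_tenant_id="22222222-2222-2222-2222-222222222222"
export AWS_ACCESS_KEY_ID="<AWS access key>"
export AWS_SECRET_ACCESS_KEY="<AWS secret key>"
EOF
# Load the variables into the current shell, as the source step below does.
. "$VARS_FILE"
```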
From your shell, navigate to the azure_arc_servers_jumpstart/aws/scaled_deployment/ansible/terraform directory of the cloned repository.

Export the environment variables you edited by running scripts/vars.sh with the source command as shown below. Terraform requires these to be set for the plan to execute properly.

source ./scripts/vars.sh

Make sure your SSH keys are available in ~/.ssh and named id_rsa.pub and id_rsa. If you followed the SSH keygen guide above to create your key, this should already be set up correctly. If not, you may need to modify aws_infra.tf to use a key with a different path.

Run the terraform init command, which downloads the required Terraform providers.
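Before running the plan, it can save a failed apply to verify everything is in place. The following pre-flight check is not part of the official guide; it is a small sketch that confirms the SSH key pair exists and that all six environment variables from vars.sh are set.

```shell
# Optional pre-flight check (sketch, not from the guide): verify the SSH key
# pair and required environment variables before terraform init/apply.
check_ready() {
  dir="$1"
  missing=0
  if [ ! -f "$dir/id_rsa" ] || [ ! -f "$dir/id_rsa.pub" ]; then
    echo "SSH keys missing in $dir"
    missing=1
  fi
  for v in TF_VAR_subscription_id TF_VAR_client_id TF_VAR_client_secret \
           TF_VAR_tenant_id AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "Environment variable $v is not set"
      missing=1
    fi
  done
  return "$missing"
}
```

Run it as `check_ready ~/.ssh` after sourcing scripts/vars.sh; it prints whatever is missing and returns nonzero if anything is.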
Deploy server infrastructure
From the azure_arc_servers_jumpstart/aws/scaled_deployment/ansible/terraform directory, run terraform apply --auto-approve and wait for the plan to finish. Upon successful completion, you will have four Windows Server 2019 servers, four Ubuntu servers, and one CentOS 7 Ansible control server.

Open the AWS console and verify that you can see the created servers.

Run the Ansible playbook to onboard the AWS EC2 instances as Azure Arc-enabled servers
When the Terraform plan completes, it displays the public IP of the Ansible control server in an output variable named ansible_ip. SSH into the Ansible server by running ssh centos@xx.xx.xx.xx, where xx.xx.xx.xx is your Ansible server's IP address.
Change to the ansible directory by running cd ansible. This folder contains the sample Ansible configuration and the playbook we will use to onboard the servers to Azure Arc.
The aws_ec2 Ansible plug-in requires AWS credentials to dynamically read your AWS server inventory. We will export these as environment variables. Run the following commands, replacing the values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the AWS credentials you created earlier.

export AWS_ACCESS_KEY_ID="XXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXX"

Replace the placeholder values for Azure tenant ID and subscription ID in the group-vars/all.yml file with the appropriate values for your environment.
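A filled-in group-vars/all.yml might look roughly like the following. The variable names shown here (azure_tenant_id, azure_subscription_id) are hypothetical placeholders; keep whichever names the repository's provided file already uses and replace only the values.

```yaml
# Sketch only - the actual variable names come from the repository's
# group-vars/all.yml; these two names are hypothetical.
azure_tenant_id: "22222222-2222-2222-2222-222222222222"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
```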
Run the Ansible playbook by executing the following command, substituting your Azure service principal ID and service principal secret.
ansible-playbook arc_agent.yml -i ansible_plugins/inventory-uswest2-aws_ec2.yml --extra-vars '{"service_principal_id": "XXXXXXX-XXXXX-XXXXXXX", "service_principal_secret": "XXXXXXXXXXXXXXXXXXXXXXXX"}'

If the playbook run is successful, you should see output similar to the following screenshot.
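If you prefer not to type the secret inline, one option (a sketch, not part of the guide) is to build the --extra-vars JSON from shell variables. The final command is echoed here for review; remove the echo to run it.

```shell
# Hypothetical values - substitute your service principal's appId and password.
SP_ID="XXXXXXX-XXXXX-XXXXXXX"
SP_SECRET="XXXXXXXXXXXXXXXXXXXXXXXX"
EXTRA_VARS="{\"service_principal_id\": \"${SP_ID}\", \"service_principal_secret\": \"${SP_SECRET}\"}"
echo ansible-playbook arc_agent.yml -i ansible_plugins/inventory-uswest2-aws_ec2.yml --extra-vars "$EXTRA_VARS"
```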

Open the Azure portal and navigate to the arc-aws-demo resource group. You should see the Azure Arc-enabled servers listed.
Clean up environment by deleting resources
To delete all the resources you created as part of this demo, use the terraform destroy --auto-approve command as shown.

Option 2: Onboard an existing AWS server inventory to Azure Arc using your own Ansible control server
Note
If you do not have an existing AWS server inventory and Ansible server, navigate back to option 1.
Review provided Ansible configuration and playbook
Navigate to the ansible_config directory and review the provided configuration. It contains a basic ansible.cfg file. This file enables the amazon.aws.aws_ec2 Ansible plug-in, which dynamically loads your server inventory by using an AWS IAM role. Ensure that the IAM role you are using has sufficient privileges to access the inventory you wish to onboard.
The file inventory-uswest2-aws_ec2.yml configures the aws_ec2 plug-in to pull inventory from the uswest2 region and to group assets by applied tags. Adjust this file as needed to support onboarding your server inventory, such as changing the region or adjusting the groups or filters.

The files in ./ansible-config/group-vars should be adjusted to provide the credentials you wish to use to onboard the various Ansible host groups.

When you have adjusted the provided configuration to support your environment, run the Ansible playbook by executing the following command, substituting your Azure service principal ID and service principal secret.

ansible-playbook arc_agent.yml -i ansible_plugins/inventory-uswest2-aws_ec2.yml --extra-vars '{"service_principal_id": "XXXXXXX-XXXXX-XXXXXXX", "service_principal_secret": "XXXXXXXXXXXXXXXXXXXXXXXX"}'

As earlier, if the playbook run is successful, you should see output similar to the following screenshot.
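For orientation, an aws_ec2 inventory file along the lines of the one described above might look like this. This is a sketch using the plug-in's standard options (plugin, regions, keyed_groups), not the exact contents of the repository's file; adjust the region and grouping for your environment.

```yaml
# Sketch of an aws_ec2 dynamic inventory file (standard plug-in options;
# see the repository's inventory-uswest2-aws_ec2.yml for the real one).
plugin: amazon.aws.aws_ec2
regions:
  - us-west-2
keyed_groups:
  # Group hosts by their applied EC2 tags, e.g. tag_azure_arc_demo.
  - key: tags
    prefix: tag
```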

As earlier, open the Azure portal and navigate to the arc-aws-demo resource group. You should see the Azure Arc-enabled servers listed.