High performance compute VM sizes
The A8-A11 and H-series sizes are also known as compute-intensive instances. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses the Intel Xeon E5-2670 @ 2.6 GHz and the H-series uses the Intel Xeon E5-2667 v3 @ 3.2 GHz. This article provides information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for each size in this grouping.
Azure H-series virtual machines are the latest in high-performance computing VMs, aimed at high-end computational needs such as molecular modeling and computational fluid dynamics. These 8 and 16 vCPU VMs are built on Intel Xeon E5-2667 v3 (Haswell) processor technology, featuring DDR4 memory and SSD-based temporary storage.
In addition to the substantial CPU power, the H-series offers diverse options for low-latency RDMA networking using FDR InfiniBand and several memory configurations to support memory-intensive computational requirements.
| Size | vCPU | Memory: GiB | Temp storage (SSD): GiB | Max data disks | Max data disk throughput: IOPS | Max NICs |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_H8 | 8 | 56 | 1000 | 32 | 32 x 500 | 2 |
| Standard_H16 | 16 | 112 | 2000 | 64 | 64 x 500 | 4 |
| Standard_H8m | 8 | 112 | 1000 | 32 | 32 x 500 | 2 |
| Standard_H16m | 16 | 224 | 2000 | 64 | 64 x 500 | 4 |
| Standard_H16r 1 | 16 | 112 | 2000 | 64 | 64 x 500 | 4 |
| Standard_H16mr 1 | 16 | 224 | 2000 | 64 | 64 x 500 | 4 |
1 For MPI applications, a dedicated RDMA back-end network is enabled by an FDR InfiniBand network, which delivers ultra-low latency and high bandwidth.
A-series - compute-intensive instances
| Size | vCPU | Memory: GiB | Temp storage (HDD): GiB | Max data disks | Max data disk throughput: IOPS | Max NICs |
| --- | --- | --- | --- | --- | --- | --- |
1 For MPI applications, a dedicated RDMA back-end network is enabled by an FDR InfiniBand network, which delivers ultra-low latency and high bandwidth.
Size table definitions
- Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB (see the conversion sketch after this list).
- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
- To get the best performance for your VMs, limit the number of data disks to two per vCPU.
- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. Upper limits are not guaranteed, but are intended to provide guidance for selecting the right VM type for the intended application. Actual network performance will depend on a variety of factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see Optimizing network throughput for Windows and Linux. To achieve the expected network performance on Linux or Windows, it may be necessary to select a specific version or optimize your VM. For more information, see How to reliably test for virtual machine throughput.
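As a quick check of the GiB-to-GB conversion above, the following PowerShell sketch converts a capacity in GiB to GB (the variable names are illustrative):

```powershell
# Convert a capacity in GiB (1024^3 bytes) to GB (1000^3 bytes).
# Note: PowerShell's built-in 1GB multiplier is binary (1024^3 bytes),
# so it matches this article's definition of a GiB.
$capacityGiB = 1023
$capacityGB  = ($capacityGiB * 1GB) / [math]::Pow(1000, 3)
"{0} GiB = {1:N1} GB" -f $capacityGiB, $capacityGB   # -> 1023 GiB = 1,098.4 GB
```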
Deployment considerations
- Azure subscription – To deploy more than a few compute-intensive instances, consider a pay-as-you-go subscription or other purchase options. If you're using an Azure free account, you can use only a limited number of Azure compute cores.
- Pricing and availability – These VM sizes are offered only in the Standard pricing tier. Check Products available by region for availability in Azure regions.
- Cores quota – You might need to increase the cores quota in your Azure subscription from the default value. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the H-series. To request a quota increase, open an online customer support request at no charge. (Default limits may vary depending on your subscription category.) Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity guarantees; regardless of your quota, you are only charged for the cores that you use. To check your current usage against your quota, see the sketch after this list.
- Virtual network – An Azure virtual network is not required to use the compute-intensive instances. However, for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if you need to access on-premises resources. When needed, create a new virtual network to deploy the instances. Adding compute-intensive VMs to a virtual network in an affinity group is not supported.
- Resizing – Because of their specialized hardware, you can only resize compute-intensive instances within the same size family (H-series or compute-intensive A-series). For example, you can only resize an H-series VM from one H-series size to another. In addition, resizing from a non-compute-intensive size to a compute-intensive size is not supported.
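The following PowerShell sketch lists current core usage against quota limits in a region, using the same AzureRM module as the extension example later in this article; the region name and filter pattern are illustrative:

```powershell
# List compute usage and quota limits for a region (example region: West US)
Get-AzureRmVMUsage -Location "westus" |
    Where-Object { $_.Name.LocalizedValue -like "*core*" } |
    Format-Table @{Label="Quota"; Expression={$_.Name.LocalizedValue}}, CurrentValue, Limit
```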
A subset of the compute-intensive instances (H16r, H16mr, A8, and A9) feature a network interface for remote direct memory access (RDMA) connectivity. (Selected N-series sizes designated with 'r' such as NC24r are also RDMA-capable.) This interface is in addition to the standard Azure network interface available to other VM sizes.
This interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network, operating at FDR rates for H16r, H16mr, and RDMA-capable N-series virtual machines, and QDR rates for A8 and A9 virtual machines. These RDMA capabilities can boost the scalability and performance of certain Message Passing Interface (MPI) applications.
In Azure, IP over IB is not supported. Only RDMA over IB is supported.
Deploy the RDMA-capable HPC VMs in the same availability set or VM scale set (when you use the Azure Resource Manager deployment model) or the same cloud service (when you use the classic deployment model). A sketch of this pattern appears below, followed by additional requirements for the RDMA-capable HPC VMs to access the Azure RDMA network.
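As a minimal sketch of that pattern (the resource group, availability set, and VM names are placeholders, and other required VM configuration steps are omitted), you might create an availability set and reference it when building each RDMA-capable VM configuration:

```powershell
# Create an availability set to hold the RDMA-capable VMs (Resource Manager model)
$avSet = New-AzureRmAvailabilitySet `
    -ResourceGroupName "myResourceGroup" `
    -Name "myAvailabilitySet" `
    -Location "westus"

# Reference the availability set in each VM configuration so that all
# MPI nodes are placed on the same RDMA back-end network
$vmConfig = New-AzureRmVMConfig `
    -VMName "myVM" `
    -VMSize "Standard_H16r" `
    -AvailabilitySetId $avSet.Id
```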
Operating system - Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
MPI - Microsoft MPI (MS-MPI) 2012 R2 or later, Intel MPI Library 5.x
Supported MPI implementations use the Microsoft Network Direct interface to communicate between instances.
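For illustration only, launching an MPI application across two 16-core RDMA-capable nodes with MS-MPI's mpiexec might look like the following; the host names, rank counts, and application path are placeholders, and the exact options depend on your MS-MPI version:

```powershell
# Run 16 ranks on each of two hosts (32 ranks total) with MS-MPI
mpiexec -hosts 2 hpcnode01 16 hpcnode02 16 \\headnode\share\myMpiApp.exe
```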
RDMA network address space - The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network.
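For example, here is a sketch that creates a virtual network whose address space avoids the reserved range (the 10.0.0.0/16 prefix and the resource names are arbitrary choices for illustration):

```powershell
# Create a virtual network whose address space does not overlap
# 172.16.0.0/16, which Azure reserves for the RDMA back-end network
New-AzureRmVirtualNetwork `
    -ResourceGroupName "myResourceGroup" `
    -Name "myVNet" `
    -Location "westus" `
    -AddressPrefix "10.0.0.0/16"
```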
HpcVmDrivers VM extension - On RDMA-capable VMs, add the HpcVmDrivers extension to install Windows network device drivers for RDMA connectivity. (In certain deployments of A8 and A9 instances, the HpcVmDrivers extension is added automatically.) To add the VM extension to a VM, you can use Azure PowerShell cmdlets.
The following command installs the latest version 1.1 HpcVmDrivers extension on an existing RDMA-capable VM named myVM deployed in the resource group named myResourceGroup in the West US region:
```powershell
Set-AzureRmVMExtension -ResourceGroupName "myResourceGroup" -Location "westus" `
    -VMName "myVM" -ExtensionName "HpcVmDrivers" -Publisher "Microsoft.HpcCompute" `
    -Type "HpcVmDrivers" -TypeHandlerVersion "1.1"
```
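To confirm that the extension was added, you can query it afterward; this sketch reuses the same placeholder names:

```powershell
# Verify that the HpcVmDrivers extension is present and provisioned
Get-AzureRmVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" -Name "HpcVmDrivers" |
    Select-Object Name, TypeHandlerVersion, ProvisioningState
```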
Using HPC Pack
Microsoft HPC Pack, Microsoft’s free HPC cluster and job management solution, is one option for you to create a compute cluster in Azure to run Windows-based MPI applications and other HPC workloads. HPC Pack 2012 R2 and later versions include a runtime environment for MS-MPI that uses the Azure RDMA network when deployed on RDMA-capable VMs.
For checklists to use the compute-intensive instances with HPC Pack on Windows Server, see Set up a Windows RDMA cluster with HPC Pack to run MPI applications.
To use compute-intensive instances when running MPI applications with Azure Batch, see Use multi-instance tasks to run Message Passing Interface (MPI) applications in Azure Batch.
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.