High performance compute VM sizes

The A8-A11 and H-series sizes are also known as compute-intensive instances. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. The A8-A11 series uses Intel Xeon E5-2670 @ 2.6 GHz, and the H-series uses Intel Xeon E5-2667 v3 @ 3.2 GHz.

Azure H-series virtual machines are the next-generation high-performance computing VMs aimed at high-end computational needs, such as molecular modeling and computational fluid dynamics. These 8- and 16-vCPU VMs are built on Intel Haswell E5-2667 v3 processor technology, featuring DDR4 memory and SSD-based temporary storage.

In addition to substantial CPU power, the H-series offers diverse options for low-latency RDMA networking over FDR InfiniBand, as well as several memory configurations to support memory-intensive computational requirements.

H-series

ACU: 290-300

Size             | vCPU | Memory: GiB | Temp storage (SSD): GiB | Max data disks | Max disk throughput: IOPS | Max NICs
Standard_H8      | 8    | 56          | 1000                    | 16             | 16 x 500                  | 2
Standard_H16     | 16   | 112         | 2000                    | 32             | 32 x 500                  | 4
Standard_H8m     | 8    | 112         | 1000                    | 16             | 16 x 500                  | 2
Standard_H16m    | 16   | 224         | 2000                    | 32             | 32 x 500                  | 4
Standard_H16r*   | 16   | 112         | 2000                    | 32             | 32 x 500                  | 4
Standard_H16mr*  | 16   | 224         | 2000                    | 32             | 32 x 500                  | 4

*For MPI applications, a dedicated RDMA back-end network is enabled by the FDR InfiniBand network, which delivers ultra-low latency and high bandwidth.


A-series - compute-intensive instances

ACU: 225

Size          | vCPU | Memory: GiB | Temp storage (HDD): GiB | Max data disks | Max data disk throughput: IOPS | Max NICs
Standard_A8*  | 8    | 56          | 382                     | 16             | 16 x 500                       | 2
Standard_A9*  | 16   | 112         | 382                     | 16             | 16 x 500                       | 4
Standard_A10  | 8    | 56          | 382                     | 16             | 16 x 500                       | 2
Standard_A11  | 16   | 112         | 382                     | 16             | 16 x 500                       | 4

*For MPI applications, a dedicated RDMA back-end network is enabled by the QDR InfiniBand network, which delivers ultra-low latency and high bandwidth.


Size table definitions

  • Storage capacity is shown in units of GiB or 1024^3 bytes. When comparing disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB. (A quick conversion sketch appears after this list.)
  • Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
  • Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
  • Expected network performance is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. Upper limits are not guaranteed, but are intended to provide guidance for selecting the right VM type for the intended application. Actual network performance will depend on a variety of factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see Optimizing network throughput for Windows and Linux. To achieve the expected network performance on Linux or Windows, it may be necessary to select a specific version or optimize your VM. For more information, see How to reliably test for virtual machine throughput.

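As a quick illustration of the storage-capacity conversion in the first bullet above, the following PowerShell sketch computes the GB equivalent of a figure given in GiB (the 1023 GiB value is just the example from that bullet):

# Convert a capacity reported in GiB (1024^3 bytes) to GB (1000^3 bytes)
$gib = 1023
$gb = $gib * [math]::Pow(1024, 3) / [math]::Pow(1000, 3)
$gb   # approximately 1098.4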

Deployment considerations

  • Azure subscription – To deploy more than a few compute-intensive instances, consider a pay-as-you-go subscription or other purchase options. If you're using an Azure free account, you can use only a limited number of Azure compute cores.

  • Pricing and availability - These VM sizes are offered only in the Standard pricing tier. Check Products available by region for availability in Azure regions.

  • Cores quota – You might need to increase the cores quota in your Azure subscription from the default value. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the H-series. To request a quota increase, open an online customer support request at no charge. (Default limits may vary depending on your subscription category.) You can check your current core usage against the quota with Azure PowerShell, as shown in the sketch after this list.

    Note

    Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity guarantees. Regardless of your quota, you are only charged for cores that you use.

  • Virtual network – An Azure virtual network is not required to use the compute-intensive instances. However, for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if you need to access on-premises resources. When needed, create a new virtual network to deploy the instances. Adding compute-intensive VMs to a virtual network in an affinity group is not supported.
  • Resizing – Because of their specialized hardware, you can only resize compute-intensive instances within the same size family (H-series or compute-intensive A-series). For example, you can resize an H-series VM only from one H-series size to another. In addition, resizing from a non-compute-intensive size to a compute-intensive size is not supported. A resize example also appears in the sketch after this list.
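The following Azure PowerShell sketches illustrate the quota check and the resize operation mentioned above. The resource group name, VM name, region, and target size are placeholders, and the exact quota names returned by the usage cmdlet can vary by subscription; treat this as a sketch rather than a definitive procedure.

# List current core usage and limits in a region before requesting a quota increase
Get-AzureRmVMUsage -Location "westus" | Where-Object { $_.Name.Value -eq "cores" -or $_.Name.Value -like "*Family*" }

# Resize an existing H-series VM to another size within the same family
$vm = Get-AzureRmVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$vm.HardwareProfile.VmSize = "Standard_H16"
Update-AzureRmVM -ResourceGroupName "myResourceGroup" -VM $vm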

RDMA-capable instances

A subset of the compute-intensive instances (H16r, H16mr, A8, and A9) feature a network interface for remote direct memory access (RDMA) connectivity. This interface is in addition to the standard Azure network interface available to other VM sizes.

This interface allows the RDMA-capable instances to communicate over an InfiniBand network, operating at FDR rates for H16r and H16mr virtual machines, and QDR rates for A8 and A9 virtual machines. These RDMA capabilities can boost the scalability and performance of Message Passing Interface (MPI) applications.

Following are requirements for RDMA-capable Windows VMs to access the Azure RDMA network:

  • Operating system

    Windows Server 2012 R2, Windows Server 2012

    Note

    Currently, Windows Server 2016 does not support RDMA connectivity in Azure.

  • Availability set or cloud service – Deploy the RDMA-capable VMs in the same availability set (when you use the Azure Resource Manager deployment model) or the same cloud service (when you use the classic deployment model). If you use Azure Batch, the RDMA-capable VMs must be in the same pool.

  • MPI - Microsoft MPI (MS-MPI) 2012 R2 or later, Intel MPI Library 5.x

    Supported MPI implementations use the Microsoft Network Direct interface to communicate between instances.

  • RDMA network address space - The RDMA network in Azure reserves the address space 172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make sure that the virtual network address space does not overlap the RDMA network. A sketch of a non-overlapping virtual network appears after the extension example below.

  • HpcVmDrivers VM extension - On RDMA-capable VMs, you must add the HpcVmDrivers extension to install Windows network device drivers for RDMA connectivity. (In certain deployments of A8 and A9 instances, the HpcVmDrivers extension is added automatically.) To add the VM extension to a VM, you can use Azure PowerShell cmdlets.

The following command installs the latest version 1.1 of the HpcVmDrivers extension on an existing RDMA-capable VM named myVM, deployed in the resource group named myResourceGroup in the West US region:

Set-AzureRmVMExtension -ResourceGroupName "myResourceGroup" -Location "westus" `
  -VMName "myVM" -ExtensionName "HpcVmDrivers" -Publisher "Microsoft.HpcCompute" `
  -Type "HpcVmDrivers" -TypeHandlerVersion "1.1"

For more information, see Virtual machine extensions and features. You can also work with extensions for VMs deployed in the classic deployment model.
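As an illustration of the virtual network and availability set requirements above, the following hedged sketch creates a resource group, a virtual network whose 10.0.0.0/16 address space does not overlap the 172.16.0.0/16 range reserved for the RDMA network, and an availability set in which the RDMA-capable VMs can be deployed. All names, address ranges, and the region are example values, not requirements.

# Resource group for the HPC deployment (name and region are examples)
New-AzureRmResourceGroup -Name "myResourceGroup" -Location "westus"

# Virtual network whose address space avoids the reserved RDMA range (172.16.0.0/16)
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "compute" -AddressPrefix "10.0.0.0/24"
New-AzureRmVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup" `
  -Location "westus" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

# Availability set in which all RDMA-capable VMs for an MPI job should be placed
New-AzureRmAvailabilitySet -ResourceGroupName "myResourceGroup" -Name "myAvailabilitySet" `
  -Location "westus" -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 2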

Using HPC Pack

Microsoft HPC Pack, Microsoft’s free HPC cluster and job management solution, is one option for you to create a compute cluster in Azure to run Windows-based MPI applications and other HPC workloads. HPC Pack 2012 R2 and later versions include a runtime environment for MS-MPI that uses the Azure RDMA network when deployed on RDMA-capable VMs.

Other sizes

Next steps