NCv3-series

NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs, which can provide 1.5x the computational performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, and Monte Carlo simulations. The NC24rs_v3 configuration provides a low-latency, high-throughput network interface optimized for tightly coupled parallel computing workloads. In addition to the GPUs, the NCv3-series VMs are powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.

Premium Storage: Supported

Premium Storage caching: Supported

Live Migration: Not Supported

Memory Preserving Updates: Not Supported

Important

For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota increase for this series in an available region.
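Before filing the request, you can check the current usage and limit for the NCSv3 vCPU family in a region. Below is a minimal sketch using the Python azure-identity and azure-mgmt-compute packages; the subscription ID and region are placeholders.

```python
# Sketch: list vCPU quota usage for a region before requesting an increase.
# Assumes `pip install azure-identity azure-mgmt-compute` and a signed-in
# credential (Azure CLI, managed identity, etc.). The subscription ID and
# region below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
region = "eastus"  # pick a region where NCv3 is available

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Each Usage entry reports the current value and limit for one quota bucket;
# the NCv3 sizes draw from the Standard NCSv3 Family vCPU quota.
for usage in client.usage.list(region):
    if "NCSv3" in usage.name.value:
        print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```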

| Size | vCPU | Memory: GiB | Temp storage (SSD): GiB | GPU | GPU memory: GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_NC6s_v3 | 6 | 112 | 736 | 1 | 16 | 12 | 20000/200 | 4 |
| Standard_NC12s_v3 | 12 | 224 | 1474 | 2 | 32 | 24 | 40000/400 | 8 |
| Standard_NC24s_v3 | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |
| Standard_NC24rs_v3* | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |

1 GPU = one V100 card.

*RDMA capable

Size table definitions

  • Storage capacity is shown in units of GiB, or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB (see the conversion sketch after this list).

  • Disk throughput is measured in input/output operations per second (IOPS) and MBps, where MBps = 10^6 bytes/sec.

  • Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None. A scripted example of changing the cache mode appears after this list.

  • To get the best performance for your VMs, limit the number of data disks to two per vCPU.

  • Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see Virtual machine network bandwidth.

    Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance depends on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see Optimize network throughput for Azure virtual machines. To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see Bandwidth/Throughput testing (NTTTCP).
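The GiB-to-GB comparison in the first bullet is plain arithmetic; a minimal conversion sketch in Python:

```python
# Convert a capacity in GiB (1024^3 bytes) to GB (1000^3 bytes).
def gib_to_gb(gib: float) -> float:
    return gib * 1024**3 / 1000**3

print(round(gib_to_gb(1023), 1))  # 1098.4, matching the example above
```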
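For the cached and uncached modes described above, the host cache setting lives on each data disk of the VM model. The following sketch switches a disk to ReadOnly host caching with the azure-mgmt-compute SDK; the resource group, VM name, and subscription ID are placeholders, and the attribute names assume a recent SDK release.

```python
# Sketch: set the host cache mode of a VM's first data disk to ReadOnly.
# Resource names and subscription ID are placeholders; assumes
# azure-identity and azure-mgmt-compute are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import CachingTypes

client = ComputeManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

vm = client.virtual_machines.get("my-resource-group", "my-nc6s-v3-vm")

# Pick CachingTypes.NONE for uncached operation, READ_ONLY or READ_WRITE
# for cached operation. Assumes the VM has at least one data disk attached.
vm.storage_profile.data_disks[0].caching = CachingTypes.READ_ONLY

# Apply the modified model back to the VM and wait for completion.
client.virtual_machines.begin_create_or_update(
    "my-resource-group", "my-nc6s-v3-vm", vm
).result()
```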

Supported operating systems and drivers

To take advantage of the GPU capabilities of Azure N-series VMs, NVIDIA GPU drivers must be installed.

The NVIDIA GPU Driver Extension installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager templates. See the NVIDIA GPU Driver Extension documentation for supported operating systems and deployment steps. For general information about VM extensions, see Azure virtual machine extensions and features.
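As one scripted alternative to the portal, the sketch below installs the extension on an existing Linux VM through the azure-mgmt-compute SDK. The publisher and extension type come from the extension documentation; the resource names, region, and type handler version are placeholder assumptions, and the model attribute names follow recent SDK releases.

```python
# Sketch: install the NVIDIA GPU Driver Extension on an existing Linux
# N-series VM. Resource names are placeholders; the type handler version
# is illustrative -- check the extension docs for the current one.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineExtension

client = ComputeManagementClient(
    DefaultAzureCredential(), "00000000-0000-0000-0000-000000000000"
)

extension = VirtualMachineExtension(
    location="eastus",                            # must match the VM's region
    publisher="Microsoft.HpcCompute",
    type_properties_type="NvidiaGpuDriverLinux",  # NvidiaGpuDriverWindows on Windows
    type_handler_version="1.6",                   # illustrative; use the current version
    auto_upgrade_minor_version=True,
)

poller = client.virtual_machine_extensions.begin_create_or_update(
    "my-resource-group", "my-nc6s-v3-vm", "NvidiaGpuDriverLinux", extension
)
poller.result()  # block until the driver installation completes
```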

If you choose to install NVIDIA GPU drivers manually, see N-series GPU driver setup for Windows or N-series GPU driver setup for Linux for supported operating systems, drivers, installation, and verification steps.
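Whichever installation path you take, you can sanity-check the result from Python through NVIDIA's NVML bindings. This sketch assumes the nvidia-ml-py package (imported as pynvml) is installed on the VM; the official verification steps, such as running nvidia-smi, are covered in the setup articles linked above.

```python
# Sketch: verify the NVIDIA driver sees the expected GPUs.
# Assumes `pip install nvidia-ml-py` (imported as pynvml); run on the VM
# after driver installation.
import pynvml

pynvml.nvmlInit()

def _s(value):
    # Older pynvml releases return bytes rather than str.
    return value.decode() if isinstance(value, bytes) else value

print("Driver:", _s(pynvml.nvmlSystemGetDriverVersion()))

count = pynvml.nvmlDeviceGetCount()
for i in range(count):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    # Expect a Tesla V100 on NCv3 sizes.
    print(f"GPU {i}: {_s(pynvml.nvmlDeviceGetName(handle))}")

pynvml.nvmlShutdown()
```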

Other sizes

Next steps

Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.