Graphics processing unit (GPU) virtual machine (VM) on Azure Stack Hub

Applies to: Azure Stack Hub integrated systems

This article describes which graphics processing unit (GPU) models are supported on an Azure Stack Hub integrated system. You can also find instructions on installing the drivers used with the GPUs. GPU support in Azure Stack Hub enables solutions such as artificial intelligence training, inference, and data visualization. The AMD Radeon Instinct MI25 can be used to support graphics-intensive applications such as Autodesk AutoCAD.

You can choose from three GPU models: the NVIDIA V100, the NVIDIA T4, and the AMD MI25. These physical GPUs align with the following Azure N-series virtual machine (VM) types:


GPU VMs are not supported in this release. You will need to upgrade to Azure Stack Hub 2005 or later. In addition, your Azure Stack Hub hardware must have physical GPUs.


NCv3

NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others.

| Size | vCPU | Memory (GiB) | Temp storage (SSD) (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_NC6s_v3 | 6 | 112 | 736 | 1 | 16 | 12 | 4 |
| Standard_NC12s_v3 | 12 | 224 | 1474 | 2 | 32 | 24 | 8 |
| Standard_NC24s_v3 | 24 | 448 | 2948 | 4 | 64 | 32 | 8 |


NVv4

The NVv4-series virtual machines are powered by AMD Radeon Instinct MI25 GPUs. With the NVv4 series, Azure Stack Hub introduces virtual machines with partial GPUs. These sizes can be used for GPU-accelerated graphics applications and virtual desktops. NVv4 virtual machines currently support only the Windows guest operating system.

| Size | vCPU | Memory (GiB) | Temp storage (SSD) (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_NV4as_v4 | 4 | 14 | 88 | 1/8 | 2 | 4 | 2 |
| Standard_NV8as_v4 | 8 | 28 | 176 | 1/4 | 4 | 8 | 4 |
| Standard_NV16as_v4 | 16 | 56 | 352 | 1/2 | 8 | 16 | 8 |
| Standard_NV32as_v4 | 32 | 112 | 704 | 1 | 16 | 32 | 8 |


NCasT4_v3

The NCasT4_v3-series VMs are powered by NVIDIA Tesla T4 GPUs.

| Size | vCPU | Memory (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_NC4as_T4_v3 | 4 | 28 | 1 | 16 | 8 | 4 |
| Standard_NC8as_T4_v3 | 8 | 56 | 1 | 16 | 16 | 8 |
| Standard_NC16as_T4_v3 | 16 | 110 | 1 | 16 | 32 | 8 |
| Standard_NC64as_T4_v3 | 64 | 440 | 4 | 64 | 32 | 8 |

GPU system considerations

  • The GPU must be one of these SKUs: AMD MI25, NVIDIA V100 (and variants), or NVIDIA T4.
  • Supported numbers of GPUs per server: 1, 2, 3, or 4. Configurations of 1, 2, or 4 GPUs are preferred.
  • All GPUs must be of the exact same SKU throughout the scale unit.
  • The number of GPUs per server must be the same throughout the scale unit.
  • The GPU partition size (for the AMD MI25) must be the same across all GPU VMs on the scale unit.

Capacity planning

The Azure Stack Hub capacity planner has been updated to support GPU configurations. It is accessible at:

Adding GPUs on an existing Azure Stack Hub

Azure Stack Hub now supports adding GPUs to an existing system. To do this, shut the system down by running stop-azurestack, add the GPUs, and then run start-azurestack until it completes. If the system already had GPUs, any previously created GPU VMs will need to be stop-deallocated and then restarted.
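As a sketch, the stop and start operations above are run from the Azure Stack Hub privileged endpoint; the computer name and credentials below are placeholders for your environment:

```powershell
# Connect to the privileged endpoint (PEP); replace the computer name and
# credentials with values for your environment.
$cred = Get-Credential
Enter-PSSession -ComputerName "<PEP IP address>" `
    -ConfigurationName PrivilegedEndpoint -Credential $cred

# Shut the system down, install the physical GPUs while it is off,
# then bring it back up.
Stop-AzureStack
# ... add the GPU hardware ...
Start-AzureStack
```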

Patch and update, FRU behavior of VMs

GPU VMs will undergo downtime during operations such as patch and update (PnU) and hardware replacement (FRU) of Azure Stack Hub. The following table covers the state of the VM as observed during these activities and the manual actions you can take to make these VMs available again after the operation.

| Operation | PnU: full update, OEM update | FRU |
| --- | --- | --- |
| VM state | Unavailable during the update; can be made available with a manual operation. The VM is automatically online after the update. | Unavailable during the FRU; can be made available with a manual operation. The VM needs to be brought back up after the FRU. |
| Manual operation | If the VM needs to be made available during the update and GPU partitions are available, the VM can be restarted from the portal by selecting the Restart button. The VM will automatically come back up after the update. | The VM is not available during the FRU. If GPUs are available, the VM may be stop-deallocated and restarted during the FRU. After the FRU completes, the VM needs to be stop-deallocated using the Stop button and started again using the Start button. |
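For example, the stop-deallocate and start operations described in the table can be scripted with the same AzureRM module used later in this article; the VM and resource group names are placeholders:

```powershell
# Stop-deallocate the GPU VM (Stop-AzureRmVM deallocates by default),
# then start it again so it can claim an available GPU or GPU partition.
Stop-AzureRmVM -ResourceGroupName "<resource group>" -Name "<VM name>" -Force
Start-AzureRmVM -ResourceGroupName "<resource group>" -Name "<VM name>"
```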

Guest driver installation

The following PowerShell script can be used for driver installation:

$VmName = "<VM name as shown in the portal>"
$ResourceGroupName = "<resource group of the VM>"
$Location = "redmond"               # Region name of your Azure Stack Hub
$driverName = "<name to give the driver extension>"
$driverPublisher = "Microsoft.HpcCompute"
$driverType = "<driver type>"       # GPU driver types: "NvidiaGpuDriverWindows"; "NvidiaGpuDriverLinux"; "AmdGpuDriverWindows"
$driverVersion = "<driver version>" # NVIDIA driver version: "1.3"; AMD driver version: "1.0"

Set-AzureRmVMExtension -Location $Location `
    -Publisher $driverPublisher `
    -ExtensionType $driverType `
    -TypeHandlerVersion $driverVersion `
    -VMName $VmName `
    -ResourceGroupName $ResourceGroupName `
    -Name $driverName `
    -Settings $Settings   # If no settings are required, omit the -Settings parameter

Depending on the OS, driver type, and connectivity of your Azure Stack Hub GPU VM, you will need to modify the settings shown below.

AMD MI25 - Connected

The above command can be used with the appropriate driver type for AMD. The article Install AMD GPU drivers on N-series VMs running Windows provides instructions on installing the driver for the AMD Radeon Instinct MI25 inside the NVv4 GPU-P enabled VM along with steps on how to verify driver installation.

AMD MI25 - Disconnected

Since the extension pulls the driver from a location on the internet, a VM that is disconnected from the external network cannot access it. You can download the driver from the link below and upload it to a storage account on your local network that is accessible to the VM.

Driver URL:

Add the driver to a storage account and reference its URL in the settings. The following settings must then be passed to the Set-AzureRmVMExtension cmdlet:

$Settings = @{
    "DriverURL" = "<URL to the driver in the storage account>"
}


NVIDIA drivers must be installed inside the virtual machine for CUDA or GRID workloads using the GPU.

Use case: graphics/visualization GRID

This scenario requires the use of GRID drivers. GRID drivers can be downloaded from the NVIDIA Application Hub, provided you have the required licenses. The GRID drivers also require a GRID license server with appropriate GRID licenses before they can be used on the VM.

$Settings = @{
    "DriverURL" = ""; "DriverCertificateUrl" = ""
}

Use case: compute/CUDA - Connected

CUDA drivers do not need a license server and do not need modified settings.
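Because no settings are needed, a minimal invocation for the connected CUDA scenario can omit -Settings entirely; all values below are placeholders for your environment:

```powershell
# Install the NVIDIA CUDA driver on a connected Linux VM; no $Settings needed.
Set-AzureRmVMExtension -Location "redmond" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionType "NvidiaGpuDriverLinux" `
    -TypeHandlerVersion "1.3" `
    -VMName "<VM name>" `
    -ResourceGroupName "<resource group>" `
    -Name "<driver extension name>"
```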

Use case: compute/CUDA - Disconnected

Links to NVIDIA CUDA drivers can be obtained using the link:


$Settings = @{
    "DriverURL" = "";
    "DriverCertificateUrl" = ""
}


You will need to reference some URLs in your settings.

| URL | Notes |
| --- | --- |
| PUBKEY_URL | The public key for the NVIDIA driver repository (not for the Linux VM). It is used to install the driver on Ubuntu. |
| DKMS_URL | Used to get the package that compiles the NVIDIA kernel module on RedHat/CentOS. |
| DRIVER_URL | The URL for downloading the NVIDIA driver repository information; it is added to the Linux VM's list of repositories. |
| LIS_URL | The URL for downloading the Linux Integration Services package for RedHat/CentOS (Linux Integration Services v4.3 for Hyper-V and Azure). It is not installed by default. |
| LIS_RHEL_ver | The fallback kernel version that should work with the NVIDIA driver. It is used on RedHat/CentOS if the Linux VM's kernel is not compatible with the requested NVIDIA driver. |

Add the URLs to your settings.
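Assuming the setting keys match the URL names in the table above (a reasonable but unverified assumption), a disconnected Linux configuration could look like the sketch below, with each URL pointing at a location on your local network:

```powershell
# Hypothetical $Settings for a disconnected Linux GPU VM; the key names
# mirror the URL names in the table above and all values are placeholders.
$Settings = @{
    "DRIVER_URL"   = "<URL to the driver repository info in your storage account>"
    "PUBKEY_URL"   = "<URL to the NVIDIA repository public key>"
    "DKMS_URL"     = "<URL to the DKMS package>"   # RedHat/CentOS only
    "LIS_URL"      = "<URL to the Linux Integration Services package>"
    "LIS_RHEL_ver" = "<fallback kernel version>"
}
```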


Next steps