HNV Gateway Performance Tuning in Software Defined Networks

This topic provides hardware specifications and configuration recommendations for servers that are running Hyper-V and hosting Windows Server Gateway virtual machines (VMs), in addition to configuration parameters for the Windows Server Gateway VMs themselves. To extract the best performance from Windows Server Gateway VMs, follow these guidelines. The following sections contain hardware and configuration requirements for deploying Windows Server Gateway.

  1. Hyper-V hardware recommendations
  2. Hyper-V host configuration
  3. Windows Server Gateway VM configuration

Hyper-V hardware recommendations

Following is the recommended minimum hardware configuration for each server that is running Windows Server 2016 and Hyper-V.

Server Component: Specification
Central Processing Unit (CPU): Non-Uniform Memory Architecture (NUMA) nodes: 2. If there are multiple Windows Server Gateway VMs on the host, then for best performance each gateway VM should have full access to one NUMA node, and that node should be different from the NUMA node used by the host physical adapter.
Cores per NUMA node: 2
Hyper-Threading: Disabled. Hyper-Threading does not improve the performance of Windows Server Gateway.
Random Access Memory (RAM): 48 GB
Network Interface Cards (NICs): Two 10 GB NICs. Gateway performance depends on the line rate: if the line rate is less than 10 Gbps, the gateway tunnel throughput numbers go down by the same factor.

Ensure that the number of virtual processors assigned to a Windows Server Gateway VM does not exceed the number of processors on the NUMA node. For example, if a NUMA node has 8 cores, the number of virtual processors should be less than or equal to 8; for best performance, it should be 8. To find out the number of NUMA nodes and the number of cores per NUMA node, run the following Windows PowerShell script on each Hyper-V host:

# Query the NUMA topology through the Hyper-V WMI provider
$nodes = [object[]] $(gwmi -Namespace root\virtualization\v2 -Class MSVM_NumaNode)
$cores = ($nodes | Measure-Object NumberOfProcessorCores -Sum).Sum
$lps = ($nodes | Measure-Object NumberOfLogicalProcessors -Sum).Sum

Write-Host "Number of NUMA Nodes: ", $nodes.Count
Write-Host "Total Number of Cores: ", $cores
Write-Host "Total Number of Logical Processors: ", $lps

Important

Allocating virtual processors across NUMA nodes might have a negative performance impact on Windows Server Gateway. Running multiple VMs, each with virtual processors from a single NUMA node, is likely to provide better aggregate performance than a single VM to which all virtual processors are assigned.

When each NUMA node has eight cores, one gateway VM with eight virtual processors and at least 8 GB RAM is recommended when selecting the number of gateway VMs to install on each Hyper-V host. In this case, one NUMA node is dedicated to the host machine.
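
As a minimal sketch of that sizing, assuming a VM named "<gateway vm name>" (a placeholder), the processor count and memory can be set while the VM is stopped:

# Size the gateway VM to one NUMA node: 8 virtual processors and 8 GB of startup memory
Stop-VM -Name "<gateway vm name>"
Set-VMProcessor -VMName "<gateway vm name>" -Count 8
Set-VMMemory -VMName "<gateway vm name>" -StartupBytes 8GB
Start-VM -Name "<gateway vm name>"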

Hyper-V host configuration

Following is the recommended configuration for each server that is running Windows Server 2016 and Hyper-V and whose workload is to run Windows Server Gateway VMs. These configuration instructions include Windows PowerShell command examples. The examples contain placeholders for actual values that you need to provide when you run the commands in your environment. For example, the network adapter name placeholders are "NIC1" and "NIC2". When you run commands that use these placeholders, substitute the actual names of the network adapters on your servers; otherwise, the commands will fail.

Note

To run the following Windows PowerShell commands, you must be a member of the Administrators group.

Configuration Item: Windows PowerShell Configuration
Switch Embedded Teaming: When you create a vSwitch with multiple network adapters, Switch Embedded Teaming is automatically enabled for those adapters.
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2"
Traditional teaming through LBFO is not supported with SDN in Windows Server 2016. Switch Embedded Teaming allows you to use the same set of NICs for your virtual traffic and for RDMA traffic, which LBFO-based NIC teaming did not support.
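To confirm that the team was created as expected, you can query the switch with Get-VMSwitchTeam (the switch name matches the example above):

# Verify the SET team and its member adapters on the new switch
Get-VMSwitchTeam -Name TeamedvSwitch | Format-List Name, NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm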
Interrupt Moderation on physical NICs: Use the default settings. To check the configuration, you can use the following Windows PowerShell command: Get-NetAdapterAdvancedProperty
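For example, to view only the interrupt moderation setting on one of the example adapters (the exact display name can vary by NIC driver):

# Show the current Interrupt Moderation value for one physical NIC
Get-NetAdapterAdvancedProperty "NIC1" -DisplayName "Interrupt Moderation"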
Receive Buffers size on physical NICs: You can verify whether the physical NICs support configuration of this parameter by running the command Get-NetAdapterAdvancedProperty. If they do not support this parameter, the output from the command does not include the property "Receive Buffers." If the NICs do support this parameter, you can use the following Windows PowerShell command to set the Receive Buffers size:
Set-NetAdapterAdvancedProperty "NIC1" –DisplayName "Receive Buffers" –DisplayValue 3000
Send Buffers size on physical NICs: You can verify whether the physical NICs support configuration of this parameter by running the command Get-NetAdapterAdvancedProperty. If the NICs do not support this parameter, the output from the command does not include the property "Send Buffers." If the NICs do support this parameter, you can use the following Windows PowerShell command to set the Send Buffers size:
Set-NetAdapterAdvancedProperty "NIC1" –DisplayName "Transmit Buffers" –DisplayValue 3000
Receive Side Scaling (RSS) on physical NICs: You can verify whether your physical NICs have RSS enabled by running the Windows PowerShell command Get-NetAdapterRss. You can use the following Windows PowerShell commands to enable and configure RSS on your network adapters:
Enable-NetAdapterRss "NIC1","NIC2"
# The -MaxProcessors value of 16 below is an illustrative choice; size it to your host's core count
Set-NetAdapterRss "NIC1","NIC2" -NumberOfReceiveQueues 16 -MaxProcessors 16
NOTE: If VMMQ or VMQ is enabled, RSS does not have to be enabled on the physical network adapters. You can enable it on the host virtual network adapters instead.
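As a sketch of that alternative, vRSS can be enabled on a host virtual network adapter; the adapter name "Management" is an assumption for illustration:

# Enable vRSS on a host (management OS) virtual network adapter instead of the physical NICs
Set-VMNetworkAdapter -ManagementOS -Name "Management" -VrssEnabled $true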
VMMQ: To enable VMMQ for a VM, run the following command:
Set-VMNetworkAdapter -VMName <gateway vm name> -VrssEnabled $true -VmmqEnabled $true
NOTE: Not all network adapters support VMMQ. Currently, it is supported on the Chelsio T5 and T6, Mellanox CX-3 and CX-4, and QLogic 45xxx series.
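To confirm the setting on the VM's adapters, the vRSS and VMMQ state can be read back with Get-VMNetworkAdapter; treat the property names below as a sketch, since the exact set surfaced varies by Windows Server version:

# Inspect the vRSS/VMMQ state of the gateway VM's network adapters
Get-VMNetworkAdapter -VMName <gateway vm name> | Format-List Name, VrssEnabled, VmmqEnabled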
Virtual Machine Queue (VMQ) on the NIC Team: You can enable VMQ on your SET team by using the following Windows PowerShell command:
Enable-NetAdapterVmq "NIC1","NIC2"
NOTE: Enable this only if the hardware does not support VMMQ. If VMMQ is supported, enable it instead for better performance.
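To check whether VMQ is active on the physical adapters after enabling it:

# Show VMQ state and queue allocation for the physical NICs
Get-NetAdapterVmq "NIC1","NIC2"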

Note

VMQ and vRSS come into play only when the load on the VM is high and CPU utilization reaches its maximum, that is, when at least one processor core maxes out. Only then will VMQ and vRSS help spread the processing load across multiple cores. This is not applicable for IPsec traffic, because IPsec traffic is confined to a single core.

Windows Server Gateway VM configuration

On both Hyper-V hosts, you can configure multiple VMs as gateways with Windows Server Gateway. You can use Virtual Switch Manager to create a Hyper-V Virtual Switch that is bound to the NIC team on the Hyper-V host. Note that for best performance, you should deploy a single gateway VM on a Hyper-V host. Following is the recommended configuration for each Windows Server Gateway VM.
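As a minimal sketch, assuming the TeamedvSwitch created earlier and placeholder names for the VM and its disk, a gateway VM can be created and attached to the switch like this:

# Create a gateway VM attached to the SET-backed switch created earlier.
# The VM name and VHD path are illustrative placeholders.
New-VM -Name "<gateway vm name>" -MemoryStartupBytes 8GB -Generation 2 -SwitchName TeamedvSwitch -VHDPath "C:\VMs\gateway.vhdx"
Set-VMProcessor -VMName "<gateway vm name>" -Count 8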

Configuration Item: Windows PowerShell Configuration
Memory: 8 GB
Number of virtual network adapters: 3 NICs with the following specific uses: one Management NIC used by the management operating system, one External NIC that provides access to external networks, and one Internal NIC that provides access to internal networks only.
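As a sketch of wiring up those roles, the default adapter can be repurposed for management and two more added; the host-side adapter names below are assumptions, while the "Internal" and "External" names used by the commands that follow refer to the adapters as named inside the guest operating system:

# Rename the default adapter for management use, then add External and Internal adapters
Rename-VMNetworkAdapter -VMName "<gateway vm name>" -NewName "Management"
Add-VMNetworkAdapter -VMName "<gateway vm name>" -Name "External" -SwitchName TeamedvSwitch
Add-VMNetworkAdapter -VMName "<gateway vm name>" -Name "Internal" -SwitchName TeamedvSwitch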
Receive Side Scaling (RSS): You can keep the default RSS settings for the Management NIC. The following example configuration is for a VM that has 8 virtual processors. For the External and Internal NICs, you can enable RSS with the base processor number set to 0 and the maximum processor number set to 8 by using the following Windows PowerShell command:
Set-NetAdapterRss "Internal","External" –BaseProcNumber 0 –MaxProcessorNumber 8
Send side buffer: You can keep the default send side buffer settings for the Management NIC. For both the Internal and External NICs, you can configure the send side buffer with 32 MB of RAM by using the following Windows PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" –DisplayName "Send Buffer Size" –DisplayValue "32MB"
Receive side buffer: You can keep the default receive side buffer settings for the Management NIC. For both the Internal and External NICs, you can configure the receive side buffer with 16 MB of RAM by using the following Windows PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" –DisplayName "Receive Buffer Size" –DisplayValue "16MB"
Forward Optimization: You can keep the default Forward Optimization settings for the Management NIC. For both the Internal and External NICs, you can enable Forward Optimization by using the following Windows PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" –DisplayName "Forward Optimization" –DisplayValue "1"