Hyper-V Host CPU Resource Management

Hyper-V host CPU resource controls, introduced in Windows Server 2016, allow Hyper-V administrators to better manage and allocate host server CPU resources between the "root", or management, partition and guest VMs. Using these controls, administrators can dedicate a subset of the processors of a host system to the root partition. This can segregate the work done in a Hyper-V host from the workloads running in guest virtual machines by running them on separate subsets of the system processors.

For details about hardware for Hyper-V hosts, see Windows 10 Hyper-V System Requirements.

Background

Before setting controls for Hyper-V host CPU resources, it's helpful to review the basics of the Hyper-V architecture. You can find a general summary in the Hyper-V Architecture section. These are the important concepts for this article:

  • Hyper-V creates and manages virtual machine partitions, across which compute resources are allocated and shared, under control of the hypervisor. Partitions provide strong isolation boundaries between all guest virtual machines, and between guest VMs and the root partition.

  • The root partition is itself a virtual machine partition, although it has unique properties and much greater privileges than guest virtual machines. The root partition provides the management services that control all guest virtual machines, provides virtual device support for guests, and manages all device I/O for guest virtual machines. Microsoft strongly recommends not running any application workloads in a host partition.

  • Each virtual processor (VP) of the root partition is mapped 1:1 to an underlying logical processor (LP). A host VP will always run on the same underlying LP; there is no migration of the root partition's VPs.

  • By default, the LPs on which host VPs run can also run guest VPs.

  • A guest VP may be scheduled by the hypervisor to run on any available logical processor. While the hypervisor scheduler takes care to consider temporal cache locality, NUMA topology, and many other factors when scheduling a guest VP, ultimately the VP could be scheduled on any host LP.

The Minimum Root, or "Minroot" Configuration

Early versions of Hyper-V had an architectural maximum limit of 64 VPs per partition. This applied to both the root and guest partitions. As systems with more than 64 logical processors appeared on high-end servers, Hyper-V also evolved its host scale limits to support these larger systems, at one point supporting a host with up to 320 LPs. However, breaking the 64 VP per partition limit at that time presented several challenges and introduced complexities that made supporting more than 64 VPs per partition prohibitive. To address this, Hyper-V limited the number of VPs given to the root partition to 64, even if the underlying machine had many more logical processors available. The hypervisor would continue to utilize all available LPs for running guest VPs, but artificially capped the root partition at 64. This configuration became known as the "minimum root", or "minroot", configuration. Performance testing confirmed that, even on large-scale systems with more than 64 LPs, the root did not need more than 64 root VPs to provide sufficient support for a large number of guest VMs and guest VPs; in fact, far fewer than 64 root VPs were often adequate, depending of course on the number and size of the guest VMs, the specific workloads being run, and so on.

This "minroot" concept continues to be utilized today. In fact, even though Windows Server 2016 Hyper-V increased its maximum architectural support limit for host LPs to 512 LPs, the root partition is still limited to a maximum of 320 LPs.

Using Minroot to Constrain and Isolate Host Compute Resources

With the high default threshold of 320 LPs in Windows Server 2016 Hyper-V, the minroot configuration will only be utilized on the very largest server systems. However, the Hyper-V host administrator can configure this capability to a much lower threshold, and thus leverage it to greatly restrict the amount of host CPU resources available to the root partition. The specific number of root LPs to utilize must of course be chosen carefully to support the maximum demands of the VMs and workloads allocated to the host. However, reasonable values for the number of host LPs can be determined through careful assessment and monitoring of production workloads, and validated in non-production environments before broad deployment.
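One way to gauge how busy the root partition's VPs actually are during that assessment is to sample the hypervisor's root virtual processor performance counters over a representative workload period. The sketch below uses the built-in typeperf tool; the counter set name ("Hyper-V Hypervisor Root Virtual Processor") and the _Total instance are assumptions you should verify on your own host, since counter names can vary between Windows versions.

    rem Sample root VP utilization every 5 seconds, 60 samples (counter path is an assumption to verify on your host)
    typeperf "\Hyper-V Hypervisor Root Virtual Processor(_Total)\% Total Run Time" -si 5 -sc 60

Sustained low utilization here suggests that a smaller number of root VPs may be adequate; sustained high utilization suggests the root partition needs more.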

Enabling and Configuring Minroot

The minroot configuration is controlled via hypervisor BCD entries. To enable minroot, run the following from a cmd prompt with administrator privileges:

    bcdedit /set hypervisorrootproc n

Where n is the number of root VPs.

The system must be rebooted, and the new number of root processors will persist for the lifetime of the OS boot. The minroot configuration cannot be changed dynamically at runtime.
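As a hypothetical example, on a host where monitoring suggests that 8 root VPs are sufficient, the administrator could run the following from an elevated cmd prompt and then reboot (the value 8 is purely illustrative):

    rem Illustrative value only - dedicate 8 VPs to the root partition (reboot required)
    bcdedit /set hypervisorrootproc 8

To return to the default behavior, the same BCD value can be removed with the standard bcdedit /deletevalue verb, again followed by a reboot:

    rem Remove the value to return to the default root VP behavior (reboot required)
    bcdedit /deletevalue hypervisorrootproc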

If there are multiple NUMA nodes, each node will get n/NumaNodeCount processors.

Note that with multiple NUMA nodes, you must ensure the VM's topology is such that there are enough free LPs (i.e., LPs without root VPs) on each NUMA node to run the corresponding VM's NUMA node VPs.
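To illustrate that per-node distribution with hypothetical numbers, consider a host with 2 NUMA nodes of 24 LPs each (48 LPs total) where the administrator sets hypervisorrootproc to 8:

    Root VPs per NUMA node = n / NumaNodeCount = 8 / 2 = 4
    Free LPs per NUMA node = 24 - 4            = 20

In this illustrative case, a VM whose virtual NUMA nodes require more than 20 VPs per node could not run entirely on free LPs, so the VM topology and the root VP count need to be planned together.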

Verifying the Minroot Configuration

You can verify the host's minroot configuration using Task Manager, as shown below.

When Minroot is active, Task Manager will display the number of logical processors currently allotted to the host, in addition to the total number of logical processors in the system.
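If you prefer to confirm the setting from the command line rather than Task Manager, you can also check that the BCD value itself is present on the current boot entry. The sketch below simply filters the boot entry for the value set earlier; the exact output format depends on your Windows version.

    rem Filter the current boot entry for the minroot setting (output format varies by Windows version)
    bcdedit /enum {current} | findstr /i hypervisorrootproc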