Performance Issues with Your Hyper-V vNIC?
Microsoft has received numerous support calls about network performance dropping when a network adapter is bound to Hyper-V versus when it is not. Take, for example, the following scenario:
- Using a physical network card not bound to Hyper-V, you may see transfer rates of ~900 MB/s
- After binding that same network card to a Hyper-V virtual switch, transfer rates may drop to ~300 MB/s
- After unbinding that same network card from Hyper-V, transfer rates return to ~900 MB/s
This is a known issue and expected behavior on Windows Server 2012 R2 and earlier.
The issue is that Windows Server 2012 R2 / Windows 8.1 Hyper-V does not support virtual Receive Side Scaling (vRSS) in the host (parent) partition. Without vRSS, the host vNIC cannot distribute network load across multiple logical processors, so all processing for network traffic is limited to logical processor 0. Depending on the speed of that processor, we would expect performance of 3.5-5 Gbps at the vNIC, assuming VMQ is active.
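To see this state on a 2012 R2 host, you can inspect VMQ on the physical adapter and RSS on the host vNIC with the standard NetAdapter cmdlets. This is a sketch, and the adapter names below are placeholders; substitute the names from `Get-NetAdapter` on your own host:

```powershell
# Confirm VMQ is active on the physical NIC bound to the virtual switch
# ("Ethernet 1" is a placeholder adapter name)
Get-NetAdapterVmq -Name "Ethernet 1"

# On 2012 R2, the host vNIC has no vRSS, so its receive processing
# is confined to logical processor 0
# ("vEthernet (Management)" is a placeholder vNIC name)
Get-NetAdapterRss -Name "vEthernet (Management)"
```

While the traffic is flowing, Task Manager or Performance Monitor will typically show one logical processor saturated while the others sit idle, which matches the 3.5-5 Gbps ceiling described above.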
This performance issue will not be present in Windows Server 2016. Windows Server 2016 / Windows 10 added support for Receive Side Scaling in the host. This means that processing for network traffic can and will automatically scale out to multiple logical processors for greater scalability. Given that both core counts and network performance are continually rising, this is a great new feature in Windows Server 2016. Here’s a screenshot of Receive Side Scaling for the Hyper-V host, which is enabled by default.
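On a Windows Server 2016 host you can verify the same setting from PowerShell rather than the UI. A minimal sketch, again assuming a placeholder vNIC name:

```powershell
# On Windows Server 2016, RSS is enabled by default on the host vNIC;
# verify it ("vEthernet (Management)" is a placeholder vNIC name)
Get-NetAdapterRss -Name "vEthernet (Management)"

# If it has been turned off, it can be re-enabled:
Enable-NetAdapterRss -Name "vEthernet (Management)"
```

With RSS enabled, receive processing for the host vNIC spreads across multiple logical processors instead of being pinned to logical processor 0.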
Here is a FAQ regarding virtual Receive Side Scaling.
Special thanks to Jeff Woolsey for this information.