Boot from SCSI in Virtual Server vs. Boot from IDE in Windows Server virtualization (Hyper-V)
Update: 26th June 2008. Hyper-V RTM is now available. This post refers to pre-release software for Hyper-V, formerly known as Windows Server virtualization. However, there is no difference in the guidance provided in this post.
Following on from my post last week, I had some good questions about the difference between the SCSI adapter in Virtual Server and the SCSI controller in Windows Server virtualization. In Virtual Server 2005, the best practice is to configure virtual machines to boot from the SCSI adapter for performance reasons. This is not the case in Windows Server virtualization. This post dips into the details to explain why.
To start with, let’s take a look at the SCSI adapter in Virtual Server.
In common with all devices in Virtual Server, including the IDE controller, the SCSI adapter is an emulated device. It emulates a real-world counterpart, a parallel SCSI adapter with the Adaptec 7870 chipset, and can support up to 7 storage devices (Virtual Hard Disks).
You may be asking: if both the IDE controller and the SCSI adapter in Virtual Server are emulated, why should the SCSI adapter perform better? The answer is simple. It’s due to a driver that is installed inside a virtual machine as part of the Virtual Machine Additions. We provide an optimized driver – if you take a look under Device Manager in a VM after the Additions are installed, you’ll see that the driver is msvmscsi.sys.
The SCSI adapter in Virtual Server has another advantage over the IDE controller: capacity. VHDs connected to the IDE controller can be up to 127GB in size, while VHDs connected to the SCSI adapter can be up to 2040GB (8GB short of 2TB). The IDE controller is not 48-bit LBA aware (http://www.48bitlba.com/), so even if we allowed it, the maximum theoretical capacity would be 137.4GB. The SCSI adapter also has a boot BIOS, which enables virtual machines to boot directly from VHDs connected to it once control has been passed from the virtual machine BIOS.
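The 137.4GB figure falls straight out of the LBA arithmetic. As a quick illustration (my own sketch, not code from Virtual Server – the constants are just the standard 512-byte sector size and the 28-bit vs. 48-bit address widths):

```python
SECTOR_BYTES = 512  # standard ATA sector size

# 28-bit LBA: 2^28 addressable sectors, which is all a non-48-bit-LBA
# IDE controller can reach.
lba28_capacity = (2 ** 28) * SECTOR_BYTES
print(lba28_capacity)               # 137438953472 bytes, i.e. ~137.4GB (decimal)

# 48-bit LBA: 2^48 addressable sectors - theoretical headroom far beyond
# the 2040GB cap, which comes from the VHD format rather than the controller.
lba48_capacity = (2 ** 48) * SECTOR_BYTES
print(lba48_capacity // 10 ** 12)   # ~144115 TB of theoretical address space
```

In other words, the 2040GB SCSI limit is nowhere near the 48-bit addressing ceiling; it is the virtual hard disk format that caps the size.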
So keeping that in mind, let’s compare and contrast the above with the IDE and SCSI controllers in Windows Server virtualization.
The IDE controller remains an emulated device, but with a couple of differences from the IDE controller in Virtual Server. First, it is now 48-bit LBA capable, which allows you to connect large VHDs, up to 2040GB, to it. Second, we insert a filter driver into the storage stack inside the guest which effectively bypasses the emulation path for IDE, making it much higher performance. In fact, for I/O, the IDE controller with the filter driver performs equivalently to the SCSI controller in Windows Server virtualization. You can also attach pass-through disk storage to the IDE controller, which was not possible in Virtual Server.
The SCSI controller in Windows Server virtualization is not an emulated device. Instead, it is a “synthetic” device. It has no real-world counterpart – it is a purely virtual controller, and you can’t go to a store to buy one for a physical machine. The controller allows up to 255 VHDs or pass-through storage devices per controller, while gaining improved performance over the emulated adapter in Virtual Server. (The why for this is architectural – I'll cover that another day.) As a “synthetic” device, it is not currently possible to boot directly from it until an operating system is available with a loader capable of reading from its devices; BIOS changes would also be required. That's definitely a topic for another day though.
Hopefully that gives a bit more insight into why the best-practice recommendation of booting from SCSI in Virtual Server no longer applies in Windows Server virtualization, and why booting from IDE no longer incurs the performance overhead it does in Virtual Server.