Some more Windows 8, Hyper-V 3.0 facts
I would HIGHLY RECOMMEND you read this blog post from Michael Otey on Windows Server 8:
Windows Server 8: Hyper-V 3.0 Evens the Odds with vSphere
At the recent Windows Server Workshop at the Microsoft campus in Redmond, Washington, Jeff Woolsey, Principal Program Manager Lead for Windows Virtualization in the Windows Server and Cloud division, presented the new features in the next version of the Hyper-V virtualization platform. In the introduction to the workshop, Jeffrey Snover, Distinguished Engineer and Lead Architect for the Windows Server Division, made the bold statement that with Microsoft, it's the third release where they really get it right, and judging by what Microsoft demonstrated in the next version of Hyper-V, that is definitely true. The upcoming Hyper-V 3.0 release, included in the next version of Windows Server, has closed the technology gap with VMware's vSphere.
Hyper-V 3.0 Scalability
The days when Hyper-V lagged behind VMware in terms of scalability are a thing of the past. Hyper-V 3.0 meets or exceeds all of the scalability marks that were previously VMware-only territory. Hyper-V 3.0 hosts support up to 160 logical processors (where a logical processor is either a core or a hyperthread) and up to 2 TB of RAM. On the guest side, Hyper-V 3.0 VMs will support up to 32 virtual CPUs and up to 512 GB of RAM per VM. A more subtle change is support for guest NUMA, where the guest VM has processor and memory affinity with the Hyper-V host resources. NUMA support is important for ensuring that scalability keeps improving as the number of available host processors increases.
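As a rough sketch of what those per-VM limits would look like in practice, here is how a large guest might be configured. The cmdlet names (New-VM, Set-VMProcessor, Set-VMMemory) are from the Windows Server 8 developer preview's Hyper-V PowerShell module and could change before release; the VM name is just an example.

```shell
# Create a guest and push it toward the new Hyper-V 3.0 per-VM limits
# (preview cmdlet names; subject to change before release)
New-VM -Name "BigSQL" -MemoryStartupBytes 64GB

# Up to 32 virtual CPUs per guest
Set-VMProcessor -VMName "BigSQL" -Count 32

# Up to 512 GB of RAM per guest
Set-VMMemory -VMName "BigSQL" -StartupBytes 512GB
```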
Multiple Concurrent Live Migration and Storage Live Migration
Perhaps more important than the sheer scalability enhancements are the changes to Live Migration and the introduction of Storage Live Migration. Live Migration was introduced in Hyper-V 2.0, which shipped with Windows Server 2008 R2. While it filled an important hole in the Hyper-V feature set, it wasn't on par with the VMotion capability in vSphere: Hyper-V was limited to a single Live Migration at a time, while ESX Server could perform multiple simultaneous VMotions. In addition, vSphere offered a related feature called Storage VMotion, which allowed a VM's storage to be moved to a new location without incurring any downtime. Hyper-V 3.0 erases both of these advantages. It supports multiple concurrent Live Migrations, with no limit on the number that can take place at once. Hyper-V 3.0 also provides full support for Storage Live Migration, where a virtual machine's files (the configuration, virtual disk, and snapshot files) can be moved to a different storage location without any interruption of end-user connectivity to the guest VM.
Microsoft also threw in one additional twist that vSphere has never had: Hyper-V 3.0 can perform Live Migration and Storage Live Migration without requiring shared storage on the backend. The removal of this requirement brings the availability advantages of Live Migration to small and medium-sized businesses that can't afford a SAN or don't want to deal with the complexities of one. The ability to perform Live Migration without shared storage really sets Hyper-V apart from vSphere and will definitely be a big draw, especially for SMBs that haven't implemented virtualization yet.
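If the shipping bits keep the cmdlet shown in the developer preview, a shared-nothing migration, moving both the running VM and its storage between two standalone hosts, might be kicked off like this. Move-VM and its parameters are preview-era assumptions, and the host names and path are placeholders.

```shell
# Live-migrate the VM and its files to another host over the network;
# no shared storage between the source host and HV02 is required
Move-VM -Name "Web01" -DestinationHost "HV02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\Web01"
```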
VHDX, ODX, Virtual Fiber Channel & Boot from SAN
Another important enhancement in Hyper-V 3.0 is the introduction of a new virtual disk format called VHDX. The new VHDX format breaks the 2 TB limit of the older VHD format and pushes the maximum virtual disk size up to 16 TB per VHDX. The new format also provides improved performance, supports larger block sizes, and is more resilient to corruption.
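Assuming the New-VHD cmdlet from the developer preview (the name and parameters may change before release), creating a disk well beyond the old 2 TB ceiling is simply a matter of choosing the new format via the .vhdx extension:

```shell
# A 10 TB dynamically expanding virtual disk -- impossible with the old
# VHD format, which topped out at 2 TB. The .vhdx extension selects the
# new VHDX format.
New-VHD -Path "D:\VMs\data01.vhdx" -SizeBytes 10TB -Dynamic
```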
Hyper-V 3.0 also supports a feature called Offloaded Data Transfer (ODX). ODX enables Hyper-V to take advantage of the storage features of a backend shared storage subsystem. When performing file copies on an ODX-enabled SAN, the OS hands off the data transfer work to the SAN itself, providing much higher file copy performance with zero to minimal CPU utilization. There's no special ODX button to press; it works transparently in the background, provided the storage subsystem supports ODX.
Companies that use Fiber Channel SANs will appreciate the addition of virtual Fiber Channel support in Hyper-V guests. Hyper-V 3.0 guests can have up to four virtual Fiber Channel host bus adapters (HBAs). The virtual HBAs appear in the VM as devices, much like virtual NICs and other virtual devices. Hyper-V VMs will also be able to boot from both Fiber Channel and iSCSI SANs.
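A hedged sketch of how this might be wired up with the preview cmdlets: a virtual SAN object maps the host's physical Fiber Channel HBA ports to guests, and each guest gets one or more virtual HBAs. The cmdlet names (New-VMSan, Add-VMFibreChannelHba), the SAN and VM names, and the example WWNs are all assumptions for illustration.

```shell
# Define a virtual SAN backed by the host's physical FC HBA port
# (WWN values here are placeholders for real port addresses)
New-VMSan -Name "ProdSAN" -WorldWideNodeName "C003FF0000FFFF00" `
          -WorldWidePortName "C003FF5778E50002"

# Give the guest one of its (up to four) virtual Fiber Channel HBAs
Add-VMFibreChannelHba -VMName "SQL01" -SanName "ProdSAN"
```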
Extensible Virtual Switch & NIC Teaming
Keeping pace with the sweeping changes in Hyper-V's compute and storage capabilities, Microsoft also made some significant enhancements to Hyper-V's networking. First, they updated the virtual switch that's built into the Hyper-V hypervisor. The new virtual switch has a number of new capabilities, including multi-tenant support and the ability to provide minimum and maximum bandwidth guarantees. In addition to these features, the new virtual switch is also extensible: Microsoft provides an API that allows capture, filter, and forwarding extensions. To ensure the quality of these virtual switch extensions, Microsoft will be initiating a Hyper-V virtual switch logo program.
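The bandwidth guarantees might be configured along these lines, again using cmdlet names from the developer preview (New-VMSwitch, Set-VMNetworkAdapter) that could change before release; the switch name, adapter name, and bandwidth figures are illustrative.

```shell
# External virtual switch bound to a physical NIC, with weight-based
# minimum-bandwidth mode enabled so per-VM guarantees can be set
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Ethernet 2" `
             -MinimumBandwidthMode Weight

# Give this VM a guaranteed relative share of the link and an
# absolute cap on its throughput
Set-VMNetworkAdapter -VMName "Web01" -MinimumBandwidthWeight 50 `
                     -MaximumBandwidth 2GB
```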
Another overdue feature in Windows Server 8 is native NIC teaming built into the operating system. VMware's ESX Server has provided NIC teaming for some time, while prior to Windows Server 8 you could only get NIC teaming for Windows via specialized NICs and drivers from vendors like Broadcom and Intel. The new NIC teaming works across heterogeneous vendor NICs and provides support for load balancing as well as failover.
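A minimal sketch of the new in-box teaming, assuming the New-NetLbfoTeam cmdlet shown for the developer preview keeps its name and shape; the team name and adapter names are placeholders, and the point is that the two members can come from different vendors:

```shell
# Team two NICs from different vendors into one logical adapter;
# switch-independent mode needs no special configuration on the
# physical switch and provides both load balancing and failover
New-NetLbfoTeam -Name "Team1" -TeamMembers "Intel NIC","Broadcom NIC" `
                -TeamingMode SwitchIndependent
```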