Geek of All Trades: Fault-tolerant Hyper-V storage

You can use the Windows Server 2012 Scale-Out File Server and Server Message Block 3.0 together to create genuinely fault-tolerant storage for Hyper-V.

Greg Shields

Put the words “Microsoft” and “storage” together in the same sentence, and most of you veteran IT guys might just chuckle. Add the words “fault tolerant,” and you’ll probably break into a hearty guffaw. You know Microsoft and storage haven’t exactly enjoyed a happy association in the past. Many have been burned by unexpectedly crashing Windows volumes or smooth-running servers that get squirrely the moment a basic disk goes dynamic.

And now you’re probably thinking, “Fault tolerance for Hyper-V VMs? In software? You must be mad!” In years past, those concepts didn’t exactly connect. Thankfully, times change, and so do OS vendors. If you look at the storage architectures native to Windows Server 2012 today, you might wonder if Microsoft is sticking out its tongue at the vendors selling storage for virtualization.

At the center of all this excitement are two technologies that work together to offer a respectable alternative for fault-tolerant Hyper-V VMs: Server Message Block (SMB) 3.0 and the Scale-Out File Server (SOFS). Massive investments in the first elevate this once-maligned protocol to near-native performance with impressive scalability. The release of the second greatly simplifies how Hyper-V servers connect with their VMs. As I like to think of it, with SOFS, any VM is just a whack-whack away.

Bought a SAN, want a NAS

SANs have long been the wise choice for virtualization storage. Their shared storage structure has become a necessity for VM live migration. However, a SAN’s biggest strength in facilitating live migration also happens to be its greatest hurdle. SANs use low-level protocols such as iSCSI and Fibre Channel for storage connections. These protocols offer great flexibility when you’re experienced with their intricacies. They can be an administrative nightmare when you’re not.

If you’ve ever struggled to set up multiple Multipath I/O (MPIO) connections to an iSCSI SAN or Fibre Channel connections across host bus adapters (HBAs), you’re familiar with this pain (see Figure 1). You must carefully configure each connection, and each server requires multiple connections for redundancy. Automation is minimal. Mistakes are easy to make. Repeat these steps across many Hyper-V hosts, and your storage network quickly comes to resemble a Paris street map.
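To get a feel for the manual effort, here's a sketch in PowerShell of what a single host might need for just one redundantly pathed iSCSI connection. The portal addresses, target IQN and initiator IPs are hypothetical examples, and every host in your cluster repeats some variation of these steps:

```powershell
# Let MPIO claim iSCSI devices (requires the Multipath-IO feature).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register both SAN portals (hypothetical addresses).
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10

# Log on to the same target once per path, from each storage NIC.
Connect-IscsiTarget -NodeAddress "iqn.2012-01.com.contoso:san1" `
    -TargetPortalAddress 10.0.1.10 -InitiatorPortalAddress 10.0.1.21 `
    -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2012-01.com.contoso:san1" `
    -TargetPortalAddress 10.0.2.10 -InitiatorPortalAddress 10.0.2.21 `
    -IsMultipathEnabled $true -IsPersistent $true
```

Multiply those steps by every host and every LUN, and the Paris street map emerges quickly.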


Figure 1 Direct SAN connections require many complex connections.

Windows Server 2012 SOFS (see Figure 2) dramatically simplifies managing the spiderweb of Hyper-V server-to-storage connections. When combined with the performance and scalability enhancements now in SMB 3.0, it takes much of the interconnecting out of your hands.

With SOFS in place, accessing a Hyper-V VM requires little more than a UNC path: \\server\share\folder\vm.vhdx. SMB 3.0 automatically handles everything else: redundancy, load balancing, failover and the rest of the fault-tolerance networking gamut. Even the SOFS cluster itself is active-active, which means no server purchase goes unused in driving the storage needs of your Hyper-V VMs.
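Creating a VM against that UNC path is as simple as it sounds. A hedged sketch, assuming a scale-out share at \\SOFS\VMs and a hypothetical VM name:

```powershell
# Create a VM whose disk lives on the scale-out share, not a LUN.
New-VHD -Path "\\SOFS\VMs\web01\web01.vhdx" -SizeBytes 60GB -Dynamic
New-VM -Name "web01" -MemoryStartupBytes 2GB `
    -Path "\\SOFS\VMs\web01" `
    -VHDPath "\\SOFS\VMs\web01\web01.vhdx"
```

No iSCSI initiator, no HBA zoning: just a path.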


Figure 2 Scale-Out File Server aggregates SAN connections like SANs aggregate storage.

This architecture defines fault-tolerant storage. In accomplishing this goal, it looks more like a NAS than a SAN. Assuming your SAN infrastructure is internally highly available, all you need to worry about is proper NIC teaming for storage on your Hyper-V hosts. This is simpler, easier and better.

Microsoft documents how to deploy SOFS. The process involves installing the File Server role and Failover Clustering feature, creating a cluster, creating a Cluster Shared Volume (CSV), adding the Scale-Out File Server cluster role and, finally, creating a continuously available file share on the CSV.
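Sketched in PowerShell, and assuming two file server nodes named FS01 and FS02 with shared storage already presented to both (all names and addresses here are hypothetical), the process looks something like this:

```powershell
# On each node: install the File Server role and clustering feature.
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

# Form the cluster, then convert a clustered disk to a CSV.
New-Cluster -Name FSCLUSTER -Node FS01, FS02 -StaticAddress 10.0.1.50
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Add the Scale-Out File Server role and a share on the CSV.
Add-ClusterScaleOutFileServerRole -Name SOFS
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\VMs
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Hyper-V Admins"
```

Note that the Hyper-V hosts' computer accounts (HV01$ and HV02$ in this sketch) need Full Control on the share, because it's the host, not you, that opens the VHDX files.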

Hyper-V storage alternatives

SANs might be a best practice for Hyper-V storage, but not all of you Jack-of-all-trades IT professionals would enjoy having one in your datacenter. You could argue the threat of downtime in such environments is slightly less mission-critical, or that limited budgets mandate “creative” Hyper-V administration. In either case, Windows Server 2012 has a pair of storage alternatives that could meet your needs.

The first leverages the features of SMB 3.0. It just doesn’t have the high-availability (HA) functions of SOFS. With just the File Server role installed on a Windows Server 2012 instance, you can create an SMB share for applications that works with direct-attached storage, or DAS (see Figure 3). This special kind of file share enjoys the performance benefits SMB 3.0 has to offer Hyper-V VMs, but without the load balancing, failover and other SOFS options. If your environment and your budget can stand occasional downtime, running VMs on SMB shares gives you all the UNC path simplicity—without the expensive SAN.
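The stand-alone version of that share takes only a few lines. A minimal sketch, assuming a hypothetical D:\VMs folder on DAS and the same hypothetical domain accounts as before:

```powershell
# A stand-alone file server: no cluster, no SOFS, just DAS and SMB 3.0.
Install-WindowsFeature FS-FileServer
New-Item -ItemType Directory -Path D:\VMs
New-SmbShare -Name VMs -Path D:\VMs `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\Hyper-V Admins"
```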


Figure 3 The SMB Share – Applications profile.

The second alternative has generated a lot of confusion since its introduction. Storage Spaces is a new feature in Windows Server 2012 and Windows 8. The intent is to provide a SAN-like experience for DAS. Like the controversial dynamic disks before it, Storage Spaces offers HA for commodity disks by way of software RAID. There are two-way mirror, three-way mirror and parity options available to protect against individual disk loss.
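Building a resilient space takes two steps: pool the eligible physical disks, then carve out a virtual disk with a resiliency setting. A hedged sketch with hypothetical pool and disk names:

```powershell
# Pool every disk that's eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve out a two-way mirrored space. Swap in -ResiliencySettingName
# Parity, or add -NumberOfDataCopies 3 with Mirror for a three-way
# mirror, to get the other resiliency options.
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VDisk1 `
    -ResiliencySettingName Mirror -Size 500GB
```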

At face value, Storage Spaces by itself isn’t terrifically groundbreaking technology. It’s essentially a tool to pool disk space from multiple storage devices. The confusion arises when you connect a Storage Space from more than one Windows Server at a time. This architecture is referred to as Clustered Storage Spaces. As you might imagine, Clustered Storage Spaces is the intersection of Failover Clustering with Storage Spaces.

The “space” being clustered exists as (and this is Microsoft’s wording), “a small collection of servers … and a set of shared SAS [Serial Attached SCSI] JBOD enclosures.” In this configuration, servers are connected via SAS to one or more just a bunch of disks (JBOD) enclosures (see Figure 4). Every host enjoys equal access to the disks in the enclosures, with Storage Spaces itself mediating access between hosts.


Figure 4 Two servers connected to a SAS JBOD enclosure.

The jury is still out on the efficacy of this not-quite-SAN, but-not-quite-SAN-less approach. At this point, only a single vendor, DataON Storage, appears to be offering the certified SAS JBOD enclosures Clustered Storage Spaces requires.

SOFS in the way

In this sense, getting in the way is probably a good thing. The first time I was exposed to SOFS more than a year ago, I wondered aloud, “Who in the world would buy this thing?” I had a difficult time seeing the advantages to adding yet another layer to our already-complex datacenter environments.

That said, the more time I spend with SOFS, the more it has earned my respect. It will help you do what you do best while letting your Hyper-V admins focus on keeping their VMs running. In cahoots with SMB 3.0, SOFS serves as middleman. It offers you a NAS-like experience for Hyper-V VMs with performance almost equivalent to a traditional SAN.

Greg Shields

Greg Shields, MVP, is a partner at Concentrated Technology. Get more of Shields’ Geek of All Trades tips and tricks at