Manage Storage Spaces Direct clusters

Important

This version of Virtual Machine Manager (VMM) has reached the end of support. We recommend that you upgrade to VMM 2022.

This article explains how to manage Storage Spaces Direct (S2D) clusters in the System Center Virtual Machine Manager (VMM) fabric.

Configure cluster settings

You can view and configure cluster settings, which include cluster status, failure management, and available storage.
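In VMM, this is done through the console, but as a quick orientation, the same status and capacity information can be inspected directly on a cluster node with the Failover Clustering and Storage PowerShell modules. The following is a minimal sketch for illustration only, not part of the VMM procedure.

```powershell
# Run on one of the S2D cluster nodes - illustration only, not the VMM workflow.

# Overall S2D state: cache mode, fault domain awareness, and so on
Get-ClusterStorageSpacesDirect

# Pool capacity and health - shows how much storage is still available
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName, HealthStatus, Size, AllocatedSize

# Virtual disks and their resiliency setting and health
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, HealthStatus, Size
```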

Add a node to a hyper-converged cluster

You can add a new node to a hyper-converged S2D cluster in the VMM fabric. The new node can be an existing Hyper-V server or a bare-metal physical server.

Note

Typically, an S2D node requires Remote Direct Memory Access (RDMA), Quality of Service (QoS), and Switch Embedded Teaming (SET) settings. To configure these settings for a node deployed from bare-metal computers, you can use the post-deployment script capability in the physical computer profile (PCP). Here's the sample PCP post-deployment script. You can also use this script to configure RDMA, QoS, and SET when you add a new node to an existing S2D deployment from bare-metal computers. An illustrative sketch of the SET and RDMA portion of such a script appears after the notes below.

  • When you add a new node to a hyper-converged cluster, VMM automatically discovers disks on the new node and enables S2D.
  • VMM disables maintenance mode on disks before adding them.
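As a rough illustration of what the SET and RDMA portion of such a post-deployment script typically contains, here's a minimal sketch. The adapter names (NIC1, NIC2), the switch name, and the vNIC names are assumptions to adapt to your hardware; the QoS/DCB portion is sketched separately in the DCB section later in this article.

```powershell
# Illustrative sketch only - adapter, switch, and vNIC names are assumptions.

# Create a Switch Embedded Teaming (SET) switch over the physical NICs
New-VMSwitch -Name "S2DSwitch" -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host vNICs for storage (SMB) traffic
Add-VMNetworkAdapter -ManagementOS -SwitchName "S2DSwitch" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "S2DSwitch" -Name "SMB2"

# Enable RDMA on the storage vNICs and confirm it took effect
Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"
Get-NetAdapterRdma | Format-Table Name, Enabled
```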

Control storage resources with QoS

Storage Quality of Service (QoS) in Windows Server provides a way to specify the minimum and maximum storage resources (IOPS) that can be assigned to Hyper-V VMs that use scale-out file share storage. QoS mitigates noisy-neighbor issues and ensures that a single VM doesn't consume all storage resources.

Set up QoS policies for a file server or for specific virtual disks on the server.
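In VMM, these policies are created and assigned through the console. For orientation only, the equivalent Windows Server Storage QoS cmdlets, run on a cluster node, might look like the following sketch; the policy name, IOPS values, and VM name are assumptions.

```powershell
# Illustrative sketch - the policy name, IOPS values, and VM name are assumptions.

# On the S2D or Scale-Out File Server cluster: create a dedicated policy
# (each assigned virtual disk gets its own 100-500 IOPS envelope).
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# On the Hyper-V host: assign the policy to a VM's virtual hard disks.
Get-VM -Name "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Back on the cluster: verify which storage flows are being tracked.
Get-StorageQosFlow | Format-Table InitiatorName, FilePath, Status -AutoSize
```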

Note

The following feature applies to VMM 2019 UR1 and later.

Configure DCB settings on S2D clusters

With the advent of converged networking, organizations are using Ethernet as a converged network for their management and storage traffic. It's important for these Ethernet networks to deliver a level of performance and losslessness similar to that of dedicated Fibre Channel networks, and this becomes even more important when S2D clusters are used.

RDMA, in conjunction with Data Center Bridging (DCB), helps an Ethernet network achieve performance and losslessness comparable to Fibre Channel networks.

The DCB settings must be consistent across all the hosts and the fabric network (switches). A misconfigured DCB setting on any host or fabric device is detrimental to S2D performance.

To configure DCB settings, use the following procedure.
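The VMM procedure itself is driven through the console. For a rough picture of the per-host settings it applies, the following sketch uses the Windows Server DCB cmdlets directly; the SMB priority (3), the 50 percent bandwidth reservation, and the adapter names are assumptions and must match the configuration on your switches.

```powershell
# Illustrative per-host sketch - priority 3, the 50 percent bandwidth
# reservation, and adapter names are assumptions; match them to the switches.

# Tag SMB Direct (port 445) traffic with 802.1p priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control (PFC) only for the SMB priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB with an ETS traffic class
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB on the physical adapters and ignore switch-advertised (DCBX) settings
Enable-NetAdapterQos -Name "NIC1", "NIC2"
Set-NetQosDcbxSetting -Willing $false
```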