Release notes for System Center Virtual Machine Manager

Virtual Machine Manager (VMM) 2022 doesn't have any known issues.

For new features in VMM 2022, see What's new.

This article lists the release notes for System Center 2019 - Virtual Machine Manager (VMM).

VMM 2019 release notes

The following sections summarize the release notes for VMM 2019, including known issues and workarounds. There are no known issues in VMM 2019 UR1 and UR2.

Removal of cluster node fails with CleanUpDisks flag

Description: When you remove a cluster node from a Windows Server 2019 Storage Spaces Direct (S2D) cluster by using the CleanUpDisks flag, the removal fails with the error Could not get the specified instance MSFT_StorageJob in the following scenarios.

  • The storage capacity is inadequate in the remaining servers to accommodate all the volumes.

  • There aren't enough fault domains to provide the resiliency of the volume.

Workaround: Ensure the following (a verification sketch follows this list):

  • Adequate storage capacity is available in the remaining servers to accommodate all the volumes.

  • Enough fault domains are available to provide the resiliency of your volumes.
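As a hedged sketch (the node name is a placeholder, and the checks assume the Storage and FailoverClusters PowerShell modules on a cluster node), you can verify the remaining capacity and fault domains before retrying the removal:

# Check the remaining capacity of the S2D storage pool (non-primordial pools only).
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, Size, AllocatedSize

# Check that enough fault domains (scale units) remain healthy for the volume resiliency settings.
Get-StorageFaultDomain -Type StorageScaleUnit | Select-Object FriendlyName, OperationalStatus

# Retry the node removal with disk cleanup once capacity and fault domains are sufficient.
Remove-ClusterNode -Name "<node name>" -CleanUpDisks -Force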

Addition of storage device having SMI-S management interface fails

Description: Adding a storage device that has an SMI-S management interface fails with the error Registration of storage provider failed with error code WsManMIInvokeFailed when System Center Virtual Machine Manager (VMM) 2019 is installed on Windows Server 2019.

Workaround: VMM depends on the Windows Standards-Based Storage Management service to manage the storage devices using SMI-S. Ensure that the service is started before trying to add the storage device.
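As a minimal sketch (the service is matched by its display name so you don't have to rely on the short service name), you can confirm the service is running before adding the device:

# Find the Windows Standards-Based Storage Management service by its display name.
$svc = Get-Service -DisplayName "Windows Standards-Based Storage Management*"

# Start it if it isn't already running, then confirm the status.
if ($svc.Status -ne 'Running') { Start-Service -InputObject $svc }
$svc.Refresh()
$svc.Status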

Windows Server 2019 does not support HNVv1 networks

Description: Windows Server 2019 doesn't support HNVv1. If HNVv1 is currently in use, then the cluster that is utilizing HNVv1 shouldn't be upgraded to Windows Server 2019 using Cluster Rolling Upgrade.

Workaround: Migrate out of HNVv1 to SDNv2 on Windows Server 2016 before using Cluster Rolling upgrade to Windows Server 2019.

Latest accessibility fixes in Console are not available

Description: Latest accessibility fixes in Console might not be available when you use .NET 4.7 while installing the VMM console.

Workaround: We recommend that you use .NET 4.8. For detailed information on .NET 4.8 migration, see the article on .NET migration.
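As a quick, hedged check (based on Microsoft's published .NET Framework release keys, where a value of 528040 or higher indicates .NET Framework 4.8), you can read the installed release from the registry on the console machine:

# Read the .NET Framework 4.x release value; 528040 or higher corresponds to .NET Framework 4.8.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release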

Backend adapter connectivity for SLB MUX doesn't work as expected

Description: Backend adapter connectivity of SLB MUX might not work as expected after the migration of virtual machine (VM).

Workaround: Scale the SLB MUX VM in or out.

Cluster Rolling Upgrade fails

Description: Cluster Rolling Upgrade (CRU) fails during the Connect Hyper-V host to storage arrays stage if the Windows Server 2019 virtual hard disk (VHD) in the library server, used in the computer profile for redeploying the operating system (OS), isn't installed with the latest updates.

Workaround: To resolve this error, install all the pending updates on the VHD and restart the CRU job.

To avoid this issue, install the latest OS updates on the VHD that you want to use for CRU before triggering the upgrade.
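For example, here is a hedged sketch of servicing the library VHD offline with the DISM PowerShell module (the VHD path, mount folder, and update package are placeholders):

# Mount the library VHD, inject the latest cumulative update, and save the changes.
$mount = "C:\Mount"
New-Item -ItemType Directory -Path $mount -Force | Out-Null
Mount-WindowsImage -ImagePath "\\<library server>\MSSCVMMLibrary\<WS2019 image>.vhdx" -Index 1 -Path $mount
Add-WindowsPackage -Path $mount -PackagePath "C:\Updates\<latest cumulative update>.msu"
Dismount-WindowsImage -Path $mount -Save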

Storage Dynamic Optimization does not trigger VHD migration even when optimization criteria are met

Description: Storage Dynamic Optimization (DO) should trigger the VHD migration between Clustered Shared Volumes (CSV), when the free storage space in one of the CSVs falls below the disk space threshold set in the Dynamic Optimization page, and the aggressiveness criteria are met. However, in some cases, the VHDs might not be migrated even if all other Storage DO conditions are met.

Workaround: To ensure Storage migration is triggered, do the following:

  1. Check the HostVolumeID by using the Get-SCStorageVolume cmdlet. If HostVolumeID returns Null for the volume, refresh the VM and run Storage DO again.
  2. Check the DiskSpacePlacementLevel of the host group by using the Get-SCHostReserve cmdlet. Set DiskSpacePlacementLevel to the same value as the disk space threshold configured in the host reserve settings of the Dynamic Optimization wizard (see the sketch after these steps).
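The following hedged sketch shows both checks with VMM cmdlets (the host group name is a placeholder; verify parameter names with Get-Help in your VMM version):

# List volumes and their HostVolumeID; a Null HostVolumeID means the VM/host needs a refresh.
Get-SCStorageVolume | Select-Object Name, HostVolumeID

# Check the host group's disk space placement level used by Dynamic Optimization.
$hostGroup = Get-SCVMHostGroup -Name "<host group name>"
Get-SCHostReserve -VMHostGroup $hostGroup | Select-Object DiskSpacePlacementLevel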

Storage Dynamic Optimization disk performs multiple back and forth VHD migrations

Description: If there's a mismatch of disk space warning levels between host groups having the same file share, it can result in multiple migrations, to and from that file share, and might impact storage DO performance.

Workaround: We recommend that you don't share a file share across different clusters where Storage Dynamic Optimization is enabled.

Performance monitoring for VMM server fails with Access denied event error

Description: In a scenario where VMM is monitored by using Operations Manager, performance monitoring for the VMM server fails with an Access denied event error because service users don't have permission to access the VirtualMachineManager-Server/Operational event log.

Workaround: Change the security descriptor for the Operational event log registry entry by using the following command, and then restart the event log and health services.

reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WINEVT\Channels\Microsoft-VirtualMachineManager-Server/Operational" /v ChannelAccess /t REG_SZ /d "O:BAG:SYD:(D;;0xf0007;;;AN)(D;;0xf0007;;;BG)(A;;0xf0007;;;SY)(A;;0x7;;;BA)(A;;0x3;;;NS)(A;;0x1;;;IU)(A;;0x1;;;SU)"

This command adds the service user to the list of users who are allowed to access the VirtualMachineManager-Server/Operational event log.
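As a hedged sketch of the restart step (service names assumed from a standard Windows and Operations Manager agent installation, where the agent service is named HealthService):

# Restart Windows Event Log (dependent services are restarted as well) and the Operations Manager agent.
Restart-Service -Name EventLog -Force
Restart-Service -Name HealthService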

Set-SCVMSubnet -RemovePortACL job completes in VMM without removing portACL association from NC VMSubnet object

Description: The Set-SCVMSubnet -RemovePortACL job completes in VMM without removing the port ACL association from the Network Controller (NC) VM subnet object. As a result, the Remove-SCPortACL job fails with an NC exception that the port ACL is still in use.

Workaround: Remove the VM subnet from VMM and then remove the port ACL. Use the following script to remove the port ACL association directly from the NC VM subnet object:

Import-Module NetworkController

# Replace the URI of the Network Controller with the REST IP or FQDN
$uri = "<NC FQDN or IP>"

# Provide NC admin credentials
$cred = Get-Credential

# Identify the virtual network that contains the subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Fabrikam_VNet1" -Credential $cred

# Remove the AccessControlList reference from the subnet for which the ACL needs to be removed
$vnet.Properties.Subnets[0].Properties = $vnet.Properties.Subnets[0].Properties | Select-Object -Property * -ExcludeProperty AccessControlList

# Push the updated properties back to the Network Controller
New-NetworkControllerVirtualNetwork -ResourceId "Fabrikam_VNet1" -ConnectionUri $uri -Properties $vnet.Properties -Credential $cred

Important

This version of Virtual Machine Manager (VMM) has reached the end of support. We recommend that you upgrade to VMM 2022.

This article lists the release notes for System Center 1807 - Virtual Machine Manager (VMM).

VMM 1807 release notes

The following sections summarize the release notes for VMM 1807 and include the known issues and workarounds.

Latest accessibility fixes in Console are not available

Description: Latest accessibility fixes in Console might not be available when you use .NET 4.7 while installing the VMM Console.

Workaround: We recommend using .NET 4.7.1 while installing the VMM console. For detailed information on .NET 4.7.1 migration, see the article on .NET migration.

Backend adapter connectivity for SLB MUX doesn't work as expected

Description: Backend adapter connectivity of SLB MUX might not work as expected after VM migration.

Workaround: Scale the SLB MUX VM in or out.

Connectivity issues for SLB addresses

Description: For frontend and backend IP addresses assigned to Software Load Balancer MUX VMs, you might experience connectivity issues if Register this connection's address in DNS is selected.

Workaround: Clear the setting to avoid issues with these IP addresses.

VMM integrated with Azure Site Recovery will not support DRA versions earlier than 5.1.3100

Description: If you're using VMM integrated with Azure Site Recovery, VMM supports Data Recovery Agent (DRA) version 5.1.3100 or higher. Earlier versions aren't supported.

Workaround: Use the following steps to upgrade the DRA version:

  1. Uninstall the existing version of DRA.
  2. Install the VMM 1807 patch.
  3. Install DRA version 5.1.3100 or higher.

Host/Cluster refresh might take longer if there are large number of logical network definitions

Description: When there are a large number of logical network definitions in the environment, cluster/host refresh might take longer than expected.
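To gauge how many logical network definitions exist, here is a hedged sketch using the VMM PowerShell module:

# Count the logical network definitions (network sites) known to VMM.
(Get-SCLogicalNetworkDefinition).Count

# List the definition names for review.
Get-SCLogicalNetworkDefinition | Select-Object Name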

Set-SCVMSubnet -RemovePortACL job completes in VMM without removing portACL association from NC VMSubnet object

Description: The Set-SCVMSubnet -RemovePortACL job completes in VMM without removing the port ACL association from the Network Controller (NC) VM subnet object. As a result, the Remove-SCPortACL job fails with an NC exception that the port ACL is still in use.

Workaround: Remove the VM subnet from VMM and then remove the port ACL. Use the following script to remove the port ACL association directly from the NC VM subnet object:

Import-Module NetworkController

# Replace the URI of the Network Controller with the REST IP or FQDN
$uri = "<NC FQDN or IP>"

# Provide NC admin credentials
$cred = Get-Credential

# Identify the virtual network that contains the subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Fabrikam_VNet1" -Credential $cred

# Remove the AccessControlList reference from the subnet for which the ACL needs to be removed
$vnet.Properties.Subnets[0].Properties = $vnet.Properties.Subnets[0].Properties | Select-Object -Property * -ExcludeProperty AccessControlList

# Push the updated properties back to the Network Controller
New-NetworkControllerVirtualNetwork -ResourceId "Fabrikam_VNet1" -ConnectionUri $uri -Properties $vnet.Properties -Credential $cred

This article lists the release notes for System Center 1801 and 2016 - Virtual Machine Manager (VMM).

Important

This version of Virtual Machine Manager (VMM) has reached the end of support. We recommend that you upgrade to VMM 2022.

VMM 1801 release notes

The following sections summarize the release notes for VMM 1801 and include the known issues and workarounds.

Latest accessibility fixes in Console are not available

Description: Latest accessibility fixes in Console might not be available when you use .NET 4.7 while installing the VMM Console.

Workaround: We recommend using .NET 4.7.1 while installing the VMM console. For detailed information on .NET 4.7.1 migration, see the article on .NET migration.

Backend adapter connectivity for SLB MUX doesn't work as expected

Description: Backend adapter connectivity of SLB MUX might not work as expected after VM migration.

Workaround: Scale the SLB MUX VM in or out.

Connectivity issues for SLB addresses

Description: For frontend and backend IP addresses assigned to Software Load Balancer MUX VMs, you might experience connectivity issues if Register this connection's address in DNS is selected.

Workaround: Clear the setting to avoid issues with these IP addresses.

Upgrade might fail if the name of a default port classification has been changed

Description: When you change the original name of a default port classification and then try to upgrade to VMM 1801, the upgrade might fail with the following error message in the VMM setup log.

Violation of PRIMARY KEY constraint 'PK_tbl_NetMan_PortClassification'. Cannot insert duplicate key in object 'dbo.tbl_NetMan_PortClassification'.

Workaround: Change the port classification name back to the original name, and then trigger the upgrade. After the upgrade, you can change the default name to a different one.
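As a hedged sketch of reverting the name from the VMM PowerShell session (both names are placeholders, and the cmdlet parameters are assumed; verify them with Get-Help in your VMM version):

# Find the renamed default port classification and restore its original name before the upgrade.
$pc = Get-SCPortClassification -Name "<current (renamed) name>"
Set-SCPortClassification -PortClassification $pc -Name "<original default name>"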

Set-SCVMSubnet -RemovePortACL job completes in VMM without removing portACL association from NC VMSubnet object

Description: The Set-SCVMSubnet -RemovePortACL job completes in VMM without removing the port ACL association from the Network Controller (NC) VM subnet object. As a result, the Remove-SCPortACL job fails with an NC exception that the port ACL is still in use.

Workaround: Remove the VM subnet from VMM and then remove the port ACL. Use the following script to remove the port ACL association directly from the NC VM subnet object:

Import-Module NetworkController

# Replace the URI of the Network Controller with the REST IP or FQDN
$uri = "<NC FQDN or IP>"

# Provide NC admin credentials
$cred = Get-Credential

# Identify the virtual network that contains the subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Fabrikam_VNet1" -Credential $cred

# Remove the AccessControlList reference from the subnet for which the ACL needs to be removed
$vnet.Properties.Subnets[0].Properties = $vnet.Properties.Subnets[0].Properties | Select-Object -Property * -ExcludeProperty AccessControlList

# Push the updated properties back to the Network Controller
New-NetworkControllerVirtualNetwork -ResourceId "Fabrikam_VNet1" -ConnectionUri $uri -Properties $vnet.Properties -Credential $cred

VMM 2016 release notes

The following sections summarize the release notes for VMM 2016 and include the known issues, fixes, and the workarounds.

VMM deployment

VMM admin console import might fail

Description: If you import the VMM admin console add-in as a non-administrator, the console crashes. This occurs because the console add-in is stored in "C:\Program Files\", and only administrators have access to this location. Workaround: Store the console add-in in a location that doesn't need admin access, and then import it.

Storage

Promoting a VM to highly available might fail

Description: You create a VM on local storage, start it, and create checkpoints. If you try to migrate and promote the VM as highly available in a cluster, the migration might fail. Workaround: Before you run the migration, delete the running checkpoint, and stop the VM.

Migrating a VM from CSV to LUN storage might fail

Description: You create a highly available VM using CSV storage, add a LUN as available storage on the cluster, and migrate the VM from CSV to the LUN. If the VM and LUN storage are on the same node, the migration will succeed. If they're not, migration will fail. Workaround: If the VM isn't located on the cluster node on which the LUN storage is registered, move it there. Then, migrate the VM to the LUN storage.

Capacity of NAS arrays is displayed as 0 GB.

Description: VMM shows Total Capacity and Available Capacity as 0 GB for existing file shares in the NAS arrays. Workaround: None.

Networking

Logical networks managed by SDN network controller can't use dynamic IP addresses

Description: Using dynamic IP addresses for VMs connected to logical networks that are managed by SDN network controller in the VMM fabric isn't supported. Workaround: Configure static IP addresses.

SET switch shows as ‘Internal’ in VMM

Description: If you deploy a SET switch outside the VMM console and then start managing it in the VMM fabric, the switch type shows as Internal. This doesn't impact switch functionality. Workaround: None.

LACP teamed switch doesn't work after upgrade

Description: An LACP team configured in a logical switch doesn't work after upgrading to VMM 2016. Workaround: Redeploy the switch, or remove and re-add a physical network adapter in the team.

Backend adapter connectivity for SLB MUX doesn't work as expected

Description: Backend adapter connectivity of SLB MUX might not work as expected after VM migration. Workaround: Scale the SLB MUX VM in or out.

CNG-based CA certificate isn't supported

Description: If you're using certificates from a CA, you can't use CNG certificates for SDN deployment in VMM. Workaround: Use other certificate formats.

A virtual adapter connected to a network managed by Network Controller must be restarted if you change the IP address

Description: If there's a change in the assigned IP address on any of the virtual network adapters connected to a VM network managed by Network Controller, you need to manually restart the associated adapters. Workaround: No workaround.

IPv6 isn't supported for a network infrastructure managed by Network Controller

Description: IPv6 isn't supported by Network Controller in the VMM fabric. Workaround: Use IPv4.

Connectivity issues for SLB addresses

Description: For frontend and backend IP addresses assigned to SLB MUX VMs, you might experience connectivity issues if Register this connection's address in DNS is selected. Workaround: Clear the setting to avoid issues.

Set-SCVMSubnet -RemovePortACL job completes in VMM without removing portACL association from NC VMSubnet object

Description: The Set-SCVMSubnet -RemovePortACL job completes in VMM without removing the port ACL association from the Network Controller (NC) VM subnet object. As a result, the Remove-SCPortACL job fails with an NC exception that the port ACL is still in use.

Workaround: Remove the VM subnet from VMM and then remove the port ACL. Use the following script to remove the port ACL association directly from the NC VM subnet object:

Import-Module NetworkController

# Replace the URI of the Network Controller with the REST IP or FQDN
$uri = "<NC FQDN or IP>"

# Provide NC admin credentials
$cred = Get-Credential

# Identify the virtual network that contains the subnet
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Fabrikam_VNet1" -Credential $cred

# Remove the AccessControlList reference from the subnet for which the ACL needs to be removed
$vnet.Properties.Subnets[0].Properties = $vnet.Properties.Subnets[0].Properties | Select-Object -Property * -ExcludeProperty AccessControlList

# Push the updated properties back to the Network Controller
New-NetworkControllerVirtualNetwork -ResourceId "Fabrikam_VNet1" -ConnectionUri $uri -Properties $vnet.Properties -Credential $cred

Cluster management

Upgrading the functional level of a cluster doesn't refresh file server information

Description: If you upgrade the functional level for a cluster that includes a file server, the platform information isn't automatically updated in the VMM database. Workaround: After upgrading the cluster functional level, refresh the storage provider for the File Server.
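A minimal sketch of that refresh from the VMM PowerShell session (the provider name is a placeholder, and the cmdlet names assume the VMM Read-SC* refresh pattern; verify them in your VMM version):

# Refresh the storage provider for the file server so VMM picks up the updated platform information.
$provider = Get-SCStorageProvider -Name "<file server provider name>"
Read-SCStorageProvider -StorageProvider $provider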

Updating a Storage Spaces Direct cluster in VMM will fail

Description: Updating a Storage Spaces Direct cluster (hyper-converged or disaggregated) using VMM isn't supported, and might cause data loss. Workaround: Update the clusters outside VMM, using cluster-aware updating (CAU) in Windows.
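For example, a hedged sketch of updating the cluster outside VMM with Cluster-Aware Updating from PowerShell (the cluster name is a placeholder):

# Scan for and apply Windows updates across the cluster by using the CAU Windows Update plug-in.
Invoke-CauRun -ClusterName "<S2D cluster name>" -CauPluginName "Microsoft.WindowsUpdatePlugin" -Force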

A cluster rolling upgrade of a Windows Server 2012 R2 host cluster to a Windows Server 2016 Nano Server host cluster will fail

Description: When you try to upgrade the host nodes of a Windows Server 2012 R2 cluster to Windows Server 2016 - Nano Server using cluster rolling upgrade functionality in VMM, the upgrade will fail with error 20406: VMM could not enumerate instances of class MSFT_StorageNodeToDisk on the server <servername>. Failed with error MI RESULT 7 The requested operation isn't supported. Workaround: Manually upgrade the Windows Server 2012 R2 host cluster to Nano outside of VMM.

Note

The rolling upgrade from Windows Server 2012 R2 to Windows Server 2016 Full Server works fine. This issue is specific to Nano.

Adding a cluster in the VMM Admin console might cause an error

Description: When you add a cluster as a resource in the VMM Administrative console, you might receive an error stating There were no computers discovered based on your inputs. Workaround: Select OK, and close the error dialog. Then try to add the cluster again.

A cluster rolling upgrade doesn't do live migration of VMs that aren't highly available

Description: When you run a rolling upgrade of Windows Server 2012 R2 clusters to Windows Server 2016 using VMM, it doesn't perform live migration of VMs that aren't highly available. They're moved to a saved state. Workaround: Either make all cluster VMs highly available before the upgrade, or do a manual live migration for the specific VMs.

You need manual steps to add a Nano Server-based host located in an untrusted domain

Description: You can't add a Nano Server-based host in an untrusted domain. Workaround: Perform these steps on the host and then add it to the VMM fabric as an untrusted host.

  1. Enable WINRM over HTTPS:

    New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.Thumbprint -Force

  2. Create a firewall exception on the host to allow WINRM over HTTPS:

    New-NetFirewallRule -DisplayName 'Windows Remote Management (HTTPS-In)' -Name 'Windows Remote Management (HTTPS-In)' -Profile Any -LocalPort 5986 -Protocol TCP

You can't add Nano Server-based hosts located in a perimeter network

Description: Trying to add a Nano Server-based host located in a perimeter network using the Add Resource Wizard fails. Workaround: Perform these steps on the host, and then add it as an untrusted host to the VMM fabric.

  1. Enable WINRM over HTTPS on the host:

    New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $cert.Thumbprint -Force

  2. Create a firewall exception on the host, to allow WINRM over HTTPS:

    New-NetFirewallRule -DisplayName 'Windows Remote Management (HTTPS-In)' -Name 'Windows Remote Management (HTTPS-In)' -Profile Any -LocalPort 5986 -Protocol TCP

Bare metal deployment of hosts might fail during a highly available upgrade

Description: After a highly available upgrade to VMM 2016, VMM might incorrectly update the Windows Deployment Services (WDS) registry key, HKLM\SYSTEM\CCS\SERVICES\WDSSERVER\PROVIDER\WDSPXE\PROVIDES\VMMOSDPROVIDER, to 'HOST/VIRT-VMM-1', instead of 'SCVMM/VIRT-VMM-1'. This will cause failures in bare metal deployment. Workaround: Manually change the registry entry for HKLM\SYSTEM\CCS\SERVICES\WDSSERVER\PROVIDER\WDSPXE\PROVIDES\VMMOSDPROVIDER to 'SCVMM/VIRT-VMM-1'.

Host agent status mismatch after upgrade

Description: When VMM updates the host agent, it generates a new certificate for the host. Because of this update, the Network Controller server certificate and the host certificate don't match. Workaround: Repair the host on the Host Status page.

SAN migration fails for a Nano Server-based host

Description: If you do a SAN migration between two standalone Nano Server-based hosts, an error is issued. Workaround: Install the latest VMM update rollup (issue fixed in Update Rollup 2).

Storage Spaces Direct

Adding a host with Storage Spaces Direct enabled to the VMM fabric issues a warning

Description: When you add a host to a cluster with Storage Spaces Direct enabled, the warning "Multipath I/O is not enabled for known storage arrays on host <hostname>" is generated. Workaround: Install the latest VMM update rollup (issue fixed in Update Rollup 2).

Deploying a VM on SOFS using fast file copy issues a warning

Description: If you deploy a VM on an SOFS using fast file copy, the action completes successfully with the following warning: VMM could not transfer the file <source location> to <destination location> using fast file copy. The VMM agent on <host> returned an error. Workaround: None.

Cluster validation always runs

Description: When you add a node to a cluster (or create a Storage Spaces Direct hyper-converged cluster), cluster validation is always performed, even when the skip cluster validation option is selected. Workaround: Install the latest VMM update rollup. The issue was fixed in Update Rollup 2.

A classification change on a Cluster Shared Volume (CSV) isn't applied

Description: If you change the classification on a Cluster Shared Volume (CSV) in a Storage Spaces Direct hyper-converged cluster, only the classification of the owner node is updated. The other nodes still have the older classification assigned. Workaround: Install the latest VMM update rollup. The issue was fixed in update rollup 2.

Creating a tiered file share on SOFS doesn't work as expected

Description: When you successfully create a tiered file share on SOFS, an error (43020 [SM_RC_DEDUP_NOT_AVAILABLE]) is issued even if the dedup option isn't selected. Workaround: Ignore the error.

VMM doesn't show correct information for a hyper-converged cluster, or Storage Spaces Direct SOFS

Description: After you add an existing hyper-converged cluster or Storage Spaces Direct SOFS cluster to the VMM fabric, the Storage Provider isn't added, and some properties aren't available. Workaround: Install the latest VMM update rollup. The issue was fixed in update rollup 2.

VM management

Shielding a VM causes an error

Description: If you enable shield for an existing VM in the VMM fabric, or if you create a shielded VM from a non-shielded template, the job might fail with the error 1730: The selected action could not be completed because the virtual machine is not in a state in which the action is valid. The failure happens during the last step of the job, when the VM is shut down after the shielding is completed. The VM is shielded properly, and is usable. Workaround: Repair the VM with the Ignore option.

VMM doesn't show changes to VM security properties

Description: If you change the secure boot properties of a generation 2 VM, or enable/disable vTPM for a shielded VM, outside the VMM console, the change isn't immediately shown in VMM. Workaround: Manually refresh the VM to show the changes.
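A minimal sketch of that refresh from the VMM PowerShell session (the VM name is a placeholder):

# Refresh the VM so that security property changes made outside VMM show up in the console.
$vm = Get-SCVirtualMachine -Name "<VM name>"
Read-SCVirtualMachine -VM $vm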

Storing a VM in the VMM library fails if you change the default port for BITS (443)

Description: If you change the default BITS port when configuring VMM, an error is issued when you store a VM in the VMM library: Error 2940: VMM is unable to complete the requested file transfer. The connection to the HTTP Server <name> could not be established. Workaround: Manually add the new port number to the Windows Firewall exceptions list of the host: netsh advfirewall firewall add rule name="VMM" dir=in action=allow localport=<port no.> protocol=TCP
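Equivalently, here is a hedged sketch that uses the NetSecurity PowerShell module (the rule name and port are placeholders):

# Allow the custom BITS port through Windows Firewall on the host.
New-NetFirewallRule -DisplayName "VMM BITS (custom port)" -Direction Inbound -Action Allow -Protocol TCP -LocalPort <port no.>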

You can't create VM templates from a Nano Server-based VM.

Description: When you try to create a VM template from a Nano Server-based VM, error 2903 is issued: VMM could not locate the specified file/folder '' on the '<server name>' server. This file/folder might be required as part of another object. Workaround: Create a VM template from scratch using a Nano Server VHD.

Service deployments from service templates might fail on a Nano Server/Core-based guest OS.

Description: When selecting roles and features for a service template, the guest OS profile doesn't differentiate between Core, Nano Server, and Desktop. If you select roles and features (such as Desktop Experience or other GUI-related features) that don't apply to a Core/Nano Server-based guest OS, deployment failure might occur. Workaround: Don't include these roles and features in the service template.

VMM 2016 doesn't update the VMM guest agents after an upgrade

Description: When you upgrade VMM to 2016 with existing service deployments, and then service those services, VMM 2016 guest agents aren't updated on the VMs that were part of the service deployment. This doesn't affect functionality. Workaround: Manually install the VMM 2016 guest agent.

Nano Server-based VM fails to join a domain

Description: During Nano Server VM deployment, if you join the VM to a domain by specifying the domain join information on the OS Configuration page of the VM deployment Wizard, VMM deploys the VM but doesn't add it to the specified domain. Workaround: After the VM is deployed, manually join the VM to the domain. Learn more.
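As a hedged sketch of one way to do that with offline domain join (the domain, computer, and file names are placeholders):

# On a domain-joined machine, provision the computer account and save the join blob.
djoin /provision /domain <domain name> /machine <Nano VM name> /savefile C:\Temp\odjblob.txt

# Copy the blob into the Nano Server VM, then request the offline domain join and restart the VM.
djoin /requestodj /loadfile C:\odjblob.txt /windowspath C:\Windows /localos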

Error when starting a VM with Start Ordering

Description: Windows Server 2016 includes the VM Start Ordering feature, which defines the order in which dependent VMs are started. This functionality isn't available in VMM, but if you've configured the feature outside VMM, VMM understands the order in which the VMs will start. However, VMM throws a false positive error (12711): VMM cannot complete the WMI operation on the server <servername> because of an error: [MSCluster_ResourceGroup.Name=<name>] The group or resource isn't in the correct state to perform the requested operation. Workaround: Ignore the error. The VMs will start in the correct order.

Integration

SQL Server Analysis Services (SSAS) integration doesn't work in VMM and Operations Manager Update Rollup 1.

Description: If you're running Update Rollup 1, you can't configure SSAS for SQL Server. Workaround: Download the latest Update Rollups. The issue was fixed in Update Rollup 2.

Next steps

What's new in Virtual Machine Manager