Azure Stack Hub known issues

This article lists known issues in Azure Stack Hub releases. The list is updated as new issues are identified.

To access known issues for a different version, use the version selector dropdown above the table of contents on the left.

Important

Review this section before applying the update.

Important

If your Azure Stack Hub instance is behind by more than two updates, it's considered out of compliance. You must update to at least the minimum supported version to receive support.

Update

Update to 2108 will not proceed if AKS clusters or ACR registries have been created and the private previews of the AKS and ACR services are installed

  • Applicable: This issue applies to Azure Kubernetes Service (AKS) and Azure Container Registry (ACR) private preview customers who plan to upgrade to 2108.
  • Remediation: The operator must delete all AKS clusters and ACR registries, and then uninstall the private previews of the AKS and ACR services.
  • Occurrence: Any stamp that has the AKS and ACR private previews installed will experience this issue.

For known Azure Stack Hub update issues, see Troubleshooting Updates in Azure Stack Hub.

Networking

Load Balancer

Load Balancer rules

  • Applicable: This issue applies to all supported releases.
  • Cause: Updating or changing the load distribution property (session persistence) has no effect, and some virtual machines might not participate in the traffic load distribution. For example, if you have four backend virtual machines, only two clients connecting to the load balancer, and the load distribution set to client IP, the client sessions always use the same backend virtual machines. Changing the load distribution property to "none" to distribute the client connections across all the backend virtual machines has no effect.
  • Remediation: Recreate the load balancing rule to ensure the selected settings are correctly applied to all backend VMs.
  • Occurrence: Common
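One way to recreate the rule is to redeploy its definition in an ARM template. A minimal sketch of a load-balancing rule with the load distribution property set explicitly (the rule name and port values are placeholders, not values from this article):

```json
{
  "name": "myLBRule",
  "properties": {
    "protocol": "Tcp",
    "frontendPort": 80,
    "backendPort": 80,
    "loadDistribution": "SourceIP"
  }
}
```

Valid `loadDistribution` values are `Default` (none), `SourceIP` (client IP), and `SourceIPProtocol` (client IP and protocol).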

Cannot create a Virtual Machine Scale Set with a data disk attached

  • Applicable: This issue applies to release 2108.
  • Cause: Missing properties for the object type data disk.
  • Remediation: Add data disks after deployment.
  • Occurrence: Common
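Assuming the scale set is managed with the Azure CLI, attaching a data disk after the deployment completes might look like the following sketch (resource names and disk size are hypothetical):

```azurecli
# Attach a new 128 GiB data disk to the scale set model
# after the initial deployment has completed.
az vmss disk attach \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --size-gb 128
```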

Create disk snapshot can fail

  • Applicable: This issue applies to release 2108.
  • Cause: Missing properties for snapshot operation.
  • Remediation: Apply hotfix 1.2108.2.73.
  • Occurrence: Common

Portal

Container registries

Metrics unavailable for container registries in the user portal

  • Applicable: This issue applies to the public preview release of Azure Container Registry on Azure Stack Hub.
  • Cause: An issue is preventing metrics from displaying when viewing a container registry in the Azure portal. The metrics are also not available in Shoebox.
  • Remediation: No remediation is available; the issue will be addressed in an upcoming hotfix.
  • Occurrence: Common

Container Registry operator experience prompting to install although installation already complete

  • Applicable: This issue applies to the public preview release of Azure Container Registry on Azure Stack Hub.
  • Cause: Seven days after Container Registry is installed, the operator experience in the admin portal might prompt the operator to install Container Registry again. The service operates normally, but the operator experience is unavailable. Tenants can still create and manage container registries.
  • Remediation: No remediation is available; the issue will be addressed in an upcoming hotfix.
  • Occurrence: Common

Administrative subscriptions

  • Applicable: This issue applies to all supported releases.
  • Cause: The two administrative subscriptions that were introduced with version 1804 should not be used. The subscription types are Metering and Consumption.
  • Remediation: If you have resources running on these two subscriptions, recreate them in user subscriptions.
  • Occurrence: Common

Create DNS blade results in portal crashing

  • Applicable: This issue applies to all supported releases with hotfix version 1.2108.2.81.
  • Cause: Two specific flows sometimes end with the user portal crashing:
    • Create a resource > Networking > DNS zone
    • Create a resource > Networking > Connection
  • Remediation: The following workflow can ensure there are no crashes:
    • All services > DNS zone > + Add or All services > Connections > + Add
  • Occurrence: Common

Portal shows "Unidentified User" instead of user email

  • Applicable: This issue applies to all systems with hotfix version 1.2108.2.81 that are using an Azure AD account without an email address in the account profile.
  • Remediation: Sign in to the Azure portal, and add an email address to the Azure AD account that is experiencing this issue.
  • Occurrence: Common

Event Hubs

Secret expiration doesn't trigger an alert

  • Applicable: This issue applies to all supported releases of Event Hubs on Azure Stack Hub.
  • Cause: Administrative alerts aren't currently integrated.
  • Remediation: Complete the process in How to rotate secrets for Event Hubs on Azure Stack Hub regularly, ideally every six months.

Data plane clusters are in an unhealthy state with all nodes in warning state

  • Applicable: This issue applies to all supported releases of Event Hubs on Azure Stack Hub.
  • Cause: Internal infrastructure secrets may be nearing expiration.
  • Remediation: Update to the latest Event Hubs on Azure Stack Hub release, then complete the process in How to rotate secrets for Event Hubs on Azure Stack Hub.

Data plane clusters' health isn't getting updated in admin portal or scale-out of clusters results in access denied

  • Applicable: This issue applies to all supported releases of Event Hubs on Azure Stack Hub.
  • Cause: Internal components haven't refreshed their cache with new secrets, after secret rotation is completed.
  • Remediation: Open a support request to receive assistance.

Azure Stack Hub backup fails

  • Applicable: This issue applies to all supported releases of Event Hubs on Azure Stack Hub.
  • Cause: Internal infrastructure secrets may have expired.
  • Remediation: Open a support request to receive assistance.

Azure Kubernetes Service (AKS)

Applications deployed to AKS clusters fail to access persistent volumes

  • Applicable: This issue applies to release 2108.
  • Cause: When you deploy an AKS cluster using:
    • Kubernetes 1.19, or
    • Kubernetes 1.20 with kubenet as the network plug-in
    and then deploy an application that uses persistent volumes, the application's pod fails when it tries to mount a persistent volume. The pod's log shows a permission-denied error. The problem resides in Azure Stack Hub's Azure Disk CSI driver.
  • Remediation: When deploying an AKS cluster, select only Kubernetes version 1.20 with Azure CNI as the network plug-in.
  • Occurrence: Common
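Assuming the cluster is created with the Azure CLI, the remediation corresponds to pinning the Kubernetes version to a 1.20 release and selecting Azure CNI. A sketch (resource names and the exact 1.20 patch version are placeholders; list the versions available on your stamp first):

```azurecli
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.20.7 \
  --network-plugin azure
```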

Update

For known Azure Stack Hub update issues, see Troubleshooting Updates in Azure Stack Hub.

Update to 2102 fails during pre-update checks for AKS/ACR

  • Applicable: This issue applies to Azure Kubernetes Service (AKS) and Azure Container Registry (ACR) private preview customers who plan to upgrade to 2102 or apply any hotfixes.
  • Remediation: Uninstall AKS and ACR prior to updating to 2102, or prior to applying any hotfixes after updating to 2102. Restart the update after uninstalling these services.
  • Occurrence: Any stamp that has ACR or AKS installed will experience this failure.

Portal

Administrative subscriptions

  • Applicable: This issue applies to all supported releases.
  • Cause: The two administrative subscription types Metering and Consumption have been disabled and should not be used. If you have resources in them, an alert is generated until those resources are removed.
  • Remediation: If you have resources running on these two subscriptions, recreate them in user subscriptions.
  • Occurrence: Common

Networking

Load balancer

Load Balancer rules

  • Applicable: This issue applies to all supported releases.
  • Cause: Updating or changing the load distribution property (session persistence) has no effect, and some virtual machines might not participate in the traffic load distribution. For example, if you have four backend virtual machines, only two clients connecting to the load balancer, and the load distribution set to client IP, the client sessions always use the same backend virtual machines. Changing the load distribution property to "none" to distribute the client connections across all the backend virtual machines has no effect.
  • Remediation: Recreate the load balancing rule to ensure the selected settings are correctly applied to all backend VMs.
  • Occurrence: Common

IPv6 button visible on "Add frontend IP address"

  • Applicable: This issue applies to release 2008 and later.
  • Cause: The IPv6 button is visible on the Add frontend IP address option on a load balancer. The button is disabled and cannot be selected.
  • Occurrence: Common

Backend and frontend ports when floating IP is enabled

  • Applicable: This issue applies to all supported releases.
  • Cause: Both the frontend port and backend port need to be the same in the load balancing rule when floating IP is enabled. This behavior is by design.
  • Occurrence: Common

Health and alerts

Azure Kubernetes Service (AKS) or Azure Container Registry (ACR) resource providers fail in test-azurestack

  • Applicable: This issue applies to release 2102 and earlier.
  • Cause: When you run the Test-AzureStack update readiness command, the test triggers the following two warnings:
    WARNING: Name resolution of containerservice.aks.azs failed
    WARNING: Name resolution of containerregistry.acr.azs failed
    
  • Remediation: These warnings are expected if you don't have the Azure Kubernetes Service (AKS) or Azure Container Registry (ACR) resource provider installed; no action is needed.
  • Occurrence: Common
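For reference, the update readiness check is run from the privileged endpoint (PEP). A sketch, with the PEP address and credentials as environment-specific placeholders:

```powershell
# Connect to the privileged endpoint and run the update readiness group.
$cred = Get-Credential
$session = New-PSSession -ComputerName "<PEP IP address>" `
    -ConfigurationName PrivilegedEndpoint -Credential $cred
Invoke-Command -Session $session -ScriptBlock {
    Test-AzureStack -Group UpdateReadiness
}
Remove-PSSession $session
```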

No alerts in Syslog pipeline

  • Applicable: This issue applies to release 2102.
  • Cause: The alert module for customers depending on Syslog for alerts has been disabled in this release. For this release, the health and monitoring pipeline was modified to reduce the number of dependencies and service requirements. As a result, the new services do not emit alerts to the Syslog pipeline.
  • Remediation: None.
  • Occurrence: Common

Usage

Wrong status on infrastructure backup

  • Applicable: This issue applies to release 2102.
  • Cause: The infrastructure backup job can display the wrong status (failed or successful) until the status is refreshed. This does not affect the consistency of the backup data, but it can cause confusion if an actual failure occurred.
  • Remediation: The issue will be fixed in the next hotfix for 2102.

Update

For known Azure Stack Hub update issues, see Troubleshooting Updates in Azure Stack Hub.

Update failed to install package Microsoft.AzureStack.Compute.Installer to CA VM

  • Applicable: This issue applies to all supported releases.
  • Cause: During the update, a process takes a lock on the new content that needs to be copied to the CA VM. When the update fails, the lock is released.
  • Remediation: Resume the update.
  • Occurrence: Rare

Portal

Administrative subscriptions

  • Applicable: This issue applies to all supported releases.
  • Cause: The two administrative subscriptions that were introduced with version 1804 should not be used. The subscription types are Metering and Consumption.
  • Remediation: If you have resources running on these two subscriptions, recreate them in user subscriptions.
  • Occurrence: Common

Networking

Network Security Groups

VM deployment fails due to DenyAllOutbound rule

  • Applicable: This issue applies to all supported releases.

  • Cause: An explicit DenyAllOutbound rule to the internet cannot be created in an NSG during VM creation, because it prevents the communication required for the VM deployment to complete. It also blocks the two essential IPs required to deploy VMs: the DHCP IP (169.254.169.254) and the DNS IP (168.63.129.16).

  • Remediation: Allow outbound traffic to the internet during VM creation, and modify the NSG to block the required traffic after VM creation is complete.

  • Occurrence: Common
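After the VM deployment completes, outbound traffic can be tightened while keeping the platform addresses reachable. A sketch of the NSG rules (rule names and priorities are illustrative, not prescribed by this article):

```json
{
  "securityRules": [
    {
      "name": "AllowPlatformDns",
      "properties": {
        "priority": 100,
        "direction": "Outbound",
        "access": "Allow",
        "protocol": "*",
        "sourceAddressPrefix": "*",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "168.63.129.16",
        "destinationPortRange": "*"
      }
    },
    {
      "name": "DenyInternetOutbound",
      "properties": {
        "priority": 4096,
        "direction": "Outbound",
        "access": "Deny",
        "protocol": "*",
        "sourceAddressPrefix": "*",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "Internet",
        "destinationPortRange": "*"
      }
    }
  ]
}
```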

Load Balancer

Load Balancer rules

  • Applicable: This issue applies to all supported releases.
  • Cause: Updating or changing the load distribution property (session persistence) has no effect, and some virtual machines might not participate in the traffic load distribution. For example, if you have four backend virtual machines, only two clients connecting to the load balancer, and the load distribution set to client IP, the client sessions always use the same backend virtual machines. Changing the load distribution property to "none" to distribute the client connections across all the backend virtual machines has no effect.
  • Remediation: Recreate the load balancing rule to ensure the selected settings are correctly applied to all backend VMs.
  • Occurrence: Common

Load Balancer directing traffic to one backend VM in specific scenarios

  • Applicable: This issue applies to all supported releases.
  • Cause: When Session Affinity is enabled on a load balancer, the 2-tuple hash uses the PA IP (physical address IP) instead of the private IPs assigned to the VMs. As a result, when traffic directed to the load balancer arrives through a VPN, or when all the client VMs (source IPs) reside on the same node, all traffic is directed to one backend VM.
  • Occurrence: Common

IPv6 button visible on "Add frontend IP address"

  • Applicable: This issue applies to the 2008 release.
  • Cause: The IPv6 button is visible and enabled when creating the frontend IP configuration of a public load balancer. This is a cosmetic issue on the portal. IPv6 is not supported on Azure Stack Hub.
  • Occurrence: Common

Backend port and frontend port need to be the same when floating IP is enabled

  • Applicable: This issue applies to all supported releases.
  • Cause: Both the frontend port and backend port need to be the same in the load balancing rule when floating IP is enabled. This behavior is by design.
  • Occurrence: Common
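The by-design constraint can be seen in the rule definition itself: when enableFloatingIP is true, frontendPort and backendPort must carry the same value. A sketch (the rule name and port value are illustrative):

```json
{
  "name": "myFloatingIPRule",
  "properties": {
    "protocol": "Tcp",
    "frontendPort": 1433,
    "backendPort": 1433,
    "enableFloatingIP": true
  }
}
```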

Compute

Stop or start VM

Stop-Deallocate VM results in MTU configuration removal

  • Applicable: This issue applies to all supported releases.
  • Cause: Performing Stop-Deallocate on a VM removes the VM's MTU configuration. This behavior is inconsistent with Azure.
  • Occurrence: Common

Archive

To access archived known issues for an older version, use the version selector dropdown above the table of contents on the left, and select the version you want to see.

Next steps

2005 archived known issues

2002 archived known issues

1910 archived known issues

1908 archived known issues

1907 archived known issues

1906 archived known issues

1905 archived known issues

1904 archived known issues

1903 archived known issues

1902 archived known issues

1901 archived known issues

1811 archived known issues

1809 archived known issues

1808 archived known issues

1807 archived known issues

1805 archived known issues

1804 archived known issues

1803 archived known issues

1802 archived known issues

You can access older versions of Azure Stack Hub known issues in the table of contents on the left side, under the Resources > Release notes archive. Select the desired archived version from the version selector dropdown in the upper left. These archived articles are provided for reference purposes only and do not imply support for these versions. For information about Azure Stack Hub support, see Azure Stack Hub servicing policy. For further assistance, contact Microsoft Customer Support Services.