Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS

This article describes the steps you take to prepare the Azure infrastructure for installing and configuring a high-availability SAP ASCS/SCS instance on a Windows failover cluster, using a cluster shared disk for clustering the SAP ASCS instance. Two alternatives for the cluster shared disk are covered in this documentation: an Azure shared disk, or mirrored storage simulated with SIOS DataKeeper Cluster Edition.

The presented configuration relies on Azure proximity placement groups (PPGs) to achieve optimal network latency for SAP workloads. This documentation doesn't cover the database layer.

Note

Azure proximity placement groups are a prerequisite for using an Azure shared disk.

Prerequisites

Before you begin the installation, review this article.

Create the ASCS VMs

For the SAP ASCS/SCS cluster, deploy two VMs in an Azure availability set, and place both VMs in the same proximity placement group. Once the VMs are deployed:

  • Create an Azure internal load balancer for the SAP ASCS/SCS instance
  • Add the Windows VMs to the AD domain

The host names and the IP addresses for the presented scenario are:

| Host name role | Host name | Static IP address | Availability set | Proximity placement group |
| --- | --- | --- | --- | --- |
| 1st cluster node ASCS/SCS cluster | pr1-ascs-10 | 10.0.0.4 | pr1-ascs-avset | PR1PPG |
| 2nd cluster node ASCS/SCS cluster | pr1-ascs-11 | 10.0.0.5 | pr1-ascs-avset | PR1PPG |
| Cluster network name | pr1clust | 10.0.0.42 (only for Windows Server 2016 cluster) | n/a | n/a |
| ASCS cluster network name | pr1-ascscl | 10.0.0.43 | n/a | n/a |
| ERS cluster network name (only for ERS2) | pr1-erscl | 10.0.0.44 | n/a | n/a |
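As a hedged sketch, an availability set bound to a proximity placement group like the ones above can be created with Azure PowerShell; the resource group name and region below are illustrative assumptions:

```powershell
# Sketch: create a proximity placement group and an availability set bound to it.
# Resource group name and region are illustrative assumptions.
$ResourceGroupName = "MyResourceGroup"
$Location = "westeurope"

# Proximity placement group for the SAP system PR1
$ppg = New-AzProximityPlacementGroup -ResourceGroupName $ResourceGroupName `
    -Name "PR1PPG" -Location $Location -ProximityPlacementGroupType Standard

# Availability set for the ASCS/SCS cluster nodes, placed in the PPG
New-AzAvailabilitySet -ResourceGroupName $ResourceGroupName -Name "pr1-ascs-avset" `
    -Location $Location -ProximityPlacementGroupId $ppg.Id `
    -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5 -Sku Aligned
```

When you then deploy the two cluster VMs, reference both the availability set and the proximity placement group so the VMs land close to each other in the same datacenter.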

Create Azure internal load balancer

SAP ASCS, SAP SCS, and the new SAP ERS2 use a virtual host name and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We strongly recommend using the Standard load balancer.

Important

Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see Azure Load Balancer limitations. If you need an additional IP address for the VM, deploy a second NIC.

The following list shows the configuration of the (A)SCS/ERS load balancer. The configuration for both SAP ASCS and ERS2 is performed in the same Azure load balancer.

(A)SCS

  • Frontend configuration
    • Static ASCS/SCS IP address 10.0.0.43
  • Backend configuration
    Add all virtual machines that should be part of the (A)SCS/ERS cluster. In this example, these are VMs pr1-ascs-10 and pr1-ascs-11.
  • Probe Port
    • Port 620nr, where nr is the SAP instance number. Leave the default options for Protocol (TCP), Interval (5), and Unhealthy threshold (2).
  • Load-balancing rules
    • If using Standard Load Balancer, select HA ports

    • If using Basic Load Balancer, create Load balancing rules for the following ports

      • 32nr TCP
      • 36nr TCP
      • 39nr TCP
      • 81nr TCP
      • 5nr13 TCP
      • 5nr14 TCP
      • 5nr16 TCP
    • Make sure that Idle timeout (minutes) is set to the maximum value of 30, and that Floating IP (direct server return) is Enabled.
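With a Standard load balancer, the health probe and the HA-ports load-balancing rule described above can be sketched in Azure PowerShell as follows; the load balancer name, the use of the first frontend/backend configurations, and SAP instance number 00 (so probe port 62000) are assumptions for illustration:

```powershell
# Sketch: add the ASCS health probe and an HA-ports rule to an existing
# Standard internal load balancer. Names and instance number 00 are assumptions.
$lb = Get-AzLoadBalancer -ResourceGroupName "MyResourceGroup" -Name "pr1-ascs-ilb"

# Health probe on port 620nr (here nr = 00, so port 62000)
$lb | Add-AzLoadBalancerProbeConfig -Name "ascs-hp" -Protocol Tcp -Port 62000 `
    -IntervalInSeconds 5 -ProbeCount 2

# HA-ports rule: Protocol All with frontend/backend port 0, floating IP enabled,
# and idle timeout at the maximum of 30 minutes
$lb | Add-AzLoadBalancerRuleConfig -Name "ascs-lbrule" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe ($lb.Probes | Where-Object Name -eq "ascs-hp") `
    -Protocol All -FrontendPort 0 -BackendPort 0 `
    -EnableFloatingIP -IdleTimeoutInMinutes 30

# Persist the in-memory changes to Azure
$lb | Set-AzLoadBalancer
```

With HA ports there is no need to enumerate the individual SAP ports; a Basic load balancer would instead require one rule per port listed above.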

ERS2

As Enqueue Replication Server 2 (ERS2) is also clustered, the ERS2 virtual IP address must be configured on the Azure ILB in addition to the SAP ASCS/SCS IP address above. This section only applies if you are using the Enqueue Replication Server 2 architecture.

  • 2nd Frontend configuration

    • Static SAP ERS2 IP address 10.0.0.44
  • Backend configuration
    The VMs were already added to the ILB backend pool.

  • 2nd Probe Port

    • Port 621nr
      Leave the default option for Protocol (TCP), Interval (5), Unhealthy threshold (2)
  • 2nd Load-balancing rules

    • If using Standard Load Balancer, select HA ports

    • If using Basic Load Balancer, create Load balancing rules for the following ports

      • 32nr TCP
      • 33nr TCP
      • 5nr13 TCP
      • 5nr14 TCP
      • 5nr16 TCP
    • Make sure that Idle timeout (minutes) is set to the maximum value of 30, and that Floating IP (direct server return) is Enabled.

Tip

With the Azure Resource Manager template for WSFC for an SAP ASCS/SCS instance with Azure shared disk, you can automate the infrastructure preparation, using an Azure shared disk for one SAP SID with ERS1.
The ARM template creates two Windows Server 2019 or 2016 VMs, creates the Azure shared disk, and attaches it to the VMs. The Azure internal load balancer is created and configured as well. For details, see the ARM template.

Add registry entries on both cluster nodes of the ASCS/SCS instance

Azure Load Balancer may close connections if they are idle for a period and exceed the idle timeout. The SAP work processes open connections to the SAP enqueue process as soon as the first enqueue/dequeue request needs to be sent. To avoid interrupting these connections, change the TCP/IP KeepAliveTime and KeepAliveInterval values on both cluster nodes. If using ERS1, it is also necessary to add SAP profile parameters, as described later in this article. The following registry entries must be changed on both cluster nodes:

  • KeepAliveTime
  • KeepAliveInterval
| Path | Variable name | Variable type | Value | Documentation |
| --- | --- | --- | --- | --- |
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveTime | REG_DWORD (Decimal) | 120000 | KeepAliveTime |
| HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters | KeepAliveInterval | REG_DWORD (Decimal) | 120000 | KeepAliveInterval |
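The registry changes above can be applied with PowerShell, for example:

```powershell
# Set the TCP keep-alive registry values; run on both cluster nodes
$TcpParams = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
Set-ItemProperty -Path $TcpParams -Name "KeepAliveTime" -Type DWord -Value 120000
Set-ItemProperty -Path $TcpParams -Name "KeepAliveInterval" -Type DWord -Value 120000
```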

To apply the changes, restart both cluster nodes.

Add the Windows VMs to the domain

After you assign static IP addresses to the virtual machines, add the virtual machines to the domain.
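A minimal sketch of the domain join with PowerShell, where the domain name contoso.com is an assumption:

```powershell
# Join the VM to the AD domain and restart; run on each VM.
# The domain name is an illustrative assumption.
Add-Computer -DomainName "contoso.com" -Credential (Get-Credential) -Restart
```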

Install and configure Windows failover cluster

Install the Windows failover cluster feature

Run this command on one of the cluster nodes:

 # Hostnames of the Win cluster for SAP ASCS/SCS
	$SAPSID = "PR1"
	$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
	$ClusterName = $SAPSID.ToLower() + "clust"
	
	# Install Windows features.
	# After the feature installs, manually reboot both nodes
	Invoke-Command $ClusterNodes {Install-WindowsFeature Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools }

Once the feature installation has completed, reboot both cluster nodes.

Test and configure Windows failover cluster

On Windows Server 2019, the cluster automatically recognizes that it is running in Azure and, as the default option for the cluster management IP, uses a distributed network name. Therefore, the cluster uses any of the cluster nodes' local IP addresses. As a result, there is no need for a dedicated (virtual) network name for the cluster, and no need to configure this IP address on the Azure internal load balancer.

For more information, see Windows Server 2019 Failover Clustering new features. Run this command on one of the cluster nodes:

 # Hostnames of the Win cluster for SAP ASCS/SCS
	$SAPSID = "PR1"
	$ClusterNodes = ("pr1-ascs-10","pr1-ascs-11")
	$ClusterName = $SAPSID.ToLower() + "clust"
	
	# IP address for cluster network name is needed ONLY on Windows Server 2016 cluster
	$ClusterStaticIPAddress = "10.0.0.42"
		
	# Test cluster
	Test-Cluster -Node $ClusterNodes -Verbose
	
	$ComputerInfo = Get-ComputerInfo
	
	$WindowsVersion = $ComputerInfo.WindowsProductName
	
	if($WindowsVersion -eq "Windows Server 2019 Datacenter"){
	    Write-Host "Configuring Windows Failover Cluster on Windows Server 2019 Datacenter..."
	    New-Cluster -Name $ClusterName -Node $ClusterNodes -Verbose
	}elseif($WindowsVersion -eq "Windows Server 2016 Datacenter"){
	    Write-Host "Configuring Windows Failover Cluster on Windows Server 2016 Datacenter..."
	    New-Cluster -Name $ClusterName -Node $ClusterNodes -StaticAddress $ClusterStaticIPAddress -Verbose
	}else{
	    Write-Error "Unsupported Windows version!"
	}

Configure cluster cloud quorum

When you use Windows Server 2016 or 2019, we recommend configuring Azure Cloud Witness as the cluster quorum.

Run this command on one of the cluster nodes:

	$AzureStorageAccountName = "cloudquorumwitness"
	Set-ClusterQuorum -CloudWitness -AccountName $AzureStorageAccountName -AccessKey <YourAzureStorageAccessKey> -Verbose

Tuning the Windows failover cluster thresholds

After you successfully install the Windows failover cluster, you need to adjust some thresholds to make them suitable for clusters deployed in Azure. The parameters to be changed are documented in Tuning failover cluster network thresholds. Assuming that the two VMs that make up the Windows cluster configuration for ASCS/SCS are in the same subnet, change the following parameters to these values:

  • SameSubNetDelay = 2000
  • SameSubNetThreshold = 15
  • RoutingHistoryLength = 30

These settings were tested with customers and offer a good compromise: they are resilient enough, yet provide failover that is fast enough for real error conditions in SAP workloads or VM failures.
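The threshold values above can be set with PowerShell on one of the cluster nodes, for example:

```powershell
# Tune failover cluster network thresholds for Azure; run on one cluster node
$cluster = Get-Cluster
$cluster.SameSubnetDelay = 2000
$cluster.SameSubnetThreshold = 15
$cluster.RoutingHistoryLength = 30
```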

Configure Azure shared disk

This section only applies if you are using an Azure shared disk.

Create and attach Azure shared disk with PowerShell

Run this command on one of the cluster nodes. You will need to adjust the values for your resource group, Azure region, SAPSID, and so on.

	#############################
	# Create Azure Shared Disk
	#############################
	
	$ResourceGroupName = "MyResourceGroup"
	$location = "MyAzureRegion"
	$SAPSID = "PR1"
	
	$DiskSizeInGB = 512
	$DiskName = "$($SAPSID)ASCSSharedDisk"
	
	# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk
	$NumberOfWindowsClusterNodes = 2
			
	$diskConfig = New-AzDiskConfig -Location $location -SkuName Premium_LRS  -CreateOption Empty  -DiskSizeGB $DiskSizeInGB -MaxSharesCount $NumberOfWindowsClusterNodes
	$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $diskConfig
	
	##################################
	## Attach the disk to cluster VMs
	##################################
	# ASCS Cluster VM1
	$ASCSClusterVM1 = $SAPSID.ToLower() + "-ascs-10"
	
	# ASCS Cluster VM2
	$ASCSClusterVM2 = $SAPSID.ToLower() + "-ascs-11"
	
	# Add the Azure Shared Disk to Cluster Node 1
	$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM1 
	$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
	Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose
	
	# Add the Azure Shared Disk to Cluster Node 2
	$vm = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $ASCSClusterVM2
	$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
	Update-AzVm -VM $vm -ResourceGroupName $ResourceGroupName -Verbose

Format the shared disk with PowerShell

  1. Get the disk number. Run these PowerShell commands on one of the cluster nodes:

     Get-Disk | Where-Object PartitionStyle -Eq "RAW"  | Format-Table -AutoSize 
     # Example output
     # Number Friendly Name     Serial Number HealthStatus OperationalStatus Total Size Partition Style
     # ------ -------------     ------------- ------------ ----------------- ---------- ---------------
     # 2      Msft Virtual Disk               Healthy      Online                512 GB RAW            
    
    
  2. Format the disk. In this example, it is disk number 2.

     # Format SAP ASCS Disk number '2', with drive letter 'S'
     $SAPSID = "PR1"
     $DiskNumber = 2
     $DriveLetter = "S"
     $DiskLabel = "$SAPSID" + "SAP"
    
     Get-Disk -Number $DiskNumber | Where-Object PartitionStyle -Eq "RAW" | Initialize-Disk -PartitionStyle GPT -PassThru |  New-Partition -DriveLetter $DriveLetter -UseMaximumSize | Format-Volume  -FileSystem ReFS -NewFileSystemLabel $DiskLabel -Force -Verbose
     # Example output
     # DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining      Size
     # ----------- --------------- ---------- --------- ------------ ----------------- -------------      ----
     # S           PR1SAP          ReFS       Fixed     Healthy      OK                    504.98 GB 511.81 GB
    
  3. Verify that the disk is now visible as a cluster disk.

     # List all disks
     Get-ClusterAvailableDisk -All
     # Example output
     # Cluster    : pr1clust
     # Id         : 88ff1d94-0cf1-4c70-89ae-cbbb2826a484
     # Name       : Cluster Disk 1
     # Number     : 2
     # Size       : 549755813888
     # Partitions : {\\?\GLOBALROOT\Device\Harddisk2\Partition2\}
    
  4. Register the disk in the cluster.

     # Add the disk to cluster 
     Get-ClusterAvailableDisk -All | Add-ClusterDisk
     # Example output	 
     # Name           State  OwnerGroup        ResourceType 
     # ----           -----  ----------        ------------ 
     # Cluster Disk 1 Online Available Storage Physical Disk
    

SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster shared disk

This section is only applicable, if you are using the third-party software SIOS DataKeeper Cluster Edition to create a mirrored storage that simulates cluster shared disk.

Now, you have a working Windows Server failover clustering configuration in Azure. To install an SAP ASCS/SCS instance, you need a shared disk resource. One option is to use SIOS DataKeeper Cluster Edition, a third-party solution that you can use to create shared disk resources.

Installing SIOS DataKeeper Cluster Edition for the SAP ASCS/SCS cluster shared disk involves these tasks:

Install SIOS DataKeeper

Install SIOS DataKeeper Cluster Edition on each node in the cluster. To create virtual shared storage with SIOS DataKeeper, create a synced mirror and then simulate cluster shared storage.

Before you install the SIOS software, create the DataKeeperSvc domain user.

Note

Add the DataKeeperSvc domain user to the local Administrators group on both cluster nodes.

  1. Install the SIOS software on both cluster nodes.

    Figure 31: First page of the SIOS DataKeeper installation

  2. In the dialog box, select Yes.

    Figure 32: DataKeeper informs you that a service will be disabled

  3. In the dialog box, we recommend that you select Domain or Server account.

    Figure 33: User selection for SIOS DataKeeper

  4. Enter the domain account user name and password that you created for SIOS DataKeeper.

    Figure 34: Enter the domain user name and password for the SIOS DataKeeper installation

  5. Install the license key for your SIOS DataKeeper instance, as shown in Figure 35.

    Figure 35: Enter your SIOS DataKeeper license key

  6. When prompted, restart the virtual machine.

Configure SIOS DataKeeper

After you install SIOS DataKeeper on both nodes, start the configuration. The goal of the configuration is to have synchronous data replication between the additional disks that are attached to each of the virtual machines.

  1. Start the DataKeeper Management and Configuration tool, and then select Connect Server.

    Figure 36: SIOS DataKeeper Management and Configuration tool

  2. Enter the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and, in a second step, the second node.

    Figure 37: Insert the name or TCP/IP address of the first node the Management and Configuration tool should connect to, and in a second step, the second node

  3. Create the replication job between the two nodes.

    Figure 38: Create a replication job

    A wizard guides you through the process of creating a replication job.

  4. Define the name of the replication job.

    Figure 39: Define the name of the replication job

    Figure 40: Define the base data for the node, which should be the current source node

  5. Define the name, TCP/IP address, and disk volume of the target node.

    Figure 41: Define the name, TCP/IP address, and disk volume of the current target node

  6. Define the compression algorithms. In our example, we recommend that you compress the replication stream. Especially in resynchronization situations, the compression of the replication stream dramatically reduces resynchronization time. Compression uses the CPU and RAM resources of a virtual machine. As the compression rate increases, so does the volume of CPU resources that are used. You can adjust this setting later.

  7. Another setting you need to check is whether the replication occurs asynchronously or synchronously. When you protect SAP ASCS/SCS configurations, you must use synchronous replication.

    Figure 42: Define replication details

  8. Define whether the volume that is replicated by the replication job should be represented to a Windows Server failover cluster configuration as a shared disk. For the SAP ASCS/SCS configuration, select Yes so that the Windows cluster sees the replicated volume as a shared disk that it can use as a cluster volume.

    Figure 43: Select Yes to set the replicated volume as a cluster volume

    After the volume is created, the DataKeeper Management and Configuration tool shows that the replication job is active.

    Figure 44: DataKeeper synchronous mirroring for the SAP ASCS/SCS shared disk is active

    Failover Cluster Manager now shows the disk as a DataKeeper disk, as shown in Figure 45:

    Figure 45: Failover Cluster Manager shows the disk that DataKeeper replicated

Next steps