Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications

This article describes how to deploy and configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example configurations and installation commands, the ASCS instance is number 00, the ERS instance is number 01, the Primary Application Server instance (PAS) is number 02, and the Additional Application Server instance (AAS) is number 03. The SAP system ID QAS is used.

The database layer isn't covered in detail in this article.

Read the following SAP Notes and papers first:

Overview

High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving this on Red Hat Enterprise Linux required building a separate, highly available GlusterFS cluster.

Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure NetApp Files for the shared storage eliminates the need for an additional GlusterFS cluster. Pacemaker is still needed for HA of the SAP NetWeaver central services (ASCS/SCS).

SAP NetWeaver High Availability overview

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using the Standard load balancer. The following list shows the configuration of the load balancer with separate front-end IPs for (A)SCS and ERS.

Important

Multi-SID clustering of SAP ASCS/ERS with Red Hat Linux as guest operating system in Azure VMs is NOT supported. Multi-SID clustering describes the installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster.

(A)SCS

  • Frontend configuration
    • IP address 192.168.14.9
  • Backend configuration
    • Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
  • Probe Port
    • Port 620<nr>
  • Load-balancing rules
    • If using Standard Load Balancer, select HA ports
    • 32<nr> TCP
    • 36<nr> TCP
    • 39<nr> TCP
    • 81<nr> TCP
    • 5<nr>13 TCP
    • 5<nr>14 TCP
    • 5<nr>16 TCP

ERS

  • Frontend configuration
    • IP address 192.168.14.10
  • Backend configuration
    • Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
  • Probe Port
    • Port 621<nr>
  • Load-balancing rules
    • If using Standard Load Balancer, select HA ports
    • 32<nr> TCP
    • 33<nr> TCP
    • 5<nr>13 TCP
    • 5<nr>14 TCP
    • 5<nr>16 TCP

Setting up the Azure NetApp Files infrastructure

SAP NetWeaver requires shared storage for the transport and profile directory. Before proceeding with the setup of the Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation. Check if your selected Azure region offers Azure NetApp Files. The following link shows the availability of Azure NetApp Files by Azure region: Azure NetApp Files Availability by Azure Region.

Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request onboarding to Azure NetApp Files by following the Register for Azure NetApp Files instructions.

Deploy Azure NetApp Files resources

The steps assume that you have already deployed an Azure virtual network. The Azure NetApp Files resources and the VMs where the Azure NetApp Files resources will be mounted must be deployed in the same Azure virtual network or in peered Azure virtual networks.

  1. If you haven't done that already, request onboarding to Azure NetApp Files.

  2. Create the NetApp account in the selected Azure region, following the instructions to create NetApp Account.

  3. Set up Azure NetApp Files capacity pool, following the instructions on how to set up Azure NetApp Files capacity pool.
    The SAP NetWeaver architecture presented in this article uses a single Azure NetApp Files capacity pool with the Premium SKU. We recommend the Azure NetApp Files Premium SKU for SAP NetWeaver application workloads on Azure.

  4. Delegate a subnet to Azure NetApp Files as described in the instructions Delegate a subnet to Azure NetApp Files.

  5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp Files. Deploy the volumes in the designated Azure NetApp Files subnet. Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this example we use two Azure NetApp Files volumes: sapQAS and transSAP. The file paths that are mounted to the corresponding mount points are /usrsapqas/sapmntQAS, /usrsapqas/usrsapQASsys, etc.

    1. volume sapQAS (nfs://192.168.24.5/usrsapqas/sapmntQAS)
    2. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASascs)
    3. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASsys)
    4. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASers)
    5. volume transSAP (nfs://192.168.24.4/transSAP)
    6. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASpas)
    7. volume sapQAS (nfs://192.168.24.5/usrsapqas/usrsapQASaas)

In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure NetApp Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed as Azure disk storage. In this example, volumes 1-5 must be on Azure NetApp Files, while volumes 6-7 (that is, /usr/sap/QAS/D02 and /usr/sap/QAS/D03) could be deployed as Azure disk storage.
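
If you prefer scripting over the Azure portal, the Azure NetApp Files deployment described above can also be done with Azure CLI. The following sketch is illustrative only: the resource group, location, account, pool, volume, and network names are placeholders, and you should verify the exact parameter names and units with az netappfiles --help for your CLI version.

    # Create the NetApp account, the capacity pool, and one volume (all names and sizes are examples)
    az netappfiles account create --resource-group MyResourceGroup --location westus2 \
      --account-name myanfaccount
    az netappfiles pool create --resource-group MyResourceGroup --location westus2 \
      --account-name myanfaccount --pool-name sappool --size 4 --service-level Premium
    az netappfiles volume create --resource-group MyResourceGroup --location westus2 \
      --account-name myanfaccount --pool-name sappool --name sapQAS \
      --service-level Premium --usage-threshold 500 --file-path "sapQAS" \
      --vnet MyVnet --subnet anf-delegated-subnet --protocol-types NFSv3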

Important considerations

When considering Azure NetApp Files for the SAP NetWeaver on Red Hat Enterprise Linux High Availability architecture, be aware of the following important considerations:

  • The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
  • The minimum volume is 100 GiB
  • Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be in the same Azure Virtual Network or in peered virtual networks in the same region. Azure NetApp Files access over VNET peering in the same region is supported. Azure NetApp Files access over global VNET peering is not yet supported.
  • The selected virtual network must have a subnet, delegated to Azure NetApp Files.
  • Azure NetApp Files offers export policy: you can control the allowed clients, the access type (Read&Write, Read Only, etc.).
  • The Azure NetApp Files feature isn't zone aware yet. Currently the Azure NetApp Files feature isn't deployed in all Availability Zones in an Azure region. Be aware of the potential latency implications in some Azure regions.

Setting up (A)SCS

In this example, the resources were deployed manually via the Azure portal.

Deploy Linux manually via Azure portal

First, create the Azure NetApp Files volumes, then deploy the VMs. Afterwards, create a load balancer and use the virtual machines in the backend pool. An illustrative Azure CLI sketch of the load balancer configuration follows the portal steps below.

  1. Create load balancer (internal, standard):
    1. Create the frontend IP addresses
      1. IP address 192.168.14.9 for the ASCS
        1. Open the load balancer, select frontend IP pool, and click Add
        2. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS)
        3. Set the Assignment to Static and enter the IP address (for example 192.168.14.9)
        4. Click OK
      2. IP address 192.168.14.10 for the ASCS ERS
        • Repeat the steps under point 1 above to create an IP address for the ERS (for example 192.168.14.10 and frontend.QAS.ERS)
    2. Create the backend pools
      1. Create a backend pool for the ASCS
        1. Open the load balancer, select backend pools, and click Add
        2. Enter the name of the new backend pool (for example backend.QAS)
        3. Click Add a virtual machine.
        4. Select Virtual machine.
        5. Select the virtual machines of the (A)SCS cluster and their IP addresses.
        6. Click Add
    3. Create the health probes
      1. Port 62000 for ASCS
        1. Open the load balancer, select health probes, and click Add
        2. Enter the name of the new health probe (for example health.QAS.ASCS)
        3. Select TCP as protocol, port 62000, keep Interval 5 and Unhealthy threshold 2
        4. Click OK
      2. Port 62101 for ASCS ERS
        • Repeat the steps under point 1 above to create a health probe for the ERS (for example 62101 and health.QAS.ERS)
    4. Load-balancing rules
      1. Load-balancing rules for ASCS
        1. Open the load balancer, select Load-balancing rules, and click Add
        2. Enter the name of the new load balancer rule (for example lb.QAS.ASCS)
        3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example frontend.QAS.ASCS, backend.QAS and health.QAS.ASCS)
        4. Select HA ports
        5. Increase idle timeout to 30 minutes
        6. Make sure to enable Floating IP
        7. Click OK
        • Repeat the steps above to create load balancing rules for ERS (for example lb.QAS.ERS)
  2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
    1. Create the frontend IP addresses
      1. IP address 192.168.14.9 for the ASCS
        1. Open the load balancer, select frontend IP pool, and click Add
        2. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS)
        3. Set the Assignment to Static and enter the IP address (for example 192.168.14.9)
        4. Click OK
      2. IP address 192.168.14.10 for the ASCS ERS
        • Repeat the steps under point 1 above to create an IP address for the ERS (for example 192.168.14.10 and frontend.QAS.ERS)
    2. Create the backend pools
      1. Create a backend pool for the ASCS
        1. Open the load balancer, select backend pools, and click Add
        2. Enter the name of the new backend pool (for example backend.QAS)
        3. Click Add a virtual machine.
        4. Select the Availability Set you created earlier for ASCS
        5. Select the virtual machines of the (A)SCS cluster
        6. Click OK
    3. Create the health probes
      1. Port 62000 for ASCS
        1. Open the load balancer, select health probes, and click Add
        2. Enter the name of the new health probe (for example health.QAS.ASCS)
        3. Select TCP as protocol, port 62000, keep Interval 5 and Unhealthy threshold 2
        4. Click OK
      2. Port 62101 for ASCS ERS
        • Repeat the steps under point 1 above to create a health probe for the ERS (for example 62101 and health.QAS.ERS)
    4. Load-balancing rules
      1. 3200 TCP for ASCS
        1. Open the load balancer, select Load-balancing rules, and click Add
        2. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200)
        3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example frontend.QAS.ASCS)
        4. Keep protocol TCP, enter port 3200
        5. Increase idle timeout to 30 minutes
        6. Make sure to enable Floating IP
        7. Click OK
      2. Additional ports for the ASCS
        • Repeat the steps under point 1 above for ports 3600, 3900, 8100, 50013, 50014, 50016 and protocol TCP for the ASCS
      3. Additional ports for the ASCS ERS
        • Repeat the steps under point 1 above for ports 3201, 3301, 50113, 50114, 50116 and protocol TCP for the ASCS ERS
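
The Standard load balancer configuration from step 1 can also be scripted. The following Azure CLI sketch is illustrative only: the resource group, virtual network, subnet, and load balancer names are placeholders, and the network interfaces of the cluster VMs still need to be added to the backend pool (for example with az network nic ip-config address-pool add).

    # Internal Standard load balancer with the ASCS frontend IP and the backend pool
    az network lb create --resource-group MyResourceGroup --name lb-QAS --sku Standard \
      --vnet-name MyVnet --subnet sap-subnet \
      --frontend-ip-name frontend.QAS.ASCS --private-ip-address 192.168.14.9 \
      --backend-pool-name backend.QAS
    # Additional frontend IP for the ERS
    az network lb frontend-ip create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name frontend.QAS.ERS --vnet-name MyVnet --subnet sap-subnet \
      --private-ip-address 192.168.14.10
    # Health probes for ASCS (62000) and ERS (62101)
    az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name health.QAS.ASCS --protocol Tcp --port 62000
    az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name health.QAS.ERS --protocol Tcp --port 62101
    # HA-port load-balancing rules with floating IP and 30 minute idle timeout
    az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name lb.QAS.ASCS --protocol All --frontend-port 0 --backend-port 0 \
      --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS \
      --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30
    az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name lb.QAS.ERS --protocol All --frontend-port 0 --backend-port 0 \
      --frontend-ip-name frontend.QAS.ERS --backend-pool-name backend.QAS \
      --probe-name health.QAS.ERS --floating-ip true --idle-timeout 30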

Note

When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

Important

Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0. For details see Load Balancer health probes.
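
A minimal example of setting the parameter persistently follows; the sysctl drop-in file name is only a suggestion.

    # Disable TCP timestamps persistently (the drop-in file name is an example)
    echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/97-tcp-timestamps.conf
    sudo sysctl -p /etc/sysctl.d/97-tcp-timestamps.conf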

Create Pacemaker cluster

Follow the steps in Setting up Pacemaker on Red Hat Enterprise Linux in Azure to create a basic Pacemaker cluster for this (A)SCS server.

Prepare for SAP NetWeaver installation

The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] - only applicable to node 2.

  1. [A] Setup host name resolution

    You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands

    sudo vi /etc/hosts
    

    Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment

    # IP address of cluster node 1
    192.168.14.5    anftstsapcl1
    # IP address of cluster node 2
    192.168.14.6     anftstsapcl2
    # IP address of the load balancer frontend configuration for SAP Netweaver ASCS
    192.168.14.9    anftstsapvh
    # IP address of the load balancer frontend configuration for SAP Netweaver ERS
    192.168.14.10    anftstsapers
    
  2. [1] Create SAP directories in the Azure NetApp Files volume.
    Temporarily mount the Azure NetApp Files volume on one of the VMs and create the SAP directories (file paths).

     #mount temporarily the volume
     sudo mkdir -p /saptmp
     sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 192.168.24.5:/sapQAS /saptmp
     # create the SAP directories
     cd /saptmp
     sudo mkdir -p sapmntQAS
     sudo mkdir -p usrsapQASascs
     sudo mkdir -p usrsapQASers
     sudo mkdir -p usrsapQASsys
     sudo mkdir -p usrsapQASpas
     sudo mkdir -p usrsapQASaas
     # unmount the volume and delete the temporary directory
     cd ..
     sudo umount /saptmp
     sudo rmdir /saptmp
    
  3. [A] Create the shared directories

    sudo mkdir -p /sapmnt/QAS
    sudo mkdir -p /usr/sap/trans
    sudo mkdir -p /usr/sap/QAS/SYS
    sudo mkdir -p /usr/sap/QAS/ASCS00
    sudo mkdir -p /usr/sap/QAS/ERS01
    
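    # Make the mount point directories immutable (chattr +i) so that files cannot accidentally be created on the local disk while the NFS volumes are not mounted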
    sudo chattr +i /sapmnt/QAS
    sudo chattr +i /usr/sap/trans
    sudo chattr +i /usr/sap/QAS/SYS
    sudo chattr +i /usr/sap/QAS/ASCS00
    sudo chattr +i /usr/sap/QAS/ERS01
    
  4. [A] Install NFS client and other requirements

    sudo yum -y install nfs-utils resource-agents resource-agents-sap
    
  5. [A] Check version of resource-agents-sap

    Make sure that the version of the installed resource-agents-sap package is at least 3.9.5-124.el7

    sudo yum info resource-agents-sap
    
    # Loaded plugins: langpacks, product-id, search-disabled-repos
    # Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
    # Installed Packages
    # Name        : resource-agents-sap
    # Arch        : x86_64
    # Version     : 3.9.5
    # Release     : 124.el7
    # Size        : 100 k
    # Repo        : installed
    # From repo   : rhel-sap-for-rhel-7-server-rpms
    # Summary     : SAP cluster resource agents and connector script
    # URL         : https://github.com/ClusterLabs/resource-agents
    # License     : GPLv2+
    # Description : The SAP resource agents and connector script interface with
    #          : Pacemaker to allow SAP instances to be managed in a cluster
    #          : environment.
    
  6. [A] Add mount entries

    sudo vi /etc/fstab
    
    # Add the following lines to fstab, save and exit
     192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
     192.168.24.5:/sapQAS/usrsapQASsys /usr/sap/QAS/SYS nfs rw,hard,rsize=65536,wsize=65536,vers=3
     192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3
    

    Note

    Make sure to match the NFS protocol version of the Azure NetApp Files volumes, when mounting the volumes. In this example the Azure NetApp Files volumes were created as NFSv3 volumes.

    Mount the new shares

    sudo mount -a  
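    # Optionally verify that the shares were mounted with the expected NFS protocol version (look for vers=3)
    sudo nfsstat -m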
    
  7. [A] Configure SWAP file

    sudo vi /etc/waagent.conf
    
    # Set the property ResourceDisk.EnableSwap to y
    # Create and use swapfile on resource disk.
    ResourceDisk.EnableSwap=y
    
    # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
    # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
    # Size of the swapfile.
    ResourceDisk.SwapSizeMB=2000
    

    Restart the Agent to activate the change

    sudo service waagent restart
    
  8. [A] RHEL configuration

    Configure RHEL as described in SAP Note 2002167. One typical step from that note is shown in the example below.
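
    As an illustration, one of the typical RHEL 7 configuration steps from that note is activating the SAP tuned profile. Verify the exact steps for your OS release against the SAP note itself.

    # Install the SAP NetWeaver tuned profile provided by RHEL and activate it
    sudo yum -y install tuned-profiles-sap
    sudo systemctl enable tuned
    sudo systemctl start tuned
    sudo tuned-adm profile sap-netweaver
    # Verify the active profile
    sudo tuned-adm active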

Installing SAP NetWeaver ASCS/ERS

  1. [1] Create a virtual IP resource and health-probe for the ASCS instance

    sudo pcs node standby anftstsapcl2
    
    sudo pcs resource create fs_QAS_ASCS Filesystem device='192.168.24.5:/sapQAS/usrsapQASascs' \
      directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
      --group g-QAS_ASCS
    
    sudo pcs resource create vip_QAS_ASCS IPaddr2 \
      ip=192.168.14.9 cidr_netmask=24 \
      --group g-QAS_ASCS
    
    sudo pcs resource create nc_QAS_ASCS azure-lb port=62000 \
      --group g-QAS_ASCS
    

    Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.

    sudo pcs status
    
    # Node anftstsapcl2: standby
    # Online: [ anftstsapcl1 ]
    #
    # Full list of resources:
    #
    # rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
    #  Resource Group: g-QAS_ASCS
    #      fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
    #      nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
    #      vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
    
  2. [1] Install SAP NetWeaver ASCS

    Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example anftstsapvh, 192.168.14.9 and the instance number that you used for the probe of the load balancer, for example 00.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    # Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
    sudo firewall-cmd --zone=public  --add-port=4237/tcp
    
    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>
    

    If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of the ASCS00 folder and retry.

    sudo chown qasadm /usr/sap/QAS/ASCS00
    sudo chgrp sapsys /usr/sap/QAS/ASCS00
    
  3. [1] Create a virtual IP resource and health-probe for the ERS instance

    sudo pcs node unstandby anftstsapcl2
    sudo pcs node standby anftstsapcl1
    
    sudo pcs resource create fs_QAS_AERS Filesystem device='192.168.24.5:/sapQAS/usrsapQASers' \
      directory='/usr/sap/QAS/ERS01' fstype='nfs' \
     --group g-QAS_AERS
    
    sudo pcs resource create vip_QAS_AERS IPaddr2 \
      ip=192.168.14.10 cidr_netmask=24 \
     --group g-QAS_AERS
    
    sudo pcs resource create nc_QAS_AERS azure-lb port=62101 \
     --group g-QAS_AERS
    

    Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.

    sudo pcs status
    
    # Node anftstsapcl1: standby
    # Online: [ anftstsapcl2 ]
    #
    # Full list of resources:
    #
    # rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl2
    #  Resource Group: g-QAS_ASCS
    #      fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
    #      nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
    #      vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
    #  Resource Group: g-QAS_AERS
    #      fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
    #      nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
    #      vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
    
  4. [2] Install SAP NetWeaver ERS

    Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example anftstsapers, 192.168.14.10 and the instance number that you used for the probe of the load balancer, for example 01.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    # Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
    sudo firewall-cmd --zone=public  --add-port=4237/tcp
    
    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname>
    

    If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of the ERS01 folder and retry.

    sudo chown qasadm /usr/sap/QAS/ERS01
    sudo chgrp sapsys /usr/sap/QAS/ERS01
    
  5. [1] Adapt the ASCS/SCS and ERS instance profiles

    • ASCS/SCS profile
    sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
    
    # Change the restart command to a start command
    #Restart_Program_01 = local $(_EN) pf=$(_PF)
    Start_Program_01 = local $(_EN) pf=$(_PF)
    
    # Add the keep alive parameter
    enque/encni/set_so_keepalive = true
    
    • ERS profile
    sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
    
    # Change the restart command to a start command
    #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
    Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
    
    # remove Autostart from ERS profile
    # Autostart = 1
    
  6. [A] Configure Keep Alive

    The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile and change the Linux system settings. Read SAP Note 1410736 for more information.

    The ASCS/SCS profile parameter enque/encni/set_so_keepalive was already added in the last step.

    # Change the Linux system configuration
    sudo sysctl net.ipv4.tcp_keepalive_time=120
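    # Optionally make the setting persistent across reboots with a sysctl drop-in file (the file name is an example)
    echo "net.ipv4.tcp_keepalive_time = 120" | sudo tee /etc/sysctl.d/95-sap-keepalive.conf
    sudo sysctl -p /etc/sysctl.d/95-sap-keepalive.conf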
    
  7. [A] Update the /usr/sap/sapservices file

    To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be used with HANA SR. A scripted alternative to editing the file manually is sketched after the example below.

    sudo vi /usr/sap/sapservices
    
    # On the node where you installed the ASCS, comment out the following line
    # LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm
    
    # On the node where you installed the ERS, comment out the following line
    # LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm
    
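
    As an alternative to editing the file manually, you can comment out the relevant line non-interactively. The following sed commands are only a sketch; the patterns assume this article's SID and instance numbers and must be adjusted to your installation.

    # On the node where you installed the ASCS, comment out the ASCS00 line (pattern is an example)
    sudo sed -i 's|^LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00|# &|' /usr/sap/sapservices
    # On the node where you installed the ERS, comment out the ERS01 line (pattern is an example)
    sudo sed -i 's|^LD_LIBRARY_PATH=/usr/sap/QAS/ERS01|# &|' /usr/sap/sapservices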
  8. [1] Create the SAP cluster resources

    If using enqueue server 1 architecture (ENSA1), define the resources as follows:

    sudo pcs property set maintenance-mode=true
    
     sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
     InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
     AUTOMATIC_RECOVER=false \
     meta resource-stickiness=5000 migration-threshold=1 \
     --group g-QAS_ASCS
    
     sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
     InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
     AUTOMATIC_RECOVER=false IS_ERS=true \
     --group g-QAS_AERS
    
     sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
     sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1
     sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false
    
     sudo pcs node unstandby anftstsapcl1
     sudo pcs property set maintenance-mode=false
    

    SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note 2630416 for enqueue server 2 support. If using enqueue server 2 architecture (ENSA2), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows:

    sudo pcs property set maintenance-mode=true
    
    sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \
    InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 \
    --group g-QAS_ASCS
    
    sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \
    InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \
    AUTOMATIC_RECOVER=false IS_ERS=true \
    --group g-QAS_AERS
    
    sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000
    sudo pcs constraint order g-QAS_ASCS then g-QAS_AERS kind=Optional symmetrical=false
    
    sudo pcs node unstandby anftstsapcl1
    sudo pcs property set maintenance-mode=false
    

    If you are upgrading from an older version and switching to enqueue server 2, see SAP note 2641322.

    Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.

    sudo pcs status
    
    # Online: [ anftstsapcl1 anftstsapcl2 ]
    #
    # Full list of resources:
    #
    # rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl2
    #  Resource Group: g-QAS_ASCS
    #      fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
    #      nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
    #      vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
    #      rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    #  Resource Group: g-QAS_AERS
    #      fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
    #      nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
    #      vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
    #      rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  9. [A] Add the firewall rules for ASCS and ERS on both nodes.

    # Probe Port of ASCS
    sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=62000/tcp
    sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=3200/tcp
    sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=3600/tcp
    sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=3900/tcp
    sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=8100/tcp
    sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50013/tcp
    sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50014/tcp
    sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50016/tcp
    # Probe Port of ERS
    sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=62101/tcp
    sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=3301/tcp
    sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50113/tcp
    sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50114/tcp
    sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=50116/tcp
    

SAP NetWeaver application server preparation

Some databases require that the database instance installation is executed on an application server. Prepare the application server virtual machines so that you can use them in these cases.

The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise, some of the steps below (like configuring host name resolution) aren't needed.

The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS or [S] - only applicable to AAS.

  1. [A] Setup host name resolution

    You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands:

    sudo vi /etc/hosts
    

    Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.

    # IP address of the load balancer frontend configuration for SAP NetWeaver ASCS
    192.168.14.9 anftstsapvh
    # IP address of the load balancer frontend configuration for SAP NetWeaver ASCS ERS
    192.168.14.10 anftstsapers
    192.168.14.7 anftstsapa01
    192.168.14.8 anftstsapa02
    
  2. [A] Create the sapmnt directory

    sudo mkdir -p /sapmnt/QAS
    sudo mkdir -p /usr/sap/trans
    
    sudo chattr +i /sapmnt/QAS
    sudo chattr +i /usr/sap/trans
    
  3. [A] Install NFS client and other requirements

    sudo yum -y install nfs-utils uuidd
    
  4. [A] Add mount entries

    sudo vi /etc/fstab
    
    # Add the following lines to fstab, save and exit
    192.168.24.5:/sapQAS/sapmntQAS /sapmnt/QAS nfs rw,hard,rsize=65536,wsize=65536,vers=3
    192.168.24.4:/transSAP /usr/sap/trans nfs rw,hard,rsize=65536,wsize=65536,vers=3
    

    Mount the new shares

    sudo mount -a
    
  5. [P] Create and mount the PAS directory

    sudo mkdir -p /usr/sap/QAS/D02
    sudo chattr +i /usr/sap/QAS/D02
    
    sudo vi /etc/fstab
    # Add the following line to fstab
    192.168.24.5:/sapQAS/usrsapQASpas /usr/sap/QAS/D02 nfs rw,hard,rsize=65536,wsize=65536,vers=3
    
    # Mount
    sudo mount -a
    
  6. [S] Create and mount the AAS directory

    sudo mkdir -p /usr/sap/QAS/D03
    sudo chattr +i /usr/sap/QAS/D03
    
    sudo vi /etc/fstab
    # Add the following line to fstab
    192.168.24.5:/sapQAS/usrsapQASaas /usr/sap/QAS/D03 nfs rw,hard,rsize=65536,wsize=65536,vers=3
    
    # Mount
    sudo mount -a
    
  7. [A] Configure SWAP file

    sudo vi /etc/waagent.conf
    
    # Set the property ResourceDisk.EnableSwap to y
    # Create and use swapfile on resource disk.
    ResourceDisk.EnableSwap=y
    
    # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
    # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
    # Size of the swapfile.
    ResourceDisk.SwapSizeMB=2000
    

    Restart the Agent to activate the change

    sudo service waagent restart
    

Install database

In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this installation. For more information on how to install SAP HANA in Azure, see High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux. For a list of supported databases, see SAP Note 1928533.

  1. Run the SAP database instance installation

    Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the database.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
    

SAP NetWeaver application server installation

Follow these steps to install an SAP application server.

  1. Prepare application server

    Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the application server.

  2. Install SAP NetWeaver application server

    Install a primary or additional SAP NetWeaver application server.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
    
  3. Update SAP HANA secure store

    Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.

    Run the following command to list the entries as <sapsid>adm

    hdbuserstore List
    

    This should list all entries and should look similar to

    DATA FILE       : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT
    KEY FILE        : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY
    
    KEY DEFAULT
      ENV : 192.168.14.4:30313
      USER: SAPABAP1
      DATABASE: QAS
    

    The output shows that the IP address of the default entry is pointing to the virtual machine and not to the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the output above)!

    su - qasadm
    hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
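    # Verify the changed entry
    hdbuserstore List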
    

Test the cluster setup

  1. Manually migrate the ASCS instance

    Resource state before starting the test:

     rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following commands as root to migrate the ASCS instance.

    [root@anftstsapcl1 ~]# pcs resource move rsc_sap_QAS_ASCS00
    
    [root@anftstsapcl1 ~]# pcs resource clear rsc_sap_QAS_ASCS00
    
    # Remove failed actions for the ERS that occurred as part of the migration
    [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  2. Simulate node crash

    Resource state before starting the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following command as root on the node where the ASCS instance is running

    [root@anftstsapcl2 ~]# echo b > /proc/sysrq-trigger
    

    The status after the node is started again should look like this.

    Online: [ anftstsapcl1 anftstsapcl2 ]
    
    Full list of resources:
    
    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
    Failed Actions:
    * rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete, exitreason='',
    

    Use the following command to clean the failed resources.

    [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  3. Kill message server process

    Resource state before starting the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following commands as root to identify the process of the message server and kill it.

    [root@anftstsapcl1 ~]# pgrep ms.sapQAS | xargs kill -9
    

    If you only kill the message server once, it will be restarted by sapstart. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.

    [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00
    [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  4. Kill enqueue server process

    Resource state before starting the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following commands as root on the node where the ASCS instance is running to kill the enqueue server.

    [root@anftstsapcl2 ~]# pgrep en.sapQAS | xargs kill -9
    

    The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over after the ASCS instance is started. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test.

    [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00
    [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  5. Kill enqueue replication server process

    Resource state before starting the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process.

    [root@anftstsapcl2 ~]# pgrep er.sapQAS | xargs kill -9
    

    If you only run the command once, sapstart will restart the process. If you run it often enough, sapstart will not restart the process and the resource will be in a stopped state. Run the following commands as root to clean up the resource state of the ERS instance after the test.

    [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  6. Kill enqueue sapstartsrv process

    Resource state before starting the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following commands as root on the node where the ASCS is running.

    [root@anftstsapcl1 ~]# pgrep -fl ASCS00.*sapstartsrv
    # 59545 sapstartsrv
    
    [root@anftstsapcl1 ~]# kill -9 59545
    

    The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the monitoring. Resource state after the test:

    rsc_st_azure    (stonith:fence_azure_arm):      Started anftstsapcl1
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:azure-lb):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
     Resource Group: g-QAS_AERS
         fs_QAS_AERS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_AERS        (ocf::heartbeat:azure-lb):      Started anftstsapcl2
         vip_QAS_AERS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

Next steps