High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications

This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster framework, and install a highly available SAP NetWeaver 7.50 system, using Azure NetApp Files. In the example configurations, installation commands, and so on, the ASCS instance number is 00, the ERS instance number is 01, the Primary Application Server instance (PAS) is 02, and the Additional Application Server instance (AAS) is 03. SAP system ID QAS is used.

This article explains how to achieve high availability for SAP NetWeaver applications with Azure NetApp Files. The database layer isn't covered in detail in this article.

Read the following SAP Notes and papers first:

Overview

High availability (HA) for SAP NetWeaver central services requires shared storage. Until now, achieving that on SUSE Linux required building a separate highly available NFS cluster.

Now it is possible to achieve SAP NetWeaver HA by using shared storage deployed on Azure NetApp Files. Using Azure NetApp Files for the shared storage eliminates the need for an additional NFS cluster. Pacemaker is still needed for HA of the SAP NetWeaver central services (ASCS/SCS).

Overview of SAP NetWeaver high availability

SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and the SAP HANA database use virtual hostnames and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. We recommend using Standard load balancer. The following list shows the configuration of the (A)SCS and ERS load balancer; a scripted example follows the ERS list below.

Important

Multi-SID clustering of SAP ASCS/ERS with SUSE Linux as the guest operating system in Azure VMs is NOT supported. Multi-SID clustering describes the installation of multiple SAP ASCS/ERS instances with different SIDs in one Pacemaker cluster.

(A)SCS

  • Frontend configuration
    • IP address 10.1.1.20
  • Backend configuration
    • Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
  • Probe port
    • Port 620<nr>
  • Load-balancing rules
    • If using Standard Load Balancer, select HA ports
    • If using Basic Load Balancer, create load-balancing rules for the following ports
      • 32<nr> TCP
      • 36<nr> TCP
      • 39<nr> TCP
      • 81<nr> TCP
      • 5<nr>13 TCP
      • 5<nr>14 TCP
      • 5<nr>16 TCP

ERS

  • Frontend configuration
    • IP address 10.1.1.21
  • Backend configuration
    • Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
  • Probe port
    • Port 621<nr>
  • Load-balancing rules
    • If using Standard Load Balancer, select HA ports
    • If using Basic Load Balancer, create load-balancing rules for the following ports
      • 32<nr> TCP
      • 33<nr> TCP
      • 5<nr>13 TCP
      • 5<nr>14 TCP
      • 5<nr>16 TCP

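The frontend IP, health probe, and HA-ports rule described above can also be created with the Azure CLI. The following sketch shows the ASCS side of the Standard load balancer only; the resource group, virtual network, subnet, and load balancer names (MyResourceGroup, MyVNet, MySubnet, lb-QAS) are placeholders for your own values, and the ERS frontend, probe, and rule are created the same way with the ERS values.

    # Internal Standard load balancer with a static ASCS frontend IP
    az network lb create --resource-group MyResourceGroup --name lb-QAS \
      --sku Standard --vnet-name MyVNet --subnet MySubnet \
      --frontend-ip-name frontend.QAS.ASCS --private-ip-address 10.1.1.20 \
      --backend-pool-name backend.QAS

    # Health probe on port 620<nr>, here 62000 for ASCS instance number 00
    az network lb probe create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name health.QAS.ASCS --protocol Tcp --port 62000

    # HA-ports rule (protocol All, port 0) with floating IP and a 30-minute idle timeout
    az network lb rule create --resource-group MyResourceGroup --lb-name lb-QAS \
      --name lb.QAS.ASCS --protocol All --frontend-port 0 --backend-port 0 \
      --frontend-ip-name frontend.QAS.ASCS --backend-pool-name backend.QAS \
      --probe-name health.QAS.ASCS --floating-ip true --idle-timeout 30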
Setting up the Azure NetApp Files infrastructure

SAP NetWeaver requires shared storage for the transport and profile directories. Before proceeding with the setup of the Azure NetApp Files infrastructure, familiarize yourself with the Azure NetApp Files documentation. Check whether your selected Azure region offers Azure NetApp Files. The following link shows the availability of Azure NetApp Files by Azure region: Azure NetApp Files Availability by Azure Region.

Azure NetApp Files is available in several Azure regions. Before deploying Azure NetApp Files, request onboarding to Azure NetApp Files, following the Register for Azure NetApp Files instructions.

Deploy Azure NetApp Files resources

The steps assume that you have already deployed an Azure virtual network. The Azure NetApp Files resources and the VMs where the Azure NetApp Files resources will be mounted must be deployed in the same Azure virtual network or in peered Azure virtual networks.

  1. If you haven't done so already, request onboarding to Azure NetApp Files.

  2. Create the NetApp account in the selected Azure region, following the instructions to create a NetApp account.

  3. Set up the Azure NetApp Files capacity pool, following the instructions on how to set up an Azure NetApp Files capacity pool.
    The SAP NetWeaver architecture presented in this article uses a single Azure NetApp Files capacity pool with the Premium SKU. We recommend the Azure NetApp Files Premium SKU for SAP NetWeaver application workloads on Azure.

  4. Delegate a subnet to Azure NetApp Files, as described in the instructions Delegate a subnet to Azure NetApp Files.

  5. Deploy Azure NetApp Files volumes, following the instructions to create a volume for Azure NetApp Files. Deploy the volumes in the designated Azure NetApp Files subnet. Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. In this example, sapmntQAS, usrsapQAS, and so on are the volume names, and sapmntqas, usrsapqas, and so on are the file paths for the Azure NetApp Files volumes.

    a. volume sapmntQAS (nfs://10.1.0.4/sapmntqas)
    b. volume usrsapQAS (nfs://10.1.0.4/usrsapqas)
    c. volume usrsapQASsys (nfs://10.1.0.5/usrsapqassys)
    d. volume usrsapQASers (nfs://10.1.0.4/usrsapqasers)
    e. volume trans (nfs://10.1.0.4/trans)
    f. volume usrsapQASpas (nfs://10.1.0.5/usrsapqaspas)
    g. volume usrsapQASaas (nfs://10.1.0.4/usrsapqasaas)

In this example, we used Azure NetApp Files for all SAP NetWeaver file systems to demonstrate how Azure NetApp Files can be used. The SAP file systems that don't need to be mounted via NFS can also be deployed as Azure disk storage. In this example, volumes a-e must be on Azure NetApp Files, and f-g (that is, /usr/sap/QAS/D02, /usr/sap/QAS/D03) could be deployed as Azure disk storage.
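As an illustration of step 5, the following Azure CLI sketch creates the first volume in the list. The account, pool, network, and location values (MyResourceGroup, qas-netapp-account, qas-pool, MyVNet, anf-subnet, westus2) are assumptions to replace with your own; the remaining volumes are created the same way with their names and file paths.

    # Hypothetical account/pool/VNet names - use the ones created in steps 2-4
    az netappfiles volume create --resource-group MyResourceGroup \
      --account-name qas-netapp-account --pool-name qas-pool \
      --name sapmntQAS --file-path sapmntqas \
      --location westus2 --service-level Premium --usage-threshold 100 \
      --vnet MyVNet --subnet anf-subnet --protocol-types NFSv3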

Important considerations

When considering Azure NetApp Files for the SAP NetWeaver on SUSE high availability architecture, be aware of the following important considerations:

  • The minimum capacity pool is 4 TiB. The capacity pool size must be in multiples of 4 TiB.
  • The minimum volume is 100 GiB.
  • Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be in the same Azure virtual network or in peered virtual networks in the same region. Azure NetApp Files access over VNET peering in the same region is supported now. Azure NetApp Files access over global peering is not yet supported.
  • The selected virtual network must have a subnet delegated to Azure NetApp Files.
  • Azure NetApp Files currently supports only NFSv3.
  • Azure NetApp Files offers an export policy: you can control the allowed clients and the access type (Read & Write, Read Only, and so on).
  • The Azure NetApp Files feature isn't zone aware yet. Currently the Azure NetApp Files feature isn't deployed in all Availability Zones in an Azure region. Be aware of the potential latency implications in some Azure regions.

Deploy Linux VMs manually via the Azure portal

First you need to create the Azure NetApp Files volumes. Then deploy the VMs. Afterwards, you create a load balancer and use the virtual machines in the backend pools. A scripted alternative is sketched after the list.

  1. Create a resource group
  2. Create a virtual network
  3. Create an availability set for ASCS
    Set max update domain
  4. Create virtual machine 1
    Use at least SLES4SAP 12 SP3; in this example the SLES4SAP 12 SP3 image is used
    Select the availability set created earlier for ASCS
  5. Create virtual machine 2
    Use at least SLES4SAP 12 SP3; in this example the SLES4SAP 12 SP3 image is used
    Select the availability set created earlier for ASCS
  6. Create an availability set for the SAP application instances (PAS, AAS)
    Set max update domain
  7. Create virtual machine 3
    Use at least SLES4SAP 12 SP3; in this example the SLES4SAP 12 SP3 image is used
    Select the availability set created earlier for PAS/AAS
  8. Create virtual machine 4
    Use at least SLES4SAP 12 SP3; in this example the SLES4SAP 12 SP3 image is used
    Select the availability set created earlier for PAS/AAS

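If you prefer scripting to the portal, a single VM from the list above can be created with a sketch like the following. The image URN, VM size, and resource names are assumptions; verify the exact SLES4SAP image URN available in your subscription before using it.

    # Assumed names and image URN - verify the URN with: az vm image list --publisher SUSE --all
    az vm create --resource-group MyResourceGroup --name anftstsapcl1 \
      --image SUSE:SLES-SAP:12-sp3:latest --size Standard_D2s_v3 \
      --availability-set avset-QAS-ascs \
      --vnet-name MyVNet --subnet MySubnet \
      --admin-username azureuser --ssh-key-values ~/.ssh/id_rsa.pub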
Setting up (A)SCS

In this example, the resources were deployed manually via the Azure portal.

Deploy Azure Load Balancer manually via the Azure portal

First you need to create the Azure NetApp Files volumes. Then deploy the VMs. Afterwards, you create a load balancer and use the virtual machines in the backend pools.

  1. Create load balancer (internal, standard):
    1. Create the frontend IP addresses
      1. IP address 10.1.1.20 for the ASCS
        1. Open the load balancer, select frontend IP pool, and click Add
        2. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS)
        3. Set the Assignment to Static and enter the IP address (for example 10.1.1.20)
        4. Click OK
      2. IP address 10.1.1.21 for the ASCS ERS
        • Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21 and frontend.QAS.ERS)
    2. Create the backend pools
      1. Create a backend pool for the ASCS
        1. Open the load balancer, select backend pools, and click Add
        2. Enter the name of the new backend pool (for example backend.QAS)
        3. Click Add a virtual machine
        4. Select Virtual machine
        5. Select the virtual machines of the (A)SCS cluster and their IP addresses
        6. Click Add
    3. Create the health probes
      1. Port 62000 for ASCS
        1. Open the load balancer, select health probes, and click Add
        2. Enter the name of the new health probe (for example health.QAS.ASCS)
        3. Select TCP as protocol, port 62000, keep Interval 5 and Unhealthy threshold 2
        4. Click OK
      2. Port 62101 for ASCS ERS
        • Repeat the steps above under "c" to create a health probe for the ERS (for example 62101 and health.QAS.ERS)
    4. Load-balancing rules
      1. Create a load-balancing rule for the ASCS
        1. Open the load balancer, select Load-balancing rules, and click Add
        2. Enter the name of the new load balancer rule (for example lb.QAS.ASCS)
        3. Select the frontend IP address for ASCS, the backend pool, and the health probe you created earlier (for example frontend.QAS.ASCS, backend.QAS, and health.QAS.ASCS)
        4. Select HA ports
        5. Increase idle timeout to 30 minutes
        6. Make sure to enable Floating IP
        7. Click OK
        • Repeat the steps above to create a load-balancing rule for the ERS (for example lb.QAS.ERS)
  2. Alternatively, if your scenario requires basic load balancer (internal), follow these steps:
    1. Create the frontend IP addresses
      1. IP address 10.1.1.20 for the ASCS
        1. Open the load balancer, select frontend IP pool, and click Add
        2. Enter the name of the new frontend IP pool (for example frontend.QAS.ASCS)
        3. Set the Assignment to Static and enter the IP address (for example 10.1.1.20)
        4. Click OK
      2. IP address 10.1.1.21 for the ASCS ERS
        • Repeat the steps above under "a" to create an IP address for the ERS (for example 10.1.1.21 and frontend.QAS.ERS)
    2. Create the backend pools
      1. Create a backend pool for the ASCS
        1. Open the load balancer, select backend pools, and click Add
        2. Enter the name of the new backend pool (for example backend.QAS)
        3. Click Add a virtual machine
        4. Select the Availability Set you created earlier for ASCS
        5. Select the virtual machines of the (A)SCS cluster
        6. Click OK
    3. Create the health probes
      1. Port 62000 for ASCS
        1. Open the load balancer, select health probes, and click Add
        2. Enter the name of the new health probe (for example health.QAS.ASCS)
        3. Select TCP as protocol, port 62000, keep Interval 5 and Unhealthy threshold 2
        4. Click OK
      2. Port 62101 for ASCS ERS
        • Repeat the steps above under "c" to create a health probe for the ERS (for example 62101 and health.QAS.ERS)
    4. Load-balancing rules
      1. 3200 TCP for ASCS
        1. Open the load balancer, select Load-balancing rules, and click Add
        2. Enter the name of the new load balancer rule (for example lb.QAS.ASCS.3200)
        3. Select the frontend IP address for ASCS, the backend pool, and the health probe you created earlier (for example frontend.QAS.ASCS)
        4. Keep protocol TCP, enter port 3200
        5. Increase idle timeout to 30 minutes
        6. Make sure to enable Floating IP
        7. Click OK
      2. Additional ports for the ASCS
        • Repeat the steps above under "d" for ports 3600, 3900, 8100, 50013, 50014, 50016 and protocol TCP for the ASCS
      3. Additional ports for the ASCS ERS
        • Repeat the steps above under "d" for ports 3301, 50113, 50114, 50116 and protocol TCP for the ASCS ERS

Note

When VMs without public IP addresses are placed in the backend pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios.

Important

Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter net.ipv4.tcp_timestamps to 0. For details, see Load Balancer health probes.

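For example, the setting can be applied immediately and persisted across reboots as follows (a minimal sketch using the standard sysctl mechanism):

    # Disable TCP timestamps now and keep the setting after reboot
    sudo sysctl net.ipv4.tcp_timestamps=0
    echo "net.ipv4.tcp_timestamps = 0" | sudo tee -a /etc/sysctl.conf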
Create Pacemaker cluster

Follow the steps in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create a basic Pacemaker cluster for this (A)SCS server.

Installation

The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1, or [2] - only applicable to node 2.

  1. [A] Install SUSE Connector

    sudo zypper install sap-suse-cluster-connector
    

    Note

    Do not use dashes in the hostnames of your cluster nodes. Otherwise your cluster will not work. This is a known limitation and SUSE is working on a fix. The fix will be released as a patch of the sap-suse-cloud-connector package.

    Make sure that you installed the new version of the SAP SUSE cluster connector. The old one was called sap_suse_cluster_connector and the new one is called sap-suse-cluster-connector.

    sudo zypper info sap-suse-cluster-connector
    
       Information for package sap-suse-cluster-connector:
    

    Repository     : SLE-12-SP3-SAP-Updates
    Name           : sap-suse-cluster-connector
    Version        : 3.1.0-8.1
    Arch           : noarch
    Vendor         : SUSE LLC <https://www.suse.com/>
    Support Level  : Level 3
    Installed Size : 45.6 KiB
    Installed      : Yes
    Status         : up-to-date
    Source package : sap-suse-cluster-connector-3.1.0-8.1.src
    Summary        : SUSE High Availability Setup for SAP Products

  2. [A] Update SAP resource agents

    A patch for the resource-agents package is required to use the new configuration that is described in this article. You can check whether the patch is already installed with the following command

    sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
    

    The output should be similar to

    <parameter name="IS_ERS" unique="0" required="0">
    

    If the grep command does not find the IS_ERS parameter, you need to install the patch listed on the SUSE download page

    # example for patch for SLES 12 SP1
    sudo zypper in -t patch SUSE-SLE-HA-12-SP1-2017-885=1
    # example for patch for SLES 12 SP2
    sudo zypper in -t patch SUSE-SLE-HA-12-SP2-2017-886=1
    
  3. [A] Set up host name resolution

    You can either use a DNS server or modify /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP addresses and the hostnames in the following commands

    sudo vi /etc/hosts
    

    Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to match your environment

    
    # IP address of cluster node 1
    10.1.1.18    anftstsapcl1
    # IP address of cluster node 2
    10.1.1.6     anftstsapcl2
    # IP address of the load balancer frontend configuration for SAP Netweaver ASCS
    10.1.1.20    anftstsapvh
    # IP address of the load balancer frontend configuration for SAP Netweaver ERS
    10.1.1.21    anftstsapers
    
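    To confirm that the entries resolve as expected, a quick check like the following can be run on each node (the hostnames match the example above):

    # Verify the virtual hostnames resolve to the load balancer frontend IPs
    getent hosts anftstsapvh
    getent hosts anftstsapers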

Prepare for SAP NetWeaver installation

  1. [A] Create the shared directories

    sudo mkdir -p /sapmnt/QAS
    sudo mkdir -p /usr/sap/trans
    sudo mkdir -p /usr/sap/QAS/SYS
    sudo mkdir -p /usr/sap/QAS/ASCS00
    sudo mkdir -p /usr/sap/QAS/ERS01
    
    sudo chattr +i /sapmnt/QAS
    sudo chattr +i /usr/sap/trans
    sudo chattr +i /usr/sap/QAS/SYS
    sudo chattr +i /usr/sap/QAS/ASCS00
    sudo chattr +i /usr/sap/QAS/ERS01
    
  2. [A] Configure autofs

    
    sudo vi /etc/auto.master
    # Add the following line to the file, save and exit
    /- /etc/auto.direct
    

    Create a file with

    
    sudo vi /etc/auto.direct
    # Add the following lines to the file, save and exit
    /sapmnt/QAS -nfsvers=3,nobind,sync 10.1.0.4:/sapmntqas
    /usr/sap/trans -nfsvers=3,nobind,sync 10.1.0.4:/trans
    /usr/sap/QAS/SYS -nfsvers=3,nobind,sync 10.1.0.5:/usrsapqassys
    

    Note

    Currently Azure NetApp Files supports only NFSv3. Don't omit the nfsvers=3 switch.

    Restart autofs to mount the new shares

    
       sudo systemctl enable autofs
       sudo service autofs restart
      
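    Accessing the directories triggers the automount, so one way to verify the configuration is to list them and check the NFS mounts:

    # Trigger the automounts and confirm the shares are mounted with NFSv3
    ls /sapmnt/QAS /usr/sap/trans /usr/sap/QAS/SYS
    mount | grep nfs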
  3. [A] Configure SWAP file

    sudo vi /etc/waagent.conf
    
    # Set the property ResourceDisk.EnableSwap to y
    # Create and use swapfile on resource disk.
    ResourceDisk.EnableSwap=y
    
    # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
    # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
    # Size of the swapfile.
    ResourceDisk.SwapSizeMB=2000
    

    Restart the Agent to activate the change

    sudo service waagent restart
    
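    After the agent restart, the swap file can be verified, for example with:

    # Confirm that the swap file created by the Azure agent is active
    sudo swapon -s
    free -m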

Installing SAP NetWeaver ASCS/ERS

  1. [1] Create a virtual IP resource and health probe for the ASCS instance

    Important

    Recent testing revealed situations where netcat stops responding to requests due to backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load Balancer requests and the floating IP becomes unavailable.
    For existing Pacemaker clusters, we recommend replacing netcat with socat, following the instructions in Azure Load-Balancer Detection Hardening. Note that the change will require brief downtime.

    sudo crm node standby anftstsapcl2
    
    sudo crm configure primitive fs_QAS_ASCS Filesystem device='10.1.0.4:/usrsapqas' directory='/usr/sap/QAS/ASCS00' fstype='nfs' \
      op start timeout=60s interval=0 \
      op stop timeout=60s interval=0 \
      op monitor interval=20s timeout=40s
    
    sudo crm configure primitive vip_QAS_ASCS IPaddr2 \
      params ip=10.1.1.20 cidr_netmask=24 \
      op monitor interval=10 timeout=20
    
    sudo crm configure primitive nc_QAS_ASCS anything \
      params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:62000,backlog=10,fork,reuseaddr /dev/null" \
      op monitor timeout=20s interval=10 depth=0
    
    sudo crm configure group g-QAS_ASCS fs_QAS_ASCS nc_QAS_ASCS vip_QAS_ASCS \
       meta resource-stickiness=3000
    

    Make sure that the cluster status is OK and that all resources are started. It is not important which node the resources are running on.

    sudo crm_mon -r
    
    # Node anftstsapcl2: standby
    # Online: [ anftstsapcl1 ]
    # 
    # Full list of resources:
    #
    # Resource Group: g-QAS_ASCS
    #     fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
    #     nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
    #     vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
    # stonith-sbd     (stonith:external/sbd): Started anftstsapcl2
    
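    The health probe only succeeds while socat listens on the probe port, so on the node where the g-QAS_ASCS group runs you can optionally verify the listener:

    # The socat resource should be listening on the ASCS probe port 62000
    sudo ss -tlnp | grep 62000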
  2. [1] Install SAP NetWeaver ASCS

    Install SAP NetWeaver ASCS as root on the first node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example anftstsapvh, 10.1.1.20, and the instance number that you used for the probe of the load balancer, for example 00.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
    

    If the installation fails to create a subfolder in /usr/sap/QAS/ASCS00, try setting the owner and group of the ASCS00 folder and retry.

    
    chown qasadm /usr/sap/QAS/ASCS00
    chgrp sapsys /usr/sap/QAS/ASCS00
    
  3. [1] Create a virtual IP resource and health probe for the ERS instance

    
    sudo crm node online anftstsapcl2
    sudo crm node standby anftstsapcl1
    
    sudo crm configure primitive fs_QAS_ERS Filesystem device='10.1.0.4:/usrsapqasers' directory='/usr/sap/QAS/ERS01' fstype='nfs' \
      op start timeout=60s interval=0 \
      op stop timeout=60s interval=0 \
      op monitor interval=20s timeout=40s
    
    sudo crm configure primitive vip_QAS_ERS IPaddr2 \
      params ip=10.1.1.21 cidr_netmask=24 \
      op monitor interval=10 timeout=20
    
    sudo crm configure primitive nc_QAS_ERS anything \
     params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:62101,backlog=10,fork,reuseaddr /dev/null" \
     op monitor timeout=20s interval=10 depth=0
    
    # WARNING: Resources nc_QAS_ASCS,nc_QAS_ERS violate uniqueness for parameter "binfile": "/usr/bin/socat"
    # Do you still want to commit (y/n)? y
    
    sudo crm configure group g-QAS_ERS fs_QAS_ERS nc_QAS_ERS vip_QAS_ERS
    

    Make sure that the cluster status is OK and that all resources are started. It is not important which node the resources are running on.

    sudo crm_mon -r
    
    # Node anftstsapcl1: standby
    # Online: [ anftstsapcl2 ]
    # 
    # Full list of resources:
    #
    # stonith-sbd     (stonith:external/sbd): Started anftstsapcl2
    #  Resource Group: g-QAS_ASCS
    #      fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
    #      nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
    #      vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
    #  Resource Group: g-QAS_ERS
    #      fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
    #      nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
    #      vip_QAS_ERS  (ocf::heartbeat:IPaddr2):     Started anftstsapcl2
    
  4. [2] Install SAP NetWeaver ERS

    Install SAP NetWeaver ERS as root on the second node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example anftstsapers, 10.1.1.21, and the instance number that you used for the probe of the load balancer, for example 01.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME to install SAP using the virtual hostname.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
    

    Note

    Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.

    If the installation fails to create a subfolder in /usr/sap/QAS/ERS01, try setting the owner and group of the ERS01 folder and retry.

    
    chown qasadm /usr/sap/QAS/ERS01
    chgrp sapsys /usr/sap/QAS/ERS01
    
  5. [1] Adapt the ASCS/SCS and ERS instance profiles

    • ASCS/SCS profile
    
    sudo vi /sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh
    
    # Change the restart command to a start command
    #Restart_Program_01 = local $(_EN) pf=$(_PF)
    Start_Program_01 = local $(_EN) pf=$(_PF)
    
    # Add the following lines
    service/halib = $(DIR_CT_RUN)/saphascriptco.so
    service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
    
    # Add the keep alive parameter
    enque/encni/set_so_keepalive = true
    
    • ERS profile
    
    sudo vi /sapmnt/QAS/profile/QAS_ERS01_anftstsapers
    
    # Change the restart command to a start command
    #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
    Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
    
    # Add the following lines
    service/halib = $(DIR_CT_RUN)/saphascriptco.so
    service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
    
    # remove Autostart from ERS profile
    # Autostart = 1
    
  6. [A] Configure Keep Alive

    The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile and change the Linux system settings. Read SAP Note 1410736 for more information.

    The ASCS/SCS profile parameter enque/encni/set_so_keepalive was already added in the last step.

    
    # Change the Linux system configuration
    sudo sysctl net.ipv4.tcp_keepalive_time=120
    
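    The sysctl command above takes effect immediately but is not persistent; to keep the value across reboots it can also be added to /etc/sysctl.conf (a minimal sketch):

    # Persist the keepalive setting across reboots
    echo "net.ipv4.tcp_keepalive_time = 120" | sudo tee -a /etc/sysctl.conf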
  7. [A] Configure the SAP users after the installation

    
    # Add sidadm to the haclient group
    sudo usermod -aG haclient qasadm
    
  8. [1] Add the ASCS and ERS SAP services to the sapservices file

    Add the ASCS service entry to the second node and copy the ERS service entry to the first node.

    
    cat /usr/sap/sapservices | grep ASCS00 | sudo ssh anftstsapcl2 "cat >>/usr/sap/sapservices"
    sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices
    
  9. [1] Create the SAP cluster resources

If using enqueue server 1 architecture (ENSA1), define the resources as follows:

sudo crm configure property maintenance-mode="true"
   
   sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
    operations \$id=rsc_sap_QAS_ASCS00-operations \
    op monitor interval=11 timeout=60 on_fail=restart \
    params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
   
   sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
    operations \$id=rsc_sap_QAS_ERS01-operations \
    op monitor interval=11 timeout=60 on_fail=restart \
    params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true \
    meta priority=1000
   
   sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
   sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
   
   sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
   sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq 1
   sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
   
   sudo crm node online anftstsapcl1
   sudo crm configure property maintenance-mode="false"
   

SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP Note 2630416 for enqueue server 2 support. If using enqueue server 2 architecture (ENSA2), define the resources as follows:

sudo crm configure property maintenance-mode="true"
   
   sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
    operations \$id=rsc_sap_QAS_ASCS00-operations \
    op monitor interval=11 timeout=60 on_fail=restart \
    params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000
   
   sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
    operations \$id=rsc_sap_QAS_ERS01-operations \
    op monitor interval=11 timeout=60 on_fail=restart \
    params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true
   
   sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
   sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
   
   sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
   sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
   
   sudo crm node online anftstsapcl1
   sudo crm configure property maintenance-mode="false"
   

If you are upgrading from an older version and switching to enqueue server 2, see SAP Note 2641019.

Make sure that the cluster status is OK and that all resources are started. It is not important which node the resources are running on.

sudo crm_mon -r
   # Full list of resources:
   #
   # stonith-sbd     (stonith:external/sbd): Started anftstsapcl2
   #  Resource Group: g-QAS_ASCS
   #      fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
   #      nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
   #      vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
   #      rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
   #  Resource Group: g-QAS_ERS
   #      fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
   #      nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
   #      vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
   #      rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
   

SAP NetWeaver application server preparation

Some databases require that the database instance installation runs on an application server. Prepare the application server virtual machines to be able to use them in these cases.

The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA servers. Otherwise some of the steps below (like configuring host name resolution) are not needed.

The following items are prefixed with either [A] - applicable to both PAS and AAS, [P] - only applicable to PAS, or [S] - only applicable to AAS.

  1. [A] Configure operating system

    Reduce the size of the dirty cache. For more information, see Low write performance on SLES 11/12 servers with large RAM.

    
    sudo vi /etc/sysctl.conf
    # Change/set the following settings
    vm.dirty_bytes = 629145600
    vm.dirty_background_bytes = 314572800
    
  2. [A] Set up host name resolution

    You can either use a DNS server or modify /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP addresses and the hostnames in the following commands

    sudo vi /etc/hosts
    

    Insert the following lines into /etc/hosts. Change the IP addresses and hostnames to match your environment

    
    # IP address of the load balancer frontend configuration for SAP NetWeaver ASCS/SCS
    10.1.1.20 anftstsapvh
    # IP address of the load balancer frontend configuration for SAP NetWeaver ERS
    10.1.1.21 anftstsapers
    # IP address of all application servers
    10.1.1.15 anftstsapa01
    10.1.1.16 anftstsapa02
    
  3. [A] Create the sapmnt directory

    
    sudo mkdir -p /sapmnt/QAS
    sudo mkdir -p /usr/sap/trans
    
    sudo chattr +i /sapmnt/QAS
    sudo chattr +i /usr/sap/trans
    
  4. [P] Create the PAS directory

    
    sudo mkdir -p /usr/sap/QAS/D02
    sudo chattr +i /usr/sap/QAS/D02
    
  5. [S] Create the AAS directory

    
    sudo mkdir -p /usr/sap/QAS/D03
    sudo chattr +i /usr/sap/QAS/D03
    
  6. [P] Configure autofs on PAS

    sudo vi /etc/auto.master
    
    # Add the following line to the file, save and exit
    /- /etc/auto.direct
    

    Create a new file with

    
    sudo vi /etc/auto.direct
    # Add the following lines to the file, save and exit
    /sapmnt/QAS -nfsvers=3,nobind,sync 10.1.0.4:/sapmntqas
    /usr/sap/trans -nfsvers=3,nobind,sync 10.1.0.4:/trans
    /usr/sap/QAS/D02 -nfsvers=3,nobind,sync 10.1.0.5:/usrsapqaspas
    

    Restart autofs to mount the new shares

    
    sudo systemctl enable autofs
    sudo service autofs restart
    
  7. [S] Configure autofs on AAS

    sudo vi /etc/auto.master
    
    # Add the following line to the file, save and exit
    /- /etc/auto.direct
    

    Create a new file with

    
    sudo vi /etc/auto.direct
    # Add the following lines to the file, save and exit
    /sapmnt/QAS -nfsvers=3,nobind,sync 10.1.0.4:/sapmntqas
    /usr/sap/trans -nfsvers=3,nobind,sync 10.1.0.4:/trans
    /usr/sap/QAS/D03 -nfsvers=3,nobind,sync 10.1.0.4:/usrsapqasaas
    

    Restart autofs to mount the new shares

    
    sudo systemctl enable autofs
    sudo service autofs restart
    
  8. [A] Configure SWAP file

    
    sudo vi /etc/waagent.conf
    
    # Set the property ResourceDisk.EnableSwap to y
    # Create and use swapfile on resource disk.
    ResourceDisk.EnableSwap=y
    
    # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
    # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
    # Size of the swapfile.
    ResourceDisk.SwapSizeMB=2000
    

    Restart the Agent to activate the change

    sudo service waagent restart
    

Install database

In this example, SAP NetWeaver is installed on SAP HANA. You can use every supported database for this installation. For more information on how to install SAP HANA in Azure, see High Availability of SAP HANA on Azure Virtual Machines (VMs). For a list of supported databases, see SAP Note 1928533.

  • Run the SAP database instance installation

    Install the SAP NetWeaver database instance as root, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the database.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
    

SAP NetWeaver application server installation

Follow these steps to install an SAP application server.

  1. [A] Prepare application server. Follow the steps in the chapter SAP NetWeaver application server preparation above to prepare the application server.

  2. [A] Install SAP NetWeaver application server. Install a primary or additional SAP NetWeaver application server.

    You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.

    sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
    
  3. [A] Update the SAP HANA secure store

    Update the SAP HANA secure store to point to the virtual name of the SAP HANA system replication setup.

    Run the following command to list the entries

    
    hdbuserstore List
    

    This should list all entries and should look similar to

    
    DATA FILE       : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.DAT
    KEY FILE        : /home/qasadm/.hdb/anftstsapa01/SSFS_HDB.KEY
    
    KEY DEFAULT
      ENV : 10.1.1.5:30313
      USER: SAPABAP1
      DATABASE: QAS
    

    The output shows that the IP address of the default entry points to the virtual machine and not to the load balancer's IP address. This entry needs to be changed to point to the virtual hostname of the load balancer. Make sure to use the same port (30313 in the output above) and database name (QAS in the output above)!

    
    su - qasadm
    hdbuserstore SET DEFAULT qasdb:30313@QAS SAPABAP1 <password of ABAP schema>
    

Test the cluster setup

The following tests are a copy of the test cases in the best practices guides of SUSE. They are copied here for your convenience. Always also read the best practices guides and perform all additional tests that might have been added.

  1. Test HAGetFailoverConfig, HACheckConfig, and HACheckFailoverConfig

    Run the following commands as <sapsid>adm on the node where the ASCS instance is currently running. If the commands fail with FAIL: Insufficient memory, it might be caused by dashes in your hostname. This is a known issue and will be fixed by SUSE in the sap-suse-cluster-connector package.

    
    anftstsapcl1:qasadm 52> sapcontrol -nr 00 -function HAGetFailoverConfig
    07.03.2019 20:08:59
    HAGetFailoverConfig
    OK
    HAActive: TRUE
    HAProductVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3
    HASAPInterfaceVersion: SUSE Linux Enterprise Server for SAP Applications 12 SP3 (sap_suse_cluster_connector 3.1.0)
    HADocumentation: https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/
    HAActiveNode: anftstsapcl1
    HANodes: anftstsapcl1, anftstsapcl2
    
    anftstsapcl1:qasadm 54> sapcontrol -nr 00 -function HACheckConfig
    07.03.2019 23:28:29
    HACheckConfig
    OK
    state, category, description, comment
    SUCCESS, SAP CONFIGURATION, Redundant ABAP instance configuration, 2 ABAP instances detected
    SUCCESS, SAP CONFIGURATION, Redundant Java instance configuration, 0 Java instances detected
    SUCCESS, SAP CONFIGURATION, Enqueue separation, All Enqueue server separated from application server
    SUCCESS, SAP CONFIGURATION, MessageServer separation, All MessageServer separated from application server
    SUCCESS, SAP CONFIGURATION, ABAP instances on multiple hosts, ABAP instances on multiple hosts detected
    SUCCESS, SAP CONFIGURATION, Redundant ABAP SPOOL service configuration, 2 ABAP instances with SPOOL service detected
    SUCCESS, SAP STATE, Redundant ABAP SPOOL service state, 2 ABAP instances with active SPOOL service detected
    SUCCESS, SAP STATE, ABAP instances with ABAP SPOOL service on multiple hosts, ABAP instances with active ABAP SPOOL service on multiple hosts detected
    SUCCESS, SAP CONFIGURATION, Redundant ABAP BATCH service configuration, 2 ABAP instances with BATCH service detected
    SUCCESS, SAP STATE, Redundant ABAP BATCH service state, 2 ABAP instances with active BATCH service detected
    SUCCESS, SAP STATE, ABAP instances with ABAP BATCH service on multiple hosts, ABAP instances with active ABAP BATCH service on multiple hosts detected
    SUCCESS, SAP CONFIGURATION, Redundant ABAP DIALOG service configuration, 2 ABAP instances with DIALOG service detected
    SUCCESS, SAP STATE, Redundant ABAP DIALOG service state, 2 ABAP instances with active DIALOG service detected
    SUCCESS, SAP STATE, ABAP instances with ABAP DIALOG service on multiple hosts, ABAP instances with active ABAP DIALOG service on multiple hosts detected
    SUCCESS, SAP CONFIGURATION, Redundant ABAP UPDATE service configuration, 2 ABAP instances with UPDATE service detected
    SUCCESS, SAP STATE, Redundant ABAP UPDATE service state, 2 ABAP instances with active UPDATE service detected
    SUCCESS, SAP STATE, ABAP instances with ABAP UPDATE service on multiple hosts, ABAP instances with active ABAP UPDATE service on multiple hosts detected
    SUCCESS, SAP STATE, SCS instance running, SCS instance status ok
    SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version (anftstsapvh_QAS_00), SAPInstance includes is-ers patch
    SUCCESS, SAP CONFIGURATION, Enqueue replication (anftstsapvh_QAS_00), Enqueue replication enabled
    SUCCESS, SAP STATE, Enqueue replication state (anftstsapvh_QAS_00), Enqueue replication active
    
    anftstsapcl1:qasadm 55> sapcontrol -nr 00 -function HACheckFailoverConfig
    07.03.2019 23:30:48
    HACheckFailoverConfig
    OK
    state, category, description, comment
    SUCCESS, SAP CONFIGURATION, SAPInstance RA sufficient version, SAPInstance includes is-ers patch
    
  2. Manually migrate the ASCS instance

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Starting anftstsapcl1
    

    Run the following commands as root to migrate the ASCS instance.

    
    anftstsapcl1:~ # crm resource migrate rsc_sap_QAS_ASCS00 force
    INFO: Move constraint created for rsc_sap_QAS_ASCS00
    
    anftstsapcl1:~ # crm resource unmigrate rsc_sap_QAS_ASCS00
    INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
    
    # Remove failed actions for the ERS that occurred as part of the migration
    anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  3. Test HAFailoverToNode

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following commands as <sapsid>adm to migrate the ASCS instance.

    
    anftstsapcl1:qasadm 53> sapcontrol -nr 00 -host anftstsapvh -user qasadm <password> -function HAFailoverToNode ""
    
    # run as root
    # Remove failed actions for the ERS that occurred as part of the migration
    anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
    # Remove migration constraints
    anftstsapcl1:~ # crm resource clear rsc_sap_QAS_ASCS00
    #INFO: Removed migration constraints for rsc_sap_QAS_ASCS00
    

    Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  4. Simulate node crash

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following command as root on the node where the ASCS instance is running

    anftstsapcl2:~ # echo b > /proc/sysrq-trigger
    

    If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is started again should look like this.

    Online: [ anftstsapcl1 ]
    OFFLINE: [ anftstsapcl2 ]
    
    Full list of resources:
    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
    Failed Actions:
    * rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=166, status=complete, exitreason='',
     last-rc-change='Fri Mar  8 18:26:10 2019', queued=0ms, exec=0ms
    

    Use the following commands to start Pacemaker on the killed node, clean the SBD messages, and clean the failed resources.

    
    # run as root
    # list the SBD device(s)
    anftstsapcl2:~ # cat /etc/sysconfig/sbd | grep SBD_DEVICE=
    # SBD_DEVICE="/dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf;/dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e;/dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a"
    
    anftstsapcl2:~ # sbd -d /dev/disk/by-id/scsi-36001405b730e31e7d5a4516a2a697dcf -d /dev/disk/by-id/scsi-36001405f69d7ed91ef54461a442c676e -d /dev/disk/by-id/scsi-360014058e5f335f2567488882f3a2c3a message anftstsapcl2 clear
    
    anftstsapcl2:~ # systemctl start pacemaker
    anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
    anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    
    Full list of resources:
    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  5. Test manual restart of ASCS instance

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as <sapsid>adm on the node where the ASCS instance is running. The commands stop the ASCS instance and start it again. With the enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. With the enqueue server 2 architecture, the enqueue lock is retained.

    anftstsapcl2:qasadm 51> sapcontrol -nr 00 -function StopWait 600 2
    

    The ASCS instance should now be disabled in Pacemaker.

      rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Stopped (disabled)
    

    Start the ASCS instance again on the same node.

    anftstsapcl2:qasadm 52> sapcontrol -nr 00 -function StartWait 600 2
    
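    To check whether the enqueue lock survived the restart, you can, for example, read the lock table of the ASCS instance (a hedged illustration; EnqGetLockTable is a standard sapcontrol web method on recent kernel releases, and transaction sm12 shows the same information):

    anftstsapcl2:qasadm 53> sapcontrol -nr 00 -function EnqGetLockTable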

    With the enqueue server replication 1 architecture, the enqueue lock of transaction su01 should be lost, and the back end should have been reset. Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  6. Kill message server process

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following command as root to identify the message server process and kill it.

    anftstsapcl2:~ # pgrep ms.sapQAS | xargs kill -9
    
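    If you want to see which process matches before sending the signal, or deliberately force the failover by killing the restarted message server repeatedly, a sketch like the following works (hypothetical; adjust the repeat count and sleep to your monitor intervals):

    # inspect the matching process first
    anftstsapcl2:~ # pgrep -l ms.sapQAS
    # kill it repeatedly so that sapstart gives up and Pacemaker fails the instance over
    anftstsapcl2:~ # for i in 1 2 3 4; do pgrep ms.sapQAS | xargs -r kill -9; sleep 5; done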

    If you kill the message server only once, sapstart will restart it. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instances after the test.

    
    anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ASCS00
    anftstsapcl2:~ # crm resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    
  7. Kill enqueue server process

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    

    Run the following command as root on the node where the ASCS instance is running to kill the enqueue server.

    anftstsapcl1:~ # pgrep en.sapQAS | xargs kill -9
    
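    If your system runs the newer enqueue server 2 (ENSA2), the process name differs from en.sap<SID>; verify the pattern before killing (a hedged variant, assuming the ENSA2 process name enq.sapQAS):

    anftstsapcl1:~ # pgrep -fl enq.sapQAS
    anftstsapcl1:~ # pgrep -f enq.sapQAS | xargs -r kill -9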

    The ASCS instance should immediately fail over to the other node. The ERS instance should also fail over, after the ASCS instance is started. Run the following commands as root to clean up the resource state of the ASCS and ERS instances after the test.

    
    anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ASCS00
    anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  8. Kill enqueue replication server process

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process.

    anftstsapcl1:~ # pgrep er.sapQAS | xargs kill -9
    
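    Analogously, with enqueue server 2 (ENSA2) the replicator is not named er.sap<SID>; a hedged variant, assuming the ENSA2 process name enqr.sapQAS (verify with pgrep -fl first):

    anftstsapcl1:~ # pgrep -f enqr.sapQAS | xargs -r kill -9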

    If you run the command only once, sapstart will restart the process. If you run it often enough, sapstart will not restart the process, and the resource will be in a stopped state. Run the following command as root to clean up the resource state of the ERS instance after the test.

    anftstsapcl1:~ # crm resource cleanup rsc_sap_QAS_ERS01
    

    Resource state after the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    
  9. Kill enqueue sapstartsrv process

    Resource state before starting the test:

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

    Run the following commands as root on the node where the ASCS instance is running to identify and kill the sapstartsrv process of the ASCS instance.

    
    anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv
    #67625 sapstartsrv
    
    anftstsapcl2:~ # kill -9 67625
    
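    Instead of looking up the PID manually, the two steps can be combined into one line (a sketch; -f matches against the full command line and -r skips kill when nothing matches):

    anftstsapcl2:~ # pgrep -f 'ASCS00.*sapstartsrv' | xargs -r kill -9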

    The sapstartsrv process should always be restarted by the Pacemaker resource agent.
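    To confirm that the resource agent restarted it, you can repeat the earlier lookup and check that a sapstartsrv process with a new PID is running (illustrative):

    anftstsapcl2:~ # pgrep -fl ASCS00.*sapstartsrv

    Resource state after the test: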

    
     Resource Group: g-QAS_ASCS
         fs_QAS_ASCS        (ocf::heartbeat:Filesystem):    Started anftstsapcl2
         nc_QAS_ASCS        (ocf::heartbeat:anything):      Started anftstsapcl2
         vip_QAS_ASCS       (ocf::heartbeat:IPaddr2):       Started anftstsapcl2
         rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance):   Started anftstsapcl2
    stonith-sbd     (stonith:external/sbd): Started anftstsapcl1
     Resource Group: g-QAS_ERS
         fs_QAS_ERS (ocf::heartbeat:Filesystem):    Started anftstsapcl1
         nc_QAS_ERS (ocf::heartbeat:anything):      Started anftstsapcl1
         vip_QAS_ERS        (ocf::heartbeat:IPaddr2):       Started anftstsapcl1
         rsc_sap_QAS_ERS01  (ocf::heartbeat:SAPInstance):   Started anftstsapcl1
    

Next steps