
Migrate workloads using Layer 2 stretched networks

In this guide, you learn how to use a Layer 2 VPN (L2VPN) to stretch a Layer 2 network from your on-premises environment to your CloudSimple Private Cloud. This solution enables migration of workloads running in your on-premises VMware environment to the Private Cloud in Azure within the same subnet address space, without having to re-IP your workloads.

L2VPN-based stretching of Layer 2 networks works with or without NSX-based networks in your on-premises VMware environment. If you don't have NSX-based networks for workloads on-premises, you can use a standalone NSX Edge Services Gateway.

Note

This guide covers the scenario where the on-premises and Private Cloud datacenters are connected over a Site-to-Site VPN.

Deployment scenario

To stretch your on-premises network using L2VPN, you must configure an L2VPN server (the destination NSX-T Tier0 router) and an L2VPN client (the source standalone client).

In this deployment scenario, your Private Cloud is connected to your on-premises environment over a Site-to-Site VPN tunnel that allows the on-premises management and vMotion subnets to communicate with the Private Cloud management and vMotion subnets. This arrangement is necessary for Cross vCenter vMotion (xVC-vMotion). An NSX-T Tier0 router is deployed in the Private Cloud as the L2VPN server.

A standalone NSX Edge is deployed in your on-premises environment as the L2VPN client and subsequently paired with the L2VPN server. A GRE tunnel endpoint is created on each side and configured to 'stretch' the on-premises Layer 2 network to your Private Cloud. This configuration is depicted in the following figure.

(Figure: Deployment scenario)

To learn more about migration using L2 VPN, see Virtual Private Networks in the VMware documentation.

Prerequisites for deploying the solution

Verify that the following are in place before deploying and configuring the solution:

  • The on-premises vSphere version is 6.7U1+ or 6.5P03+.
  • The on-premises vSphere license is at the Enterprise Plus level (required for the vSphere Distributed Switch).
  • Identify the workload Layer 2 network to be stretched to your Private Cloud.
  • Identify a Layer 2 network in your on-premises environment for deploying the L2VPN client appliance.
  • A Private Cloud is already created.
  • The version of the standalone NSX-T Edge appliance is compatible with the NSX-T Manager version (NSX-T 2.3.0) used in your Private Cloud environment.
  • A trunk port group has been created in the on-premises vCenter with forged transmits enabled.
  • A public IP address has been reserved for the NSX-T standalone client uplink IP address, and 1:1 NAT is in place for translation between the two addresses.
  • DNS forwarding is set on the on-premises DNS servers so that queries for the az.cloudsimple.io domain are forwarded to the Private Cloud DNS servers.
  • RTT latency is less than or equal to 150 ms, as required for vMotion to work across the two sites.

Limitations and considerations

The following table lists supported vSphere versions and network adapter types.

vSphere version | Source vSwitch type | Virtual NIC driver | Target vSwitch type | Supported?
--- | --- | --- | --- | ---
All | DVS | All | DVS | Yes
vSphere 6.7U1 or higher, 6.5P03 or higher | DVS | VMXNET3 | N-VDS | Yes
vSphere 6.7U1 or higher, 6.5P03 or higher | DVS | E1000 | N-VDS | Not supported, per VMware
vSphere 6.7U1 or 6.5P03, NSX-V or NSX-T versions below 2.2 | All | All | N-VDS | Not supported, per VMware

As of the VMware NSX-T 2.3 release:

  • The logical switch on the Private Cloud side that is stretched to on-premises over L2VPN can't be routed at the same time; the stretched logical switch can't be connected to a logical router.
  • L2VPN and route-based IPsec VPNs can only be configured using API calls.

For more information, see Virtual Private Networks in the VMware documentation.

Sample L2 VPN deployment addressing

On-premises network where the standalone ESG (L2 VPN client) is deployed

Item | Value
--- | ---
Network name | MGMT_NET_VLAN469
VLAN | 469
CIDR | 10.250.0.0/24
Standalone Edge appliance IP address | 10.250.0.111
Standalone Edge appliance NAT IP address | 192.227.85.167

On-premises network to be stretched

Item | Value
--- | ---
VLAN | 472
CIDR | 10.250.3.0/24

Private Cloud IP schema for the NSX-T Tier0 router (L2 VPN server)

Item | Value
--- | ---
Loopback interface | 192.168.254.254/32
Tunnel interface | 5.5.5.1/29
Logical switch (stretched) | Stretch_LS
Loopback interface (NAT IP address) | 104.40.21.81

Private Cloud network to be mapped to the stretched network

Item | Value
--- | ---
VLAN | 712
CIDR | 10.200.15.0/24

Fetch the logical router ID needed for L2VPN

The following steps show how to fetch the logical-router ID of the Tier0 DR logical router instance for the IPsec and L2VPN services. The logical-router ID is needed later when implementing the L2VPN.

  1. Sign in to NSX-T Manager (https://*nsx-t-manager-ip-address*) and select Networking > Routers > Provider-LR > Overview. For High Availability Mode, select Active-Standby. This action opens a pop-up window that shows the Edge VM on which the Tier0 router is currently active.

     (Screenshot: Select Active-Standby)

  2. Select Fabric > Nodes > Edges. Make a note of the management IP address of the active Edge VM (Edge VM1) identified in the previous step.

     (Screenshot: Note the management IP)

  3. Open an SSH session to the management IP address of the Edge VM. Run the get logical-router command with username admin and password CloudSimple 123!.

     (Screenshot: Open SSH session)

  4. If you don't see an entry 'DR-Provider-LR', complete the following steps.

  5. Create two overlay-backed logical switches. One logical switch is stretched to on-premises, where the migrated workloads reside. The other logical switch is a dummy switch. For instructions, see Create a Logical Switch in the VMware documentation.

     (Screenshot: Create logical switches)

  6. Attach the dummy switch to the Tier1 router with a link-local IP address or any non-overlapping subnet from on-premises or your Private Cloud. See Add a Downlink Port on a Tier-1 Logical Router in the VMware documentation.

     (Screenshot: Attach the dummy switch)

  7. Run the get logical-router command again in the SSH session of the Edge VM. The UUID of the 'DR-Provider-LR' logical router is displayed. Make a note of the UUID, which is required when configuring the L2VPN.

     (Screenshot: Logical router UUID)

Fetch the logical-switch ID needed for L2VPN

  1. Sign in to NSX-T Manager (https://nsx-t-manager-ip-address).

  2. Select Networking > Switching > Switches > <Logical switch> > Overview.

  3. Make a note of the UUID of the stretched logical switch, which is required when configuring the L2VPN.

     (Screenshot: Logical switch UUID)

Routing and security considerations for L2VPN

To establish an IPsec route-based VPN between the NSX-T Tier0 router and the standalone NSX Edge client, the loopback interface of the NSX-T Tier0 router must be able to communicate with the public IP address of the NSX standalone client on-premises over UDP 500/4500.

Allow UDP 500/4500 for IPsec

  1. Create a public IP address for the NSX-T Tier0 loopback interface in the CloudSimple portal.

  2. Create a firewall table with stateful rules that allow UDP 500/4500 inbound traffic, and attach the firewall table to the NSX-T HostTransport subnet.

Then configure routing so that the loopback interface address is advertised correctly:

  1. Create a null route for the loopback interface network. Sign in to NSX-T Manager and select Networking > Routing > Routers > Provider-LR > Routing > Static Routes. Click Add. For Network, enter the loopback interface IP address. For Next Hops, click Add, specify 'Null' for the next hop, and keep the default of 1 for Admin Distance.

     (Screenshot: Add a static route)

  2. Create an IP prefix list. Sign in to NSX-T Manager and select Networking > Routing > Routers > Provider-LR > Routing > IP Prefix Lists. Click Add. Enter a name to identify the list. For Prefixes, click Add twice. In the first line, enter '0.0.0.0/0' for Network and 'Deny' for Action. In the second line, select Any for Network and Permit for Action.

  3. Attach the IP prefix list to both BGP neighbors (TOR). Attaching the IP prefix list to the BGP neighbors prevents the default route from being advertised in BGP to the TOR switches. Any other route, including the null route, is still advertised, so the loopback interface IP address reaches the TOR switches.

     (Screenshot: Create an IP prefix list)

  4. Sign in to NSX-T Manager and select Networking > Routing > Routers > Provider-LR > Routing > BGP > Neighbors. Select the first neighbor. Click Edit > Address Families. For the IPv4 family, edit the Out Filter column and select the IP prefix list that you created. Click Save. Repeat this step for the second neighbor.

     (Screenshots: Attach the IP prefix list to each neighbor)

  5. Redistribute the null static route into BGP. To advertise the loopback interface route to the underlay, you must redistribute the null static route into BGP. Sign in to NSX-T Manager and select Networking > Routing > Routers > Provider-LR > Routing > Route Redistribution. Select Provider-LR-Route_Redistribution and click Edit. Select the Static checkbox and click Save.

     (Screenshot: Redistribute the null static route into BGP)
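The intent of the IP prefix list configured above — deny exactly the default route, permit everything else (including the loopback /32 reintroduced by the null route) — can be sketched in a few lines. This is an illustrative Python model of the first-match filter semantics, not an NSX-T API:

```python
from ipaddress import ip_network

# Model of the prefix list built above: entries are evaluated in order,
# first match wins. A 'Deny 0.0.0.0/0' entry (without le/ge modifiers)
# matches only the default route itself; 'Any'/Permit matches everything.
PREFIX_LIST = [
    ("deny", ip_network("0.0.0.0/0")),  # matches only the default route
    ("permit", None),                   # "Any" network
]

def is_advertised(prefix: str) -> bool:
    """Return True if BGP would advertise `prefix` to the TOR neighbors."""
    net = ip_network(prefix)
    for action, match in PREFIX_LIST:
        if match is None or net == match:
            return action == "permit"
    return False

print(is_advertised("0.0.0.0/0"))           # False: default route is filtered out
print(is_advertised("192.168.254.254/32"))  # True: loopback route is advertised
```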

Configure a route-based VPN on the NSX-T Tier0 router

Use the following template to fill in all the details for configuring a route-based VPN on the NSX-T Tier0 router. The UUIDs returned by each POST call are required in subsequent POST calls. The IP addresses chosen for the L2VPN loopback and tunnel interfaces must be unique and must not overlap with the on-premises or Private Cloud networks. The loopback interface network must always be /32.
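As a quick sanity check before filling in the template, the uniqueness and non-overlap requirements can be verified with Python's ipaddress module. The values below are the sample addresses from this guide; substitute your own:

```python
from ipaddress import ip_network

# Sample addressing from this guide; replace with your own networks.
loopback = ip_network("192.168.254.254/32")
tunnel = ip_network("5.5.5.0/29")
existing = [
    ip_network("10.250.0.0/24"),   # on-premises management network
    ip_network("10.250.3.0/24"),   # on-premises network to be stretched
    ip_network("10.200.15.0/24"),  # Private Cloud network
]

assert loopback.prefixlen == 32, "loopback interface network must be /32"
for net in existing:
    assert not loopback.overlaps(net), f"loopback overlaps {net}"
    assert not tunnel.overlaps(net), f"tunnel subnet overlaps {net}"
print("addressing plan OK")
```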

Loopback interface ip : 192.168.254.254/32
Tunnel interface subnet : 5.5.5.0/29
Logical-router ID : UUID of the Tier0 DR logical router obtained in "Fetch the logical router ID needed for L2VPN"
Logical-switch ID(Stretch) : UUID of the stretch logical switch obtained earlier
IPSec Service ID :
IKE profile ID :
DPD profile ID :
Tunnel Profile ID :
Local-endpoint ID :
Peer end-point ID :
IPSec VPN session ID (route-based) :
L2VPN service ID :
L2VPN session ID :
Logical-Port ID :
Peer Code :

For all of the following API calls, replace the IP address with your NSX-T Manager IP address. You can run all these API calls from the Postman client or by using curl commands.
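As an alternative to Postman or curl, the calls can be scripted. The sketch below, using only the Python standard library, builds an authenticated POST request for the NSX-T Manager API (which accepts HTTP basic authentication). The manager address, credentials, and payload are placeholders for your environment:

```python
import base64
import json
import urllib.request

def build_nsxt_request(manager_ip, path, payload, user="admin", password="secret"):
    """Build a basic-auth POST request for the NSX-T Manager API.

    manager_ip, user, and password are placeholders; substitute the values
    for your environment.
    """
    url = f"https://{manager_ip}/api/v1/{path.lstrip('/')}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

# Example: the 'enable IPsec VPN service' call from the next section.
req = build_nsxt_request(
    "192.168.110.201",
    "vpn/ipsec/services",
    {
        "resource_type": "IPSecVPNService",
        "logical_router_id": "<Logical-router ID>",
        "enabled": True,
    },
)
# In a lab with a self-signed certificate you could send it with
# (not for production use):
#   import ssl
#   ctx = ssl._create_unverified_context()
#   print(urllib.request.urlopen(req, context=ctx).read())
```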

Enable the IPsec VPN service on the logical router

POST   https://192.168.110.201/api/v1/vpn/ipsec/services/
{
"resource_type": "IPSecVPNService",
"description": "Manage VPN service",
"display_name": "IPSec VPN service",
"logical_router_id": "Logical-router ID",
"ike_log_level": "INFO",
"enabled": true
}

Create profiles: IKE

POST https://192.168.110.201/api/v1/vpn/ipsec/ike-profiles
 
{
"resource_type": "IPSecVPNIKEProfile",
"description": "IKEProfile for siteA",
"display_name": "IKEProfile siteA",
"encryption_algorithms": ["AES_128"],
"ike_version": "IKE_V2",
"digest_algorithms": ["SHA2_256"],
"sa_life_time":21600,
"dh_groups": ["GROUP14"]
}

Create profiles: DPD

POST  https://192.168.110.201/api/v1/vpn/ipsec/dpd-profiles  

{
"resource_type": "IPSecVPNDPDProfile",
"display_name": "nsx-default-dpd-profile",
"enabled": true
}

Create profiles: Tunnel

POST  https://192.168.110.201/api/v1/vpn/ipsec/tunnel-profiles
 
{
"resource_type": "IPSecVPNTunnelProfile",
"display_name": "nsx-default-tunnel-profile",
"enable_perfect_forward_secrecy": true,
"encryption_algorithms": ["AES_GCM_128"],
"digest_algorithms": [],
"sa_life_time":3600,
"dh_groups": ["GROUP14"],
"encapsulation_mode": "TUNNEL_MODE",
"transform_protocol": "ESP",
"df_policy": "COPY"
}

Create a local endpoint

POST https://192.168.110.201/api/v1/vpn/ipsec/local-endpoints
 
{
"resource_type": "IPSecVPNLocalEndpoint",
"description": "Local endpoint",
"display_name": "Local endpoint",
"local_id": "<Public IP of Loopback interface>",
"ipsec_vpn_service_id": {
"target_id": "IPSec VPN service ID"},
"local_address": "<IP of Loopback interface>",
"trust_ca_ids": [],
"trust_crl_ids": []
}

Create a peer endpoint

POST https://192.168.110.201/api/v1/vpn/ipsec/peer-endpoints

{
"resource_type": "IPSecVPNPeerEndpoint",
"description": "Peer endpoint for site B",
"display_name": "Peer endpoint for site B",
"connection_initiation_mode": "INITIATOR",
"authentication_mode": "PSK",
"ipsec_tunnel_profile_id": "IPSec Tunnel profile ID",
"dpd_profile_id": "DPD profile ID",
"psk":"nsx",
"ike_profile_id": "IKE profile ID",
"peer_address": "<Public IP of Standalone client",
"peer_id": "<Public IP of Standalone client>"
}

Create a route-based VPN session

POST :  https://192.168.110.201/api/v1/vpn/ipsec/sessions
 
{
"resource_type": "RouteBasedIPSecVPNSession",
"peer_endpoint_id": "Peer Endpoint ID",
"ipsec_vpn_service_id": "IPSec VPN service ID",
"local_endpoint_id": "Local Endpoint ID",
"enabled": true,
"tunnel_ports": [
{
"ip_subnets": [
{
"ip_addresses": [
 "5.5.5.1"
],
"prefix_length": 24
}
  ]
}
]
}

Configure L2VPN on the NSX-T Tier0 router

Fill in the following information after every POST call. The IDs are required in subsequent POST calls.

L2VPN Service ID:
L2VPN Session ID:
Logical Port ID:

Create the L2VPN service

The output of the following GET command will be blank, because the configuration is not complete yet.

GET : https://192.168.110.201/api/v1/vpn/l2vpn/services

For the following POST command, the logical router ID is the UUID of the Tier0 DR logical router obtained earlier.

POST : https://192.168.110.201/api/v1/vpn/l2vpn/services

{
"logical_router_id": "Logical Router ID",
"enable_full_mesh" : true
}

Create the L2VPN session

For the following POST command, the L2VPN service ID is the ID that you just obtained, and the IPsec VPN session ID is the ID obtained in the previous section.

POST: https://192.168.110.201/api/v1/vpn/l2vpn/sessions

{
"l2vpn_service_id" : "L2VPN service ID",
"transport_tunnels" : [
    {
    "target_id" : "IPSec VPN session ID"
    }]
}

These calls create a GRE tunnel endpoint. To check the status, run the following command.

edge-2> get tunnel-port
Tunnel      : 44648dae-8566-5bc9-a065-b1c4e5c3e03f
IFUID       : 328
LOCAL       : 169.254.64.1
REMOTE      : 169.254.64.2
ENCAP       : GRE

Tunnel      : cf950ca1-5cf8-5438-9b1a-d2c8c8e7229b
IFUID       : 318
LOCAL       : 192.168.140.155
REMOTE      : 192.168.140.152
ENCAP       : GENEVE

Tunnel      : 63639321-87c5-529e-8a61-92c1939799b2
IFUID       : 304
LOCAL       : 192.168.140.155
REMOTE      : 192.168.140.156
ENCAP       : GENEVE

Create a logical port with the tunnel ID specified

POST https://192.168.110.201/api/v1/logical-ports/

{
"resource_type": "LogicalPort",
"display_name": "Extend logicalSwitch, port for service",
"logical_switch_id": "Logical switch ID",
"admin_state" : "UP",
"attachment": {
"attachment_type":"L2VPN_SESSION",
"id":"L2VPN session ID",
"context" : {
"resource_type" : "L2VpnAttachmentContext",
"tunnel_id" : 10
}
}
}

Obtain the peer code for L2VPN on the NSX-T side

Obtain the peer code of the NSX-T endpoint. The peer code is required when configuring the remote endpoint. The L2VPN session ID can be obtained from the previous section. For more information, see the NSX-T 2.3 API Guide.

GET https://192.168.110.201/api/v1/vpn/l2vpn/sessions/<session-id>/peer-codes
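The peer code can then be pulled out of the GET response with a few lines of Python. The response body shown here is a hypothetical example for illustration only; consult the NSX-T 2.3 API Guide for the authoritative schema:

```python
import json

# Hypothetical response body from the peer-codes GET call above; the real
# schema is documented in the NSX-T 2.3 API Guide.
sample_response = '[{"peer_code": "<base64-encoded-peer-code>"}]'

peer_codes = [entry["peer_code"] for entry in json.loads(sample_response)]
print(peer_codes[0])  # paste this value into the standalone client's Peer Code field
```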

Deploy the NSX-T standalone client (on-premises)

Before deploying, verify that your on-premises firewall rules allow inbound and outbound UDP 500/4500 traffic from/to the CloudSimple public IP address that was reserved earlier for the NSX-T Tier0 router loopback interface.

  1. Download the standalone NSX Edge client OVF, and extract the files from the downloaded bundle into a folder.

     (Screenshot: Download the standalone NSX Edge client)

  2. Go to the folder with all the extracted files. Select all the vmdk files, along with NSX-l2t-client-large.mf and NSX-l2t-client-large.ovf (large appliance size) or NSX-l2t-client-Xlarge.mf and NSX-l2t-client-Xlarge.ovf (extra-large appliance size). Click Next.

     (Screenshot: Select template, showing the selected files)

  3. Enter a name for the NSX-T standalone client and click Next.

     (Screenshot: Enter a template name)

  4. Click Next as needed to reach the datastore settings. Select the appropriate datastore for the NSX-T standalone client and click Next.

     (Screenshot: Select a datastore)

  5. Select the correct port groups for the Trunk (Trunk PG), Public (Uplink PG), and HA (Uplink PG) interfaces of the NSX-T standalone client. Click Next.

     (Screenshot: Select port groups)

  6. Fill in the following details in the Customize template screen and click Next:

     Expand L2T:

     • Peer Address. Enter the IP address reserved in the Azure CloudSimple portal for the NSX-T Tier0 loopback interface.
     • Peer Code. Paste the peer code obtained in the last step of the L2VPN server deployment.
     • Sub Interfaces VLAN (Tunnel ID). Enter the VLAN ID to be stretched. In parentheses (), enter the tunnel ID that was previously configured.

     Expand Uplink Interface:

     • DNS IP Address. Enter the on-premises DNS IP address.

     • Default Gateway. Enter the default gateway of the VLAN, which will act as the default gateway for this client.

     • IP Address. Enter the uplink IP address of the standalone client.

     • Prefix Length. Enter the prefix length of the uplink VLAN/subnet.

     • CLI admin/enable/root User Password. Set the password for the admin/enable/root accounts.

     (Screenshots: Customize template)

  7. Review the settings and click Finish.

     (Screenshot: Complete the configuration)

Configure an on-premises sink port

If one of the VPN sites doesn't have NSX deployed, you can configure an L2 VPN by deploying a standalone NSX Edge at that site. A standalone NSX Edge is deployed by using an OVF file on a host that is not managed by NSX. This deploys an NSX Edge Services Gateway appliance to function as an L2 VPN client.

If a standalone Edge trunk vNIC is connected to a vSphere Distributed Switch, either promiscuous mode or a sink port is required for the L2 VPN to function. Using promiscuous mode can cause duplicate pings and duplicate responses. For this reason, use sink port mode in the L2 VPN standalone NSX Edge configuration. See Configure a sink port in the VMware documentation.

IPsec VPN and L2VPN verification

Use the following commands to verify the IPsec and L2VPN sessions from the standalone NSX-T Edge.

nsx-l2t-edge> show service ipsec
-----------------------------------------------------------------------
vShield Edge IPSec Service Status:
IPSec Server is running.
AESNI is enabled.
Total Sites: 1, 1 UP, 0 Down
Total Tunnels: 1, 1 UP, 0 Down
----------------------------------
Site:  10.250.0.111_0.0.0.0/0-104.40.21.81_0.0.0.0/0
Channel: PeerIp: 104.40.21.81    LocalIP: 10.250.0.111  Version: IKEv2  Status: UP
Tunnel: PeerSubnet: 0.0.0.0/0    LocalSubnet: 0.0.0.0/0   Status: UP
----------------------------------
nsx-l2t-edge> show service l2vpn
L2 VPN is running
----------------------------------------
L2 VPN type: Client/Spoke

SITENAME                       IPSECSTATUS          VTI                  GRE
1ecb00fb-a538-4740-b788-c9049e8cb6c6 UP                   vti-100              l2t-1

Use the following commands to verify the IPsec and L2VPN sessions from the NSX-T Tier0 router.

edge-2> get ipsecvpn session
Total Number of Sessions: 1

IKE Session ID : 3
UUID           : 1ecb00fb-a538-4740-b788-c9049e8cb6c6
Type           : Route

Local IP       : 192.168.254.254      Peer IP        : 192.227.85.167
Local ID       : 104.40.21.81         Peer ID        : 192.227.85.167
Session Status : Up

Policy Rules
    VTI UUID       : 4bf96e3b-e50b-49cc-a16e-43a6390e3d53
    ToRule ID      : 560874406            FromRule ID    : 2708358054
    Local Subnet   : 0.0.0.0/0            Peer Subnet    : 0.0.0.0/0
    Tunnel Status  : Up
edge-2> get l2vpn session
Session       : f7147137-5dd0-47fe-9e53-fdc2b687b160
Tunnel        : b026095b-98c8-5932-96ff-dda922ffe316
IPSEC Session : 1ecb00fb-a538-4740-b788-c9049e8cb6c6
Status        : UP

In the on-premises environment, use the following commands to verify the sink port on the ESXi host where the NSX-T standalone client VM resides.

 [root@esxi02:~] esxcfg-vswitch -l |grep NSX
  53                  1           NSXT-client-large.eth2
  225                1           NSXT-client-large.eth1
  52                  1           NSXT-client-large.eth0
[root@esxi02:~] net-dvs -l | grep "port\ [0-9]\|SINK\|com.vmware.common.alias"
                com.vmware.common.alias = csmlab2DS ,   propType = CONFIG
        port 24:
        port 25:
        port 26:
        port 27:
        port 13:
        port 19:
        port 169:
        port 54:
        port 110:
        port 6:
        port 107:
        port 4:
        port 199:
        port 168:
        port 201:
        port 0:
        port 49:
        port 53:
        port 225:
                com.vmware.etherswitch.port.extraEthFRP =   SINK
        port 52: