Windows Container Networking

Please reference Docker Container Networking for general Docker networking commands, options, and syntax. With the exception of the cases described in this document, all Docker networking commands are supported on Windows with the same syntax as on Linux. Note, however, that the Windows and Linux network stacks differ, so some Linux network commands (e.g. ifconfig) are not supported on Windows.

Basic networking architecture

This topic provides an overview of how Docker creates and manages networks on Windows. With regard to networking, Windows containers function similarly to virtual machines: each container has a virtual network adapter (vNIC) which is connected to a Hyper-V virtual switch (vSwitch). Windows supports five different networking drivers or modes which can be created through Docker: nat, overlay, transparent, l2bridge, and l2tunnel. Choose the network driver which best suits your needs based on your physical network infrastructure and your single- vs. multi-host networking requirements.

The first time the Docker engine runs, it creates a default NAT network, 'nat', which uses an internal vSwitch and a Windows component named WinNAT. If there are any pre-existing external vSwitches on the host which were created through PowerShell or Hyper-V Manager, they will also be available to Docker through the transparent network driver and can be seen when you run the docker network ls command.

  • An internal vSwitch is one which is not directly connected to a network adapter on the container host.

  • An external vSwitch is one which is directly connected to a network adapter on the container host.

The 'nat' network is the default network for containers running on Windows. Any containers that are run on Windows without any flags or arguments to implement specific network configurations will be attached to the default 'nat' network, and automatically assigned an IP address from the 'nat' network's internal prefix IP range. The default internal IP prefix used for 'nat' is 172.16.0.0/16.
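
To list the container networks on a host and confirm the prefix in use, you can run the following standard Docker CLI commands (the exact output will vary from host to host):

# List all container networks known to Docker
C:\> docker network ls
# Inspect the default 'nat' network, including its subnet and gateway
C:\> docker network inspect nat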

Windows Container Network Drivers

In addition to leveraging the default 'nat' network created by Docker on Windows, users can define custom container networks. User-defined networks can be created with the Docker CLI command docker network create -d <NETWORK DRIVER TYPE> <NAME>. On Windows, the following network driver types are available (a short creation example follows the list):

  • nat – containers attached to a network created with the 'nat' driver will receive an IP address from the user-specified (--subnet) IP prefix. Port forwarding / mapping from the container host to container endpoints is supported.

    Note: Multiple NAT networks are now supported with Windows 10 Creators Update!

  • transparent – containers attached to a network created with the 'transparent' driver will be directly connected to the physical network. IPs from the physical network can be assigned statically (requires user-specified --subnet option) or dynamically using an external DHCP server.

  • overlay - New! When the Docker engine is running in swarm mode, containers attached to an overlay network can communicate with other containers attached to the same network across multiple container hosts. Each overlay network created on a Swarm cluster receives its own IP subnet, defined by a private IP prefix. The overlay network driver uses VXLAN encapsulation.

    Requires Windows Server 2016 with KB4015217 or Windows 10 Creators Update

  • l2bridge - containers attached to a network created with the 'l2bridge' driver will be in the same IP subnet as the container host. IP addresses must be assigned statically from the same prefix as the container host. All container endpoints on the host will have the same MAC address due to a Layer-2 address translation (MAC re-write) operation on ingress and egress.

    Requires Windows Server 2016 or Windows 10 Creators Update

  • l2tunnel - This driver should only be used in a Microsoft Cloud Stack.
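
As a short creation example, the following commands create one user-defined network with the 'nat' driver and one with the 'transparent' driver (the network names and subnet shown are illustrative):

# Create a user-defined NAT network with a user-specified subnet
C:\> docker network create -d nat --subnet=172.16.1.0/24 --gateway=172.16.1.1 MyNatNetwork

# Create a transparent network (container IPs come from the physical network)
C:\> docker network create -d transparent MyTransparentNetwork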

To learn how to connect container endpoints to an overlay virtual network with the Microsoft SDN stack, reference the Attaching Containers to a Virtual Network topic.

Windows 10 Creators Update introduced platform support for adding a new network endpoint to a running container (i.e. 'hot-add'). This will light up end-to-end pending an outstanding Docker pull request.

Network Topologies and IPAM

Network connectivity for internal (container-to-container) and external connections is provided differently by each network driver, as is IP address management (IPAM), described below.

IPAM

IP addresses are allocated and assigned differently for each networking driver. Windows uses the Host Networking Service (HNS) to provide IPAM for the nat driver, and works with Docker Swarm mode (its internal KVS) to provide IPAM for overlay. All other network drivers use an external IPAM.

Details on Windows Container Networking

Isolation (Namespace) with Network Compartments

Each container endpoint is placed in its own network compartment which is analogous to a network namespace in Linux. The management host vNIC and host network stack are located in the default network compartment. In order to enforce network isolation between containers on the same host, a network compartment is created for each Windows Server and Hyper-V Container into which the network adapter for the container is installed. Windows Server containers use a Host vNIC to attach to the virtual switch. Hyper-V Containers use a Synthetic VM NIC (not exposed to the Utility VM) to attach to the virtual switch.

Network compartments on a host can be enumerated with the Get-NetCompartment PowerShell cmdlet:

PS C:\> Get-NetCompartment

Windows Firewall Security

The Windows Firewall is used to enforce network security through port ACLs.

Note: By default, all container endpoints attached to an overlay network have an ALLOW ALL rule created.

Container Network Management with Host Network Service

The Host Networking Service (HNS) and the Host Compute Service (HCS) work together to create containers and to attach endpoints to networks.

Advanced Network Options in Windows

Several network driver options are supported to take advantage of Windows-specific capabilities and features.

Switch Embedded Teaming with Docker Networks

Applies to all network drivers

You can take advantage of Switch Embedded Teaming when creating container host networks for use by Docker by specifying multiple network adapters (separated by commas) with the -o com.docker.network.windowsshim.interface option.

C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2", "Ethernet 3" TeamedNet

Set the VLAN ID for a Network

Applies to transparent and l2bridge network drivers

To set a VLAN ID for a network, pass the option -o com.docker.network.windowsshim.vlanid=<VLAN ID> to the docker network create command. For instance, you might use the following command to create a transparent network with a VLAN ID of 11:

C:\> docker network create -d transparent -o com.docker.network.windowsshim.vlanid=11 MyTransparentNetwork

When you set the VLAN ID for a network, you are setting VLAN isolation for any container endpoints that will be attached to that network.

Ensure that your host network adapter (physical) is in trunk mode to enable all tagged traffic to be processed by the vSwitch with the vNIC (container endpoint) port in access mode on the correct VLAN.
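
As a minimal check (assuming Windows Server containers, whose endpoints attach to the vSwitch as host vNICs), you can inspect the VLAN settings the vSwitch has applied to host vNICs with PowerShell:

PS C:\> Get-VMNetworkAdapterVlan -ManagementOS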

Specify the Name of a Network to the HNS Service

Applies to all network drivers

Ordinarily, when you create a container network using docker network create, the network name that you provide is used by the Docker service but not by the HNS service. When creating a network, you can specify the name that the HNS service gives it by passing the option -o com.docker.network.windowsshim.networkname=<network name> to the docker network create command. For instance, you might use the following command to create a transparent network whose name is also specified to the HNS service:

C:\> docker network create -d transparent -o com.docker.network.windowsshim.networkname=MyTransparentNetwork MyTransparentNetwork

Bind a Network to a Specific Network Interface

Applies to all network drivers except 'nat'

To bind a network (attached through the Hyper-V virtual switch) to a specific network interface, pass the option -o com.docker.network.windowsshim.interface=<Interface> to the docker network create command. For instance, you might use the following command to create a transparent network attached to the "Ethernet 2" network interface:

C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet2

Note: The value for com.docker.network.windowsshim.interface is the network adapter's Name, which can be found with:

PS C:\> Get-NetAdapter

Specify the DNS Suffix and/or the DNS Servers of a Network

Applies to all network drivers

Use the option -o com.docker.network.windowsshim.dnssuffix=<DNS SUFFIX> to specify the DNS suffix of a network, and the option -o com.docker.network.windowsshim.dnsservers=<DNS SERVER/S> to specify its DNS servers. For example, you might use the following command to set the DNS suffix of a network to "abc.com" and its DNS servers to 4.4.4.4 and 8.8.8.8:

C:\> docker network create -d transparent -o com.docker.network.windowsshim.dnssuffix=abc.com -o com.docker.network.windowsshim.dnsservers=4.4.4.4,8.8.8.8 MyTransparentNetwork

VFP

The Virtual Filtering Platform (VFP) extension is a Hyper-V virtual switch forwarding extension used to enforce network policy and manipulate packets. For instance, VFP is used by the 'overlay' network driver to perform VXLAN encapsulation and by the 'l2bridge' driver to perform MAC re-write on ingress and egress. The VFP extension is only present on Windows Server 2016 and Windows 10 Creators Update. To check whether the extension is running correctly, run these two commands:

PS C:\> Get-Service vfpext

# The output of the following command should show the extension with Running: True
PS C:\> Get-VMSwitchExtension -VMSwitchName <vSwitch Name> -Name "Microsoft Azure VFP Switch Extension"

Tips & Insights

Here's a list of handy tips and insights, inspired by common questions on Windows container networking that we hear from the community...

HNS requires that IPv6 is enabled on container host machines

As part of KB4015217, HNS requires that IPv6 be enabled on Windows container hosts. If you're running into an error such as the one below, there's a chance that IPv6 is disabled on your host machine.

docker: Error response from daemon: container e15d99c06e312302f4d23747f2dfda4b11b92d488e8c5b53ab5e4331fd80636d encountered an error during CreateContainer: failure in a Windows system call: Element not found.

We're working on platform changes to automatically detect/prevent this issue. Currently the following workaround can be used to ensure IPv6 is enabled on your host machine:

C:\> reg delete HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v DisabledComponents /f
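
To check the current state of this registry value before or after applying the workaround (a minimal sketch; the command reports an error if the value has never been set, which also means IPv6 is not disabled there):

PS C:\> Get-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters -Name DisabledComponents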

Moby Linux VMs use the DockerNAT switch with Docker for Windows (a Docker CE product) instead of the HNS internal vSwitch

Docker for Windows (the Docker CE engine packaged for Windows) on Windows 10 uses an internal vSwitch named 'DockerNAT' to connect Moby Linux VMs to the container host. Developers using Moby Linux VMs on Windows should be aware that their hosts are using the DockerNAT vSwitch rather than the vSwitch created by the HNS service (which is the default switch used for Windows containers).
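
To see which virtual switches are present on a given machine (and confirm whether 'DockerNAT' is among them), list them with PowerShell:

PS C:\> Get-VMSwitch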

To use DHCP for IP assignment on a virtualized container host, enable MACAddressSpoofing

If the container host is virtualized and you wish to use DHCP for IP assignment, you must enable MACAddressSpoofing on the virtual machine's network adapter. Otherwise, the Hyper-V host will block network traffic from the containers in the VM, which use multiple MAC addresses. You can enable MACAddressSpoofing with this PowerShell command:

PS C:\> Get-VMNetworkAdapter -VMName ContainerHostVM | Set-VMNetworkAdapter -MacAddressSpoofing On
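
To confirm that the setting took effect, you can read it back (using the same illustrative VM name as above):

PS C:\> Get-VMNetworkAdapter -VMName ContainerHostVM | Select-Object Name, MacAddressSpoofing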

Creating multiple transparent networks on a single container host

If you wish to create more than one transparent network you must specify to which (virtual) network adapter the external Hyper-V Virtual Switch should bind. To specify the interface for a network, use the following syntax:

# General syntax:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface=<INTERFACE NAME> <NETWORK NAME> 

# Example:
C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" myTransparent2

Remember to specify --subnet and --gateway when using static IP assignment

When using static IP assignment, you must first ensure that the --subnet and --gateway parameters are specified when the network is created. The subnet and gateway IP address should match the network settings of the container host - i.e. the physical network. For example, here's how you might create a transparent network and then run a container on it using static IP assignment:

# Example: Create a transparent network using static IP assignment
# A network create command for a transparent container network corresponding to the physical network with IP prefix 10.123.174.0/23
C:\> docker network create -d transparent --subnet=10.123.174.0/23 --gateway=10.123.174.1 MyTransparentNet
# Run a container attached to MyTransparentNet
C:\> docker run -it --network=MyTransparentNet --ip=10.123.174.105 windowsservercore cmd

DHCP IP assignment not supported with L2Bridge networks

Only static IP assignment is supported with container networks created using the l2bridge driver. As stated above, remember to use the --subnet and --gateway parameters to create a network that's configured for static IP assignment.
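
For example, an l2bridge network configured for static IP assignment might be created as follows (the subnet and gateway are illustrative and must match the container host's physical network):

# Create an l2bridge network configured for static IP assignment
C:\> docker network create -d l2bridge --subnet=10.123.174.0/23 --gateway=10.123.174.1 MyL2BridgeNet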

Networks that leverage external vSwitch must each have their own network adapter

Note that if multiple networks which use an external vSwitch for connectivity (e.g. transparent, l2bridge, l2tunnel) are created on the same container host, each of them requires its own network adapter.
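
For instance, the following sketch binds two such networks to two different adapters using the interface option described earlier (the adapter names, subnet, and gateway are illustrative):

C:\> docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet 2" TransparentNet
C:\> docker network create -d l2bridge --subnet=10.123.174.0/23 --gateway=10.123.174.1 -o com.docker.network.windowsshim.interface="Ethernet 3" L2BridgeNet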

IP assignment on stopped vs. running containers

Static IP assignment is performed directly on the container's network adapter and must only be performed when the container is in a STOPPED state. "Hot-add" of container network adapters or changes to the network stack are not supported (in Windows Server 2016) while the container is running.

Note: This behavior is changing on Windows 10 Creators Update, as the platform now supports "hot-add". This capability will light up end-to-end after an outstanding Docker pull request is merged.

Existing vSwitch (not visible to Docker) can block transparent network creation

If you encounter an error in creating a transparent network, it is possible that there is an external vSwitch on your system which was not automatically discovered by Docker and is therefore preventing the transparent network from being bound to your container host's external network adapter.

When creating a transparent network, Docker creates an external vSwitch for the network and then tries to bind the switch to an (external) network adapter - the adapter could be a VM network adapter or the physical network adapter. If a vSwitch has already been created on the container host, and it is visible to Docker, the Windows Docker engine will use that switch instead of creating a new one. However, if a vSwitch was created out-of-band (i.e. created on the container host using Hyper-V Manager or PowerShell) and is not yet visible to Docker, the Windows Docker engine will try to create a new vSwitch and then be unable to connect the new switch to the container host's external network adapter (because the network adapter will already be connected to the switch that was created out-of-band).

For example, this issue would arise if you were to first create a new vSwitch on your host while the Docker service was running, then try to create a transparent network. In this case, Docker would not recognize the switch that you created and it would create a new vSwitch for the transparent network.

There are three approaches for solving this issue:

  • You can of course delete the vSwitch that was created out-of-band, which will allow Docker to create a new vSwitch and connect it to the host network adapter without issue. Before choosing this approach, ensure that your out-of-band vSwitch is not being used by other services (e.g. Hyper-V).
  • Alternatively, if you decide to use an external vSwitch that was created out-of-band, restart the Docker and HNS services to make the switch visible to Docker:

    PS C:\> Restart-Service hns
    PS C:\> Restart-Service docker
  • Another option is to use the '-o com.docker.network.windowsshim.interface' option to bind the transparent network's external vSwitch to a specific network adapter which is not already in use on the container host (i.e. a network adapter other than the one being used by the vSwitch that was created out-of-band). The '-o' option is described above, in the 'Bind a Network to a Specific Network Interface' section of this document.

Unsupported features and network options

The following networking options are not supported in Windows and cannot be passed to docker run:

  • Container linking (e.g. --link) - the alternative is to rely on service discovery
  • IPv6 addresses (e.g. --ip6)
  • DNS Options (e.g. --dns-option)
  • Multiple DNS search domains (e.g. --dns-search)

The following networking options and features are not supported in Windows and cannot be passed to docker network create:

  • --aux-address
  • --internal
  • --ip-range
  • --ipam-driver
  • --ipam-opt
  • --ipv6

The following networking options are not supported on Docker services:

  • Data-plane encryption (e.g. --opt encrypted)

Windows Server 2016 Work-arounds

Although we continue to add new features and drive development, some of these features will not be back-ported to older platforms. Instead, the best plan of action is to "get on the train" for the latest updates to Windows 10 and Windows Server. The section below lists some work-arounds and caveats which apply to Windows Server 2016 and older versions of Windows 10 (i.e. before the 1703 Creators Update).

Multiple NAT networks on WS2016 Container Host

Any new NAT network must be created as a partition (subset) of the host's larger internal NAT networking prefix. The prefix can be found by running the following command from PowerShell and referencing the "InternalIPInterfaceAddressPrefix" field.

PS C:\> Get-NetNAT

For example, the host's NAT network internal prefix might be 172.16.0.0/16. In this case, Docker can be used to create additional NAT networks as long as they are a subset of the 172.16.0.0/16 prefix. For example, two NAT networks could be created with the IP prefixes 172.16.1.0/24 (gateway 172.16.1.1) and 172.16.2.0/24 (gateway 172.16.2.1).

C:\> docker network create -d nat --subnet=172.16.1.0/24 --gateway=172.16.1.1 CustomNat1
C:\> docker network create -d nat --subnet=172.16.2.0/24 --gateway=172.16.2.1 CustomNat2

The newly created networks can be listed using:

C:\> docker network ls

Docker Compose

Docker Compose can be used to define and configure container networks alongside the containers/services that will be using those networks. The Compose 'networks' key is used as the top-level key in defining the networks to which containers will be connected. For example, the syntax below defines the preexisting NAT network created by Docker to be the 'default' network for all containers/services defined in a given Compose file.

networks:
  default:
    external:
      name: "nat"

Similarly, the following syntax can be used to define a custom NAT network.

Note: The custom NAT network defined in the example below is a partition of the container host's pre-existing NAT internal prefix. See the section 'Multiple NAT networks on WS2016 Container Host' above for more context.

networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
      - subnet: 172.16.3.0/24
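
Putting it together, a minimal sketch of a complete Compose file that uses such a custom NAT network might look like this (the service name and image are placeholders):

version: '3'
services:
  myservice:
    image: <your Windows image>
networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
      - subnet: 172.16.3.0/24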

For further information on defining/configuring container networks using Docker Compose, refer to the Compose File reference.

Service Discovery

Service Discovery is only supported for certain Windows network drivers.

Network driver   Local Service Discovery   Global Service Discovery
nat              YES                       N/A
overlay          YES                       YES
transparent      NO                        NO
l2bridge         NO                        NO
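
As a quick sketch of local service discovery on a 'nat' network, containers attached to the same user-defined network can resolve one another by container name (the network name, container names, and subnet are illustrative):

C:\> docker network create -d nat --subnet=172.16.4.0/24 --gateway=172.16.4.1 DiscoveryNat
C:\> docker run -d --network=DiscoveryNat --name web windowsservercore ping -t localhost
C:\> docker run -it --network=DiscoveryNat --name client windowsservercore cmd
# Inside the 'client' container, the name 'web' resolves to the web container's IP:
C:\> ping web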