Network Solutions

Once you have set up a Kubernetes master node, you are ready to pick a networking solution. There are multiple ways to make the virtual cluster subnet routable across nodes. Pick one of the following options for Kubernetes on Windows today:

  1. Use a CNI plugin such as Flannel to set up an overlay network for you.
  2. Use a CNI plugin such as Flannel to program routes for you (uses l2bridge networking mode).
  3. Configure a smart top-of-rack (ToR) switch to route the subnet.

Tip

There is a fourth networking solution on Windows which leverages Open vSwitch (OvS) and Open Virtual Network (OVN). Documenting it is out of scope here, but you can read these instructions to set it up.

Flannel in vxlan mode

Flannel in vxlan mode can be used to set up a configurable virtual overlay network that uses VXLAN tunneling to route packets between nodes.

Prepare Kubernetes master for Flannel

Some minor preparation is recommended on the Kubernetes master in our cluster. When using Flannel, it is recommended to enable bridged IPv4 traffic to iptables chains. This can be done using the following command:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
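
This setting does not survive a reboot on its own. If you want it to persist, one option is to place it in a sysctl configuration file (the filename below is a hypothetical convention; any file under /etc/sysctl.d/ works):

```
# /etc/sysctl.d/99-kubernetes.conf (hypothetical filename)
net.bridge.bridge-nf-call-iptables = 1
```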

Download & configure Flannel

Download the most recent Flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

There are two sections you should modify to enable the vxlan networking backend:

  1. In the net-conf.json section of your kube-flannel.yml, double-check that:
  • The cluster subnet (e.g. "10.244.0.0/16") is set as desired.
  • VNI 4096 is set in the backend.
  • Port 4789 is set in the backend.
  2. In the cni-conf.json section of your kube-flannel.yml, change the network name to "vxlan0".
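
As a sketch, both edits can be scripted with sed. The snippet below simulates the relevant manifest lines with a trimmed sample; it assumes the manifest's default network name is "cbr0", so verify that against your copy of kube-flannel.yml before running the same expressions on the real file.

```shell
# Trimmed stand-in for the two lines being edited in kube-flannel.yml
# (assumes the default network name is "cbr0" -- verify yours).
printf '%s\n' '  "name": "cbr0"' '    "Type": "vxlan"' > sample.yml

# cni-conf.json edit: rename the network to "vxlan0".
sed -i 's/"name": "cbr0"/"name": "vxlan0"/' sample.yml
# net-conf.json edit: add the required VNI and Port to the vxlan backend.
sed -i 's/"Type": "vxlan"/"Type": "vxlan",\n    "VNI": 4096,\n    "Port": 4789/' sample.yml

cat sample.yml
```

To apply this to the real manifest, run the same two sed expressions against kube-flannel.yml instead of sample.yml.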

After applying the above steps, your net-conf.json should look as follows:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI" : 4096,
        "Port": 4789
      }
    }

Note

The VNI must be set to 4096 and the port to 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See VXLAN for an explanation of these fields.

Your cni-conf.json should look as follows:

cni-conf.json: |
    {
      "name": "vxlan0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }

Tip

For more information on the above options, please consult official CNI flannel, portmap, and bridge plugin docs for Linux.

Launch Flannel & validate

Launch Flannel using:

kubectl apply -f kube-flannel.yml

Next, since the Flannel pods are Linux-based, apply the Linux NodeSelector patch to the kube-flannel-ds DaemonSet so it only targets Linux (we will launch the Flannel "flanneld" host-agent process on Windows later, when joining):

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system
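
For reference, the node-selector-patch.yml referenced above is expected to contain a strategic merge patch along these lines, adding a nodeSelector that restricts the DaemonSet to Linux nodes; double-check against the copy you downloaded:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
```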

Tip

If any nodes aren't x86-64 based, replace -amd64 above with your processor architecture.

After a few minutes, you should see all of the pods reported as Running if the Flannel pod network was deployed successfully.

kubectl get pods --all-namespaces

The Flannel DaemonSet should also have the NodeSelector beta.kubernetes.io/os=linux applied.

kubectl get ds -n kube-system

Tip

The remaining flannel-ds-* DaemonSets can be ignored or deleted, as they won't be scheduled if no nodes match their processor architecture.

Tip

Confused? Here is a complete example kube-flannel.yml for Flannel v0.11.0 with these steps pre-applied for the default cluster subnet 10.244.0.0/16.

Once successful, continue to the next steps.

Flannel in host-gateway mode

Alongside Flannel vxlan, another option for Flannel networking is host-gateway mode (host-gw), which programs static routes on each node to every other node's pod subnet, using the target node's host address as the next hop.
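
As an illustration, the routes that host-gw programs on a given node amount to one static route per peer node. The pod subnets and host IPs below are hypothetical; the loop just prints the equivalent route commands:

```shell
# Hypothetical "podCIDR hostIP" pairs for the other nodes in the cluster;
# host-gw installs one static route per peer, via that peer's host address.
while read -r cidr nexthop; do
  echo "ip route add ${cidr} via ${nexthop}"
done <<'EOF'
10.244.1.0/24 192.168.1.12
10.244.2.0/24 192.168.1.13
EOF
```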

Prepare Kubernetes master for Flannel

Some minor preparation is recommended on the Kubernetes master in our cluster. When using Flannel, it is recommended to enable bridged IPv4 traffic to iptables chains. This can be done using the following command:

sudo sysctl net.bridge.bridge-nf-call-iptables=1

Download & configure Flannel

Download the most recent Flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

There is one section of the manifest you need to change in order to enable host-gw networking across both Windows and Linux.

In the net-conf.json section of your kube-flannel.yml, double-check that:

  1. The type of network backend being used is set to host-gw instead of vxlan.
  2. The cluster subnet (e.g. "10.244.0.0/16") is set as desired.
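
As a sketch, the backend-type change can be scripted with sed. The snippet simulates the relevant manifest line with a trimmed sample; against the real file, run the same expression on kube-flannel.yml:

```shell
# Trimmed stand-in for the backend type line in kube-flannel.yml.
printf '%s\n' '    "Type": "vxlan"' > sample.yml
# Switch the backend from vxlan to host-gw.
sed -i 's/"Type": "vxlan"/"Type": "host-gw"/' sample.yml
cat sample.yml
```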

After applying these two steps, your net-conf.json should look as follows:

net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }

Launch Flannel & validate

Launch Flannel using:

kubectl apply -f kube-flannel.yml

Next, since the Flannel pods are Linux-based, apply the Linux NodeSelector patch to the kube-flannel-ds DaemonSet so it only targets Linux (we will launch the Flannel "flanneld" host-agent process on Windows later, when joining):

kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system

Tip

If any nodes aren't x86-64 based, replace -amd64 above with the desired processor architecture.

After a few minutes, you should see all of the pods reported as Running if the Flannel pod network was deployed successfully.

kubectl get pods --all-namespaces

The Flannel DaemonSet should also have the NodeSelector beta.kubernetes.io/os=linux applied.

kubectl get ds -n kube-system

Tip

The remaining flannel-ds-* DaemonSets can be ignored or deleted, as they won't be scheduled if no nodes match their processor architecture.

Tip

Confused? Here is a complete example kube-flannel.yml for Flannel v0.11.0 with these two steps pre-applied for the default cluster subnet 10.244.0.0/16.

Once successful, continue to the next steps.

Configuring a ToR switch

Note

You can skip this section if you chose Flannel as your networking solution. Configuration of the ToR switch occurs outside of your actual nodes. For more details on this, please see official Kubernetes docs.

Next steps

In this section, we covered how to pick and configure a networking solution. Now you are ready for step 4: