Kubernetes on Windows
With the latest release of Kubernetes 1.9 and Windows Server version 1709, users can take advantage of the latest features in Windows networking:
- shared pod compartments: infrastructure and worker pods now share a network compartment (analogous to a Linux namespace)
- endpoint optimization: thanks to compartment sharing, container services need to track (at most) half as many endpoints as before
- data-path optimization: improvements to the Virtual Filtering Platform and the Host Networking Service allow kernel-based load-balancing
This page is a getting-started guide for joining a brand-new Windows node to an existing Linux-based cluster. To start completely from scratch, refer to this page (one of many resources available for deploying a Kubernetes cluster) to set up a master the same way we did.
These are definitions for some terms that are referenced throughout this guide:
- The external network is the network across which your nodes communicate.
- The cluster subnet is a routable virtual network; nodes are assigned smaller subnets from this for their pods to use.
- The service subnet is a non-routable, purely virtual subnet on 11.0.0.0/16 that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by kube-proxy running on the nodes.
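For reference, both of these ranges are normally fixed when the master is first initialized. A minimal sketch, assuming a kubeadm-based master and the example values used in this guide (192.168.0.0/16 for the cluster subnet, 11.0.0.0/16 for the service subnet):
# On the Linux master: initialize the control plane with explicit pod and service CIDRs
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --service-cidr=11.0.0.0/16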
What you will accomplish
By the end of this guide, you will have:
- Configured a Linux master node.
- Joined a Windows worker node to it.
- Prepared your network topology.
- Deployed a sample Windows service.
- Covered common problems and mistakes.
Preparing the Linux Master
Regardless of whether you followed the instructions or already have an existing cluster, the only thing needed from the Linux master is Kubernetes' certificate configuration. This could be in /etc/kubernetes/admin.conf, ~/.kube/config, or elsewhere depending on your setup.
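As a quick sanity check, you can confirm on the master that the file you plan to copy actually talks to the API server; a minimal sketch, assuming the configuration lives at /etc/kubernetes/admin.conf:
# On the Linux master: verify the configuration can reach the API server
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes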
Preparing a Windows node
Note
All code snippets in Windows sections are to be run in elevated PowerShell.
Kubernetes uses Docker as its container engine, so we need to install it. You can follow the official Docs instructions, the Docker instructions, or try these steps:
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force
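After the reboot, it is worth confirming that the Docker engine is up before continuing; for example:
# Confirm the Docker service is running and the engine responds
Get-Service docker
docker version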
If you are behind a proxy, the following PowerShell environment variables must be defined:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine)
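These are machine-scoped variables, so they only apply to newly started processes; you can read them back to confirm they were written:
# Read back the machine-scoped proxy variables
[Environment]::GetEnvironmentVariable("HTTP_PROXY", [EnvironmentVariableTarget]::Machine)
[Environment]::GetEnvironmentVariable("HTTPS_PROXY", [EnvironmentVariableTarget]::Machine)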
There is a collection of scripts on this Microsoft repository that helps you join this node to the cluster. You can download the ZIP file directly here. The only thing you need is the Kubernetes/windows folder, the contents of which should be moved to C:\k\:
wget https://github.com/Microsoft/SDN/archive/master.zip -OutFile master.zip
Expand-Archive master.zip -DestinationPath master
mkdir C:/k/
mv master/SDN-master/Kubernetes/windows/* C:/k/
rm -recurse -force master,master.zip
Copy the certificate file identified earlier to this new C:\k directory.
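How you copy it is up to you; one possibility, assuming an SCP client is available on the Windows node and a readable copy of the configuration sits at ~/.kube/config on the master (both assumptions, with placeholder user and address), is:
# Copy the Kubernetes configuration from the Linux master into C:\k\config
cd C:\k\
scp <user>@<master-ip>:~/.kube/config config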
Network topology
There are multiple ways to make the virtual cluster subnet routable. You can:
- Configure host-gateway mode, setting static next-hop routes between nodes to enable pod-to-pod communication (see the sketch after this list).
- Configure a smart top-of-rack (ToR) switch to route the subnet.
- Use a third-party overlay plugin such as Flannel (Windows support for Flannel is in beta).
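For the host-gateway option, the routes are ordinary static routes pointing each node's pod subnet at that node's address. A minimal sketch, assuming the Linux master owns pod subnet 192.168.0.0/24 at 10.0.0.4 and the Windows node owns 192.168.1.0/24 at 10.0.0.5 (all placeholder values):
# On the Linux master: send the Windows node's pod subnet to the Windows node
sudo ip route add 192.168.1.0/24 via 10.0.0.5
# On the Windows node: send the master's pod subnet to the master (persistent route)
route -p add 192.168.0.0 mask 255.255.255.0 10.0.0.4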
Creating the "pause" image
Now that Docker is installed, you need to prepare a "pause" image that Kubernetes uses to set up the infrastructure containers for its pods.
docker pull microsoft/windowsservercore:1709
docker tag microsoft/windowsservercore:1709 microsoft/windowsservercore:latest
cd C:/k/
docker build -t kubeletwin/pause .
Note
We tag it as :latest because the sample service you will be deploying later depends on it, even though this may not actually be the latest Windows Server Core image available. Be careful about conflicting container images: if the expected tag is missing, docker pull may fetch an incompatible container image and cause deployment problems.
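You can confirm the image built and carries the expected tag with:
# List the freshly built pause image
docker images kubeletwin/pause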
Downloading binaries
While the pull is in progress, download the following Kubernetes node binaries:
- kubectl.exe
- kubelet.exe
- kube-proxy.exe
You can download these from the links in the CHANGELOG.md file of the latest 1.9 release. As of this writing, that is 1.9.1, and the Windows binaries are here. Use a tool like 7-Zip to extract the archive and place the binaries in C:\k\.
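Because the release archive is a .tar.gz, the built-in Expand-Archive cmdlet cannot unpack it. One way to do it from PowerShell, assuming 7-Zip is installed in its default location and the node archive (typically named kubernetes-node-windows-amd64.tar.gz) was saved to C:\ (both assumptions):
# Extract the .tar.gz in two passes (gzip, then tar), then copy the binaries to C:\k\
cd C:\
& 'C:\Program Files\7-Zip\7z.exe' x kubernetes-node-windows-amd64.tar.gz
& 'C:\Program Files\7-Zip\7z.exe' x kubernetes-node-windows-amd64.tar
cp kubernetes\node\bin\*.exe C:\k\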
In order to make the kubectl command available outside of the C:\k\ directory, modify the PATH environment variable:
$env:Path += ";C:\k"
If you would like to make this change permanent, modify the variable at the Machine scope:
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine)
Joining the cluster
Verify that the cluster configuration is valid using:
kubectl version
If you receive a connection error such as
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
check whether the configuration has been discovered properly:
kubectl config view
To change the location where kubectl looks for the configuration file, you can pass the --kubeconfig parameter or set the KUBECONFIG environment variable. For example, if the configuration is located at C:\k\config:
$env:KUBECONFIG="C:\k\config"
To make this setting permanent for the current user's scope:
[Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)
The node is now ready to join the cluster. In two separate, elevated PowerShell windows, run these scripts (in this order). The -ClusterCidr parameter in the first script is the configured cluster subnet; here, it's 192.168.0.0/16.
./start-kubelet.ps1 -ClusterCidr 192.168.0.0/16
./start-kubeproxy.ps1
The Windows node will be visible from the Linux master under kubectl get nodes within a minute!
Validating your network topology
There are a few basic tests that validate a proper network configuration:
- Node to node connectivity: pings between master and Windows worker nodes should succeed in both directions.
- Pod subnet to node connectivity: pings between the virtual pod interface and the nodes. Find the gateway address under route -n and ipconfig on Linux and Windows, respectively, looking for the cbr0 interface.
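Concretely, assuming a master at 10.0.0.4, a Windows node at 10.0.0.5, and a pod gateway of 192.168.1.1 on the Windows node (all placeholder values), the checks look something like:
# From the Linux master: reach the Windows node and its pod gateway
ping 10.0.0.5
ping 192.168.1.1
# On the Windows node: find the pod gateway on the cbr0 interface, then reach the master
ipconfig
ping 10.0.0.4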
If any of these basic tests don't work, try the troubleshooting page to solve common issues.
Running a Sample Service
You'll be deploying a very simple PowerShell-based web service to ensure you joined the cluster successfully and your network is properly configured.
On the Linux master, download and run the service:
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/WebServer.yaml -O win-webserver.yaml
kubectl apply -f win-webserver.yaml
watch kubectl get pods -o wide
This creates a deployment and a service, then watches the pods indefinitely to track their status; press Ctrl+C to exit the watch command when you are done observing.
If all went well, it is possible to:
- see four containers under a docker ps command on the Windows side
- curl on the pod IPs on port 80 from the Linux master gets a web server response; this demonstrates proper node to pod communication across the network
- curl on the node IP on port 4444 gets a web server response; this demonstrates proper host-to-container port mapping
- ping between pods (including across hosts, if you have more than one Windows node) via docker exec; this demonstrates proper pod-to-pod communication
- curl the virtual service IP (seen under kubectl get services) from the Linux master and from individual pods
- curl the service name with the Kubernetes default DNS suffix, demonstrating DNS functionality
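As a concrete sketch of those checks (the pod, node, and service IPs below are placeholders; read the real values from kubectl get pods -o wide, kubectl get nodes -o wide, and kubectl get services):
# On the Linux master: node-to-pod communication, host port mapping, and the virtual service IP
curl 192.168.1.4
curl 10.0.0.5:4444
curl 11.0.128.30
# On the Windows node: pod-to-pod communication from inside a running container
docker ps
docker exec <container-id> ping <other-pod-ip>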
Warning
Windows nodes are not able to access the service IP. This is a known platform limitation that will be addressed in a future release.