Kubernetes on Windows
With the latest release of Kubernetes 1.9 and Windows Server version 1709, users can take advantage of the latest features in Windows networking:
- shared pod compartments: infrastructure and worker pods now share a network compartment (analogous to a Linux namespace)
- endpoint optimization: thanks to compartment sharing, container services need to track (at least) half as many endpoints as before
- data-path optimization: improvements to the Virtual Filtering Platform and the Host Networking Service allow kernel-based load-balancing
This page serves as a guide for joining a brand-new Windows node to an existing Linux-based cluster. To start completely from scratch, refer to this page — one of many resources available for deploying a Kubernetes cluster — to set up a master the same way we did.
If you would like to deploy a cluster on Azure, the open-source ACS-Engine tool makes this easy. A step-by-step walkthrough is available.
- The external network is the network across which your nodes communicate.
- The cluster subnet is a routable virtual network; nodes are assigned smaller subnets from this for their pods to use.
- The service subnet is a non-routable, purely virtual subnet on 11.0/16 that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by `kube-proxy` running on the nodes.
What you will accomplish
By the end of this guide, you will have:
- retrieved the certificate configuration from a Linux master
- prepared a Windows node and joined it to the cluster
- deployed and validated a sample service across the cluster
Preparing the Linux Master
Regardless of whether you followed the instructions or already have an existing cluster, the only thing needed from the Linux master is Kubernetes' certificate configuration. This could be in `~/.kube/config`, or elsewhere depending on your setup.
Preparing a Windows node
All code snippets in Windows sections are to be run in elevated PowerShell.
```powershell
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name Docker -ProviderName DockerMsftProvider
Restart-Computer -Force
```
If you are behind a proxy, the following PowerShell environment variables must be defined:
```powershell
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.example.com:80/", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.example.com:443/", [EnvironmentVariableTarget]::Machine)
```
There is a collection of scripts on this Microsoft repository that helps you join this node to the cluster. You can download the ZIP file directly here. The only thing you need is the `Kubernetes/windows` folder, the contents of which should be moved to `C:\k\`:
```powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
wget https://github.com/Microsoft/SDN/archive/master.zip -o master.zip
Expand-Archive master.zip -DestinationPath master
mkdir C:/k/
mv master/SDN-master/Kubernetes/windows/* C:/k/
rm -recurse -force master,master.zip
```
Copy the certificate file identified earlier to this new `C:\k` directory.
There are multiple ways to make the virtual cluster subnet routable. You can:
- Configure host-gateway mode, setting static next-hop routes between nodes to enable pod-to-pod communication.
- Configure a smart top-of-rack (ToR) switch to route the subnet.
- Use a third-party overlay plugin such as Flannel (Windows support for Flannel is in beta).
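For host-gateway mode, each node needs a static route to every other node's pod subnet, with that node's address as the next hop. A minimal sketch for the Windows side — the pod subnet `192.168.2.0/24`, next-hop address `10.124.24.197`, and adapter name `Ethernet` are all illustrative values, not real cluster data:

```powershell
# Route another node's pod subnet via that node's own address.
# On the Linux side, the equivalent would be `ip route add <subnet> via <node-ip>`.
New-NetRoute -DestinationPrefix "192.168.2.0/24" `
             -NextHop "10.124.24.197" `
             -InterfaceIndex (Get-NetAdapter -Name "Ethernet").ifIndex
```

One such route is needed per remote node, on every node, which is why this approach is usually scripted or replaced by a ToR switch or overlay plugin as the cluster grows.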
Creating the "pause" image
Now that `docker` is installed, you need to prepare a "pause" image that's used by Kubernetes to prepare the infrastructure pods.
```powershell
docker pull microsoft/windowsservercore:1709
docker tag microsoft/windowsservercore:1709 microsoft/windowsservercore:latest
cd C:/k/
docker build -t kubeletwin/pause .
```
We tag it as `:latest` because the sample service you will be deploying later depends on it, though this may not actually be the latest Windows Server Core image available. It's important to be careful of conflicting container images; not having the expected tag can cause a `docker pull` of an incompatible container image, causing deployment problems.
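The `docker build` step above expects a Dockerfile in `C:\k` (included in the SDN scripts you copied). A minimal sketch of what such a pause-style Dockerfile looks like — this is an illustration, not necessarily the exact file shipped in the repository:

```dockerfile
# Base on the Server Core image tagged :latest above. The container's only
# job is to stay alive and hold the pod's network compartment open.
FROM microsoft/windowsservercore
CMD cmd /c ping -t localhost
```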
In the meantime, while the pull occurs, download the following client-side binaries from Kubernetes: `kubectl.exe`, `kubelet.exe`, and `kube-proxy.exe`. You can download these from the links in the `CHANGELOG.md` file of the latest 1.9 release. As of this writing, that is 1.9.1, and the Windows binaries are here. Use a tool like 7-Zip to extract the archive and place the binaries in `C:\k\`.
In order to make the `kubectl` command available outside of the `C:\k\` directory, modify the `PATH` environment variable:

```powershell
$env:Path += ";C:\k"
```
If you would like to make this change permanent, modify the variable at the machine scope:

```powershell
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\k", [EnvironmentVariableTarget]::Machine)
```
Joining the cluster
Verify that the cluster configuration is valid by running `kubectl version`.
If you receive a connection error,

```
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
```

check whether the configuration has been discovered properly:
```powershell
kubectl config view
```
To change the location where `kubectl` looks for the configuration file, you can pass the `--kubeconfig` parameter or set the `KUBECONFIG` environment variable. For example, if the configuration is located at `C:\k\config`, point `KUBECONFIG` at that path.
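For the current PowerShell session, that looks like the following (the `C:\k\config` path assumes you copied the certificate configuration there as described earlier):

```powershell
# Point kubectl at the copied certificate configuration for this session only.
$env:KUBECONFIG = "C:\k\config"
kubectl config view
```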
To make this setting permanent for the current user's scope:

```powershell
[Environment]::SetEnvironmentVariable("KUBECONFIG", "C:\k\config", [EnvironmentVariableTarget]::User)
```
The node is now ready to join the cluster. In two separate, elevated PowerShell windows, run these scripts (in this order). The `-ClusterCidr` parameter in the first script is the configured cluster subnet; here, it's `192.168.0.0/16`:

```powershell
./start-kubelet.ps1 -ClusterCidr 192.168.0.0/16
./start-kubeproxy.ps1
```
The Windows node will be visible from the Linux master under `kubectl get nodes` within a minute!
Validating your network topology
There are a few basic tests that validate a proper network configuration:
- Node-to-node connectivity: pings between the master and Windows worker nodes should succeed in both directions.
- Pod subnet to node connectivity: pings between the virtual pod interface and the nodes should succeed. Find the gateway address (with `ip addr` on Linux or `ipconfig` on Windows, for example), looking for the interface attached to the pod subnet.
If any of these basic tests don't work, try the troubleshooting page to solve common issues.
Running a Sample Service
You'll be deploying a very simple PowerShell-based web service to verify that you joined the cluster successfully and that your network is properly configured.
On the Linux master, download and run the service:
```bash
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/WebServer.yaml -O win-webserver.yaml
kubectl apply -f win-webserver.yaml
watch kubectl get pods -o wide
```
This creates a deployment and a service, then watches the pods indefinitely to track their status; simply press Ctrl+C to exit the `watch` command when done observing.
If all went well, it is possible to:
- see 4 containers under a `docker ps` command on the Windows node
- see 2 pods under a `kubectl get pods` command from the Linux master
- `curl` the pod IPs on port 80 from the Linux master and get a web server response; this demonstrates proper node-to-pod communication across the network
- ping between pods (including across hosts, if you have more than one Windows node) via `docker exec`; this demonstrates proper pod-to-pod communication
- `curl` the virtual service IP (seen under `kubectl get services`) from the Linux master and from individual pods
- `curl` the service name with the Kubernetes default DNS suffix, demonstrating DNS functionality
Windows nodes will not be able to access the service IP. This is a known platform limitation that will be improved in the next update to Windows Server.
It is also possible to access services hosted in pods through their respective nodes by mapping a port on the node. There is another sample YAML available with a mapping of port 4444 on the node to port 80 on the pod to demonstrate this feature. To deploy it, follow the same steps as before:
```bash
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/PortMapping.yaml -O win-webserver-port-mapped.yaml
kubectl apply -f win-webserver-port-mapped.yaml
watch kubectl get pods -o wide
```
It should now be possible to `curl` the node IP on port 4444 and receive a web server response. Keep in mind that this limits scaling to a single pod per node, since the port mapping must be one-to-one.
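The node-to-pod mapping in that sample is expressed with a `hostPort` on the container. A minimal sketch of the relevant spec fragment — the container name and image here are illustrative, not the exact contents of PortMapping.yaml:

```yaml
# Exposes the container's port 80 on port 4444 of the node it runs on.
spec:
  containers:
  - name: win-webserver        # illustrative name
    image: example/webserver   # illustrative image
    ports:
    - containerPort: 80        # port the web server listens on inside the pod
      hostPort: 4444           # port opened on the node; enforces one pod per node
```

Because each node can only bind port 4444 once, the scheduler cannot place a second such pod on the same node, which is the scaling limitation noted above.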