Kubernetes Cluster automated deployment on Azure – First Step

Last week, a video was published that I think may be usefully completed by a few articles giving more details. This video was about a solution my friend Hervé Leclerc (Alter Way CTO) and I built to automate a Kubernetes cluster deployment on Azure. It was the opportunity for Hervé and me to give an overview of this open-source implementation.

These are the articles I have planned to write:

· Kubernetes and Microsoft Azure

· Programmatic scripting of Kubernetes deployment

· Provisioning the Azure Kubernetes infrastructure with a declarative template

· Configuring the Azure Kubernetes infrastructure with Bash scripts and Ansible tasks

· Automating the Kubernetes UI dashboard configuration

· Using Kubernetes…


Let’s start by presenting Kubernetes, and why and how to deploy it on Azure.


Kubernetes is an open-source orchestration platform for automating deployment, operations, and scaling of applications across multiple hosts. It targets applications composed of multiple Docker containers, such as distributed microservices, and provides ways for containers to find and communicate with each other. It enables users to ask a cluster to run a set of containers, with automatic scheduling and choice of the hosts to run those containers on. It groups the containers that make up an application into logical units for easy management, with discovery and self-healing mechanisms (auto-restarting, re-scheduling, and replicating containers).

Kubernetes was created by Google, but contributors to the project now include IBM, Microsoft, Red Hat, CoreOS, Mesosphere, and others.

Kubernetes concepts

A Kubernetes cluster is made up of several nodes with different roles such as master, minions, and etcd (https://github.com/kubernetes/kubernetes#concepts). The master is responsible for managing the cluster and issuing requests to be performed by the minions. The minions are the Kubernetes nodes that actually run Docker workloads. Pods represent a co-located group of application containers with shared volumes. They are scheduled to run and are balanced across the minions by the master. Replication controllers manage the lifecycle of pods and ensure that a specified number of pods are running at any given time, while services provide a single, stable name and address for a set of pods. etcd is a distributed, highly available, consistent key-value store for shared configuration and service discovery, which Kubernetes uses for persistent storage of all of its REST API objects.
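To make these concepts concrete, here is a minimal, hypothetical sketch of a replication controller and a service (the names, labels, and nginx image are illustrative, not taken from this project):

```shell
# Hypothetical example: write a manifest describing a replication
# controller that keeps 2 nginx pods running, plus a service that
# gives them a single stable name and address.
cat > web-example.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 2            # the controller ensures 2 pods are always running
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web             # the service balances traffic across matching pods
  ports:
  - port: 80
EOF
```

The manifest would then be submitted to the master with `kubectl create -f web-example.yaml`.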

Kubernetes networking model

When communicating over a network, containers are tied to the IP addresses of the host machines and must rely on port-mapping to reach the desired container. This makes it difficult for applications running inside containers to advertise their external IP and port, as that information is not available to them. Kubernetes solves this by allocating an IP address to each pod. This requires allocating a different block of IPs to each node as it is added to the cluster, so that any process in one pod can communicate with a process running in another pod through plain IP connectivity.

This can be achieved by configuring a network to route pod IPs, or by creating an overlay network with Weave or flannel:

· A Weave network consists of a number of “peers”: Weave routers residing on different hosts. Weave routers establish TCP connections to each other, over which they perform a protocol handshake and subsequently exchange topology information.

· A flannel network gives each container an IP that can be used for container-to-container communication. It uses packet encapsulation to create a virtual overlay network that spans the cluster. It gives each host an IP subnet from which the Docker daemon is able to allocate IPs to the individual containers. flannel uses etcd to store mappings between the virtual IP and host addresses.
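flannel reads its overlay configuration from etcd before handing out per-host subnets. A minimal sketch, assuming a running etcd; the key path is flannel's documented default, but the address ranges and backend are illustrative values, not taken from this article:

```shell
# Sketch: publish the flannel network configuration under its default
# etcd key; flanneld on each host then leases a /24 subnet from this /16
# and the local Docker daemon allocates container IPs out of that lease.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
```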

Kubernetes cluster deployment topology

Depending on the networking model (Weave or flannel), the Kubernetes cluster deployment topology corresponds to the following representation.


Why Kubernetes on Microsoft Azure?

Docker brings a more flexible deployment model, using containers as its runtime. Kubernetes offers a way to manage those containers that imposes less overhead on servers when multiple workloads share a single machine. Moreover, containers can be fired up and removed at a much faster pace than with machine virtualization. Kubernetes is intended to run on physical hosts as well as on cloud providers such as Microsoft Azure. That’s why Scott Guthrie, Executive Vice President of the Cloud and Enterprise group at Microsoft, has stated: “Microsoft will help contribute code to Kubernetes to enable customers to easily manage containers that can run anywhere. This will make it easier to build multi-cloud solutions including targeting Microsoft Azure.”

Running Kubernetes on Azure

The Kubernetes documentation offers detailed guidance on deploying a Kubernetes cluster to Azure (http://kubernetes.io/docs/getting-started-guides/coreos/azure). The proposed implementation relies on CoreOS virtual machines, with a Weave network for communication. Provisioning of the Kubernetes master, minions, and etcd nodes is done through the Azure CLI and a call to a Node.js script, “./create-kubernetes-cluster.js”, which is published on https://github.com/kubernetes/kubernetes. This script provisions a cluster with a ring of three dedicated etcd nodes, one Kubernetes master, and two Kubernetes minions for container workload deployment.
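From memory of that getting-started guide, the workflow looks roughly like the following; treat it as a sketch, since paths and flags may differ between Kubernetes versions:

```shell
# Sketch of the documented workflow (exact paths/flags may vary).
git clone https://github.com/kubernetes/kubernetes
cd kubernetes/docs/getting-started-guides/coreos/azure
npm install                                   # the provisioning scripts are Node.js
./azure-login.js -u "<your_azure_username>"   # authenticate with Azure
./create-kubernetes-cluster.js                # 3 etcd nodes, 1 master, 2 minions
```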

Automating a Kubernetes cluster deployment in Azure

Automating a Kubernetes cluster deployment in Azure requires first provisioning the cluster, then configuring each cluster node. Scripted cluster provisioning in Azure can be done in many ways, which is why we propose three different solutions: PowerShell ARM cmdlets, the Azure CLI, and JSON ARM templates. Configuration requires the ability to extend the configuration of the different nodes. This can be done through the CoreOS cloud-config file mechanism; it can also be achieved with Ansible tasks. We have explored and implemented both approaches.
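As an illustration of the template-based provisioning path, deploying a linked ARM template with the Azure xplat CLI looks roughly like this (the resource group, location, and file names are placeholders, not the ones used in our repository):

```shell
# Hypothetical sketch: deploy an ARM template with the Azure CLI.
azure config mode arm                       # switch the CLI to ARM mode
azure group create "k8s-rg" "West Europe"   # create the resource group
azure group deployment create \
  -f azuredeploy.json \
  -e azuredeploy.parameters.json \
  "k8s-rg" "k8s-deployment"
```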

The first solution is to provision a Kubernetes cluster through PowerShell or the Azure CLI using a cloud-config file. The second solution is an ARM JSON template built upon linked templates, which relies on Ansible tasks to set up the Kubernetes roles.
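For the cloud-config path, the fragment handed to each CoreOS VM at provisioning time could look like this minimal, hypothetical sketch (the discovery token is a placeholder; the real files in the repository are more complete):

```shell
# Hypothetical sketch of a CoreOS cloud-config fragment passed to a VM;
# it starts etcd2 and flanneld on boot. <token> is a placeholder.
cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    listen-client-urls: http://0.0.0.0:2379
  units:
    - name: etcd2.service
      command: start
    - name: flanneld.service
      command: start
EOF
```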

All the following code is published online at https://github.com/DXFrance/AzureKubernetes.

Stay tuned for the next article…