November 2018

Volume 33 Number 12

[Containers]

Up and Running with Azure Kubernetes Services

By Chander Dhall

In the world of container orchestration, everyone seems to be talking about Kubernetes. Originally designed by Google, Kubernetes is an open source platform for orchestrating Docker (or any Open Container Initiative) containers across clusters of virtual machines (VMs), with support for deployment, rollbacks, scaling and a host of other features. Administering a Kubernetes cluster can be a complex endeavor, so the Azure team has provided a managed solution called Azure Kubernetes Service (AKS) to make the process considerably easier.

In this article, I’ll demonstrate how to deploy an AKS cluster, create a secure Azure Container Registry (ACR), deploy an ASP.NET Web API application, and expose that application on the Internet via a Kubernetes Ingress and Azure DNS. This is not intended to be an in-depth article on Kubernetes, but rather everything you need to get up and running quickly with an application using the Azure AKS offering.

One of the benefits of Azure AKS is that the control plane, consisting of the master and configuration nodes, is fully managed. The Kubernetes control plane typically comprises at least one master node and one, three or five etcd configuration nodes. As you can imagine, managing these core services can be tedious and costly. With AKS, you can upgrade the core services or scale out additional worker nodes with a click of a button. Additionally, at this time there are no additional charges for these management nodes. You only pay for the worker nodes that run your services.

The code discussed in this article can be found at bit.ly/2zaFQeq. Included in the repository is a basic ASP.NET Web API application, along with a Dockerfile and Kubernetes manifests.

Creating a Kubernetes AKS Cluster

The Azure CLI is used for creating and managing resources in an Azure Cloud subscription. It can be found at bit.ly/2OcwFQb. Throughout this article, I’ll be using it to manage various Azure services. Azure Cloud Shell (bit.ly/2yESmTP) is a Web-based shell that allows developers to run commands without installing the CLI locally.

Let’s get started by creating a resource group to hold the AKS cluster and container registry, with this bit of code:

> az group create --name my-aks-cluster --location eastus2

Once the resource group is created, I create a cluster with a single node. While VMs as small as the B1 burstable images are allowed, it’s suggested that you use at least a 2-core, 7GB memory instance (D-series or greater). Anything smaller has a tendency to be unreliable when scaling and upgrading a cluster. You’ll also need to take into consideration that AKS currently only supports nodes of a single type, so you cannot decide to scale up to a larger VM instance at a later time. Adding nodes of a different type will be supported in the future with multiple node pools, but for now you need to choose a node size that fits the needs of the services that you plan to run.

Sit back and be patient while the cluster is being created, as it often takes upward of 10 to 12 minutes. Here’s the code to kick off the operation:

> az aks create --resource-group my-aks-cluster --name my-aks-cluster
  --node-count 1 --generate-ssh-keys --kubernetes-version 1.11.2
  --node-vm-size Standard_D2s_v3
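
If you need more capacity later, worker nodes can be scaled out with a single CLI call rather than recreating the cluster. Here’s a quick sketch, using the resource group and cluster names from above:

> az aks scale --resource-group my-aks-cluster --name my-aks-cluster
  --node-count 3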

Getting Docker images into the AKS cluster requires the use of a Docker registry. Using a public Docker registry is acceptable for open source distributions, but most projects will want to secure application code in a private registry.

Azure provides a secure, managed Docker registry solution with Azure Container Registry (ACR). To set up an ACR instance, run the following command:

> az acr create --resource-group my-aks-cluster
  --name <REGISTRY_NAME> --sku Basic --admin-enabled true

Note that the registry name must be unique across all of the ACR-hosted registry names on Azure.
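
Because of this, it’s worth verifying availability before attempting to create the registry; the CLI includes a command for exactly that:

> az acr check-name --name <REGISTRY_NAME>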

The Kubernetes CLI

Kubectl is used to interact with an AKS cluster. It’s available for all OSes and has multiple approaches to installation. You can find more information at bit.ly/2Q58CnJ. There’s also a Web-based dashboard that can be very helpful for getting a quick overview of the state of the cluster, but it doesn’t cover every API operation available and you may often find yourself falling back to the kubectl CLI. Even if you’re not a fan of command-line operations, over time you’ll likely grow to appreciate the power of kubectl. In combination with the Azure CLI, any operation can be performed without leaving the shell.
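
If you’d like to try the dashboard, the Azure CLI can create a tunnel to the cluster and open it in your browser for you. A quick sketch, assuming the cluster created earlier:

> az aks browse --resource-group my-aks-cluster --name my-aks-cluster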

Once kubectl has been installed, credentials can be imported locally to authenticate to the cluster using the following command:

> az aks get-credentials --resource-group my-aks-cluster
  --name my-aks-cluster

Running this command updates ~/.kube/config with your cluster URI, signing authority and credentials. It also adds a context that sets the cluster as the current configuration. The kubectl configuration can hold contexts for multiple clusters, which can easily be switched using the kubectl config command. Additionally, there are open source utilities available to make switching contexts easier (kubectx and kubectxwin).
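
For example, you can list the contexts kubectl knows about and switch to the AKS cluster by name (when imported via az aks get-credentials, the context name defaults to the cluster name):

> kubectl config get-contexts
> kubectl config use-context my-aks-cluster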

Once the credentials have been imported, connectivity to the cluster can be tested by listing the running nodes with the kubectl get nodes command. You should see something like this:

> kubectl get nodes
NAME                     STATUS    ROLES     AGE       VERSION
aks-default-34827916-0   Ready     agent     1d        v1.11.2

Adding a Container Registry Secret for Deployments

Kubernetes has a secure way to store sensitive data using Secrets. For instance, to prevent ACR credentials from being stored in the source code, a secret should be created and referenced from the Kubernetes deployment manifests. To retrieve the credentials for the ACR service, run the following command:

> az acr credential show --name <REGISTRY_NAME>

Next, use kubectl to generate a special type of secret (docker-registry) that’s designed specifically to store credential tokens provided by Docker. The code that follows will create a secret called my-docker-creds using the credentials that were retrieved from the ACR query. Be aware that the username is case-sensitive and ACR will make it lowercase by default for the built-in admin account:

> kubectl create secret docker-registry my-docker-creds
  --docker-server=<REGISTRY_NAME>.azurecr.io --docker-username=<REGISTRY_USERNAME>
  --docker-password=<REGISTRY_PASSWORD> --docker-email=<ANY_EMAIL>

Finally, confirm that the secret was created by running the following command:

> kubectl describe secrets my-docker-creds
Name:         my-docker-creds
Namespace:    default
Type:  kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson:  223 bytes

Creating a Docker Container

All applications in AKS are deployed as Docker containers. Here’s the code for a Dockerfile that creates a Docker image that can be shipped to the cluster:

# Build stage: use the full .NET Core SDK image to publish the app
FROM microsoft/dotnet:2.1-sdk AS builder
COPY . /app
WORKDIR /app
RUN dotnet publish -f netcoreapp2.1 -c Release -o /publish

# Runtime stage: copy the published output into the smaller runtime-only image
FROM microsoft/dotnet:2.1.3-aspnetcore-runtime
WORKDIR /app
COPY --from=builder /publish .
ENTRYPOINT ["dotnet", "WebApiApp.dll"]

This Dockerfile uses a multi-stage approach that splits the build into separate stages for build and runtime. This reduces the size of the overall image significantly by not including the entire SDK for distribution.

Pushing the Image to the Registry

Docker works on the concept of local images, with containers as the running instances of those images. A Docker image can’t be pushed directly to the cluster. Instead, the image must be hosted in a location from which Kubernetes can pull it onto the cluster nodes. The ACR registry is a secure location that allows images to be managed centrally between development, continuous integration and cluster environments.

The image must be built and tagged with the format <REGISTRY>/<REPOSITORY>/<IMAGE>:<TAG> so that Docker will know where to push the image upstream. The repository, which can have any name, provides a way to separate out registry images into logical groups. The following code demonstrates how to build and tag an image before pushing to ACR. While the latest tag is supported, when working with Kubernetes it’s highly advisable to use semantic versioning. It makes managing deployments and rollbacks much easier when you can leverage version numbers. Here’s the code:

> az acr login --name <REGISTRY_NAME>
> docker build -t <REGISTRY_NAME>.azurecr.io/api/my-webapi-app:1.0 .
> docker push <REGISTRY_NAME>.azurecr.io/api/my-webapi-app:1.0
baf6b1178a5b: Pushed
b3f8eefa2758: Pushed
393dd8f4a879: Pushed
0ad9ffac9ae9: Pushed
8ea427f58308: Pushed
cdb3f9544e4c: Pushed
1.0: digest: sha256:47399f3b2365a9 size: 1579

Now confirm that the image was pushed upstream by running:

> az acr login --name <REGISTRY_NAME>
> az acr repository list --name <REGISTRY_NAME>
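
To see which versions are available for deployments and rollbacks, you can also list the tags within a repository. Here’s a sketch using the image pushed above:

> az acr repository show-tags --name <REGISTRY_NAME>
  --repository api/my-webapi-app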

Deploying the Application

Kubernetes uses manifests to describe every object in the cluster. Manifests are YAML files that are managed through the Kubernetes API. A deployment manifest is used to describe the resources, image source and desired state of the application. Figure 1 is a simple manifest that tells Kubernetes which container to use, the number of desired running instances of the container and labels to help describe the application in the cluster. It also adds the name of the secret used to authenticate to ACR when pulling the remote image.

Figure 1 Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webapi-app
spec:
  selector:
    matchLabels:
      app: my-webapi-app
  replicas: 2
  template:
    metadata:
      labels:
        app: my-webapi-app
    spec:
      containers:
      - name: my-webapi-app
        image: <REGISTRY_NAME>.azurecr.io/api/my-webapi-app:1.0
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-docker-creds

Deploy the manifest with the following command:

> kubectl apply -f ./deployment.yaml
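
While the pods are being created, the rollout can be monitored until all replicas are available. This step is optional, but handy:

> kubectl rollout status deployment/my-webapi-app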

Kubernetes uses a concept called a pod to group one or more containers into a logical, scalable instance within the cluster. Typically, you’ll have one container per pod, which allows you to independently scale any service of your application. A common mistake is to put all the services of an application, such as the Web front end and the database, in a single pod. Doing so prevents the Web front end from scaling independently of the database, and you lose many of the benefits of Kubernetes as a result.

There’s a common scenario where it’s acceptable to have an additional container in a pod—it’s a concept called a sidecar. Imagine a container that observes your application container and provides metrics or logging. Placing both containers in a single pod provides real benefits in this instance. Otherwise, it’s generally best to keep the ratio of one container per pod until you have a solid understanding of the limitations (and benefits) of grouping containers.
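
To make the sidecar pattern concrete, here’s a minimal sketch of a pod spec with a hypothetical log-collecting sidecar alongside the application container. The log-collector image and mount path are placeholders, not part of the sample project:

spec:
  containers:
  - name: my-webapi-app
    image: <REGISTRY_NAME>.azurecr.io/api/my-webapi-app:1.0
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  # Hypothetical sidecar that ships the application logs; image is a placeholder
  - name: log-collector
    image: <LOG_COLLECTOR_IMAGE>
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}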

Once the deployment has completed, the status of the application pod can be checked with the following command:

> kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
my-webapi-app-64cdf6b449-9hsks   1/1       Running   0          2m

Note that the READY column shows one running container of one desired per pod; because the manifest requests two replicas, a second my-webapi-app pod will also appear in the list once it’s scheduled.
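
Because the replica count is part of the deployment’s desired state, scaling out is a one-line operation, equivalent to editing replicas in the manifest and reapplying it:

> kubectl scale deployment/my-webapi-app --replicas=3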

Creating a Service

Now that the application Docker container is deployed into the cluster, a Service is required to make it discoverable. A Kubernetes Service makes your pods discoverable to other pods within the cluster. It does this by registering itself with the cluster’s internal DNS. It also provides load balancing across all of the pod replicas, and manages pod availability during pod upgrades. A service is a very powerful Kubernetes concept that’s required to provide availability during rolling, blue/green, and canary deployment upgrades to an application. The following command will create a service for the deployment:

> kubectl expose deployment/my-webapi-app
service "my-webapi-app" exposed

Now run the following command to view the service running in the cluster:

> kubectl get services
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
my-webapi-app   ClusterIP   10.0.0.157   <none>        80/TCP    1d

By default, services are only accessible from within the cluster, hence the absence of an external IP. The kubectl CLI makes it convenient to open a proxy between the local machine and a service in the cluster, so you can interactively check that it’s running:

> kubectl port-forward services/my-webapi-app 8080:80
> curl http://localhost:8080
StatusCode        : 200
StatusDescription : OK
Content           : Hello from our Kubernetes hosted service!
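
You can also verify the in-cluster DNS registration mentioned earlier by issuing a request from a temporary pod inside the cluster. Here’s a sketch, assuming pulling the public busybox image is acceptable in your environment:

> kubectl run dns-test -it --rm --image=busybox --restart=Never
  -- wget -qO- http://my-webapi-app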

Adding HTTP Routing

Kubernetes is secure by default and you must explicitly expose services that you wish to access from outside the cluster. This is an excellent design feature from a security perspective, but can be confusing to a first-time user. The most common way to access HTTP-based services inside the cluster is by using a Kubernetes Ingress controller. Ingress controllers provide a way to route requests to internal services based on a hostname and path through an HTTP proxy entrypoint.

Before Ingress was added to Kubernetes, the primary way to expose a service was by using a LoadBalancer service type. This would cause a proliferation of load balancers—one per service—that would each need to be separately managed. With Ingress, every service in the cluster can be accessed by a single Azure Load Balancer, significantly reducing cost and complexity. 
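
For comparison, the older pattern looks like the following. Applying a manifest like this (a sketch only; the Ingress approach used in this article is preferable here) would provision a dedicated Azure load balancer and public IP for this one service:

apiVersion: v1
kind: Service
metadata:
  name: my-webapi-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-webapi-app
  ports:
  - port: 80
    targetPort: 80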

AKS provides a convenient add-on to extend the cluster with an Nginx proxy that acts as an Ingress controller for handling these requests. It can be enabled via the Azure CLI with the following command:

> az aks enable-addons --resource-group my-aks-cluster
  --name my-aks-cluster --addons http_application_routing

You can confirm that the routing services are running by issuing the command shown in Figure 2.

Figure 2 Confirm Running Routing Services

> kubectl get pods --all-namespaces
NAMESPACE     NAME                                                              READY     
kube-system   addon-http-application-routing-default-http-backend-74d455htfw9   1/1      
kube-system   addon-http-application-routing-external-dns-7cf57b9cc7-lqhl5      1/1     
kube-system   addon-http-application-routing-nginx-ingress-controller-5595b2v   1/1

You should see three new pods in the list: the Ingress controller, the external DNS controller and the default back end. The default back end provides a response to clients when no route to an existing service can be found. It’s very similar to a 404 Not Found handler in a typical ASP.NET application, except that it runs as a separate Docker container. It’s worth noting that while the HTTP application routing add-on is great for experimentation, it’s not intended for production use.

Exposing the Service

An Ingress is a combination of an Ingress controller and an Ingress definition. Each service will have an Ingress definition that tells the Ingress controller how to expose the service. The following command will get the DNS name for the cluster:

> az aks show --resource-group my-aks-cluster
  --name my-aks-cluster --query
  addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName
  -o table
Result
--------------------------------------
<CLUSTER_PREFIX>.eastus2.aksapp.io

The ingress annotation kubernetes.io/ingress.class notifies the Ingress controller to handle this specification, as shown in Figure 3. Using the cluster DNS name resolved earlier, a subdomain will be added for the host along with a “/” root path. Additionally, the service name and its internally exposed port must be added to tell the Ingress controller where to route the request within the cluster.

Figure 3 Ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-webapi-app
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: my-webapi-app.<CLUSTER_PREFIX>.eastus2.aksapp.io
    http:
      paths:
      - path: /
        backend:
          serviceName: my-webapi-app
          servicePort: 80

The Ingress manifest can then be applied with this command:

> kubectl apply -f ./ingress.yaml
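
Before testing from outside the cluster, you can confirm that the Ingress was accepted and assigned an address (the ADDRESS column may take a minute to populate):

> kubectl get ingress my-webapi-app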

It can take a few minutes for the DNS entries to be created and propagated, so please be patient. The status of the DNS zone records can be checked from within the Azure Portal. Once the name resolves, test the service from outside the cluster, like so:

> curl http://my-webapi-app.<CLUSTER_PREFIX>.eastus2.aksapp.io
StatusCode        : 200
StatusDescription : OK
Content           : Hello from our Kubernetes hosted service!

Wrapping Up

At this point, we have a single-node AKS cluster running alongside an ACR instance that hosts the application Docker images, with the registry credentials secured as a Kubernetes secret. This should be a great starting point for exploring the many additional capabilities that AKS has to offer. I have a simple philosophy: “Buy what enables you and build what differentiates you.” As you can see, Azure simplifies Kubernetes so that both developers and DevOps professionals can focus on more critical tasks.


*Chander Dhall, CEO of Cazton, is an eight-time Microsoft MVP, Google developer expert and world-renowned technology leader in architecting and implementing solutions.*

Thanks to the following Microsoft technical expert for reviewing this article: Brendan Burns

