Understanding Containers - the layer between the operating system’s kernel and the application
Guest blog by Christo Lolov, Microsoft Student Partner at Imperial College London
I am a third-year Computing MEng student at Imperial College London and have a number of blogs already published on various areas of technology that interest me; see https://blogs.msdn.microsoft.com/uk_faculty_connection/?s=Lolov
Virtualization splits hardware resources across multiple consumers, be they operating systems, applications, processes or something else.
A virtual machine is an operating system (guest machine) which runs on top of another operating system (host machine) and is made to believe it is the one and only operating system controlling the hardware. The host machine can be a full operating system, like Windows, or it can be a hypervisor specialized solely in managing virtual machines.
Virtual machines can be used for many things.
Some software depends on a specific operating system. Buying new hardware and installing the required operating system is one approach. Another is to create a virtual machine on your computer, which then allows you to run the platform-dependent software.
Most applications require a configured environment to run. As such, all application dependencies can be installed on a clean virtual machine and then the application can be distributed with a copy of the virtual machine.
While virtual machines are very useful, they are not always the best approach. An alternative solution is containers.
Let’s say you have some programs already installed on your computer and you would like to install a new one. When you install it, however, it turns out that one of the programs already present modified configuration files which the newly installed program expected to be unmodified. You could solve this problem by installing each program in its own virtual machine, but it would be easier if you could make the programs believe they were installed alone. A container simulates such program paradise: it makes the program believe it is running on a clean operating system.

In fact, some containers allow multiple programs to share a container. Imagine you had three programs that work very well together and are already installed, and you would like to install three new programs which work very well together, but not very well when the first three are around. A container allows you to isolate the two groups of programs so that they can only see other members of the same group. Some containers offer an even finer separation: they allow each program to run thinking it is running on its own.

The difference between a container and a virtual machine is that a container virtualizes applications, while a virtual machine virtualizes a whole operating system.
A container is software which sits between the operating system’s kernel and the application. It controls what the application sees and how the application can interact with the operating system. All containers require access to an operating system’s kernel; however, containers are not part of the kernel, they simply call functionality the kernel provides.
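The kernel features containers rely on can be observed directly on Linux. As a rough illustration (not a container runtime itself), util-linux's `unshare` command runs a process in a new PID namespace so that it sees only itself; this sketch assumes a Linux machine and root privileges:

```shell
# Create a new PID namespace, fork into it, and remount /proc so
# that process listings reflect the new namespace; then run `ps`.
# Inside the namespace, `ps` sees only itself rather than the
# host's full process table.
sudo unshare --pid --fork --mount-proc ps aux
```

This is only one of several kernel mechanisms (namespaces, cgroups) that container software combines to give an application its isolated view.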
There are three popular types of containers:
A Linux container requires the Linux kernel. It is used to make numerous applications believe that they were installed on a clean operating system. An arbitrary number of applications can be put in a Linux container. There can be numerous Linux containers on a machine, but there will be only one Linux kernel that they all call into.
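As a minimal sketch, assuming the LXC userspace tools are installed on a Linux machine (the distribution, release and architecture below are illustrative choices):

```shell
# Create a container from LXC's "download" template.
sudo lxc-create -t download -n demo -- -d alpine -r 3.19 -a amd64

# Start it, then open a shell inside it; programs installed there
# only see other programs inside the same container.
sudo lxc-start -n demo
sudo lxc-attach -n demo
```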
A Docker container requires the Linux kernel. It is used to make a single application believe it is running alone on the operating system. There can be numerous Docker containers on a machine, but there will be only one Linux kernel that they all call into. Docker containers used to be implemented using Linux containers. At some point, Docker swapped Linux containers for its own library (libcontainer), which calls the needed kernel functionality directly.
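As a minimal sketch of this, assuming the Docker engine is installed, each `docker run` starts a fresh, isolated instance:

```shell
# Download a small Linux image.
docker pull alpine

# Run a single command inside a new container; the process sees
# only the container's own filesystem, then the container is
# removed (--rm).
docker run --rm alpine echo "hello from a container"

# `ps` inside the container lists only the container's own
# processes, not the host's.
docker run --rm alpine ps aux
```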
A Windows container requires the Windows kernel. It is used to make a single application believe it is running alone on the operating system. There are two types of Windows containers: Windows Server containers and Hyper-V containers. There can be numerous Windows Server containers on a machine, but there will be only one Windows kernel that they all call into. A Hyper-V container runs in a lightweight virtual machine which carries the kernel the container needs, so each Hyper-V container brings its own kernel. As such, one would use a Windows Server container in a trusted environment and a Hyper-V container in a hostile environment. All else being equal, Windows Server containers should have a smaller overhead, because all containers share the same kernel.
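On a Windows host running Docker, the two kinds of Windows container are selected with the `--isolation` flag; this sketch assumes a suitable Windows base image (the tag below is illustrative):

```shell
# Windows Server container: shares the host's Windows kernel.
docker run --rm --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver

# Hyper-V container: runs in a lightweight VM with its own kernel.
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver
```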
Windows and Containers
Windows supports both Windows and Docker containers, although they cannot run at the same time. One tool which can be used to control both kinds of containers is Docker.
Docker is an application which deals with the whole life cycle of containers; Docker containers are only one part of it. This text mainly references the containers rather than the overall application. Docker containers require the Linux kernel, so on Windows they run within a lightweight virtual machine called MobyLinux.
Containers are very suitable for a cloud such as Microsoft Azure because they allow application isolation and portability.
Windows and Docker containers are intended for single applications, but only one application does not mean only one container. Throughout a day an application can have periods in which it is used more and periods in which it is used less. Putting an application in a container means that if you require more of it you can create more containers, and if you require less you can remove some.
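Scaling by container count can be sketched with plain Docker commands; `nginx` here is just a stand-in for your application's image:

```shell
# Scale up: three instances of the same image, each mapped to its
# own host port.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
docker run -d --name web3 -p 8083:80 nginx

# Scale down: force-remove the instances no longer needed.
docker rm -f web2 web3
```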
Nowadays people strive to write applications from smaller components. What you would, in fact, strive to put in a container is such a component, rather than the whole application. Say you have an application which has two parts: a database and a component which does a lot of processing for every request. When you have a lot of requests you would like to have many containers doing the processing and fewer containers holding the database. When you have fewer requests, you will just scale down the number of processing components. The advantage is that you do not have to scale the whole application just because one component is under stress. You can scale only the component under stress.
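A hypothetical Docker Compose file for such a two-part application might look like this; the image names are placeholders:

```yaml
# docker-compose.yml
services:
  db:
    image: postgres:16          # the database component
  processor:
    image: example/processor    # placeholder for the processing component
    depends_on:
      - db
```

The processing component could then be scaled independently with `docker compose up -d --scale processor=5`, leaving the database at a single instance.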
To be able to create containers, however, you need to have a schema of what those containers are going to be. These schemas are known as container images.
A container image is what a container is created from. It specifies what will be present in the container and what commands will be run. You can instantiate numerous containers holding the same application from a single container image. The term for a container spawned from an image is a container instance.
Images can be built on top of other images or they can be base ones. Say your application needs a Java environment. You pull an image containing the Java environment, copy your application on top of it and now you have a new image containing both the application and its dependencies that you can allow others to use.
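This layering is what a Dockerfile expresses. A minimal sketch, assuming your application is packaged as `app.jar` (a placeholder name):

```dockerfile
# Start from a base image that already contains a Java runtime.
FROM eclipse-temurin:21-jre

# Copy the application on top of it.
WORKDIR /app
COPY app.jar .

# Command run when a container is created from the resulting image.
CMD ["java", "-jar", "app.jar"]
```

Building this with `docker build -t my-app:1.0 .` produces the new image, containing both the application and its dependencies, that others can use.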
A container lives in a different place to the image it was created from. To deploy a container on Azure you will need to specify where the image for the container lives. A container registry is a place where container images are kept. Microsoft provides such a functionality with the Azure Container Registry.
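Publishing an image follows the same pattern for most registries; this sketch assumes a registry named `myregistry` already exists and an image `my-app:1.0` has been built locally (Azure Container Registry login servers follow the `<name>.azurecr.io` convention):

```shell
# Authenticate against the registry.
docker login myregistry.azurecr.io

# Tag the local image with the registry's address, then upload it.
docker tag my-app:1.0 myregistry.azurecr.io/my-app:1.0
docker push myregistry.azurecr.io/my-app:1.0
```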
Containers on Azure
That raises the question where can one deploy a container on Azure?
If you would like to deploy a single container, Microsoft offers Azure Container Instances.
One thing all applications have in common is that at some point they crash, and when they crash they need to be restarted. The same is true for applications in containers. Restarting one container is easy; restarting ten becomes cumbersome. As the numbers scale, keeping track of which containers have crashed and when they need to be restarted becomes a problem. Let’s say that, on top of failures, there are suddenly more people using your application, so you need more containers to handle the traffic. One way to overcome these problems is to have systems which manage containers for you.
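For a single container, Docker itself can handle restarts through a restart policy; `nginx` is again a stand-in for your application's image:

```shell
# Restart this container automatically whenever its process exits
# with a failure, instead of leaving it stopped.
docker run -d --restart=on-failure nginx
```

Coordinating restarts and scaling across many containers is what dedicated management systems take on.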
Should you wish to handle more than a few individual containers, Microsoft’s Azure Container Service is a solution. It allows you to use one of several container-orchestration systems, the most recent addition to which is Kubernetes.
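With an orchestrator such as Kubernetes, the scaling described earlier becomes a single declarative command; this sketch assumes a deployment named `processor` (a placeholder) already exists in the cluster:

```shell
# Ask Kubernetes to maintain five replicas of the processing
# component; it starts or stops containers to match that number.
kubectl scale deployment processor --replicas=5
```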
I hope these notes serve as a concise explanation of containers. (Container-orchestration systems are interesting, but they require an explanation of their own.)
I purposefully skipped writing tutorials on how to use the mentioned Azure services, because there are well-explained ones in Azure’s documentation:
· Azure Container Registry: https://docs.microsoft.com/en-gb/azure/container-registry/container-registry-get-started-portal
· Azure Container Instances: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal
· Azure Kubernetes Service: https://docs.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough-portal
One could get started with the mentioned Azure services and more with a student account (https://azure.microsoft.com/en-gb/free/free-account-students-faq) or a free account (https://azure.microsoft.com/en-gb/free/free-account-faq).