Define container jobs (YAML)
Azure Pipelines | Azure DevOps Server 2019
By default, jobs run on the host machine where the agent is installed. This is convenient and typically well-suited for projects that are just beginning to adopt Azure Pipelines. Over time, you may find that you want more control over the context where your tasks run.
The Classic editor doesn't support container jobs at this time.
On Linux and Windows agents, jobs may be run on the host or in a container. (On macOS and Red Hat Enterprise Linux 6, container jobs are not available.) Containers provide isolation from the host and allow you to pin specific versions of tools and dependencies. Host jobs require less initial setup and infrastructure to maintain.
Containers offer a lightweight abstraction over the host operating system. You can select the exact versions of operating systems, tools, and dependencies that your build requires. When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
If you need fine-grained control at the individual step level, step targets allow you to choose container or host for each step.
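As a sketch of that, a `target` keyword on an individual step chooses where it runs. (The container alias `builder` here is a hypothetical name, not part of the examples below.)

```yaml
resources:
  containers:
  - container: builder        # hypothetical alias for an image
    image: ubuntu:16.04

jobs:
- job: MixedTargets
  pool:
    vmImage: 'ubuntu-16.04'
  container: builder          # by default, steps run in this container
  steps:
  - script: echo "runs inside the container"
  - script: echo "runs on the agent host"
    target: host              # overrides the job-level container for this step
```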
The Azure Pipelines system requires a few things in Linux-based containers:
- Can run Node.js (which the agent provides)
- Does not define an `ENTRYPOINT`
- `USER` has access to `groupadd` and other privileged commands without `sudo`
And on your agent host:
- Ensure Docker is installed
- The agent must have permission to access the Docker daemon
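One way to confirm both requirements is a quick pipeline step on the self-hosted agent; this is a sketch, and `MySelfHostedPool` is a placeholder pool name:

```yaml
pool: MySelfHostedPool   # placeholder: a self-hosted pool with Docker installed

steps:
# docker info fails if Docker is missing or the agent's user
# account can't reach the Docker daemon
- script: docker info
  displayName: Verify Docker daemon access
```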
Be sure your container has each of these tools available. Some of the extremely stripped-down containers available on Docker Hub, especially those based on Alpine Linux, don't satisfy these minimum requirements. Containers with an `ENTRYPOINT` might not work, since Azure Pipelines will `docker create` an awaiting container and `docker exec` a series of commands that expect the container to always be up and running.
Also note: the Red Hat Enterprise Linux 6 build of the agent won't run container jobs. Choose another Linux flavor, such as Red Hat Enterprise Linux 7 or above.
The Windows container must support running Node.js. A base Windows Nano Server container is missing dependencies required to run Node. See this post for more information about what it takes to run Node on Windows Nano Server.
`ubuntu-16.04` pools support running containers.
The Hosted macOS pool does not support running containers.
A simple example:
```yaml
pool:
  vmImage: 'ubuntu-16.04'

container: ubuntu:16.04

steps:
- script: printenv
```
This tells the system to fetch the `ubuntu` image tagged `16.04` from Docker Hub and then start the container. When the `printenv` command runs, it will happen inside the `ubuntu:16.04` container.

When using Microsoft-hosted agents, you must specify "Hosted Ubuntu 1604" as the pool in order to run Linux containers. Other hosted pools won't work.
A Windows example:
```yaml
pool:
  vmImage: 'win1803'

container: mcr.microsoft.com/windows/servercore:1803

steps:
- script: set
```
Windows requires that the kernel version of the host and container match.
Since this example uses the hosted Windows Container pool, which runs an 1803 build, we also use the `1803` tag for the container.
Containers are also useful for running the same steps in multiple jobs.
In the following example, the same steps run in multiple versions of Ubuntu Linux.
(And we don't have to mention the `jobs` keyword, since there's only a single job defined.)
```yaml
pool:
  vmImage: 'ubuntu-16.04'

strategy:
  matrix:
    ubuntu14:
      containerImage: ubuntu:14.04
    ubuntu16:
      containerImage: ubuntu:16.04
    ubuntu18:
      containerImage: ubuntu:18.04

container: $[ variables['containerImage'] ]

steps:
- script: printenv
```
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry, add a service connection to the private registry. Then you can reference it in a container spec:
```yaml
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection

steps:
- script: echo hello
```
Or, on Azure Container Registry:

```yaml
container:
  image: myprivate.azurecr.io/windowsservercore:1803
  endpoint: my_acr_connection

steps:
- script: echo hello
```
Other container registries may also work. Amazon ECR doesn't currently work, as there are additional client tools required to convert AWS credentials into something Docker can use to authenticate.
If you need to control container startup, you can specify `options`:
```yaml
container:
  image: ubuntu:16.04
  options: --hostname container-test --ip 192.168.0.1

steps:
- script: echo hello
```
Running `docker create --help` will give you the list of supported options.
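The `options` field can also travel with a reusable container definition declared under `resources`, so every job that references the alias gets the same startup flags. This is a sketch; the alias `testbox` is a hypothetical name:

```yaml
resources:
  containers:
  - container: testbox          # hypothetical alias
    image: ubuntu:16.04
    options: --hostname container-test
```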
Reusable container definition
In the following example, the containers are defined in the resources section.
Each container is then referenced later, by referring to its assigned alias.
(Here, we explicitly list the `jobs` keyword for clarity.)
```yaml
resources:
  containers:
  - container: u14
    image: ubuntu:14.04
  - container: u16
    image: ubuntu:16.04
  - container: u18
    image: ubuntu:18.04

jobs:
- job: RunInContainer
  pool:
    vmImage: 'ubuntu-16.04'
  strategy:
    matrix:
      ubuntu14:
        containerResource: u14
      ubuntu16:
        containerResource: u16
      ubuntu18:
        containerResource: u18
  container: $[ variables['containerResource'] ]
  steps:
  - script: printenv
```