YAML schema reference

Azure Pipelines

This article is a detailed reference guide to Azure Pipelines YAML pipelines. It includes a catalog of all supported YAML capabilities and the available options.

The best way to get started with YAML pipelines is to read the quickstart guide. After that, to learn how to configure your YAML pipeline for your needs, see conceptual topics like Build variables and Jobs.

Pipeline structure

A pipeline is one or more stages that describe a CI/CD process. Stages are the major divisions in a pipeline. The stages "Build this app," "Run these tests," and "Deploy to preproduction" are good examples.

A stage is one or more jobs, which are units of work assignable to the same machine. You can arrange both stages and jobs into dependency graphs. Examples include "Run this stage before that one" and "This job depends on the output of that job."

A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.

This hierarchy is reflected in the structure of a YAML file like:

  • Pipeline
    • Stage A
      • Job 1
        • Step 1.1
        • Step 1.2
        • ...
      • Job 2
        • Step 2.1
        • Step 2.2
        • ...
    • Stage B
      • ...

Simple pipelines don't require all of these levels. For example, in a single-job build you can omit the containers for stages and jobs because there are only steps. And because many options shown in this article aren't required and have good defaults, your YAML definitions are unlikely to include all of them.

Conventions

Here are the syntax conventions used in this article:

  • To the left of : is a literal keyword used in pipeline definitions.
  • To the right of : is a data type. The data type can be a primitive type like string or a reference to a rich structure defined elsewhere in this article.
  • The notation [ datatype ] indicates an array of the mentioned data type. For instance, [ string ] is an array of strings.
  • The notation { datatype : datatype } indicates a mapping of one data type to another. For instance, { string: string } is a mapping of strings to strings.
  • The symbol | indicates there are multiple data types available for the keyword. For instance, job | templateReference means either a job definition or a template reference is allowed.

YAML basics

This document covers the schema of an Azure Pipelines YAML file. To learn the basics of YAML, see Learn YAML in Y Minutes. Azure Pipelines doesn't support all YAML features. Unsupported features include anchors, complex keys, and sets. Also, unlike standard YAML, Azure Pipelines depends on seeing stage, job, task, or a task shortcut like script as the first key in a mapping.

Pipeline

name: string  # build numbering format
resources:
  pipelines: [ pipelineResource ]
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: # several syntaxes, see specific section
trigger: trigger
pr: pr
stages: [ stage | templateReference ]

If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:

# ... other pipeline-level keywords
jobs: [ job | templateReference ]

If you have a single stage and a single job, you can omit the stages and jobs keywords and directly specify the steps keyword:

# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
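
For example, a minimal single-job pipeline might look like the following sketch (the pool image and script contents here are illustrative):

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: Run a one-line script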

Learn more about pipelines with multiple jobs, triggers, and variables.

Stage

A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the preceding stage is complete.

Use approval checks to manually control when a stage should run. These checks are commonly used to control deployments to production environments.

Checks are a mechanism available to the resource owner. They control when a stage in a pipeline consumes a resource. As an owner of a resource like an environment, you can define checks that are required before a stage that consumes the resource can start.

Currently, manual approval checks are supported on environments. For more information, see Approvals.

stages:
- stage: string  # name of the stage (A-Z, a-z, 0-9, and underscore)
  displayName: string  # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  variables: # several syntaxes, see specific section
  jobs: [ job | templateReference]

Learn more about stages, conditions, and variables.
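
For example, this sketch (the stage and job names are illustrative) runs a Test stage only after the Build stage succeeds:

stages:
- stage: Build
  displayName: Build the app
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job: TestJob
    steps:
    - script: echo Testing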

Job

A job is a collection of steps run by an agent or on a server. Jobs can run conditionally and might depend on earlier jobs.

jobs:
- job: string  # name of the job (A-Z, a-z, 0-9, and underscore)
  displayName: string  # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  strategy:
    parallel: # parallel strategy; see the following "Parallel" topic
    matrix: # matrix strategy; see the following "Matrix" topic
    maxParallel: number # maximum number of matrix jobs to run simultaneously
  continueOnError: boolean  # 'true' if future jobs should run even if this job fails; defaults to 'false'
  pool: pool # see the following "Pool" schema
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  container: containerReference # container to run this job inside of
  timeoutInMinutes: number # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
  variables: # several syntaxes, see specific section
  steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
  services: { string: string | container } # container resources to run as a service container

For more information about workspaces, including clean options, see the workspace topic in Jobs.

Learn more about variables, steps, pools, and server jobs.
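
For example, this sketch (the job names are illustrative) makes job B wait for job A and run only if A succeeds:

jobs:
- job: A
  steps:
  - script: echo Running A
- job: B
  dependsOn: A
  condition: succeeded()
  steps:
  - script: echo Running B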

Note

If you have only one stage and one job, you can use single-job syntax as a shorter way to describe the steps to run.

Container reference

The container keyword is supported by jobs.

container: string # Docker Hub image reference or resource alias
container:
  image: string  # container image name
  options: string  # arguments to pass to container at startup
  endpoint: string  # endpoint for a private container registry
  env: { string: string }  # list of environment variables to add

Strategies

The matrix and parallel keywords specify mutually exclusive strategies for duplicating a job.

Matrix

Use of a matrix generates copies of a job, each with different input. These copies are useful for testing against different configurations or platform versions.

strategy:
  matrix: { string1: { string2: string3 } }
  maxParallel: number

For each occurrence of string1 in the matrix, a copy of the job is generated. The name string1 is the copy's name and is appended to the name of the job. For each occurrence of string2, a variable called string2 with the value string3 is available to the job.
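
For example, this sketch (the configuration and variable names are illustrative) generates two copies of the job, one named for each configuration, each with its own value for the imageName variable:

strategy:
  matrix:
    linux:
      imageName: ubuntu-latest
    windows:
      imageName: windows-latest

The job can then reference the variable, for example as vmImage: $(imageName) in its pool.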

Note

Matrix configuration names must contain only basic Latin alphabet letters (A-Z and a-z), digits (0-9), and underscores (_). They must start with a letter. Also, their length must be 100 characters or fewer.

The optional maxParallel keyword specifies the maximum number of simultaneous matrix legs to run at once.

If maxParallel is unspecified or set to 0, no limit is applied.

Note

The matrix syntax doesn't support automatic job scaling but you can implement similar functionality using the each keyword. For an example, see nedrebo/parameterized-azure-jobs.

Parallel

This strategy specifies how many duplicates of a job should run. It's useful for slicing up a large test matrix. The Visual Studio Test task understands how to divide the test load across the number of scheduled jobs.

strategy:
  parallel: number
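
For example, this sketch fans one job out into four identical copies. Each copy can read the predefined variables System.JobPositionInPhase and System.TotalJobsInPhase to decide which slice of the work to take:

jobs:
- job: SliceTests
  strategy:
    parallel: 4
  steps:
  - script: echo Slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)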

Deployment job

A deployment job is a special type of job. It's a collection of steps to run sequentially against the environment. In YAML pipelines, we recommend that you put your deployment steps in a deployment job.

jobs:
- deployment: string   # name of the deployment job (A-Z, a-z, 0-9, and underscore)
  displayName: string  # friendly name to display in the UI
  pool:                # see the following "Pool" schema
    name: string
    demands: string | [ string ]
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  dependsOn: string
  condition: string
  continueOnError: boolean                # 'true' if future jobs should run even if this job fails; defaults to 'false'
  container: containerReference # container to run this job inside
  services: { string: string | container } # container resources to run as a service container
  timeoutInMinutes: nonEmptyString        # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: nonEmptyString  # how much time to give 'run always even if cancelled tasks' before killing them
  variables: # several syntaxes, see specific section
  environment: string  # target environment name and optionally a resource name to record the deployment history; format: <environment-name>.<resource-name>
  strategy:
    runOnce:    #rolling, canary are the other strategies that are supported
      deploy:
        steps:
        - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
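
For example, this sketch (the job and environment names are illustrative) deploys with the default runOnce strategy:

jobs:
- deployment: DeployWeb
  displayName: Deploy the web app
  pool:
    vmImage: ubuntu-latest
  environment: staging
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying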

Steps

Steps are a linear sequence of operations that make up a job. Each step runs in its own process on an agent and has access to the pipeline workspace on a local hard drive. This behavior means environment variables aren't preserved between steps, but file system changes are.

steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

For more information about steps, see the schema references for script, bash, pwsh, powershell, checkout, task, and step templates later in this article.

All steps, regardless of whether they're documented in this article, support the following properties:

  • displayName
  • name
  • condition
  • continueOnError
  • enabled
  • env
  • timeoutInMinutes

Variables

You can add hard-coded values directly or reference variable groups. Specify variables at the pipeline, stage, or job level.

For a simple set of hard-coded variables, use this mapping syntax:

variables: { string: string }

To include variable groups, switch to this sequence syntax:

variables:
- name: string  # name of a variable
  value: string # value of the variable
- group: string # name of a variable group

You can repeat name/value pairs and group.
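
For example, this sketch (the variable and group names are illustrative) mixes hard-coded variables with a variable group:

variables:
- name: configuration
  value: release
- group: my-variable-group
- name: platform
  value: x64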

Variables can also be set as read-only to enhance security.

variables:
- name: myReadOnlyVar
  value: myValue
  readonly: true

You can also include variables from templates.

Template references

Note

Be sure to see the full template expression syntax, which covers all forms of ${{ }}.

You can export reusable sections of your pipeline to separate files. These separate files are known as templates. Azure Pipelines supports four kinds of templates: stage, job, step, and variable.

You can also use templates to control what is allowed in a pipeline and to define how parameters can be used.

Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 unique template files in a single pipeline.

Stage templates

You can define a set of stages in one file and use it multiple times in other files.

In the main pipeline:

- template: string # name of template to include
  parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters
stages: [ stage ]

Job templates

You can define a set of jobs in one file and use it multiple times in other files.

In the main pipeline:

- template: string # name of template to include
  parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters
jobs: [ job ]

See templates for more about working with job templates.

Step templates

You can define a set of steps in one file and use it multiple times in another file.

In the main pipeline:

steps:
- template: string  # reference to template
  parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any } # expected parameters
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

See templates for more about working with templates.
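
For instance, an included step template (the file path and parameter name here are illustrative) and the pipeline that consumes it might look like:

# File: templates/build-steps.yml
parameters:
  toolVersion: '1.0'  # default value

steps:
- script: echo Building with tool version ${{ parameters.toolVersion }}

# File: azure-pipelines.yml
steps:
- template: templates/build-steps.yml
  parameters:
    toolVersion: '2.0'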

Variable templates

You can define a set of variables in one file and use it multiple times in other files.

In the main pipeline:

- template: string            # name of template file to include
  parameters: { string: any } # provided parameters

In the included template:

parameters: { string: any }   # expected parameters
variables: [ variable ]

Note

The variables keyword uses two syntax forms: sequence and mapping. In mapping syntax, all keys are variable names and their values are variable values. To use variable templates, you must use sequence syntax. Sequence syntax requires you to specify whether you're mentioning a variable (name), a variable group (group), or a template (template). See the variables topic for more.
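
For instance, a variable template (the file name and variable are illustrative) and a pipeline that includes it might look like:

# File: vars.yml
variables:
- name: favoriteVeggie
  value: brussels sprouts

# File: azure-pipelines.yml
variables:
- template: vars.yml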

Parameters

You can use parameters in templates and pipelines.

The type and name fields are required when defining parameters. See all parameter data types.

parameters:
- name: string          # name of the parameter; required
  type: enum            # data types, see below
  default: any          # default value; if no default, then the parameter MUST be given by the user at runtime
  values: [ string ]    # allowed list of values (for some data types)
  secret: bool          # whether to treat this value as a secret; defaults to false

Types

Data type       Notes
string          string
number          may be restricted to values:, otherwise any number-like string is accepted
boolean         true or false
object          any YAML structure
step            a single step
stepList        sequence of steps
job             a single job
jobList         sequence of jobs
deployment      a single deployment job
deploymentList  sequence of deployment jobs
stage           a single stage
stageList       sequence of stages

The step, stepList, job, jobList, deployment, deploymentList, stage, and stageList data types all use standard YAML schema format. This example includes string, number, boolean, object, step, and stepList.

parameters:
- name: myString
  type: string
  default: a string
- name: myMultiString
  type: string
  default: default
  values:
  - default
  - ubuntu
- name: myNumber
  type: number
  default: 2
  values:
  - 1
  - 2
  - 4
  - 8
  - 16
- name: myBoolean
  type: boolean
  default: true
- name: myObject
  type: object
  default:
    foo: FOO
    bar: BAR
    things:
    - one
    - two
    - three
    nested:
      one: apple
      two: pear
      count: 3
- name: myStep
  type: step
  default:
    script: echo my step
- name: mySteplist
  type: stepList
  default:
    - script: echo step one
    - script: echo step two

trigger: none

jobs: 
- job: stepList
  steps: ${{ parameters.mySteplist }}
- job: myStep
  steps:
    - ${{ parameters.myStep }}

Resources

A resource is any external service that is consumed as part of your pipeline. Examples of resources include:

  • Another CI/CD pipeline that produces artifacts, like Azure Pipelines or Jenkins.
  • Code repositories, like GitHub, Azure Repos, or Git.
  • Container-image registries, like Azure Container Registry or Docker Hub.

Resources in YAML represent sources of pipelines, containers, and repositories. For more information, see the resources topic.

General schema

resources:
  pipelines: [ pipeline ]
  repositories: [ repository ]
  containers: [ container ]

Pipeline resource

If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the pipeline keyword to define a pipeline resource. You can also enable pipeline-completion triggers.

resources:
  pipelines:
  - pipeline: string  # identifier for the pipeline resource
    project:  string # project for the build pipeline; optional input for current project
    source: string  # source pipeline definition name
    branch: string  # branch to pick the artifact, optional; defaults to all branches
    version: string # pipeline run number to pick artifact, optional; defaults to last successfully completed run
    trigger:     # optional; triggers are not enabled by default.
      branches:
        include: [string] # branches to consider the trigger events, optional; defaults to all branches.
        exclude: [string] # branches to discard the trigger events, optional; defaults to none.

Important

When you define a resource trigger, if its pipeline resource is from the same repo as the current pipeline, triggering follows the same branch and commit on which the event is raised. But if the pipeline resource is from a different repo, the current pipeline is triggered on the master branch.

The pipeline resource metadata as predefined variables

In each run, the metadata for a pipeline resource is available to all jobs as these predefined variables:

resources.pipeline.<Alias>.projectName
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID

You can consume artifacts from a pipeline resource by using a download task. See the download keyword.

Container resource

Container jobs let you isolate your tools and dependencies inside a container. The agent launches an instance of your specified container then runs steps inside it. The container keyword lets you specify your container images.

Service containers run alongside a job to provide various dependencies like databases.

resources:
  containers:
  - container: string  # identifier (A-Z, a-z, 0-9, and underscore)
    image: string  # container image name
    options: string  # arguments to pass to container at startup
    endpoint: string  # reference to a service connection for the private registry
    env: { string: string }  # list of environment variables to add
    ports: [ string ] # ports to expose on the container
    volumes: [ string ] # volumes to mount on the container
    mapDockerSocket: bool # whether to map in the Docker daemon socket; defaults to true
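
For example, this sketch (the alias and image are illustrative) declares a container resource and runs a job's steps inside it by referencing the alias:

resources:
  containers:
  - container: linux
    image: ubuntu:16.04

jobs:
- job: RunInContainer
  pool:
    vmImage: ubuntu-latest
  container: linux
  steps:
  - script: printenv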

Repository resource

If your pipeline has templates in another repository, or if you want to use multi-repo checkout with a repository that requires a service connection, you must let the system know about that repository. The repository keyword lets you specify an external repository.

resources:
  repositories:
  - repository: string  # identifier (A-Z, a-z, 0-9, and underscore)
    type: enum  # see the following "Type" topic
    name: string  # repository name (format depends on `type`)
    ref: string  # ref name to use; defaults to 'refs/heads/master'
    endpoint: string  # name of the service connection to use (for types that aren't Azure Repos)
    trigger:  # CI trigger for this repository, no CI trigger if skipped (only works for Azure Repos)
      branches:
        include: [ string ] # branch names which will trigger a build
        exclude: [ string ] # branch names which will not
      tags:
        include: [ string ] # tag names which will trigger a build
        exclude: [ string ] # tag names which will not
      paths:
        include: [ string ] # file paths which must match to trigger a build
        exclude: [ string ] # file paths which will not trigger a build

Type

Pipelines support the following values for the repository type: git, github, and bitbucket. The git type refers to Azure Repos Git repos.

  • If you specify type: git, the name value refers to another repository in the same project. An example is name: otherRepo. To refer to a repo in another project within the same organization, prefix the name with that project's name. An example is name: OtherProject/otherRepo.

  • If you specify type: github, the name value is the full name of the GitHub repo and includes the user or organization. An example is name: Microsoft/vscode. GitHub repos require a GitHub service connection for authorization.

  • If you specify type: bitbucket, the name value is the full name of the Bitbucket Cloud repo and includes the user or organization. An example is name: MyBitbucket/vscode. Bitbucket Cloud repos require a Bitbucket Cloud service connection for authorization.
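
For example, this sketch (the alias, repository name, and service connection name are illustrative) declares a GitHub repository so that its templates can be used in the pipeline:

resources:
  repositories:
  - repository: templates
    type: github
    name: Contoso/BuildTemplates
    endpoint: my-github-connection
    ref: refs/heads/master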

Triggers

Note

Trigger blocks can't contain variables or template expressions.

Push trigger

A push trigger specifies which branches cause a continuous integration build to run. If you specify no push trigger, pushes to any branch trigger a build. Learn more about triggers and how to specify them.

There are three distinct syntax options for the trigger keyword: a list of branches to include, a way to disable CI triggers, and the full syntax for complete control.

List syntax:

trigger: [ string ] # list of branch names

Disablement syntax:

trigger: none # will disable CI builds entirely

Full syntax:

trigger:
  batch: boolean # batch changes if true; start a new build for every push if false (default)
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  tags:
    include: [ string ] # tag names which will trigger a build
    exclude: [ string ] # tag names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

If you specify an exclude clause without an include clause for branches, tags, or paths, it is equivalent to specifying * in the include clause.
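
For example, this sketch batches builds on master and any releases branch except those starting with releases/old, and skips builds for docs-only changes:

trigger:
  batch: true
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*
  paths:
    exclude:
    - docs/*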

Important

When you specify a trigger, only branches that you explicitly configure for inclusion trigger a pipeline. Inclusions are processed first, and then exclusions are removed from that list. If you specify an exclusion but no inclusions, nothing triggers.

PR trigger

A pull request trigger specifies which branches cause a pull request build to run. If you specify no pull request trigger, pull requests to any branch trigger a build. Learn more about pull request triggers and how to specify them.

Important

YAML PR triggers are supported only in GitHub and Bitbucket Cloud. If you use Azure Repos Git, you can configure a branch policy for build validation to trigger your build pipeline for validation.

There are three distinct syntax options for the pr keyword: a list of branches to include, a way to disable PR triggers, and the full syntax for complete control.

List syntax:

pr: [ string ] # list of branch names

Disablement syntax:

pr: none # will disable PR builds entirely; will not disable CI triggers

Full syntax:

pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR. Defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build

If you specify an exclude clause without an include clause for branches or paths, it is equivalent to specifying * in the include clause.
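
For example, this sketch validates pull requests into master and releases branches, skips PRs that touch only the docs folder, and keeps in-progress runs alive when more commits are pushed to the PR:

pr:
  autoCancel: false
  branches:
    include:
    - master
    - releases/*
  paths:
    exclude:
    - docs/*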

Important

When you specify a pull request trigger, only branches that you explicitly configure for inclusion trigger a pipeline. Inclusions are processed first, and then exclusions are removed from that list. If you specify an exclusion but no inclusions, nothing triggers.

Scheduled trigger

A scheduled trigger specifies a schedule on which branches are built. If you specify no scheduled trigger, no scheduled builds occur. Learn more about scheduled triggers and how to specify them.

schedules:
- cron: string # cron syntax defining a schedule in UTC time
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
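
For example, this sketch builds master at midnight UTC every day, even when the code hasn't changed since the last scheduled run:

schedules:
- cron: '0 0 * * *'
  displayName: Daily midnight build
  branches:
    include:
    - master
  always: true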

Important

When you specify a scheduled trigger, only branches that you explicitly configure for inclusion are scheduled for a build. Inclusions are processed first, and then exclusions are removed from that list. If you specify an exclusion but no inclusions, no branches are built.

Pipeline trigger

Pipeline completion triggers are configured using a pipeline resource. For more information, see Pipeline completion triggers.

Pool

The pool keyword specifies which pool to use for a job of the pipeline. A pool specification also holds information about the job's strategy for running.

You can specify a pool at the pipeline, stage, or job level.

The pool specified at the lowest level of the hierarchy is used to run the job.

The full syntax is:

pool:
  name: string  # name of the pool to run this job in
  demands: string | [ string ]  # see the following "Demands" topic
  vmImage: string # name of the VM image you want to use; valid only in the Microsoft-hosted pool

If you use a Microsoft-hosted pool, choose an available virtual machine image.

If you use a private pool and don't need to specify demands, you can shorten the syntax to:

pool: string # name of the private pool to run this job in

Learn more about conditions and timeouts.

Demands

The demands keyword is supported by private pools. You can check for the existence of a capability or a specific string.

pool:
  demands: [ string ]
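
For example, this sketch (the pool name is illustrative) checks for the existence of one capability and for a specific value of another:

pool:
  name: MyPrivatePool
  demands:
  - npm                      # check for existence of a capability
  - Agent.OS -equals Linux   # check for a specific string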

Environment

The environment keyword specifies the environment or its resource that is targeted by a deployment job of the pipeline. An environment also holds information about the deployment strategy for running the steps defined inside the job.

The full syntax is:

environment:                # create environment and/or record deployments
  name: string              # name of the environment to run this job on.
  resourceName: string      # name of the resource in the environment to record the deployments against
  resourceId: number        # resource identifier
  resourceType: string      # type of the resource you want to target. Supported types - virtualMachine, Kubernetes
  tags: string | [ string ] # tag names to filter the resources in the environment
strategy:                 # deployment strategy
  runOnce:                # default strategy
    deploy:
      steps:
      - script: echo Hello world

If you specify an environment or one of its resources but don't need to specify other properties, you can shorten the syntax to:

environment: environmentName.resourceName
strategy:                 # deployment strategy
  runOnce:              # default strategy
    deploy:
      steps:
      - script: echo Hello world

Server

The server value specifies a server job. Only server tasks like invoking an Azure function app can be run in a server job.

When you use server, a job runs as a server job rather than an agent job.

pool: server

Script

The script keyword is a shortcut for the command-line task. The task runs a script using cmd.exe on Windows and Bash on other platforms.

steps:
- script: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string  # initial working directory for the step
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum  # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- script:
  target: string  # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.
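
For example, this sketch runs a short multi-line script and gives the step a friendly name; the variables it echoes are predefined by the system:

steps:
- script: |
    echo Build reason: $(Build.Reason)
    echo Working directory: $(System.DefaultWorkingDirectory)
  displayName: Show build context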

Bash

The bash keyword is a shortcut for the shell script task. The task runs a script in Bash on Windows, macOS, and Linux.

steps:
- bash: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string  # initial working directory for the step
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum  # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- bash:
  target: string  # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.

pwsh

The pwsh keyword is a shortcut for the PowerShell task when that task's pwsh value is set to true. The task runs a script in PowerShell Core on Windows, macOS, and Linux.

steps:
- pwsh: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum  # see the following "Error action preference" topic
  ignoreLASTEXITCODE: boolean  # see the following "Ignore last exit code" topic
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string  # initial working directory for the step
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Learn more about conditions and timeouts.

PowerShell

The powershell keyword is a shortcut for the PowerShell task. The task runs a script in Windows PowerShell.

steps:
- powershell: string  # contents of the script to run
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum  # see the following "Error action preference" topic
  ignoreLASTEXITCODE: boolean  # see the following "Ignore last exit code" topic
  failOnStderr: boolean  # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string  # initial working directory for the step
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string }  # list of environment variables to add

Learn more about conditions and timeouts.

Error action preference

Unless otherwise specified, the error action preference defaults to the value stop, and the line $ErrorActionPreference = 'stop' is prepended to the top of your script.

When the error action preference is set to stop, errors cause PowerShell to terminate the task and return a nonzero exit code. The task is also marked as Failed.

errorActionPreference: stop | continue | silentlyContinue

Ignore last exit code

The last exit code returned from your script is checked by default. A nonzero code indicates a step failure, in which case the system appends your script with:

if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE }

If you don't want this behavior, specify ignoreLASTEXITCODE: true.

ignoreLASTEXITCODE: boolean

Learn more about conditions and timeouts.

Publish

The publish keyword is a shortcut for the Publish Pipeline Artifact task. The task publishes (uploads) a file or folder as a pipeline artifact that other jobs and pipelines can consume.

steps:
- publish: string # path to a file or folder
  artifact: string # artifact name

Learn more about publishing artifacts.
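
For example, this sketch (the folder path and artifact name are illustrative) publishes a build output folder as an artifact named WebApp:

steps:
- publish: $(System.DefaultWorkingDirectory)/bin
  artifact: WebApp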

Download

The download keyword is a shortcut for the Download Pipeline Artifact task. The task downloads artifacts associated with the current run or from another Azure pipeline that is associated as a pipeline resource.

steps:
- download: [ current | pipeline resource identifier | none ] # disable automatic download if "none"
  artifact: string # artifact name, optional; downloads all the available artifacts if not specified
  patterns: string # patterns representing files to include; optional
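
For example, this sketch (the artifact name is illustrative) downloads only the zip files from the WebApp artifact published by the current run:

steps:
- download: current
  artifact: WebApp
  patterns: '**/*.zip'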

Artifact download location

Artifacts from the current pipeline are downloaded to $(Pipeline.Workspace)/.

Artifacts from the associated pipeline resource are downloaded to $(Pipeline.Workspace)/<pipeline resource identifier>/.

Automatic download in deployment jobs

All available artifacts from the current pipeline and from the associated pipeline resources are automatically downloaded in deployment jobs and made available for your deployment. To prevent downloads, specify download: none.

Learn more about downloading artifacts.

Checkout

Nondeployment jobs automatically check out source code. Use the checkout keyword to configure or suppress this behavior.

steps:
- checkout: self | none | repository name # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean  # if true, execute `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number  # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean  # whether to download Git-LFS files; defaults to false
  submodules: true | recursive  # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string  # path to check out source code, relative to the agent's build directory (e.g. _work\1); defaults to a directory called `s`
  persistCredentials: boolean  # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false

To avoid syncing sources at all:

steps:
- checkout: none

Note

If you're running the agent in the Local Service account and want to modify the current repository by using git operations or loading git submodules, give the proper permissions to the Project Collection Build Service Accounts user.

- checkout: self
  submodules: true
  persistCredentials: true

To check out multiple repositories in your pipeline, use multiple checkout steps:

- checkout: self
- checkout: git://MyProject/MyRepo
- checkout: MyGitHubRepo # Repo declared in a repository resource

For more information, see Check out multiple repositories in your pipeline.

Task

Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.

steps:
- task: string  # reference to a task and version, e.g. "VSBuild@1"
  displayName: string  # friendly name displayed in the UI
  name: string  # identifier for this step (A-Z, a-z, 0-9, and underscore)
  condition: string
  continueOnError: boolean  # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean  # whether to run this step; defaults to 'true'
  target:
    container: string # where this step will run; values are the container name or the word 'host'
    commands: enum  # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  inputs: { string: string }  # task-specific inputs
  env: { string: string }  # list of environment variables to add

If you don't specify a command mode, you can shorten the target structure to:

- task:
  target: string  # container name or the word 'host'

Learn more about conditions, timeouts, and step targets.
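
For example, this sketch (the inputs shown are illustrative) runs the VSBuild task against every solution file in the repository:

steps:
- task: VSBuild@1
  displayName: Build solutions
  inputs:
    solution: '**/*.sln'
    configuration: release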

Syntax highlighting

Syntax highlighting is available for the pipeline schema via a Visual Studio Code extension. You can download Visual Studio Code, install the extension, and check out the project on GitHub. The extension includes a JSON schema for validation.

You also can obtain a schema that's specific to your organization (that is, it contains installed custom tasks) from the Azure DevOps REST API yamlschema endpoint.