YAML schema reference
Azure Pipelines
Here's a detailed reference guide to Azure Pipelines YAML pipelines, including a catalog of all supported YAML capabilities and the available options.
The best way to get started with YAML pipelines is through the quickstart guide. After that, to learn how to configure your YAML pipeline the way you need it to work, see conceptual topics such as Build variables and Jobs.
Pipeline structure
Pipelines are made of one or more stages describing a CI/CD process. Stages are the major divisions in a pipeline: "build this app", "run these tests", and "deploy to pre-production" are good examples of stages.
Stages consist of one or more jobs, which are units of work assignable to a particular machine. Both stages and jobs may be arranged into dependency graphs: "run this stage before that one" or "this job depends on the output of that job".
Jobs consist of a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of the YAML file.
- Pipeline
  - Stage A
    - Job 1
      - Step 1.1
      - Step 1.2
      - ...
    - Job 2
      - Step 2.1
      - Step 2.2
      - ...
  - Stage B
    - ...
For simpler pipelines, not all of these levels are required. For example, in a single-job build, you can omit the containers for "stages" and "jobs" since there are only steps. Also, many options shown here are optional and have good defaults, so your YAML definitions are unlikely to include all of them.
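For example, here's a minimal sketch of a two-stage pipeline (the stage and job names are illustrative):

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Test
  jobs:
  - job: TestJob
    steps:
    - script: echo Testing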
Conventions
Conventions used in this topic:
- To the left of `:` are literal keywords used in pipeline definitions.
- To the right of `:` are data types. These can be primitives like string or references to rich structures defined elsewhere in this topic.
- `[ datatype ]` indicates an array of the mentioned data type. For instance, `[ string ]` is an array of strings.
- `{ datatype: datatype }` indicates a mapping of one data type to another. For instance, `{ string: string }` is a mapping of strings to strings.
- `|` indicates there are multiple data types available for the keyword. For instance, `job | templateReference` means either a job definition or a template reference is allowed.
YAML basics
This document covers the schema of an Azure Pipelines YAML file. To learn the basics of YAML, see Learn YAML in Y Minutes. Note: Azure Pipelines doesn't support all features of YAML, such as anchors, complex keys, and sets.
Pipeline
name: string # build numbering format
resources:
  pipelines: [ pipelineResource ]
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable | templateReference ]
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
If you have a single stage, you can omit `stages` and directly specify jobs:
# ... other pipeline-level keywords
jobs: [ job | templateReference ]
If you have a single stage and a single job, you can omit those keywords and directly specify steps:
# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
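Putting these together, a minimal single-job pipeline might look like this (the branch name, VM image, and script are illustrative):

trigger:
- master
pool:
  vmImage: 'ubuntu-16.04'
steps:
- script: echo Hello, world!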
Learn more about multi-job pipelines, using containers and repositories in pipelines, triggers, variables, and build number formats.
Stage
A stage is a collection of related jobs. By default, stages run sequentially, starting only after the stage ahead of them has completed.
You can manually control when a stage should run using approval checks. This is commonly used to control deployments to production environments. Checks are a mechanism available to the resource owner to control if and when a stage in a pipeline can consume a resource. As an owner of a resource, such as an environment, you can define checks that must be satisfied before a stage consuming that resource can start.
Currently, manual approval checks are supported on environments. For more information, see Approvals.
stages:
- stage: string # name of the stage, A-Z, a-z, 0-9, and underscore
  displayName: string # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  variables: { string: string } | [ variable | variableReference ]
  jobs: [ job | templateReference ]
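For instance, a sketch of two stages where the second depends on the first (names are illustrative):

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building
- stage: Deploy
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job: DeployJob
    steps:
    - script: echo Deploying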
Learn more about stages, conditions, and variables.
Job
A job is a collection of steps to be run by an agent or on the server. Jobs can be run conditionally, and they may depend on earlier jobs.
jobs:
- job: string # name of the job, A-Z, a-z, 0-9, and underscore
  displayName: string # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  strategy:
    parallel: # parallel strategy, see below
    matrix: # matrix strategy, see below
    maxParallel: number # maximum number of matrix jobs to run simultaneously
  continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
  pool: pool # see pool schema
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  container: containerReference # container to run this job inside
  timeoutInMinutes: number # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
  variables: { string: string } | [ variable | variableReference ]
  steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
  services: { string: string | container } # container resources to run as a service container
For more information about workspace, including clean options, see the workspace section in Jobs.
Learn more about variables, steps, pools, and server jobs.
Note
If you have only one stage and one job, you can use single-job syntax as a shorter way to describe the steps to run.
Container reference
`container` is supported by jobs.

container: string # Docker Hub image reference or resource alias

container:
  image: string # container image name
  options: string # arguments to pass to container at startup
  endpoint: string # endpoint for a private container registry
  env: { string: string } # list of environment variables to add
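For example, to run a job's steps inside a container (the image name is illustrative):

jobs:
- job: RunInContainer
  pool:
    vmImage: 'ubuntu-16.04'
  container: ubuntu:16.04
  steps:
  - script: printenv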
Strategies
`matrix` and `parallel` are mutually exclusive strategies for duplicating a job.
Matrix
Matrixing generates copies of a job with different inputs. This is useful for testing against different configurations or platform versions.
strategy:
  matrix: { string1: { string2: string3 } }
  maxParallel: number
For each `string1` in the matrix, a copy of the job will be generated. `string1` is the copy's name and will be appended to the name of the job. For each `string2`, a variable called `string2` with the value `string3` will be available to the job.
Note
Matrix configuration names must contain only basic Latin alphabet letters (A-Z, a-z), numbers, and underscores (`_`). They must start with a letter, and they must be 100 characters or less.
Optionally, `maxParallel` specifies the maximum number of simultaneous matrix legs to run at once. If not specified or set to 0, no limit will be applied.
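For example, this matrix generates three copies of the job, each with a different value for the `imageName` variable (the configuration names and values are illustrative):

strategy:
  matrix:
    linux:
      imageName: 'ubuntu-16.04'
    mac:
      imageName: 'macos-10.14'
    windows:
      imageName: 'vs2017-win2016'
  maxParallel: 2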
Parallel
This specifies how many duplicates of the job should run. This is useful for slicing up a large test matrix. The VS Test task understands how to divide the test load across the number of jobs scheduled.
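A sketch of the syntax, matching the `parallel` entry in the job strategy schema above:

strategy:
  parallel: number # number of duplicates of the job to run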
Deployment job
A deployment job is a special type of job that is a collection of steps to be run sequentially against the environment. In YAML pipelines, we recommend that you put your deployment steps in a deployment job.
jobs:
- deployment: string # name of the deployment job, A-Z, a-z, 0-9, and underscore
  displayName: string # friendly name to display in the UI
  pool: # see pool schema
    name: string
    demands: string | [ string ]
  dependsOn: string
  condition: string
  continueOnError: boolean # 'true' if future jobs should run even if this job fails; defaults to 'false'
  timeoutInMinutes: nonEmptyString # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: nonEmptyString # how much time to give 'run always even if cancelled tasks' before killing them
  variables: { string: string } | [ variable | variableReference ]
  environment: string # target environment name and optionally a resource-name to record the deployment history; format: <environment-name>.<resource-name>
  strategy:
    runOnce:
      deploy:
        steps:
        - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
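For example, a minimal deployment job sketch (the job name and environment name are illustrative):

jobs:
- deployment: DeployWeb
  pool:
    vmImage: 'ubuntu-16.04'
  environment: smarthotel-dev # hypothetical environment name
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to dev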
Steps
Steps are a linear sequence of operations that make up a job. Each step runs in its own process on an agent and has access to the pipeline workspace on disk. This means environment variables are not preserved between steps but filesystem changes are.
See the schema references for script, bash, pwsh, powershell, checkout, task, and step templates for more details about each.
All steps, whether documented below or not, allow the following properties:
- `displayName`
- `name`
- `condition`
- `continueOnError`
- `enabled`
- `env`
- `target`
- `timeoutInMinutes`
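For example, these common properties could be set on any step (the script and values are illustrative):

steps:
- script: ./cleanup.sh # hypothetical script
  displayName: Clean up temporary files
  name: cleanup
  condition: always() # run even if an earlier step failed
  continueOnError: true
  timeoutInMinutes: 5
  env:
    LOG_LEVEL: verbose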
Variables
Hardcoded values can be added directly, or variable groups can be referenced. Variables may be specified at the pipeline, stage, or job level.
For a simple set of hardcoded variables:
variables: { string: string }
To include variable groups, switch to this list syntax:
variables:
- name: string # name of a variable
  value: any # value of the variable
- group: string # name of a variable group

`name`/`value` pairs and `group`s can be repeated.
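For example (the variable and group names are illustrative):

variables:
- name: configuration
  value: Release
- group: my-variable-group # hypothetical variable group name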
Variables may also be included from templates.
Template references
Note
Be sure to see the full template expression syntax (all forms of `${{ }}`).
You can export reusable sections of your pipeline to a separate file. These separate files are known as templates. Azure Pipelines supports four kinds of templates: stage, job, step, and variable.
Templates may themselves include other templates. Azure Pipelines supports a maximum of 50 unique template files in a single pipeline.
Stage templates
A set of stages can be defined in one file and used in multiple places in other files.
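In the main pipeline, the reference follows the same pattern as the job and step templates below; a sketch:

stages:
- template: string # name of template to include
  parameters: { string: any } # provided parameters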
Job templates
A set of jobs can be defined in one file and used in multiple places in other files.
In the main pipeline:
jobs:
- template: string # name of template to include
  parameters: { string: any } # provided parameters
And in the included template:
parameters: { string: any } # expected parameters
jobs: [ job ]
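For instance, a sketch using a hypothetical template file `jobs/build.yml`:

# azure-pipelines.yml
jobs:
- template: jobs/build.yml
  parameters:
    name: macOS
    pool:
      vmImage: 'macOS-10.14'

# jobs/build.yml
parameters:
  name: ''
  pool: {}
jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test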
See templates for more about working with job templates.
Step templates
A set of steps can be defined in one file and used in multiple places in another file.
In the main pipeline:
steps:
- template: string # reference to template
  parameters: { string: any } # provided parameters
And in the included template:
parameters: { string: any } # expected parameters
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
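For instance, with a hypothetical template file `steps/build.yml`:

# azure-pipelines.yml
steps:
- template: steps/build.yml
  parameters:
    platform: 'x64'

# steps/build.yml
parameters:
  platform: ''
steps:
- script: echo Building for ${{ parameters.platform }}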
See templates for more about working with templates.
Variable templates
A set of variables can be defined in one file and referenced several times in other files.
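For example, with a hypothetical variable template file `vars.yml`:

# vars.yml
variables:
- name: configuration
  value: Release

# azure-pipelines.yml
variables:
- template: vars.yml
steps:
- script: echo Building in $(configuration) configuration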
Resources
Any external service that is consumed as part of your pipeline is a resource. Examples of resources include another CI/CD pipeline that produces artifacts (Azure Pipelines, Jenkins, and so on), code repositories (GitHub, Azure Repos Git), and container image registries (ACR, Docker Hub, and so on).
Resources in YAML represent sources of pipelines, repositories, and containers.
General schema
resources:
  pipelines: [ pipeline ]
  repositories: [ repository ]
  containers: [ container ]
Pipeline resource
If you have an Azure pipeline that produces artifacts, you can consume the artifacts by defining a `pipeline` resource. You can also enable pipeline completion triggers.
resources:
  pipelines:
  - pipeline: string # identifier for the pipeline resource
    project: string # project for the build pipeline; optional input for current project
    source: string # source pipeline definition name
    branch: string # branch to pick the artifact; optional; defaults to all branches
    version: string # pipeline run number to pick artifact; optional; defaults to last successfully completed run
    trigger: # optional; triggers are not enabled by default
      branches:
        include: [ string ] # branches to consider the trigger events; optional; defaults to all branches
        exclude: [ string ] # branches to discard the trigger events; optional; defaults to none
Important
When you define the resource trigger, if the `pipeline` resource is from the same repo as the current pipeline, we will follow the same branch and commit on which the event is raised. However, if the `pipeline` resource is from a different repo, the current pipeline is triggered on the master branch.
`pipeline` resource metadata as predefined variables
In each run, the metadata for a `pipeline` resource is available to all jobs as the following predefined variables:
resources.pipeline.<Alias>.projectName
resources.pipeline.<Alias>.projectID
resources.pipeline.<Alias>.pipelineName
resources.pipeline.<Alias>.pipelineID
resources.pipeline.<Alias>.runName
resources.pipeline.<Alias>.runID
resources.pipeline.<Alias>.runURI
resources.pipeline.<Alias>.sourceBranch
resources.pipeline.<Alias>.sourceCommit
resources.pipeline.<Alias>.sourceProvider
resources.pipeline.<Alias>.requestedFor
resources.pipeline.<Alias>.requestedForID
You can consume artifacts from a `pipeline` resource by using the `download` task. See the download keyword.
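For example, to download a specific artifact from a pipeline resource declared with the alias `SmartHotel` (the alias, artifact name, and pattern are illustrative):

steps:
- download: SmartHotel # pipeline resource identifier
  artifact: WebTier1 # artifact name
  patterns: '**/*.zip' # only download matching files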
Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent will launch an instance of your specified container, then run steps inside it. The `container` resource lets you specify your container images.
Service containers run alongside a job to provide various dependencies such as databases.
resources:
  containers:
  - container: string # identifier (A-Z, a-z, 0-9, and underscore)
    image: string # container image name
    options: string # arguments to pass to container at startup
    endpoint: string # reference to a service connection for the private registry
    env: { string: string } # list of environment variables to add
    ports: [ string ] # ports to expose on the container
    volumes: [ string ] # volumes to mount on the container
Repository resource
If your pipeline has templates in another repository, or you want to use multi-repo checkout with a repository that requires a service connection, you must let the system know about that repository. The `repository` resource lets you specify an external repository.
resources:
  repositories:
  - repository: string # identifier (A-Z, a-z, 0-9, and underscore)
    type: enum # see below
    name: string # repository name (format depends on `type`)
    ref: string # ref name to use; defaults to 'refs/heads/master'
    endpoint: string # name of the service connection to use (for non-Azure Repos types)
Type
Pipelines support the following types of repositories: `git`, `github`, and `bitbucket`. `git` refers to Azure Repos Git repos.
- If you choose `git` as your type, then `name` refers to another repository in the same project. For example, `otherRepo`. To refer to a repo in another project within the same organization, prefix the name with that project's name. For example, `OtherProject/otherRepo`.
- If you choose `github` as your type, then `name` is the full name of the GitHub repo including the user or organization. For example, `Microsoft/vscode`. GitHub repos require a GitHub service connection for authorization.
- If you choose `bitbucket` as your type, then `name` is the full name of the Bitbucket Cloud repo including the user or organization. For example, `MyBitBucket/vscode`. Bitbucket Cloud repos require a Bitbucket Cloud service connection for authorization.
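For example, a repository resource for a hypothetical GitHub repo (the identifier, repo name, and service connection name are illustrative):

resources:
  repositories:
  - repository: common # alias used in checkout steps and template references
    type: github
    name: Contoso/CommonTools
    endpoint: MyGitHubServiceConnection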
Triggers
Note
Trigger blocks cannot contain variables or template expressions.
Push trigger
A trigger specifies what branches will cause a continuous integration build to run. If left unspecified, pushes to every branch will trigger a build. Learn more about triggers and how to specify them. Also, be sure to see the note about wildcards in triggers.
There are three distinct options for `trigger`: a list of branches to include, a way to disable CI triggering, and the full syntax for ultimate control.
List syntax:
trigger: [ string ] # list of branch names
Disable syntax:
trigger: none # will disable CI builds entirely
Full syntax:
trigger:
  batch: boolean # batch changes if true (the default); start a new build for every push if false
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  tags:
    include: [ string ] # tag names which will trigger a build
    exclude: [ string ] # tag names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build
Important
When you specify a `trigger`, only branches that are explicitly configured to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, nothing will trigger.
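For example, this trigger runs for pushes to master and to any release branch, but not for changes that touch only docs (branch and path names are illustrative):

trigger:
  batch: true
  branches:
    include:
    - master
    - releases/*
  paths:
    exclude:
    - docs/*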
PR trigger
A pull request trigger specifies what branches will cause a pull request build to run. If left unspecified, pull requests to every branch will trigger a build. Learn more about pull request triggers and how to specify them.
Important
YAML PR triggers are only supported in GitHub and Bitbucket Cloud. If you are using Azure Repos Git, you can configure a branch policy for build validation in order to trigger your build pipeline for validation.
There are three distinct options for `pr`: a list of branches to include, a way to disable PR triggering, and the full syntax for ultimate control.
List syntax:
pr: [ string ] # list of branch names
Disable syntax:
pr: none # will disable PR builds entirely; will not disable CI triggers
Full syntax:
pr:
  autoCancel: boolean # indicates whether additional pushes to a PR should cancel in-progress runs for the same PR; defaults to true
  branches:
    include: [ string ] # branch names which will trigger a build
    exclude: [ string ] # branch names which will not
  paths:
    include: [ string ] # file paths which must match to trigger a build
    exclude: [ string ] # file paths which will not trigger a build
Important
When you specify a `pr` trigger, only branches that are explicitly configured to be included will trigger a pipeline. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, nothing will trigger.
Scheduled trigger
A scheduled trigger specifies a schedule on which branches will be built. If left unspecified, no scheduled builds will occur. Learn more about scheduled triggers and how to specify them.
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last run; the default is false
Important
When you specify a scheduled trigger, only branches that are explicitly configured to be included are scheduled for a build. Includes are processed first, and then excludes are removed from that list. If you specify an exclude but don't specify any includes, no branches will be built.
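For example, a nightly schedule for master (the cron expression and display name are illustrative):

schedules:
- cron: "0 0 * * *" # run at midnight UTC every day
  displayName: Daily midnight build
  branches:
    include:
    - master
  always: false # only run if there were changes since the last run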
Pool
`pool` specifies which pool to use for a job of the pipeline. It also holds information about the job's strategy for running.
Full syntax:
pool:
  name: string # name of the pool to run this job in
  demands: string | [ string ] # see below
  vmImage: string # name of the VM image you want to use; only valid in the Microsoft-hosted pool
If you're using a Microsoft-hosted pool, then choose an available `vmImage`.
If you're using a private pool and don't need to specify demands, this can be shortened to:
pool: string # name of the private pool to run this job in
Learn more about conditions and timeouts.
Demands
`demands` is supported by private pools. You can check for the existence of a capability or for a specific string in a capability.
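For example (the pool name and custom capability are illustrative):

pool:
  name: MyPrivatePool
  demands: myCustomCapability # check for existence of a capability

pool:
  name: MyPrivatePool
  demands: # multiple demands
  - myCustomCapability
  - agent.os -equals Windows_NT # check for a specific string in a capability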
Environment
`environment` specifies the environment or its resource that is to be targeted by a deployment job of the pipeline. It also holds information about the deployment strategy for running the steps defined inside the job.
Full syntax:
environment: # create environment and/or record deployments
  name: string # name of the environment to run this job on
  resourceName: string # name of the resource in the environment to record the deployments against
  resourceId: number # resource identifier
  resourceType: string # type of the resource you want to target; supported types: virtualMachine, Kubernetes, appService
  tags: string | [ string ] # tag names to filter the resources in the environment
strategy: # deployment strategy
  runOnce: # default strategy
    deploy:
      steps:
      - script: echo Hello world
If you're specifying an environment or one of its resources and don't need to specify other properties, you can shorten this to:

environment: environmentName.resourceName
strategy: # deployment strategy
  runOnce: # default strategy
    deploy:
      steps:
      - script: echo Hello world
Server
`server` specifies a server job.
Only server tasks such as invoking an Azure Function can be run in a server job.
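For example, a sketch of a server job (the job name is illustrative; the Delay task is one of the server tasks):

jobs:
- job: WaitBeforeDeploy
  pool: server # run on the server rather than on an agent
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '5'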
Script
`script` is a shortcut for the command line task. It will run a script using cmd.exe on Windows and Bash on other platforms.
steps:
- script: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string # initial working directory for the step
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  target:
    container: string # where this step will run; container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add
If you aren't specifying a command mode, `target` can be shortened to:

- script:
  target: string # container name or the word 'host'
Learn more about conditions, timeouts, and step targets.
Bash
`bash` is a shortcut for the shell script task. It will run a script in Bash on Windows, macOS, or Linux.
steps:
- bash: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  workingDirectory: string # initial working directory for the step
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  target:
    container: string # where this step will run; container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add
If you aren't specifying a command mode, `target` can be shortened to:

- bash:
  target: string # container name or the word 'host'
Learn more about conditions, timeouts, and step targets.
Pwsh
`pwsh` is a shortcut for the PowerShell task with `pwsh` set to `true`. It will run a script in PowerShell Core on Windows, macOS, or Linux.
steps:
- pwsh: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum # see below
  ignoreLASTEXITCODE: boolean # see below
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string # initial working directory for the step
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add
Learn more about conditions and timeouts.
PowerShell
`powershell` is a shortcut for the PowerShell task. It will run a script in PowerShell on Windows.
steps:
- powershell: string # contents of the script to run
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  errorActionPreference: enum # see below
  ignoreLASTEXITCODE: boolean # see below
  failOnStderr: boolean # if the script writes to stderr, should that be treated as the step failing?
  workingDirectory: string # initial working directory for the step
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  timeoutInMinutes: number
  env: { string: string } # list of environment variables to add
Learn more about conditions and timeouts.
Error action preference
Unless specified, the task defaults the error action preference to `stop`. The line `$ErrorActionPreference = 'stop'` is prepended to the top of your script.
When the error action preference is set to stop, errors will cause PowerShell to terminate and return a non-zero exit code. The task will also be marked as Failed.
Ignore last exit code
By default, the last exit code returned from your script will be checked and, if non-zero, treated as a step failure. The system will append your script with:
if ((Test-Path -LiteralPath variable:\LASTEXITCODE)) { exit $LASTEXITCODE }
If you don't want this behavior, set `ignoreLASTEXITCODE` to `true`.
Learn more about conditions and timeouts.
Publish
`publish` is a shortcut for the Publish Pipeline Artifact task. It will publish (upload) a file or folder as a pipeline artifact that can be consumed by other jobs and pipelines.
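A minimal sketch of the syntax (the path and artifact name are placeholders):

steps:
- publish: string # path to a file or folder to publish
  artifact: string # artifact name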
Learn more about publishing artifacts.
Download
`download` is a shortcut for the Download Pipeline Artifact task. It will download one or more artifacts associated with the current run, or from another Azure pipeline that is associated as a `pipeline` resource.
steps:
- download: [ current | pipeline resource identifier | none ] # disable automatic download if "none"
  artifact: string # artifact name; optional; downloads all the available artifacts if not specified
  patterns: string # patterns representing files to include; optional
Artifact download location
Artifacts from the current pipeline are downloaded to `$(Pipeline.Workspace)/`. Artifacts from the associated `pipeline` resource are downloaded to `$(Pipeline.Workspace)/<pipeline resource identifier>/`.
Automatic download in deployment jobs
All available artifacts from the current pipeline and from the associated pipeline resources are automatically downloaded in deployment jobs and made available for your deployment. However, you can choose not to download them by specifying `download: none`.
Learn more about downloading artifacts.
Checkout
Non-deployment jobs automatically check out source code. You can configure or suppress this behavior with `checkout`.
steps:
- checkout: self | none | repository name # self represents the repo where the initial Pipelines YAML file was found
  clean: boolean # if true, run `git clean -ffdx && git reset --hard HEAD` before fetching
  fetchDepth: number # the depth of commits to ask Git to fetch; defaults to no limit
  lfs: boolean # whether to download Git-LFS files; defaults to false
  submodules: true | recursive # set to 'true' for a single level of submodules or 'recursive' to get submodules of submodules; defaults to not checking out submodules
  path: string # path to check out source code, relative to the agent's build directory (e.g. \_work\1); defaults to a directory called `s`
  persistCredentials: boolean # if 'true', leave the OAuth token in the Git config after the initial fetch; defaults to false
Or to avoid syncing sources at all:
steps:
- checkout: none
Note
If you want to modify the current repository using Git operations or load Git submodules, make sure to give the proper permissions to the "Project Collection Build Service Accounts" user if you're running the agent under the Local Service account.
steps:
- checkout: self
  submodules: true
  persistCredentials: true
To check out multiple repositories in your pipeline, use multiple `checkout` steps:
- checkout: self
- checkout: git://MyProject/MyRepo
- checkout: MyGitHubRepo # Repo declared in a repository resource
For more information, see Check out multiple repositories in your pipeline.
Task
Tasks are the building blocks of a pipeline. There is a catalog of tasks available to choose from.
steps:
- task: string # reference to a task and version, e.g. "VSBuild@1"
  displayName: string # friendly name displayed in the UI
  name: string # identifier for this step (A-Z, a-z, 0-9, and underscore)
  condition: string
  continueOnError: boolean # 'true' if future steps should run even if this step fails; defaults to 'false'
  enabled: boolean # whether or not to run this step; defaults to 'true'
  target:
    container: string # where this step will run; container name or the word 'host'
    commands: enum # whether to process all logging commands from this step; values are `any` (default) or `restricted`
  timeoutInMinutes: number
  inputs: { string: string } # task-specific inputs
  env: { string: string } # list of environment variables to add
If you aren't specifying a command mode, `target` can be shortened to:

- task:
  target: string # container name or the word 'host'
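For example, a concrete task step (the inputs shown are illustrative):

steps:
- task: VSBuild@1
  displayName: Build solution
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'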
Learn more about conditions, timeouts, and step targets.
Syntax highlighting
Syntax highlighting is available for the pipeline schema via a VS Code extension. You can download VS Code, install the extension, and check out the project on GitHub.
The extension includes a JSON schema for validation.