Visual Studio Test task
Use this task in a build or release pipeline to run unit and functional tests (Selenium, Appium, Coded UI test, and more) using the Visual Studio Test runner. In addition to MSTest-based tests, test frameworks that have a Visual Studio test adapter, such as xUnit, NUnit, and Chutzpah, can also be executed.
Tests that target the .NET Core framework can be executed by specifying the appropriate target framework value.
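For instance, the target framework can be passed through the console options of the task. A minimal sketch (the input names and the framework moniker shown here are assumptions; adjust them to your project and task version):

```yaml
# Sketch: run .NET Core tests by passing the target framework to vstest.
# The '/Framework' value is an example; use the moniker for your project.
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**\*Tests.dll'   # example pattern
    otherConsoleOptions: '/Framework:.NETCoreApp,Version=v2.0'
```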
Tests can be distributed on multiple agents using version 2 of this task. For more information, see Run tests in parallel using the Visual Studio Test task.
The agent must have the following capability: vstest
The vstest demand can be satisfied in either of two ways:
- Visual Studio is installed on the agent machine.
- The Visual Studio Test Platform Installer task is used in the pipeline definition.
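As a sketch, the second approach might look like this in a YAML pipeline (treat the task and input names as assumptions to verify against the task versions available to you):

```yaml
# Sketch: satisfy the vstest demand without Visual Studio on the agent.
- task: VisualStudioTestPlatformInstaller@1   # acquires the test platform
- task: VSTest@2
  inputs:
    vsTestVersion: 'toolsInstaller'   # use the platform acquired above
```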
Select tests using
Test files
|(Required) Run tests from the specified files.|
Ordered tests and webtests can be run by specifying the .orderedtest and .webtest files respectively. To run .webtest, Visual Studio 2017 Update 4 or higher is needed.
The file paths are relative to the search folder. Supports multiple lines of minimatch patterns. More Information
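For example, a pattern set such as the following (illustrative only; adjust to your repository layout) runs assemblies whose names contain "test" while excluding test-adapter binaries and intermediate output folders:

```
**\*test*.dll
!**\*TestAdapter.dll
!**\obj\**
```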
Test plan
|(Required) Select a test plan containing test suites with automated test cases.|
Test suite
|(Required) Select one or more test suites containing automated test cases. Test case work items must be associated with an automated test method. Learn more.|
Test configuration
|(Required) Select a test configuration.|
Test run
|(Optional) Test run based selection is used when triggering automated test runs from test plans. This option cannot be used for running tests in the CI/CD pipeline.|
Search folder
|(Required) Folder to search for the test assemblies.|
Test filter criteria
|(Optional) Additional criteria to filter tests from the test assemblies.|
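Filter expressions combine property conditions using & (AND) and | (OR); for example (the category and priority values here are hypothetical):

```
TestCategory=Nightly|Priority=1
```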
Run only impacted tests
|(Optional) Automatically select, and run only the tests needed to validate the code change. More information|
Number of builds after which all tests should be run
|(Optional) Number of builds after which to automatically run all tests. Test Impact Analysis stores the mapping between test cases and source code. It is recommended to regenerate the mapping by running all tests on a regular basis.|
Test mix contains UI tests
|(Optional) To run UI tests, ensure that the agent is set to run in interactive mode with autologon enabled. Setting up an agent to run interactively must be done before queueing the build or release. Checking this box does not automatically configure the agent in interactive mode; it serves only as a reminder to configure the agent appropriately to avoid failures. Hosted Windows agents from the VS 2015 and 2017 pools can be used to run UI tests.|
Select test platform using
|(Optional) Specify which test platform should be used.|
Test platform version
|(Optional) The version of Visual Studio Test to use. If latest is specified, this chooses Visual Studio 2017 or Visual Studio 2015, depending on what is installed. Visual Studio 2013 is not supported. To run tests without needing Visual Studio on the agent, use the ‘Installed by tools installer’ option.|
Path to vstest.console.exe
|(Optional) Specify the path to VSTest.|
Settings file
|(Optional) Path to a runsettings or testsettings file to use with the tests. Starting with Visual Studio 15.7, it is recommended to use runsettings for all types of tests. To learn more about converting a .testsettings file to a .runsettings file, see this topic.|
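As an illustration, a minimal runsettings file defining run parameters might look like the following (the parameter names and values are hypothetical):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <!-- Hypothetical parameters that tests read via TestContext -->
    <Parameter name="AppURL" value="http://localhost:8080" />
    <Parameter name="TestTimeout" value="120" />
  </TestRunParameters>
</RunSettings>
```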
Override test run parameters
|(Optional) Override parameters defined in the TestRunParameters section of the runsettings file or the Properties section of the testsettings file.|
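Overrides are supplied as -key value pairs; for example, assuming the runsettings file defines TestRunParameters named AppURL and TestTimeout (hypothetical names, with a hypothetical pipeline variable):

```
-AppURL $(App.Url) -TestTimeout 180
```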
Path to custom test adapters
|(Optional) Directory path to custom test adapters. Adapters residing in the same folder as the test assemblies are automatically discovered.|
Run tests in parallel on multi-core machines
|(Optional) If set, tests will run in parallel, leveraging the available cores of the machine. This will override MaxCpuCount if it is specified in your runsettings file. Click here to learn more about how tests are run in parallel.|
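For reference, the equivalent runsettings configuration that this option overrides looks like the following; a value of 0 lets the test platform use up to the number of available cores:

```xml
<RunSettings>
  <RunConfiguration>
    <!-- 0 = use up to the number of cores on the machine -->
    <MaxCpuCount>0</MaxCpuCount>
  </RunConfiguration>
</RunSettings>
```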
Run tests in isolation
|(Optional) Runs the tests in an isolated process. This makes the vstest.console.exe process less likely to be stopped by an error in the tests, but tests might run slower. This option cannot currently be used when running with the multi-agent job setting.|
Code coverage enabled
|(Optional) Collect code coverage information from the test run.|
Other console options
|(Optional) Other console options that can be passed to vstest.console.exe, as documented here. |
These options are not supported and will be ignored when running tests using the ‘Multi agent’ parallel setting of an agent job, or when running tests using the ‘Test plan’ option. The options can be specified using a settings file instead.
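For illustration, a couple of commonly used vstest.console.exe switches (verify the exact switches against the version of the test platform in use):

```
/Platform:x64 /Logger:trx
```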
Collect advanced diagnostics in case of catastrophic failures
|(Optional) Use this option to turn on collection of diagnostic data to troubleshoot catastrophic failures such as test crash.|
When this option is checked, a sequence XML file is generated and attached to the test run. The sequence file contains information about the order in which tests ran, so that a potential culprit test can be identified.
Collect process dump and attach to test run report
(Optional) Use this option to collect a mini-dump that can be used for further analysis.
On abort only: mini-dump will be collected only when test run is aborted.
Always: mini-dump will always be collected regardless of whether the test run completes or not.
Never: mini-dump will not be collected regardless of whether the test run completes or not.
Batch tests
|(Optional) A batch is a group of tests. A batch of tests runs at a time and results are published for that batch. If the job in which the task runs is set to use multiple agents, each agent picks up any available batches of tests to run in parallel.|
Based on number of tests and agents: Simple batching based on the number of tests and agents participating in the test run.
Based on past running time of tests: This batching considers past running time to create batches of tests such that each batch has approximately equal running time.
Based on test assemblies: Tests from an assembly are batched together.
|(Optional) Simple batching based on the number of tests and agents participating in the test run. When the batch size is automatically determined, each batch contains (total number of tests / number of agents) tests.|
Number of tests per batch
|(Required) Specify batch size|
|(Optional) This batching considers past running time to create batches of tests such that each batch has approximately equal running time. Quick-running tests will be batched together, while longer-running tests may belong to a separate batch. When this option is used with the multi-agent job setting, total test time is reduced to a minimum.|
Running time (sec) per batch
|(Required) Specify the running time (sec) per batch|
Do not distribute tests and replicate instead when multiple agents are used in the job
|(Optional) Choosing this option will not distribute tests across agents when the task is running in a multi-agent job.|
The selected tests will be repeated on each agent.
This option is not applicable when the agent job is configured to run with no parallelism or with the multi-config option.
Test run title
|(Optional) Provide a name for the test run.|
Build platform
|(Optional) Build platform against which the tests should be reported. If you have defined a variable for platform in your build task, use that here.|
Build configuration
|(Optional) Build configuration against which the tests should be reported. If you have defined a variable for configuration in your build task, use that here.|
Upload test attachments
|(Optional) Opt in/out of publishing run level attachments.|
Fail the task if a minimum number of tests are not run
|(Optional) Use this option to fail the task if a minimum number of tests are not run. This may be useful if any changes to task inputs or underlying test adapter dependencies lead to only a subset of the desired tests being found.|
Minimum # of tests
|(Optional) Specify the minimum # of tests that should be run for the task to succeed. Total tests run is calculated as the sum of passed, failed and aborted tests.|
Rerun failed tests
|(Optional) Selecting this option will rerun any failed tests until they pass or the maximum # of attempts is reached.|
Do not rerun if test failures exceed specified threshold
|(Optional) Use this option to avoid rerunning tests when the failure rate crosses the specified threshold. This is applicable if environment issues lead to massive failures.|
% failure
|(Optional) Use this option to avoid rerunning tests when the percentage of failed tests crosses the specified threshold. This is applicable if environment issues lead to massive failures.|
# of failed tests
|(Optional) Use this option to avoid rerunning tests when the number of failed tests crosses the specified limit. This is applicable if environment issues lead to massive failures.|
Maximum # of attempts
|(Optional) Specify the maximum # of times a failed test should be retried. If a test passes before the maximum # of attempts is reached, it will not be rerun further.|
This task is open source on GitHub. Feedback and contributions are welcome.
Q & A
How can I run tests that use TestCase as a data source?
To run automated tests that use TestCase as a data source, the following is needed:
- You must have Visual Studio 2017.6 or higher on the agent machine. The Visual Studio Test Platform Installer task cannot be used to run tests that use TestCase as a data source.
- Create a PAT that is authorized for the scope “Work Items (full)”.
- Add a secure Build or Release variable called Test.TestCaseAccessToken with the value set to the PAT created in the previous step.
I am running into issues when running data-driven xUnit and NUnit tests with some of the task options. Are there known limitations?
Data-driven tests that use the xUnit and NUnit test frameworks have some known limitations and cannot be used with the following task options:
- Rerun failed tests.
- Distributing tests on multiple agents and batching options.
- Test Impact Analysis.
The above limitations are because of how the adapters for these test frameworks discover and report data-driven tests.