3 - Orchestrating the Release Pipeline


In the last chapter, the Trey Research team analyzed their development process and their pipeline. When they were done, they listed all the problems they were having. It was a long list. In this chapter they decide which problems are most important and what improvements they can make to their pipeline to address them. We'll see that they are going to orchestrate the pipeline, which means they are going to add the capabilities to arrange, coordinate, and manage it. Orchestration can occur at both the pipeline level and the stage level.

To introduce the topic of orchestration, we first discuss some general principles to follow. While much of this advice applies to any release pipeline, the intent here is to eventually create a continuous delivery pipeline. Later in the chapter, to give you a concrete example of how to put those principles into practice, we show the improvements the Trey Research team implements for their pipeline by using the default Lab Management and TFS build templates. The changes they make in this chapter are the foundation for other changes they will make later on.

General Principles for Orchestrating a Pipeline

This section discusses general principles you can use as a guide when you orchestrate a pipeline.

Make the Pipeline Complete

Incorporate as many as possible of the activities that make up your development and release processes into the pipeline as steps. Where you can, make the steps automatic rather than manual.

Translating activities into steps defines your process and makes it both visible and predictable. You'll be better able to identify areas that can be improved. Also, a well-understood process makes it easier to predict how long it takes and how much it costs to put a new version of the software into production.

If you're interested in adopting some of the principles that are part of the DevOps mindset, a complete pipeline can help you. The pipeline emphasizes the performance of the entire system and not the performance of a single department or silo. Building a complete pipeline can encourage collaboration between groups that may not normally communicate with each other.

Use Automation When Possible

When you can, trigger stages automatically rather than manually. Of course, there are times when you must use a manual trigger. For example, a stage may contain manual steps. There may even be cases where you want to manually trigger stages that have only automatic steps. The most common example is a release stage, where you want control over when the version is released and to what production environment.

Automation generally makes the pipeline faster because you're not waiting for someone to manually trigger each stage. Automated stages also have a near-zero fixed cost. Increased speed and reduced costs make it more practical to have smaller, more frequent versions of the software, which are themselves easier to debug and release.

Move Versions Forward in a Continuous Delivery Pipeline

After a check-in triggers the creation of a pipeline instance and the resulting version begins to propagate through the stages, there's no way back. You only move forward through a continuous delivery pipeline, never backwards. In other words, in a continuous delivery pipeline, the same version never travels through the same stage in the same pipeline instance more than once.

Moving in only one direction helps ensure the reliability of the software. If there's a failure, the pipeline stops. A new check-in that addresses the problem is treated like any other version; it runs through the entire pipeline and must pass all the validation stages.

Trigger the Pipeline on Small Check-ins

Small, significant check-ins should go through the pipeline as soon as possible. Ideally, every single check-in to version control should trigger the pipeline. Also, check-ins should be done frequently, so that a new version of the software differs only slightly from the previous version.

There are multiple benefits to working with small, frequent check-ins. If a small change makes the pipeline fail, it's very easy to figure out what the problem is. You also reduce the batch size of items that you deploy and test over the different environments. Ideally, the batch size should be a single item. Additionally, propagating small changes reduces work queue lengths (with their associated hidden costs), because small changes go through the pipeline much faster than large ones. Small changes also mean that you get fast feedback. In other words, fail fast and fail often.

Keep the Number of Pipeline Instances Small

Keep the number of running pipeline instances small. You want to focus on small sets of changes and not be distracted by constantly switching from one version to another.

Remember the Kanban approach, which limits the number of items that constitute work in progress (WIP). You'll need to find a balance between the number of WIP items and the average time spent to close each item. Fixed limits are also imposed by the availability of people and of resources such as environments.

If your queue length and WIP are increasing, temporarily reduce or even stop check-ins. Dedicate more people and resources to moving the WIP items you already have through the pipeline. When the queues and WIP return to reasonable levels, resume work on new features and start checking in new versions again.

Concentrating on a limited number of versions means that you'll get them through the pipeline quickly, which reduces cycle time. You're also less likely to have one pipeline instance block progress in another because of resource conflicts. If you work to optimize your process and to remove bottlenecks, you'll find that the pipeline runs faster, and you'll be able to support more concurrent instances.

Run Stages and Steps in Parallel

Whenever you can, run stages and steps in parallel. You can do this when one stage or step doesn't rely on the results of the preceding stage or step. This guidance is particularly relevant to continuous delivery pipelines, which depend on getting fast results from each stage. Versions enter the first stage, where many problems are detected quickly. From that point on, if your initial tests are thorough, you should have a reasonable chance that the software will go through the rest of the pipeline without problems. Given this assumption, you can save time by running some stages in parallel. Running your stages and steps in parallel gives you faster feedback on your software and helps to reduce cycle time.

Don't Mistake Steps for Stages

Be careful when defining the stages of your pipeline. A stage lets a version advance until it is considered ready for production, and it's generally made up of multiple steps. It's easy to mistake what is really a step for an entire stage.

Deployment is a common example of this mistake. Deployment is not a stage in and of itself because it does nothing to validate the application. Instead, think of deployment as a single step in a larger set of activities, such as testing, that comprise a stage. These activities work together to move the application towards production.

For example, if you run UATs and then deploy to another environment, the deployment is not a stage. Instead, you have a stage with manual steps (the UATs) and an automatic step (the deployment).

Orchestrating a Step

Orchestrating an individual step largely depends on what a specific step is intended to do, so it's not possible to provide a general approach. Implement the orchestration with any technology you feel is easiest to use. Examples of these technologies include Windows Workflow Foundation, PowerShell, or shell scripts. The code within the step should control the step's orchestration.
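
For example, a step might be implemented as a short PowerShell script, one of the technologies mentioned above. The following is a minimal sketch, not part of the Trey Research implementation; the script name and parameters are hypothetical. What matters is the shape: a step receives everything it needs as parameters, performs a single action, and reports success or failure through its exit code so the stage orchestration knows whether to continue.

    # Run-Step.ps1 -- hypothetical skeleton for an automated step.
    param(
        [Parameter(Mandatory=$true)][string]$PipelineInstance,  # e.g. '0.0.0510.57'
        [Parameter(Mandatory=$true)][scriptblock]$Action        # the single action this step performs
    )

    $ErrorActionPreference = 'Stop'   # treat any error in the action as a step failure

    try {
        & $Action                     # do the work (deploy, run tests, ...)
        Write-Output "[$PipelineInstance] Step succeeded."
        exit 0                        # tells the orchestration to continue
    }
    catch {
        Write-Output "[$PipelineInstance] Step failed: $_"
        exit 1                        # tells the orchestration to stop this pipeline instance
    }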

Stop the Pipeline on a Failure

If a version fails any stage, that instance of the pipeline should stop immediately. (Other running instances shouldn't be affected.) Fix the problem, check in the solution, and run the new version through a new pipeline instance. The fix should be done as soon as possible, before going on to other activities, while the issues are still fresh in everyone's mind. This guidance is particularly relevant to continuous delivery pipelines, which assume that any check-in, if it successfully goes through the pipeline, can be given to customers. If the pipeline fails, the version isn't suitable for release.

Stopping the build ensures that defects never reach production. It also engages the entire team in fixing the problem quickly, thus reinforcing the notion that everyone owns the code and so everyone must work together to solve problems. This is a situation where you can apply the DevOps mindset and get everyone involved.

Build Only Once

Build your binaries and artifacts once. There should be a single stage where the build occurs. In this guidance, we call this stage the commit stage. The binary should then be stored someplace that is accessible to your deployment mechanism, and your deployment mechanism should deploy this same binary to each successive environment. This guidance is particularly relevant to continuous delivery pipelines where a check-in triggers a build and that specific build goes through all the validation stages, preferably as an automated process.

Building once avoids errors. Multiple builds make it easy to make mistakes; you may end up releasing a version that isn't the version you tested. An extreme example is building your source code during the release stage, from a release branch, which can mean that nothing you release was tested. Multiple builds can introduce errors for a variety of other reasons: you may end up using different environments for each build, running different versions of the compiler, having different dependencies, and so on. Building once is also more efficient, because multiple builds waste time and resources.

Use Environment-Agnostic Binaries

Deploy the same binaries across all the environments that the pipeline uses. This guidance is particularly useful for continuous delivery pipelines, where you want as uniform a process as possible so that you can automate the pipeline.

You may have seen examples of environment-dependent binaries in .NET projects that use different debug and release configurations for builds. During the commit stage, you can build both versions and store the debug assemblies in an artifact repository and the debug symbols in a symbol server. The release configuration should be the version deployed to all the stages after the commit stage because it is the one that's optimized. Use the debug version to solve problems you find with the software.

An interesting scenario to consider is localization. If you have a localized application and you use a different environment for each language, you could, instead, have subsets of environments, where a subset corresponds to a language. For each subset, the deployed binaries should be the same for all the environments contained within it.

Standardize Deployments

Use the same process or set of steps to deploy to each environment. Try to avoid environment-specific deployment steps. Instead, make the environments as similar as you can, especially if you can’t avoid some environment-specific deployment steps. Treat environment-specific information as parameters rather than as part of the actual release pipeline. Even if you deploy manually, make your procedures identical.
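
One way to apply this principle is to keep a single deployment script for all environments and move everything environment-specific into data files. Here's a minimal sketch under assumed names; the per-environment JSON files (config\Test.json, config\Production.json, and so on) and their Server and SharePath fields are illustrative, not part of the guidance's implementation.

    # Deploy.ps1 -- one deployment script for every environment; only the data differs.
    param(
        [Parameter(Mandatory=$true)][string]$Environment,    # e.g. 'Test', 'Staging', 'Production'
        [Parameter(Mandatory=$true)][string]$DropLocation    # binaries built once by the commit stage
    )

    # Environment-specific information is read as data; the deployment logic never branches on it.
    $config = Get-Content -Raw -Path ".\config\$Environment.json" | ConvertFrom-Json

    $destination = "\\$($config.Server)\$($config.SharePath)"
    Copy-Item -Path (Join-Path $DropLocation '*') -Destination $destination -Recurse -Force
    Write-Output "Deployed the same binaries to '$Environment' ($destination)."

Because the identical script runs in every environment, each deployment to a test environment also rehearses the production deployment.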

Standardizing deployments offers another opportunity to apply the DevOps mindset. To successfully standardize your deployments requires close collaboration between developers and operations people.

The primary advantage of standardizing your deployment process is that you’ll test the process hundreds of times across all the environments. As a result, you'll be less likely to run into problems when you deploy to your most critical environments, such as production. Standardization is an example of following the "fail fast, fail often" maxim. Test your procedures often, and begin as soon as you can so that you find problems early in the release process.

Keep Stages and Steps Source and Environment Agnostic

As discussed in "Standardize Deployments," deploying the same way to every environment enables you to separate configuration data from components that make up the actual release pipeline, such as scripts. Distinguishing data from components also makes it possible to have steps that don’t need to be adapted to different environments. You can reuse them in any stage. In turn, because stages are made up of steps, the stages themselves become environment agnostic.

You can apply the same principle to sources that supply information to the steps and stages. Examples include a path name, a branch in the version control system, or an artifact repository. Treat these as parameters instead of hardcoding them into the pipeline.

Environment-agnostic stages and steps make it much easier to point your pipeline to different sources or targets. For example, if there's an urgent issue in production that must be addressed quickly, and the production code is archived in a release branch, you can fix the problem in the release branch and then point the pipeline to that branch so that the new version can be validated. Later, when life is calmer, you can merge the fix into the main branch, point the pipeline to the main branch, and validate the integration.

By keeping configuration data separate from the steps and stages, your pipeline becomes more flexible. It becomes much easier and faster to address problems, no matter in which environment or code branch they occur.

Build Verification Tests

Run build verification tests (BVTs) after every deployment to make sure that the application works at a basic level before you go on to further validations. Even manual BVTs are useful, although you should try to automate them as soon as possible. Again, this is a case of "fail fast, fail often." It saves time and money to detect breaking changes as early as you can.

BVTs are also known as smoke tests. Wikipedia has an overview of smoke tests.
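
As a concrete illustration, here is a minimal automated BVT sketch: it checks only that a deployed web application answers on its home page. The script name and URL are placeholders; any cheap end-to-end check that proves the deployment is alive will do.

    # Run-Bvt.ps1 -- hypothetical smoke test: fail fast if the deployed app doesn't respond.
    param(
        [Parameter(Mandatory=$true)][string]$SiteUrl   # e.g. 'http://test-server/treyresearch/'
    )

    try {
        $response = Invoke-WebRequest -Uri $SiteUrl -UseBasicParsing -TimeoutSec 30
        if ($response.StatusCode -ne 200) { throw "Unexpected status code: $($response.StatusCode)" }
        Write-Output "BVT passed: $SiteUrl is up."
        exit 0
    }
    catch {
        Write-Output "BVT failed: $_"
        exit 1   # a failed BVT should stop the pipeline instance
    }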

Deploy to a Copy of Production

Have the environment where you run your key validation stages as similar to the production environment as possible. Differences between environments can mean potential failures in production. There are many ways environments can differ. For example, there may be different versions of the operating system, different patches, or different updates. These variances can cause the application in the actual production environment to fail with errors that are difficult to understand and debug. Even a good deployment script won't address these sorts of problems.

Here's another opportunity to adopt the DevOps mindset. If operations and development collaborate, it will be much easier to create accurate copies of the production environment. It's to everyone's advantage to have smooth releases to production. Bugs that appear in the production environment but that are seen nowhere else can take many long hours to find and fix, delay release schedules, and raise everyone's stress level.

Version According to the Pipeline Instance and the Source Code

Version binaries according to the pipeline instance and the source code that generated them. (Note that a pipeline instance is uniquely identified. For example, in TFS you can use a combination of numbers and characters for the name.) It then becomes easy to trace any running instance of the application back to the correct source code and pipeline instance. If there are problems in production, you'll be able to identify the correct version of the source code that contains the bugs.

For .NET Framework projects, modify the AssemblyInfo files that are included as properties of the project. Do this from within the pipeline as a step in the commit stage, just before the build step. The version number can be generated automatically, based on criteria that fit your situation. Semantic versioning, which defines a consistent scheme of version numbering, is one approach you could adopt. (For more information, go to Semantic Versioning 2.0.0.) For artifacts that are not compiled into binaries, such as HTML or JavaScript files, have a step in the commit stage that embeds a comment with the version number inside each file.
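
Here's a sketch of what such a versioning step could look like, assuming standard Visual Studio AssemblyInfo.cs files and a version number already computed by the pipeline. The script name is hypothetical, and the regular expression covers only the common AssemblyVersion and AssemblyFileVersion attributes; verify it against your own projects.

    # Set-AssemblyVersion.ps1 -- stamp every AssemblyInfo.cs with the pipeline's version number,
    # as a commit-stage step that runs just before the build step.
    param(
        [Parameter(Mandatory=$true)][string]$SourcesDirectory,
        [Parameter(Mandatory=$true)][string]$Version           # e.g. '0.0.0510.57'
    )

    Get-ChildItem -Path $SourcesDirectory -Recurse -Filter 'AssemblyInfo.cs' | ForEach-Object {
        $content = Get-Content -Raw -Path $_.FullName
        # Rewrite AssemblyVersion("...") and AssemblyFileVersion("...") with the pipeline version.
        $content = $content -replace 'Assembly(File)?Version\("[^"]*"\)',
                                     ('Assembly$1Version("{0}")' -f $Version)
        Set-Content -Path $_.FullName -Value $content
    }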

Using an incorrect version of the source code can introduce regression bugs or untested features into your application. A versioning scheme that clearly relates artifacts to the pipeline instance and source code that generated them prevents this.

Use an Artifact Repository to Manage Dependencies

An artifact repository makes it easier to manage dependencies and common libraries. The standard repository for .NET Framework projects is NuGet, which is fully integrated with Visual Studio. You can easily configure it to use the package restore feature so that the build step in the commit stage retrieves the required packages. For more information, see Using NuGet without committing packages to source control.
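
As an illustration, once package restore is enabled, the commit stage can restore all dependencies before compiling. The solution name and paths below are hypothetical; the restore command itself is available in nuget.exe 2.7 and later.

    # Hypothetical commit-stage step: restore NuGet packages before the build step runs.
    & .\.nuget\NuGet.exe restore .\TreyResearch.sln

    # The build step then compiles against the restored packages, for example:
    # msbuild .\TreyResearch.sln /p:Configuration=Release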

Using an artifact repository has the same benefits as using a version control system for your code: your dependencies become versionable and traceable, and there is an authoritative source where artifacts are stored. Additionally, you can share common utilities and libraries with other teams.

Trey Research's First Steps

Now let's take a look at what the Trey Research team is doing. When we left them, they were looking at so many problems they didn't know where to start.


Realizing that going out of business isn't the solution they're looking for, the Trey Research team decides to concentrate on the issues that are causing the most pressing business problems. Here's the list.

Issue: It's not clear which binary is deployed to each environment, or which source code generated it. Incompatible versions are deployed across different environments and devices.
Cause: The generated binaries and artifacts aren't versioned correctly.
Solution: Replace the build stage of the pipeline with a commit stage. Version the binaries and artifacts during the build. Base releases on the versions, not the deployments.

Issue: There's nothing that defines the quality of the release. No one knows if it's up to customer expectations or includes the agreed-upon functionality.
Cause: There are no formal means to verify that the code is what the customers and stakeholders want.
Solution: Introduce acceptance criteria and acceptance test cases in the specification.

Issue: There's a lot of time wasted, confusion, and errors introduced by manually moving from one stage to another.
Cause: Versions are propagated manually through the stages of the pipeline.
Solution: Begin to configure the pipeline to propagate versions automatically.

Issue: Feedback about each version arrives late in the process and there isn't much information. It's difficult to improve the applications at a pace that's reasonable for the business.
Cause: There aren't enough feedback loops in place. The ones that exist don't provide useful information because there are so many different versions of the binaries. Also, because the pipeline is sequential it's slow, so what feedback there is takes a long time to be available.
Solution: Improve the feedback loop and provide better information. Relate the information to the same version across all stages. Begin to run stages in parallel where possible, so that the information arrives earlier.

Thinking about it, Jin decides to propose a plan to the team.

Jin says:

Sunday, August 4, 2013


I spent all weekend thinking up a strategy to get us on the right path. I know everything would be better if we worked together. I'm going to propose to Raymond that we try to improve the pipeline. He's been working all weekend trying to do a release so I think this might appeal to him. Paulus and Iselda can keep working on new features so we actually have something to deploy. I'm not sure where we should start. I need to talk it over with the team. Let's hope I can convince Zachary.

After a lot of discussion, Zachary agrees. The next step is to start redesigning the pipeline so that the team can solve their biggest problems. They know they want to build once, they want a reliable versioning system, they want some automation, they want better feedback, and they want some criteria to help them create better tests. The problem is, they're not sure what they should do to achieve these goals.

[Illustration: the team discusses the proposed pipeline; Raymond objects to automated deployments to production]

Raymond's attitude isn't as unreasonable as it may seem. Operations people are very protective of their release environments and they have good reasons for it. They're completely responsible for what happens there and many of them will not agree to automatic deployments to production. Jin will have to make some compromises.

Changing How They Work

As they think about the new pipeline, and what continuous delivery means, the team realizes that it's going to change how they work. They've already started to analyze how they create and release software, so they decide to revise their value stream map and their Kanban board. Here's what the new value stream map looks like.

[Diagram: the revised Trey Research value stream map, with placeholder times T1 through T6]

They don’t know yet how long it will take them to complete each phase of the value stream map that will correspond to the new pipeline. So, for now, they're calling these times T1 through T6. After they have completed a few iterations and have some data, they'll take out these placeholders and put in actual numbers.

The revised Kanban board reflects the collaborative atmosphere they want to foster. There are two columns for the development process, Coding and Delivering. The first column represents tasks for developers, but the second represents tasks that involve the new pipeline. This column will contain items for everyone involved in delivering the software. Of course, this includes developers, but it also includes testers, operations people, and managers. Here's their drawing of the new board.

[Illustration: the team's drawing of the revised Kanban board]

Here are Jin's thoughts on what a new pipeline will mean to Trey Research.

Jin says:

Monday, August 5, 2013


A new continuous delivery pipeline means changes. We have a new value stream map and a new Kanban board. The Delivering column will contain all the items that go through the pipeline. We'll even use this organization to help us build the new pipeline, and, to prove we're serious, we've added work items to make sure we get started. Of course, now some of the work for new features is blocked and breaks the WIP limit. Let's hope we don't get too far behind. I think Raymond hates me.

Here's what the new board looks like in TFS. You can see the new items that are related to the pipeline, such as "Release pipeline configuration."

[Screenshot: the new Kanban board in TFS, showing pipeline-related items such as "Release pipeline configuration"]

Jin has a few more comments to make about how the release process is changing.

Jin says:

Monday, August 5, 2013


We're shortening the iterations to two weeks. Maybe this is a bad idea. If we can't deliver in three weeks, how can we deliver in two? I'm hoping that because there's less to do, we can focus better. Also, we'll get feedback on what we're doing sooner.

Here's the latest product backlog. Notice that, even though he has strong opinions, Raymond is taking on more responsibilities and is really involved in implementing the pipeline.

[Screenshot: the latest product backlog]

Trey Research's Plan for Orchestrating the Pipeline

Trey Research has planned what they need to do for this iteration of the pipeline. They're going to focus on orchestration, which will lay the foundation for the rest of the improvements they'll make in the future. They know that when an instance of the pipeline runs, it should follow a sequence of actions that is defined when the pipeline is configured. This sequence is the pipeline orchestration. As they designed the orchestration, they tried to keep some best practices in mind. (These principles are discussed earlier in this chapter.) In particular, they want to:

  • Build once.
  • Whenever possible, trigger the pipeline stages automatically rather than manually.
  • If any stage fails, stop the pipeline.

Here are the actions that they've decided should occur within the orchestration.

  1. Create the pipeline instance and give it an identifier based on the related version. Reserve the necessary resources.
  2. Run any automatically triggered stages by passing them the appropriate parameters, as defined in the pipeline configuration. Provide a way to relate the stages to the pipeline instance being run.
  3. Gather all the relevant data generated at each stage and make it available for monitoring.
  4. For manual stages, give people the ability to run and manage the stages within the context of the pipeline.

This diagram shows a high-level view of the Trey Research pipeline orchestration.

[Diagram: high-level view of the Trey Research pipeline orchestration]

Remember that continuous delivery pipelines only move forward. After a version triggers a pipeline instance and begins moving through the stages there is no way back. No version goes through the same stage in the same instance more than once.

Orchestrating the Stages

When you orchestrate a stage you define how it's triggered and the steps it contains. Here are some principles that the Trey Research team used to guide the design of their pipeline.

  • Steps are where all the work happens. Look at every activity that occurs during the development and release process and identify the ones that can become steps in one of the stages. See if the steps can be automated.
  • Implement steps so that they can be incorporated into any stage by passing the appropriate parameters.
  • When you define a step, think of it as an action. Examples are "Set the pipeline instance," "Run unit tests," and "Run automated tests."
  • Make sure to collect the relevant data and results for monitoring.

Here are the actions that occur within the stage orchestration.

  • Provide the automated steps within a stage with the appropriate parameters.
  • Make sure that any resource required by an automatic step is ready to be used when the step starts. The most common example is environments. If there is no automatic provisioning of environments in place, and the requested environment is being used by another pipeline instance, the stage orchestration should make the step wait until the environment is ready to be used again. The orchestration should maintain a queue of instances to make sure that access to the shared resource occurs in the correct order. (A sketch of this idea appears after this list.)
  • For manual steps, wait for users to complete their tasks and then take back control when they are done.
  • Make sure to collect the relevant data and results for monitoring.
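
In the Trey Research implementation, Lab Management handles environment reservation. The following toy sketch only illustrates the waiting idea from the list above; the lock-file approach and all names are hypothetical, and a real orchestration would need a more robust mechanism.

    # Wait-ForEnvironment.ps1 -- crude mutual exclusion over a shared environment.
    param(
        [Parameter(Mandatory=$true)][string]$EnvironmentName,
        [string]$LockFolder = '\\server\pipeline-locks'   # illustrative file share
    )

    $lockFile = Join-Path $LockFolder "$EnvironmentName.lock"
    while ($true) {
        try {
            # New-Item fails if the file already exists, that is, if another instance holds the environment.
            New-Item -Path $lockFile -ItemType File -ErrorAction Stop | Out-Null
            break                        # the environment is ours; the step can proceed
        }
        catch {
            Start-Sleep -Seconds 30      # environment busy; wait and try again
        }
    }
    # ... run the step, then delete the lock file to release the environment:
    # Remove-Item -Path $lockFile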

This diagram shows a high-level view of the Trey Research stage orchestration.

[Diagram: high-level view of the Trey Research stage orchestration]

A stage can either pass or fail. If it fails the pipeline should stop and not be restarted until the problem is solved.

Building a Release Pipeline with TFS

Trey Research is going to use TFS to build their pipeline. It's a technology they already know, and they've created simple build definitions before. Although not everything necessary to create the pipeline is a standard feature in TFS, there are ways to add the required capabilities. The easiest approach is to use the default build templates provided by TFS and Lab Management and to customize them when necessary.

A new feature of Lab Management in TFS 2012 is that you don't have to rely on virtualization or use System Center Virtual Machine Manager (SCVMM). Instead, you can use standard environments that allow you to use any machine, whether virtual or physical, as part of an environment. Using standard environments is the easiest way to get started with Lab Management. You only need to set up a test controller before you set up your first standard environment.

When you plan your pipeline, remember that there is no concrete representation of a pipeline in TFS. It is a concept. However, in the implementation presented in this book, a stage of a pipeline corresponds to a build definition. A step within a stage corresponds to a TFS workflow activity or to a shell script that is invoked by a workflow.

The following sections discuss the general approach Trey Research used to create their pipeline by using build templates. For a step-by-step procedure that shows you how to create the Trey Research orchestrated pipeline, see the group of HOLs that are included under the title Lab02-Orchestration.

Customizing the Build Definition

Customizing build definitions is the easiest way to implement a continuous delivery pipeline. Each stage of the pipeline has a build definition of its own, even if the stage doesn't contain a build step. The term "build definition" is simply the standard name in TFS for what is otherwise known as an orchestration.

Building Once

The TFS default template always builds, but a continuous delivery pipeline should build only once. For stages other than the commit stage, you'll need to make sure that no build is created. The best approach is to use the Lab Management default template for those stages because that template allows you to deactivate the build step.

Propagating Versions

Propagating versions through the pipeline doesn't happen by default. You'll need to add a mechanism in order for one stage to trigger another. You'll also need a way to stop the pipeline if a version fails. There is some discussion of how to propagate a version later in this chapter. For a complete description, see the group of HOLs that are included under the title Lab02-Orchestration.

Creating Pipeline Instances

Build definitions don't include a way to define pipeline instances. For the sake of simplicity, the implementation presented in this guidance doesn't have a physical representation of the pipeline. It's possible to use custom work items for this purpose if you want more control or traceability but this approach is beyond the scope of this guidance.

Configuring the Pipeline

To configure the pipeline, you'll need to use several editors, including Visual Studio Build Explorer and the Visual Studio workflow editor. In general, you use Build Explorer to configure the pipeline and you use the workflow editor to configure the stages and steps. For more information, see the group of HOLs that are included under the title Lab02-Orchestration.

Managing Environments

To manage environments, you'll need to use Lab Management from within Microsoft Test Manager.

The New Trey Research Pipeline

After much discussion, the Trey Research team has a design for their pipeline. We're going to show you the result first, and then explain how they implemented it.

If you compare the new pipeline to the old pipeline, you'll see that the new pipeline looks very different. The team has changed its structure and added orchestration. These changes are the foundation for what they'll do in the future. Right now, not much happens except in the commit stage. The full implementation will occur in future iterations, by automating some steps and adding new ones.

Here's what the orchestrated pipeline looks like.

[Diagram: the orchestrated Trey Research pipeline]

You'll notice that there's now a commit stage rather than a build stage. There are also some new steps in that stage. Along with building the binaries, the stage uses the NuGet package restore feature to retrieve dependencies. It also versions the artifacts and the pipeline instance.

If this stage succeeds, it automatically triggers an acceptance test stage. There is no longer a deployment stage. Instead, deployment is a step within every stage except the commit stage. The acceptance test stage also reflects the fact that the team is changing how they test their applications. They've begun to use Microsoft Test Manager (MTM) to plan and write their test cases.

Another difference is that the pipeline is no longer sequential. There are now two parallel stages: the release stage and the UAT stage. Both stages require manual triggers. Having parallel stages can provide faster feedback than if the stages were sequential. The assumption is that if the version has come this far through the pipeline, it will, in all likelihood, pass the remaining stages. They'll release the software without first learning their users' reactions. If they later get negative feedback, they'll fix the problems. But for now, they've gained some time. Parallel stages are the basis of techniques such as A/B testing that many continuous delivery practitioners use.

The UAT stage uses a staging environment, which is also new; Iselda finally got her new environment. The UAT stage also introduces another new practice for the team: they've begun to write acceptance criteria for some of their user stories.

In the next sections, we'll show you how the team uses the TFS default build template and the Lab Management default template to implement the pipeline. The implementation has two main components: orchestration and configuration. Orchestration defines the basic capabilities of a continuous delivery pipeline. Configuration defines a specific pipeline, which is the implementation shown in this guidance.

Here are Jin's thoughts on the team's efforts, in the middle of iteration 2.

Jin says:

Monday, August 12, 2013


The pipeline framework is done! Only the commit stage really does anything, but the rest of the stages have the placeholders we need for when we're ready to make them functional. Raymond really came through for us (much to my surprise)! Now we can concentrate on using the pipeline to release the features we're supposed to implement for this iteration. Hopefully, we can prove to our stakeholders that it's been worth it.

Here's the team's current backlog. You can see that all the pipeline tasks are done.

[Screenshot: the team's current backlog, with all pipeline tasks done]

The Trey Research Implementation

This section gives an overview of how Trey Research implemented the pipeline orchestration for each stage. The team customized the TFS and Lab Management default build templates as well as the build workflow (a .xaml file) to accomplish this. Again, for a step-by-step description, see the group of HOLs that are included under the title Lab02-Orchestration.

Orchestrating the Commit Stage

Some steps in the commit stage were implemented by customizing the TFS default build template. Other steps needed no customization because they're already included in the template. The steps that need no customization are:

  • Getting dependencies. Dependencies are retrieved by using the NuGet package restore feature.
  • Building the binaries.
  • Running the unit tests.
  • Continuous integration (in the sense of triggering the commit stage on each check-in).
  • Performing code analysis.
  • Copying the results to the packages or binaries repository (in the case of Trey Research, this is the configured TFS build drop folder).

The following sections discuss the steps that do require customization.

Naming the Stage and the Pipeline Instance

The name of the stage (and the pipeline instance) is generated by combining the build definition name with an automatically generated version number. The version number is generated in the standard format Major.Minor.Build.Revision, by using the TFSVersion activity from the Community TFS Build Extensions.

The Major and Minor parameters are provided by the team so that they can change them for major milestones or important releases. The Build parameter is the current date, and the Revision parameter is automatically incremented by TFS. This generated version number is used as the build number so that the pipeline instance can be identified. The code in version control is also labeled with the same version number in order to relate it to the binaries it generates for each pipeline instance. Here's an example.

  • 0.0.0510.57 is the name of the pipeline instance. It is the same as the version number embedded in the binaries that are being built.
  • 01 Commit Stage 0.0.0510.57 is the name of the commit stage for this pipeline instance.
  • 02 Acceptance Test Stage 0.0.0510.57 is the name of the acceptance test stage for this pipeline instance.
  • 03a Release Stage 0.0.0510.57 is the name of the release stage for this pipeline instance.
  • 03b UAT Stage 0.0.0510.57 is the name of the UAT stage for this pipeline instance.
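
The team's implementation uses the TFSVersion activity for this, but the scheme itself is simple enough to express as a sketch. The following hypothetical PowerShell fragment assumes the Build part encodes the month and day (0510 is May 10 in the example above) and that the revision counter is supplied by the build system.

    # Hypothetical illustration of the Major.Minor.Build.Revision scheme.
    param(
        [int]$Major = 0,                              # set by the team for major milestones
        [int]$Minor = 0,                              # set by the team for important releases
        [Parameter(Mandatory=$true)][int]$Revision    # incremented automatically by TFS
    )

    $build   = Get-Date -Format 'MMdd'                # e.g. '0510' for May 10
    $version = "$Major.$Minor.$build.$Revision"       # e.g. '0.0.0510.57'
    Write-Output $version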

Here's the section in the build workflow where the pipeline name is calculated and assigned.

[Screenshot: the section of the build workflow where the pipeline name is calculated and assigned]

Versioning the Assemblies

The same version number that was used to name the pipeline instance is embedded in the AssemblyInfo files for the Visual Studio projects being built. By using the version number, the resulting binaries can always be related to the pipeline instance that generated them. This step is done by means of a second call to the TFSVersion activity.

Propagating Changes Through the Pipeline Automatically

In the last step of the build template, a list of subsequent stages is checked and the appropriate stages are triggered. The list is provided by the team as a parameter of the build definition when the stage is configured. Stages are triggered by using the QueueBuild activity from the Community TFS Build Extensions. Two parameters are passed to the triggered stages.

  • The name of the pipeline instance (the calculated version number). The name is used in the stage to set the build number. By supplying a name, you can identify stages that ran in the same pipeline instance.
  • The location of the binaries repository of the pipeline instance (the drop folder). By supplying the location, the stage can find the correct binaries and not have to rebuild them from the source code.

Here's an example of how to specify and pass in the parameters. They are assigned to the ProcessParameters property in the QueueBuild workflow activity.

Microsoft.TeamFoundation.Build.Workflow.WorkflowHelpers.SerializeProcessParameters(
    New Dictionary(Of String, Object) From {
        {"PipelineInstance", PipelineInstance},
        {"PipelineInstanceDropLocation", BuildDetail.DropLocation}
    })

Stopping the Pipeline

If a stage fails, the pipeline instance must stop. This step is created by using a simple conditional workflow activity that checks the appropriate properties.

Orchestrating the Remaining Stages

Orchestrating the remaining stages of the pipeline is partially done by customizing the Lab Management default template and also by using activities that are already in that template. These steps require no customization (at least in this iteration):

  • Automatically deploying artifacts. This feature is implemented in a later iteration.
  • Automatically running tests. This feature is implemented in a later iteration.
  • Choosing the environment the stage uses.
  • Blocking other pipeline instances from using an environment while it is being used by the current instance.

The remaining steps are implemented by customizing the Lab Management default template.

Naming the Stage

The stage is named after the pipeline instance to which it belongs. The build number is changed by using the pipeline instance name that the stage receives as a parameter. The parameter can either come from the previous stage in the pipeline, if the stage is automatically triggered, or be provided by a team member as an argument to the build workflow, if the stage is manually triggered. Here's an example of how to specify the metadata for the PipelineInstanceforManuallyTriggeredStages parameter.

[Screenshot: metadata for the PipelineInstanceforManuallyTriggeredStages parameter]

Building Only Once

To build only once, remove from the template the portion of the workflow that builds the binaries.

Retrieving the Location of the Binaries for the Pipeline Instance

The build location is changed by using the location that the stage receives as a parameter. The parameter can either come from the previous stage in the pipeline, if the stage is automatically triggered, or be provided by a team member as an argument to the build workflow, if the stage is manually triggered.

Propagating Changes Automatically

Propagating changes through the pipeline automatically is done in the same way as for the commit stage.

Stopping the Pipeline

Stopping the pipeline is done in the same way as for the commit stage.

Configuring the Pipeline

Remember that there is no concrete entity called a pipeline in TFS. For the implementation presented in this guidance, you configure the pipeline by configuring the stages. You do this by configuring the respective build definitions. Both standard parameters and the ones that were added by customizing the templates are used.

Configuring the Commit Stage

To configure the commit stage, provide the following parameters to its build definition.

  • The trigger mode, which is continuous integration.
  • The Visual Studio solutions or projects to be built.
  • The location of the binaries repository (this is known as the drop location).
  • The set of unit tests to be run.
  • The major version to be used for generating the pipeline instance name and for versioning the assemblies.
  • The minor version to be used for generating the pipeline instance name and for versioning the assemblies.
  • The list of stages to be triggered after the commit stage, if it succeeds. In this case the list has only one element—the acceptance test stage.

The following screenshot shows an example of how to configure the commit stage build definition's Process section.

[Screenshot: the Process section of the commit stage build definition]

Configuring the Acceptance Test Stage

Configure the acceptance test stage by configuring its build definition. Add the list of stages to be triggered after this one, if it succeeds. In this case the list is empty because the next stages in the pipeline are manually triggered.

Because the acceptance test stage is an automatically triggered stage, the pipeline instance name and the pipeline instance drop location are passed as parameters by the commit stage.

Configuring the Release Stage

Configure the release stage by providing the following parameters to its build definition.

  • The list of stages to be triggered after this one, if it succeeds. In this case, the list is empty because there are no stages after the release stage.
  • Values for the pipeline instance name and the pipeline instance drop location. These parameters must be supplied because the release stage is manually triggered. The instance name can be obtained by manually copying the version number that is part of the commit stage name. The drop location is the same as the one used to configure the commit stage.

Because this is a manually triggered stage, the configuration is done by a team member just before the stage is triggered. This might occur, for example, just before releasing the software, when the team will probably want to deploy and run some tests. The following screenshot shows an example of how to configure a manually triggered stage.

[Screenshot: configuration of a manually triggered stage]

Configuring the UAT Stage

The UAT stage is configured the same way as the release stage.

Jin's Final Thoughts

Here are Jin's thoughts at the end of the second iteration, when the pipeline orchestration is complete.

Jin says:

Friday, August 16, 2013


It wasn't as bad as I thought. We even managed to deliver new features for the mobile client and by "deliver" I mean coded, tested, and deployed using the new pipeline. Raymond was completely involved, worked with me every day, and the results speak for themselves. I'm really impressed with what he's done. Now I only have to get Iselda and Paulus on board for us to have a real DevOps mindset. It shouldn’t be that hard because they're happy to see we're making progress.

Even though we didn't deliver all the features we planned, the stakeholders were so excited to see new features released in only two weeks that they didn't care too much. Now we have some credibility. Next step—automation!

Here's what the team's product backlog looks like at the end of the iteration.

[Screenshot: the product backlog at the end of the iteration]

Summary

In this chapter we talked about orchestration, which is the arrangement, coordination, and management of the pipeline. The goal is to take the initial steps towards creating a continuous delivery pipeline. There is some general guidance to follow such as building only once, automating as much as possible, and stopping the pipeline if a stage fails.

We also took another look at the Trey Research team, who are trying to find ways to solve their problems. They first needed a way to decide which of their many problems they should solve first. Zachary points out that there's a board meeting coming up, and if they can't show a working version of their app, they'll be shut down. Realizing that business needs dictate everything, they make a list of their most pressing problems.

Next, they need an approach that will help them fix the problems. Jin proposes a continuous delivery pipeline but gets some resistance from Iselda, who's worried about adopting new tools and the amount of work that's necessary. There's a lot of resistance from Raymond, who refuses to allow automatic deployments to the release environment.

They finally begin to implement the pipeline and realize that it will change everything. As a result, they revise their value stream map and their Kanban board.

Lastly, we showed the design of the new pipeline and explained how they orchestrated and configured it. The key is to use the TFS default build template and the Lab Management default template, customizing them where necessary.

What's Next

The Trey Research team has finished orchestrating their pipeline, but it's just the framework. Only the commit stage contains steps that are completely functional. They're still largely dependent on manual steps, which means that they still have a long list of problems. Many of them can only be solved by automating their deployments, the creation of environments, and at least some of their tests.

More Information

There are a number of resources listed in text throughout the book. These resources will provide additional background, bring you up to speed on various technologies, and so forth. For your convenience, there is a bibliography online that contains all the links so that these resources are just a click away. You can find the bibliography at: https://msdn.microsoft.com/library/dn449954.aspx.

For more information about BVTs (smoke testing), see http://en.wikipedia.org/wiki/Smoke_testing#Software_development.

For more information about semantic versioning, see http://semver.org/.

For more information about using NuGet and the package restore feature, see https://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages.

The Community TFS Build Extensions are on CodePlex at http://tfsbuildextensions.codeplex.com/.
