Build, Build, Build and Build Some More With Team Build

This post describes the latest Team Build updates with features available both in Team Foundation Server (TFS) 2015 Update 2 RC1 and Visual Studio Team Services (VSTS).


'Team Build is Dead! Long Live Team Build!'

This was one of the main titles at last year's Ignite conference, where the latest version of Team Build was introduced, and there is a simple reason behind it: the new Team Build system is a complete rewrite of the former Team Build. One of the first results of this rewrite is that there is no longer any reason to shrug when asked questions such as "I love TFS, but why can't I use it to build my Android projects?". As it turns out, the latest version of Team Build allows for more extensibility than ever, easier management through the web portal and much easier build agent deployment. Throughout this post I will try to cover as much as possible of the newly available features.

What's new?

Ever opened a XAML build definition before? Yikes!

Even though the workflow-based schema of a build definition prior to TFS 2015 was cool, as it allowed a lot of complexity in the logic of an automated build, it turned out that due to the lack of extensibility and the difficulty of understanding the underlying XML schema, build definitions needed another approach. This is probably one of the main reasons behind the decision to ditch XAML altogether from the new Team Build system. Don't get me wrong: XAML-based build definitions didn't go anywhere. You can still create them both in TFS and VSTS, but as the team has put it, they will become obsolete at some point, so it's best to plan a migration from XAML build definitions to the new task-based Team Build system. And to be fair, the new system also comes with tons of benefits, extensibility being one of the greatest (at least in my opinion).

The fact that the system doesn't rely on XAML files also means that, besides easier extensibility, build definitions can be manipulated outside Visual Studio. More specifically, they can be created, edited and queued from the web portal (whether in TFS or VSTS), which means there is no longer a requirement to install the latest version of Visual Studio in order to edit a build definition (as was the case with XAML build definitions in the days of TFS 2013 and earlier).
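Since everything is exposed over the web, builds can also be driven through the VSTS/TFS REST API. As a rough sketch (the account, project and definition id below are made up, and the authentication header is omitted), queueing a build boils down to a single POST request:

```python
import json
import urllib.request

# Hypothetical values -- replace with your own account, project and
# build definition id.
account = "fabrikam"
project = "MyProject"
definition_id = 25

url = ("https://{0}.visualstudio.com/DefaultCollection/"
       "{1}/_apis/build/builds?api-version=2.0").format(account, project)
body = json.dumps({"definition": {"id": definition_id}}).encode("utf-8")

request = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={"Content-Type": "application/json"},
)
# An Authorization header (e.g. Basic auth with alternate credentials)
# would be required before actually calling urllib.request.urlopen(request).
```

Sending that request queues a new build of definition 25 on the next eligible agent.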

[caption id="attachment_6451" align="alignleft" width="250"]Team Build steps[/caption]

Among the very first things you'll see when you create or edit a build definition is the list of available build steps (a.k.a. build tasks). If you scroll through the list of available steps, you'll be amazed by how many non-Microsoft technologies are already baked into the new Team Build system. For example, you can now build Android apps using Gradle, or Android and iOS apps developed with Xamarin. If you're familiar with open-source web technologies, Apache Maven might be something you'll appreciate; as a web developer, you might also like the fact that both Gulp and Grunt are now part of the plethora of build steps available right out of the box. Both VSTS and TFS offer a number of build templates, so when you create a new build definition, you can either start with one of these templates or create a completely empty build definition.

What else is new?

Whenever I get the chance, I love to contribute to open-source repositories on GitHub. The beauty of VSTS and TFS 2015 nowadays is that building a project doesn't require its source control to be in a TFVC or Git repository hosted by either VSTS or TFS. More specifically, the new Team Build system allows you to build projects hosted externally in GitHub, Subversion or another external (remote) Git repository. The latter option also allows you to host the source code in your own on-prem infrastructure (assuming Git is your source control of choice) and use the power of VSTS to automate your builds.

[caption id="attachment_6461" align="alignleft" width="139"]Build badge[/caption]

Also, in terms of automated builds, another cool addition to VSTS and TFS is build badges: a badge is basically a URI to a dynamically generated icon which shows developers whether the last build of the project succeeded or not. So besides a widget which can be pinned as a tile to the customizable dashboards and which lists the history of a build definition's builds, an extra badge can be embedded in a readme.md file on GitHub or in the project's documentation in VSTS. Therefore, your contributors will know if the last nightly build succeeded or whether something needs fixing before they start working on new features.
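Embedding the badge is just a matter of dropping an image link into readme.md. The URL below is a made-up placeholder (once badges are enabled, the build definition shows you the real one); the snippet only illustrates the shape of the markdown line:

```python
# Placeholder badge URL -- VSTS/TFS generates the real one per build definition.
badge_url = ("https://fabrikam.visualstudio.com/DefaultCollection/"
             "_apis/public/build/definitions/my-project/42/badge")

# The markdown line to paste into readme.md:
markdown = "![Build status]({0})".format(badge_url)
```

The icon behind that URL is re-rendered after every build, so the readme always reflects the latest result.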

Just as important is the fact that you now get access to the complete build log both during and after the build. Most build steps write their output as text right into the build's log, and if you decide to create your own custom build steps, you have this option too.

[caption id="attachment_6471" align="alignleft" width="300"]Example of a build log[/caption]

Another important change regards the retention policy. Back in the days of TFS 2013, a build's retention was based on the number of successful builds, which basically meant that after x builds your build results were lost; this doesn't always make sense, especially if you build many times per day. Nowadays the retention policy is calendar-based, meaning that you specify the number of days you'd like to keep your binaries around.

Last but certainly not least, you're now offered the build definition's history. Whenever someone on your team changes the build definition and his or her changes break the build process, you can go back in the definition's timeline and see which changes (and whose changes) broke it. I have to emphasize the which changes part: you now have the option of clicking the so-called 'Diff' button right in the build definition's history menu, which shows the exact changes between versions of the build definition, side by side.
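The calendar-based policy is easy to reason about. The following is just an illustrative sketch of the idea (not VSTS code): with a day-based window, the number of builds per day no longer matters.

```python
from datetime import date, timedelta

def builds_to_keep(build_dates, retention_days, today):
    """Keep every build newer than today minus the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in build_dates if d >= cutoff]

# With a 10-day retention window, only the recent build survives,
# regardless of how many builds ran in between.
kept = builds_to_keep(
    [date(2016, 3, 14), date(2016, 3, 1), date(2016, 2, 1)],
    retention_days=10,
    today=date(2016, 3, 15),
)
```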

It's now high time to talk about build agents. Back in the days of TFS 2013, maintaining a build agent infrastructure also meant having a build controller. Because a controller was tied to a team project collection, you were left with two options: 1) enable or disable a controller based on the team project collection which served the team project you wanted to build, in case you didn't have enough hardware resources for multiple build controllers, or 2) provision multiple machines to act as controllers and, optionally, agents. This meant quite a complex overall infrastructure which was difficult to maintain. As if this weren't problematic enough, there was also the hassle of keeping the build agents up to date. If you had to maintain an infrastructure of tens or hundreds of build agents, this could be a nightmare. True story: one of my customers preferred to have a PowerShell DSC script which deployed multiple agents from scratch with the latest updates, rather than maintain the same machines.

[caption id="attachment_6481" align="alignleft" width="300"]Example of build agent queues[/caption]

In the new task-based build system, the concept of a build controller is gone. Completely gone. Instead, there's the concept of agent queues. Each queue points to a so-called agent pool, which hosts a number of agents. Each team project collection defines its own agent queues, but an agent pool itself can be shared among queues, which basically means that a single agent can build multiple team projects with no problem whatsoever. The reason for grouping multiple agents into a pool is that whenever you queue a new build, that build will be picked up by any available agent inside the pool.

In terms of maintenance, a DevOps responsibility will be to make sure that an agent has all the necessary binaries installed so that the build process can succeed. These "necessary binaries" are called capabilities, which is also why you'll find a tab called Capabilities right in the list of agents corresponding to a pool. An agent's capabilities are automatically populated (where possible) based on its own properties (things such as OS version, computer name etc.) and its installed tools (such as MSBuild, the Azure SDK, Visual Studio, npm etc.). Additionally, you can add your own capabilities, each saved as a key-value pair. The reason behind the capabilities concept is that each build definition can define a list of so-called demands. Basically, a demand is nothing more than yet another tag which has to exist (or have a specific value; remember, a capability is a key-value pair) in the list of an agent's capabilities. Therefore, when a build is queued, only the agents which satisfy all the build's demands are eligible for the build process. This also means that there's no reason to create agent pools corresponding to the agents' capabilities. The reason for multiple queues is, in fact, security: each pool defines a number of roles (Agent Pool Administrators and Agent Pool Service Accounts) and so does each queue defined per team project collection (Agent Queue Administrators and Agent Queue Users).
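The matching between demands and capabilities can be sketched in a few lines. The agent names and tool versions below are invented, but the two demand shapes (a bare key that must exist, and the `name -equals value` form) mirror how demands are written in a build definition:

```python
def satisfies(capabilities, demands):
    """True when an agent's capabilities meet every demand."""
    for demand in demands:
        if " -equals " in demand:
            # Key must exist AND carry the demanded value.
            key, _, value = demand.partition(" -equals ")
            if capabilities.get(key) != value:
                return False
        elif demand not in capabilities:
            # Bare demand: the capability just has to exist.
            return False
    return True

# Hypothetical agents and their (auto-discovered) capabilities.
agents = {
    "agent-01": {"MSBuild": "14.0", "npm": "3.3.12"},
    "agent-02": {"MSBuild": "12.0"},
}
demands = ["npm", "MSBuild -equals 14.0"]

# Only agents holding all of the build's demands are eligible.
eligible = [name for name, caps in agents.items()
            if satisfies(caps, demands)]
```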

How does this relate to Azure?

The real beauty of the new build system is that within Visual Studio Team Services, besides the 240 build minutes you get completely for free on the hosted (run by Microsoft) build agent, you also have the option of deploying your own private agents. The first private agent is free, which means that you can run very long builds (maybe nightly builds which take a long time to complete) without paying a dime more. XAML builds still work, but it might be a good idea to start planning the migration to the new task-based build system today.

Another important aspect of the new build system is that it allows you to run surface tests in the form of web performance tests and cloud load tests with extreme ease, using the power of Azure.

In terms of continuous integration, making sure that all your web application's endpoints are available is crucial. Happily, running a web performance test over these endpoints is very easy thanks to both the new Team Build and Azure; it's just a matter of configuring the system right. The build step category named 'Test' holds the necessary steps for running a web performance test (personally, I'd rather have called this a surface test), which basically allows you to configure a URI, a preconfigured number of users (25, 50, 100 or 250) which will request the specified endpoint, a preconfigured number of seconds the test will run and one of the 15 available locations to run the test from. By the way, the 'Default' location stands for the location the Visual Studio Team Services environment was created in.
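Conceptually, the step boils down to four inputs. The function below is only an illustration of those inputs and their preset choices (the names are mine, not the task's actual parameter names):

```python
# Preset user counts offered by the web performance test step.
ALLOWED_USERS = (25, 50, 100, 250)

def web_performance_test(url, users, duration_seconds, location="Default"):
    """Sketch of the inputs a web performance test step takes."""
    if users not in ALLOWED_USERS:
        raise ValueError("users must be one of {0}".format(ALLOWED_USERS))
    return {"url": url, "users": users,
            "duration_seconds": duration_seconds, "location": location}

# 25 simulated users hitting the endpoint for 60 seconds from the
# default (home) VSTS location.
surface_test = web_performance_test(
    "https://example.com/api/health", users=25, duration_seconds=60)
```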

[caption id="attachment_6491" align="alignleft" width="300"]Generic Service Endpoint registration[/caption]

One of the less intuitive operations is configuring the 'Registered connection' setting. By default, there won't be anything listed in the 'Registered connection' dropdown list, and in order to get this required setting populated, you have to click the 'Manage' link right next to it. This takes you to the 'Service Endpoints' configuration tab, which is new since TFS 2015. A service endpoint allows you to use external (usually third-party) services as part of a build process. Regarding the registered connection, what you're expected to configure here is a service endpoint connection to... your Visual Studio Team Services account, and this is as confusing (in my opinion) as it can be, for a number of reasons:

  1. A web performance test actually runs as a 'light'-ish load test (light as in a low number of users, for quite a short period of time), part of the Load Test functionality in Visual Studio Team Services. Considering that the build definition is part of VSTS, within a team project hosted inside the same VSTS, it's simply surprising that you have to define a connection from VSTS to... itself. I admit not knowing the exact reason for this approach, but I'm happy enough that it actually allows you to pay for extra cloud load-testing minutes within a single tenant (VSTS account) and yet have multiple tenants to manage your source code repositories. It would have been nice to get a connection to itself (the same VSTS account) by default, though; considering the incredible pace at which new features land in VSTS each month, I wouldn't be surprised to see this coming one day soon...
  2. Remember that in order to pay for extra load testing minutes, you have the option of linking the VSTS tenant to your Azure subscription? Also, that a VSTS load test originates from Azure datacenters? And yet, when you configure a service endpoint for the purpose of specifying a registered connection to the VSTS tenant, you won't choose Azure as the service endpoint type, but... Generic!
  3. When you configure a generic service endpoint, you have to type in the complete VSTS URL, your username and your password. However, this won't be your Microsoft account credentials, but a set of alternate authentication credentials which are well hidden in VSTS (click your name > My Profile > Security > Alternate authentication credentials > Enable alternate authentication credentials, then type in a username and password).
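Alternate credentials are what VSTS accepts for HTTP Basic authentication against its APIs, and that's effectively how the username/password pair of a generic endpoint gets used. A small sketch of the resulting header, with made-up credentials (never your Microsoft account password):

```python
import base64

# Alternate authentication credentials (hypothetical values).
username = "alt-username"
password = "alt-p@ssw0rd"

# Basic auth: base64("username:password") in the Authorization header.
token = base64.b64encode(
    "{0}:{1}".format(username, password).encode("utf-8")).decode("ascii")
auth_header = {"Authorization": "Basic " + token}
```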

A web performance test's output won't be listed in the build log, but rather in the VSTS Load Test tab, in the form of a running or completed load test. The reason is the decoupled nature of the build agent and the load test agent: remember that in the case of a web performance test (and a load test, for that matter), the build agent only acts as an orchestrator for the load test, which is hosted as a separate VSTS feature and runs side by side with the build process. Therefore, you don't have to worry about traffic originating from your on-premises infrastructure if you decide to run the build on your private agents rather than the hosted ones and have web performance test or load test steps as part of that build definition.

The difference between the web performance test step and the cloud load test step is that a web performance test only mimics minimal load traffic (I called it a light load test earlier for this exact reason), whilst a cloud load test runs according to a .loadtest file configured in Visual Studio and checked into source control. This allows for much finer-grained control over the load test and helps simulate traffic according to your application's actual user behavior. Yet again, a cloud load test step requires a registered connection which, just like in the case of the web performance test mentioned earlier, is a connection to the VSTS tenant environment hosting the load test agents that will run the test.

Conclusion

Personally, I find that web performance tests are suitable for continuous integration in the form of surface tests, as they can show whether all your existing and new endpoints respond. Obviously, since within a single step multiple simulated users will hit the same endpoint over a period of time (a few seconds long), it's just a matter of making sure that caching and CDNs won't give you any false positives. At the same time, I usually configure cloud load test steps as part of a release definition (thus part of a continuous deployment strategy) in order to make sure (usually in a QA environment) that all the KPIs and SLAs are met. I hope this helped you understand the new task-based Team Build system better.