Compile… oh no!

With AX it takes time to crunch through all the code and metadata. However, many are seeing “extremely long” compile times. In one of the earlier blogs, we referenced compile times of just under 2 hours on a VM when compiling AX 2012 RTM. I’m hearing from an increasing number of people that they are spending far more time than that. So why is that, and is there something we can do about it?

First off, it’s important to recognize that the AX compiler is single threaded in nature. Only one element is compiled at a time, and since the compilation process exercises all three tiers of the architecture (client, AOS and model store), it is tightly bound to the execution speed of each of these tiers and of the communication between them. The size of the application also affects the compile time proportionally.

Now with this premise, the differences we’re seeing start to make sense. It turns out that to minimize the “friction” and thus optimize compile performance, proximity of the three tiers matters most. This can be achieved through colocation, so that more of the involved components run on the same machine: the more the merrier, within obvious limits such as saturating the machine. The fastest compile is typically achieved with a “1-box” setup: SQL Server, AOS and client on the same machine. This limits the network impact between the tiers, which has a dramatic effect on compile time.

When creating a 1-box setup it’s important to manage the saturation of the box. Since SQL Server is a server product, it will, unless instructed otherwise, take over the box to optimize SQL performance. The main limit to impose on SQL Server is how much memory it may use. If the box has 16GB of memory, capping SQL Server at 4GB is a good setting. On an 8GB box, 3GB turned out to be the sweet spot for optimal compile in our tests with an 8GB VM on a single 3.6GHz Xeon processor (4 cores + HT on the host) with a standard 7200rpm SATA disk. On such a VM, an out-of-the-box compile of RTM came in at one hour and fifty minutes. Note that these are ad hoc measurements with every possible caveat, so we’ll be working on real reference numbers for different topologies.
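As a sketch, the memory cap can be set through `sp_configure` (the 3072 MB value matches the 8GB example above and is illustrative; adjust it to your box):

```sql
-- Cap SQL Server memory so the AOS and client keep enough headroom.
-- 3072 MB corresponds to the 3GB sweet spot from our 8GB VM tests.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;
```

The same setting is available in SQL Server Management Studio under the server’s Memory properties.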

Back to tuning again: then there’s SQL I/O. The compiler reads and writes a lot, so SQL Server benefits from fast disks. Whether 10,000/15,000rpm disks, SSDs or even a RAM disk, they will help, but results vary a lot and storage generally shouldn’t be the first place to tune. Once you start tuning in earnest, though, it may become interesting. The same goes for SQL Server connectivity, where local processes can communicate faster through shared memory than through TCP. We have no solid quantification of the gains from this particular tuning at this point, but some system configurations may prevent SQL Server from choosing the optimal protocol, so it is worth checking those configuration options when you want to take tuning to the next level.
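One quick way to verify which protocol is actually in use on a 1-box setup is the `sys.dm_exec_connections` DMV; on a properly configured local connection you would expect to see shared memory rather than TCP:

```sql
-- List active connections and the transport they negotiated.
-- A local AOS connection should ideally report "Shared memory".
SELECT session_id, net_transport, auth_scheme
FROM sys.dm_exec_connections;
```

If local connections show up as TCP, check the enabled client protocols and their order in SQL Server Configuration Manager.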

So what’s the big problem – less than two hours is not that bad? Well, many of us are driving toward virtualization to simplify our deployments and better leverage our hardware investments. This calls for massive-scale servers, and those servers are typically equipped with many-core processors. However, these typically do not run at the same high clock frequencies as processors with fewer cores (no over-clocking, turbo, …). Since compile time is closely tied to clock frequency, these servers will yield worse compile times than many laptops, despite their sophistication in architecture, scalability and I/O. In essence, they are massive, but slow. With AX 2012 RTM and R2, there’s good reason to keep this in mind when creating your build machine. Small and nimble still beats big and slow.

One problem remains – the compile necessary when shipping models to other environments. With the freedom of not having to deal with element IDs between vendors came a need to compile whenever new or changed models are added to an environment. If that happens on a server-grade machine (with slower cores) and with multiple tiers involved, we’re back to paying the big price for the compile. The recommendation here is to use the customer’s test system instead of the production system, and to ensure that the test system comes as close to a “1-box” setup as possible for the particular task of compiling the application after model install. Once the test system is compiled (faster than the production system could do it), the whole model store is moved to the production system, which also allows for a very quick switch-over to the new application. The ALM papers for AX 2012 describe staging and how to baseline systems for this in quite some detail.
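A minimal sketch of the move itself, assuming the AX 2012 R2 management cmdlets are available (file paths and server names here are illustrative, not prescriptive):

```powershell
# On the test system, after the compile completes:
# export the full, compiled model store to a file.
Export-AXModelStore -File "C:\Stores\CompiledStore.backup"

# On the production system: import the compiled store,
# replacing the existing one, so no recompile is needed there.
Import-AXModelStore -File "C:\Stores\CompiledStore.backup"
```

On RTM, the `axutil exportstore` / `axutil importstore` command-line equivalents serve the same purpose; the ALM papers referenced below cover the full staging procedure.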

While all these tips can bring down compile time for many, they are not the end of the story. We continue our work to improve this and to provide as much efficiency as possible through new tools, guidance, and product fixes and changes. We will need a way to handle the ever-growing code base in a productive way. As a side note for those working with R2: we’ve recently identified the cause of a quite significant degradation of compile performance in R2, which will be addressed by a hotfix available within the next few weeks. And the work continues.

A few white papers that might be of interest in this context:

1) Change management and TFS integration for multi-developer projects:

2) Deploying Customizations Across Microsoft Dynamics AX 2012 Environments:

Feel free to share your own tuning results here as comments.


Jakob, on behalf of the X++ compiler team.