One Small Step for Microsoft’s Cloud, Another Big Step for Sustainability


By Kevin Timmons

General Manager of Datacenter Services

The datacenter industry is at an inflection point. We need the ability to build out "at right scale," based on capacity needs, at unprecedented price points. Oh, and by the way: it's also imperative that we find ways to avoid the significant up-front costs and long lead times required to build traditional datacenters. Mission impossible, right?


Wrong. To address these challenges, Microsoft has been moving toward more modular designs over the last couple of years: from purchasing individual servers, to racks of servers, to, most recently, entire containers filled with servers. Our most recent datacenter, in Chicago, used the container approach. At our datacenter in Dublin, we eliminated the need for chillers by using stand-alone server pods. These advances have made our datacenter operations more efficient, but we continue to look for new ways to approach the problem.

During my recent keynote speech at the Datacenter Dynamics Conference in New York (in March 2010), I shared details about the next step in Microsoft's datacenter evolution, consistent with the vision we previously laid out as our Generation 4 datacenter. Our plan is to move toward pre-manufacturing every part of the datacenter and assembling the parts onsite based on the class of service to be delivered and supported. Each datacenter is a living organism that must be flexible enough to adapt quickly to our own demands as well as those of our customers, which means eliminating over-provisioning and maximizing productivity on an ongoing basis.


One of our ITPAC “Proof of Concept” modules for datacenters

Our plan for the future is to have essentially everything but the concrete pad pre-manufactured and then assembled on site: the IT, mechanical, and electrical components are all part of Pre-Assembled Components that we call an "ITPAC." We think of ITPACs not as containers in the traditional sense but as integrated air-handling and IT units.

The units will be assembled entirely from commercially available, recyclable components such as steel and aluminum, and the ITPACs' cooling requirements will be met by more efficient means, such as a single water hose at residential water pressure to control ambient temperatures. The servers will be stacked in rows, sandwiched between air intake and exhaust vents.

Using fans to create negative pressure for cooling, ambient air can be pulled in on one side, run through the servers, and exhausted out the other, with some of the air re-circulated to even out the unit's overall temperature. Think of how a radiator works in a car. No mechanical cooling units will be used, and networking and power buses will run over the tops of the servers. Eliminating mechanical cooling should put the average PUE (which drives OPEX) for our ITPACs at approximately 1.15 to 1.19 and the peak PUE (which drives CAPEX) at around 1.32 to 1.38, depending on outside conditions and the number of hours of ambient-air cooling in a year.
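To make the PUE figures concrete, here is a minimal sketch of how Power Usage Effectiveness is computed: total facility power divided by IT equipment power. The specific wattage numbers below are illustrative assumptions, not Microsoft's measurements.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to the IT gear; a PUE of 1.15
    means 15% overhead for cooling, fans, and power distribution losses.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical ITPAC with 500 kW of IT load and 75 kW of overhead
# (fans and distribution losses) while free-air cooling is available:
print(round(pue(575.0, 500.0), 2))  # 1.15, the low end of the average range cited
```

The gap between the average and peak figures reflects the hours in a year when outside conditions are least favorable and the fans must work hardest.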

Our development team is considering a number of different ITPAC sizes to keep the units easily shippable; they could contain approximately 400 to 2,500 compute servers and draw 200 to 600 kilowatts, depending on the mix of compute and storage servers. Another exciting finding from our research and development: with automation, a single person could build one of these units (like the one pictured above) in only four days. Click here to watch a short video on how we build one of these ITPACs.
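As a rough feel for the ranges above, the arithmetic below converts a unit's total draw into watts per server. The pairings of server count and kilowatt draw are illustrative assumptions; the actual figure depends on the compute-versus-storage mix.

```python
def watts_per_server(total_kw: float, servers: int) -> float:
    """Average power draw per server, in watts, for a whole unit."""
    return total_kw * 1000 / servers

# A smaller, compute-heavy unit: 400 servers drawing 200 kW total.
print(watts_per_server(200, 400))   # 500.0 W per server
# A dense unit at the top of the range: 2,500 servers drawing 600 kW total.
print(watts_per_server(600, 2500))  # 240.0 W per server
```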

Once built, the units can be placed inside a large building or, when equipped with outer protective panels, reside out in the open and be linked together to build out an entire datacenter. We believe this new approach can cut in half the time it takes Microsoft to ramp up new cloud computing capacity compared with traditional datacenter infrastructure, as well as significantly reduce the cost of the building. It gives us the flexibility to grow without committing to a large up-front investment in a datacenter and hoping that demand shows up later.

These facilities will not be pretty and might actually resemble the barns I spent so much time around during my childhood in rural Illinois. That, combined with the fact that they will cost substantially less per megawatt to build and substantially less to run, makes it very easy to get excited about what we're doing here.

William Gibson said it best: “The future is here, it’s just not widely distributed yet.”

Today we are taking another big step toward greater efficiency and lower costs through a more widely distributed, modularized, and commoditized datacenter. My thanks go to our team for their thoughtful engineering approaches, and to our product, services, and research teams across the company, as well as our partners, for the close collaboration that helped us achieve these objectives. We do not publicly disclose the partners and vendors we work with in our cloud infrastructure, in keeping with our policies to protect these high-security facilities. However, over the past couple of years we have shared our research and development on ITPACs and other Generation 4 technologies with key partners, and many of them are incorporating it into their product offerings for customers today.

Few companies can tap into the $9.5 billion in research and development that Microsoft invests company-wide each year. We hope that by sharing our key learnings and best practices, we can help the industry as a whole work together to drive greater efficiency across our cloud services infrastructures and collectively reduce our carbon footprint.


You can access more of our best practices around environmental sustainability, cloud security, etc. in published papers and blogs available on our web site at