What I was doing, and RichCopy now

First of all I’d like to apologize for being away for so long and not actively updating this blog.

Today I will share what I was doing while I was away. My group is responsible for ensuring the quality of products aimed at enterprise customers, as well as accelerating innovation through engagement and joint activities with software and hardware partners. Over the past year I have been working almost exclusively on setting up our new facility, the “Microsoft Otemachi Technology Center,” in downtown Tokyo, Japan. This is a rather unique facility, as it houses both MIC (Microsoft Innovation Center) and MTC (Microsoft Technology Center), as opposed to the usual model of dedicating one facility to one center. MIC focuses on accelerating innovation and stimulating the local software economy with Microsoft technology, while MTC, which is for enterprise customers, “…is a collaborative environment that provides access to innovative technologies and world-class expertise, enabling you to envision, design, and deploy solutions to meet your exact needs” (MTC Web site). I was the project manager for every phase of setting up this new facility in Tokyo. My role involved not only driving the project; I was also responsible for the entire design process, from designing the office layout and infrastructure to setting up the switches, cables, etc. The new facility has 5 briefing rooms, 2 solution experience rooms, 1 envisioning room, 1 datacenter, and 1 topology lab. Here is what it looks like.



Briefing Room



Microsoft is committed to reducing carbon dioxide emissions, and I spent a lot of time designing this new facility to minimize energy consumption, especially in the datacenter, which accommodates a total of 40 server racks. In many cases, customers pay attention to server and storage energy consumption, but not to the air conditioning infrastructure. At a typical (non-container) datacenter, servers and storage consume less than 50% of the total energy; the rest is consumed by lighting and air conditioning. Thus the facility infrastructure is quite important for achieving a better PUE (Power Usage Effectiveness), the ratio of total facility energy to the energy delivered to IT equipment. This new facility also hosts an environment for product testing, such as Exchange Server, SQL Server, and System Center’s management solutions, so the network infrastructure was another key area.
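Since PUE is simply total facility power divided by IT equipment power, the 50% figure above implies a PUE of around 2.0. Here is a minimal sketch of that arithmetic (the wattage figures are made up for illustration, not measurements from our datacenter):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# An ideal facility, where every watt goes to IT gear, would approach 1.0.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the Power Usage Effectiveness ratio."""
    return total_facility_kw / it_equipment_kw

# If servers and storage draw only half of the total load (as at a
# typical non-container datacenter), PUE is 2.0:
print(pue(1000.0, 500.0))  # 2.0

# Cutting lighting and cooling overhead so that IT gear draws 80%
# of the total brings PUE down to 1.25:
print(pue(1000.0, 800.0))  # 1.25
```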

Our focus areas were:

- Central management and monitoring of PDUs (Power Distribution Units) and air conditioners, not only server hardware and the OS.

- Independent air conditioning control (per server rack) to avoid an unnecessarily cold lab.

- Maximum cooling efficiency by minimizing the diffusion of hot air (Hot Aisle Containment).

- A typical (non-container) datacenter/lab configuration, as our customers and partners come to visit and use it.

- Higher network bandwidth.

We chose APC’s datacenter solution. APC is a Microsoft technology and solution partner, and they have rack and power solutions that satisfy the majority of the needs mentioned above.

Here is what the new center has deployed:

Summary of infrastructure


- APC’s datacenter solution

  - NetShelter racks with HAC (Hot Aisle Containment)

  - InRow cooling

  - InfraStruXure PDUs

- HP ProCurve, Brocade, and F5 for the network solution.

Network Cables


- OM3 ribbon cable for interconnections between racks and labs (MPO/MPO connectors).



- 80 Gbps interconnection between labs.

- 10G network ready (CX4, SR).

- iSCSI (1000Base-SR, 1000Base-T, 10GBase-SR [SFP+]).

- Fibre Channel (2, 4, and 8 Gbps).

- FCoE (10 Gbps).
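To give a feel for what those line rates mean in practice, here is a rough back-of-the-envelope comparison. The link speeds come from the list above; the 100 GB payload size and the assumption of full line-rate utilization (no protocol overhead) are mine, purely for illustration:

```python
# Rough transfer-time estimate at a given line rate.
# Ignores protocol overhead and assumes the link runs at full speed.

def transfer_seconds(payload_gigabytes: float, link_gbps: float) -> float:
    """Seconds to move a payload of the given size over the given link."""
    payload_gigabits = payload_gigabytes * 8  # bytes -> bits
    return payload_gigabits / link_gbps

# Moving a hypothetical 100 GB test image between labs:
print(transfer_seconds(100, 10))  # 80.0 s on a single 10 Gbps link
print(transfer_seconds(100, 80))  # 10.0 s on the 80 Gbps inter-lab trunk
```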

Hot Aisle Containment

We use HAC (Hot Aisle Containment) to minimize the diffusion of hot air for high cooling efficiency.

Core switch with 10G modules


Long path support, reparse points, some missing command options, etc. will be addressed in the next release of RichCopy, which will be out in a couple of weeks.