How Environmental Sustainability Can Help to Revitalize the IT Market Now
Green Computing and Green IT are phrases often heard when talking about environmental sustainability supported by IT. In times of economic challenge, the IT industry and other companies must think about new concepts to manage costs and to live up to their responsibility for our environment and the greenhouse effect. Microsoft provides a paradigm, architectures and technologies that support a green way of doing business through software and help take ownership of the future requirements in IT.
How can IT be "green"?
There are many aspects to power-saving efforts in IT, and not all of them are obvious. For example: have you ever thought about the millions of computers we have to recycle every year and the energy that takes? Now what if we could use the power of IT to design a totally new line of computers? This would start with a new set of materials and continue through the manufacturing process, distribution and the use/reuse of parts once their lifecycle has ended. That would probably save 20% of the energy spent today in the supply chain of their production and in sorting out parts after they are written off.
Lewis Curtis, Principal Architect on the Microsoft Platform Architecture Team, is a subject matter expert in this space. He recently wrote an article in The Architecture Journal offering insight into "Green Computing" from several different angles. I would like to follow his line of thought while broadening the focus to ISVs, who can take advantage of the many models and technology components Microsoft provides and build new applications and services on top of them.
Physical Power Savings
Physical power savings can be achieved by using less power-hungry devices or by controlling the consumption of energy. Many different streams in IT contribute to this goal: chip design, power supply design and cooling system optimization, to name some of the most important ones. I found an interesting example of a sensor device based on Intel's Atom processor with low power consumption (160 microamperes per cycle), so that one future scenario might be to power such devices by solar or thermal energy. Microsoft already runs a wireless sensor network in its data centers to optimize data center design and avoid overcooling. As another example, Windows 7 provides a sensor platform and APIs so that ISVs can build their own environment-aware applications, as well as power management APIs they can use in their apps to reduce power consumption within computers and datacenters, and within micro computers [like the one described above]. Built-in sensors play a central role in controlling larger systems such as buildings, plants, machines and airplanes.
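To make the idea of an environment-aware application concrete, here is a minimal sketch of how such an app might adapt its workload to sensor input. The sensor values, thresholds and function name are hypothetical; a real Windows 7 application would obtain the readings through the Sensor and Location platform and power management APIs rather than receiving them as plain arguments.

```python
def choose_power_mode(ambient_temp_c: float, on_battery: bool) -> str:
    """Pick a power mode from (hypothetical) environmental inputs.

    In a real environment-aware app these inputs would come from the
    platform's sensor and power APIs; the thresholds here are invented.
    """
    if on_battery or ambient_temp_c > 30.0:
        return "low-power"        # throttle background work, batch I/O
    if ambient_temp_c > 25.0:
        return "balanced"
    return "full-performance"

print(choose_power_mode(32.0, on_battery=False))   # hot room: low-power
print(choose_power_mode(22.0, on_battery=False))   # cool, on AC: full-performance
```

The point is not the thresholds but the structure: sensor data flows in, and the application itself decides to do less work, which is where software-level power savings come from.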
Operating Platform and Cloud Computing
[Computer] Operating platforms have to provide the right computing power for almost all use cases, including peaks and failure scenarios. Therefore most IT solutions end up with far more hardware and energy usage than the solution will ever need during its lifetime. This leads us directly to cloud computing, where the cost of operations is calculated by the usage of CPU power. Whether it is one server for a hundred hours or a hundred servers for one hour, the cost is the same, so we can now take advantage of flexible models to compute our problem space. To take advantage of these capabilities, architects (and ISVs delivering apps in a cloud like Azure today) should follow a couple of principles for shared applications in highly virtualized environments, including the localization of the application (levels of abstraction vary from local to all-virtual = anywhere). Localization was already used when designing client/server applications (the basic level of abstraction) to enable shared resources across several levels/tiers. At the next level we can take applications into a datacenter to share the whole infrastructure and save on dedicated energy cost. Windows 7 (read more) and Windows Server 2008 R2 will have extended capabilities to provide Energy Star 4.0 compliance and save on average 5% to 10% more energy than previous versions (like Windows Server 2003). At the highest level of abstraction and delegation of management we move the execution of our app to the cloud, where we no longer have any physical control over resources such as disk, network bandwidth, dedicated memory or compute time. This demands the encapsulation of all physical control mechanisms, which are very general today and might not fit every organization's requirements.
Only there can we take advantage of all data-center attributes such as reliability, scalability and security, as well as the lower "embodied energy" tied up in physical machines and the green power used to operate the public cloud infrastructure. Before choosing the deployment location of an application today, we should measure, or at least roughly calculate, the overall power gains of the intended scenario.
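The rough calculation suggested above can be sketched in a few lines. All figures here are invented for illustration (wattage, hours, utilization levels); the two ideas shown are the ones from the text: elastic pricing makes 100 server-hours cost the same however they are sliced, and consolidation saves energy because a shared, highly virtualized host runs at much higher utilization than a dedicated, overprovisioned server.

```python
def energy_kwh(servers: int, hours: float, watts_per_server: float) -> float:
    """Energy consumed by a fleet of identical servers, in kilowatt-hours."""
    return servers * hours * watts_per_server / 1000.0

# Elastic pricing: one server for a hundred hours buys the same compute
# as a hundred servers for one hour (100 server-hours either way).
assert 1 * 100 == 100 * 1

# Hypothetical power-gain estimate: a dedicated 300 W server idling at
# 20% utilization vs. the same useful work on a shared host running at 80%.
dedicated = energy_kwh(servers=1, hours=720, watts_per_server=300)  # one month
shared_share = dedicated * (0.20 / 0.80)  # same work at 4x the utilization
print(f"dedicated: {dedicated:.0f} kWh/month, shared share: {shared_share:.0f} kWh/month")
```

Even a back-of-the-envelope model like this is enough to decide whether moving a workload is worth it before committing to a deployment location.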
Only a few applications are able to provide full insight into a given production process or a whole supply chain. Measurement is one important criterion, but I see even more demand for a transparent view of the full process: from planning to manufacturing to distribution and recycling. In many cases this includes simulating scenarios to find the most cost-efficient path, also in terms of energy efficiency. Microsoft provides tools today (like the Environmental Sustainability Dashboard) to monitor a variety of KPIs in every part of the supply chain. On top of that, the tool offers a dashboard with an executive summary and drilldown capabilities. This is one good example of monitoring a whole supply chain and its energy efficiency. But it all starts with measurement and control of a process. Today only a few systems monitor not just security and the main production KPIs but also energy efficiency. ISVs should therefore take advantage of Microsoft's Business Intelligence toolset and build KPIs into their applications to gather "green" information such as the power consumption of a specific device or process, thermal data of facilities, or calculated data about the energy it takes to recycle parts in the process or chain. Once we have the right data, we can use an intelligence tool to see relations and dependencies and optimize for the right outcome, whether it is power, time of use, temperature or another key metric of energy consumption.
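As a minimal sketch of the idea above, the following rolls per-device energy measurements up into a summary by supply-chain stage, the kind of aggregation a dashboard with drilldown sits on top of. The stage names, device IDs and figures are invented; a real solution would feed such data into a BI toolset rather than plain dictionaries.

```python
from collections import defaultdict

# Hypothetical raw measurements: (supply-chain stage, device, kWh consumed)
measurements = [
    ("manufacturing", "press-01",   120.0),
    ("manufacturing", "press-02",    95.5),
    ("distribution",  "truck-07",    60.0),
    ("recycling",     "shredder-1",  30.0),
]

# Roll up to the stage level -- the "executive summary" view.
by_stage = defaultdict(float)
for stage, device, kwh in measurements:
    by_stage[stage] += kwh

total = sum(by_stage.values())
print(f"total: {total:.1f} kWh")
for stage, kwh in sorted(by_stage.items()):
    print(f"  {stage}: {kwh:.1f} kWh ({100 * kwh / total:.0f}%)")
```

Drilldown is then just the reverse direction: from a stage back to the individual device measurements that produced its number.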
When talking about energy efficiency we not only have to look at switching off lights and heating, managing cooling in datacenters and bringing hardware down to energy-saving mode when unused. It also touches the algorithms underneath. This is really about software design and programming paradigms. Proving an application's architecture for performance (and therefore efficient power usage) often comes only as the last step of the test process, and it fails because the design and implementation no longer allow fundamental changes. Introducing so-called scale units (read more) helps architects compose applications at the right granularity, avoiding too many dependencies on other components and services in the application and scaling with a sound financial balance between users and performance/hardware. And even the best-designed software has to be metered to know the real outcome. Measurement of performance and energy usage should therefore be part of every step in software design and development.
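The scale-unit idea mentioned above can be sketched in a few lines: size the application in self-contained units with a known, measured user capacity, then add whole units as demand grows instead of over-provisioning up front. The capacity and hardware figures below are invented for illustration.

```python
import math

USERS_PER_UNIT = 5000    # hypothetical measured capacity of one scale unit
SERVERS_PER_UNIT = 4     # hypothetical hardware footprint of one unit

def units_needed(active_users: int) -> int:
    """How many whole scale units are required for the current load."""
    return max(1, math.ceil(active_users / USERS_PER_UNIT))

for users in (1200, 9800, 52000):
    u = units_needed(users)
    print(f"{users} users -> {u} unit(s), {u * SERVERS_PER_UNIT} servers")
```

Because hardware tracks the unit count, both cost and energy consumption grow in step with actual usage, which is exactly the balance between users and performance/hardware the text calls for.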
The time is right to rethink the layers of our IT, our applications and the related components and services. The economic impact of the recent recession not only challenges financial results in trading goods; it also allows us to pause and plan a new way to differentiate in the (software) market with features that increase the environmental soundness of our existing applications. The tools, standard components and APIs for this are available today, so ISVs can start to build a new set of services, whether local (on premises) or in the cloud (as a service). As we improve the quality and architecture of our applications, our responsibility for the environment will increase as well.
Microsoft Global ISV Team