
Thursday, May 3, 2012

Building a "Zero Energy" Data Centre

First, a clarification: we're talking not about literally zero energy, but about a movement towards a zero-net-emissions HPC data centre.

The Energy Efficient HPC Working Group focuses on driving energy conservation measures and energy efficiency in HPC data centre design. The group is open to all parties and can be found online at http://eehpcwg.lbl.gov.

There are three subcommittees:
The infrastructure committee is working on liquid cooling guidelines, metrics (ERE, total PUE), and energy efficiency dashboards.
The system team is working on workload-based energy efficiency metrics, and on system measurement, monitoring, and management.
The conferences team puts on a webinar on the second Tuesday of every month, and is primarily focussed on awareness.

A related pay-for membership group is The Green Grid, which for a $400 annual fee provides access to top resources for learning and applying these practices.

75% of the Top500 supercomputing facilities are in the US, China, and Japan. The top system (in Japan) uses 12.66 MW; the top 10 average 4.56 MW, with an average efficiency of 464 Mflops/watt.
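
To make that efficiency figure concrete, here's a quick sketch (in Python, with placeholder numbers rather than actual Top500 entries) of how an Mflops/watt rating is derived from sustained performance and power draw:

```python
# How an Mflops/watt figure like those above is derived: sustained performance
# divided by power draw. The inputs here are placeholders, not real Top500 entries.

def mflops_per_watt(rmax_pflops, power_mw):
    return (rmax_pflops * 1e9) / (power_mw * 1e6)  # Pflops -> Mflops, MW -> W

print(mflops_per_watt(rmax_pflops=2.1, power_mw=4.56))  # ~461 Mflops/watt
```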

The WestGrid HP compute system at UBC is the 189th most powerful supercomputer in the world, but only the 398th most efficient. This is a derived number, as only half the systems in Canada have submitted their figures to the EE HPC WG.

There are three tiers of power measurement quality, determined by:
1. Sampling rate: more measurements means higher quality.
2. Completeness of what is being measured: covering more of the system means higher quality.
3. Whether common rules are followed for start/stop times.

The EE HPC WG has a beta methodology that is being tested in Quebec.

Energy use in US data centres doubled between 2000 and 2006, from 30 billion kWh per year to 60 billion. Awareness, efficiency, and the economic downturn changed that trend: in 2011, growth since 2006 was calculated to have slowed to 36%.

PUE = total facility energy divided by IT energy.
Equivalently: (cooling + power distribution + misc + IT) divided by the IT energy consumption alone.
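
As a quick illustrative sketch of that calculation (the component values below are made-up numbers, not measurements from any real facility):

```python
# Illustrative sketch of the PUE calculation described above.
# The component values are made-up numbers, not measurements from any real facility.

def pue(cooling_kwh, power_dist_kwh, misc_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    total_kwh = cooling_kwh + power_dist_kwh + misc_kwh + it_kwh
    return total_kwh / it_kwh

# Example: a facility whose overheads add ~40% on top of the IT load.
print(pue(cooling_kwh=300, power_dist_kwh=80, misc_kwh=20, it_kwh=1000))  # -> 1.4
```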

The average PUE is 1.91, according to EPA Energy Star. Intel has a data centre operating at 1.41, and the Leibniz Supercomputing Centre is predicted to operate at 1.15.

PUE does not account for energy re-use, but ERE does. ERE is the same as PUE, except that reused energy is subtracted from the total before dividing by the IT energy.
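
And the corresponding ERE sketch, reusing the made-up numbers from the PUE example, with an assumed 200 kWh of waste heat exported to nearby buildings:

```python
# Sketch of ERE as described: identical to PUE except reused energy is
# subtracted from the total before dividing by the IT energy. Values are illustrative.

def ere(cooling_kwh, power_dist_kwh, misc_kwh, it_kwh, reused_kwh):
    total_kwh = cooling_kwh + power_dist_kwh + misc_kwh + it_kwh
    return (total_kwh - reused_kwh) / it_kwh

# The same facility as above, but exporting 200 kWh of waste heat:
print(ere(cooling_kwh=300, power_dist_kwh=80, misc_kwh=20, it_kwh=1000, reused_kwh=200))  # -> 1.2
```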

HP discusses next steps in re-thinking servers and data centre designs. We're told the median PUE for data centres in 2009 was over 2; today, efficient systems and the use of chillers and water cooling can get you to about 1.3, and the FreeAir/EcoPOD approach can theoretically get you to 1.1.

The lowest a PUE calculation can get to is 1, so we're challenged to look for efficiencies within that "1", paying attention to both the numerator and the denominator of the fraction. CPU (48%) and DRAM (18%) are the biggest energy/heat consumers in HPC systems. HP now tells us about Project Moonshot, which features a workload-tuneable compute-to-I/O ratio, leveraging the cost structures of commodity processors. Reference is made to how ARM processors operate, and how this approach is applicable to efficient processing.

Water is roughly 50 times more efficient at removing heat than air, which makes the argument that cooling our systems with air is significantly less efficient than doing so with water. Compare liquid-cooled to air-cooled engines: air-cooled is much simpler, but highly inefficient.
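
A rough back-of-the-envelope comparison, using textbook values for density and specific heat (the exact advantage depends on which property you compare; the talk quotes ~50x), shows why air has to move so much more volume to carry away the same heat:

```python
# Rough back-of-the-envelope flow comparison, assuming textbook values for
# density and specific heat. The exact advantage depends on which property you
# compare (the talk quotes ~50x); this only illustrates why air needs far more
# volume moved to carry away the same heat load.

def flow_m3_per_s(heat_kw, density_kg_m3, cp_j_kg_k, delta_t_k):
    """Volumetric flow needed to remove heat_kw at a coolant temperature rise of delta_t_k."""
    return (heat_kw * 1000.0) / (density_kg_m3 * cp_j_kg_k * delta_t_k)

heat_kw, delta_t = 100.0, 10.0                            # a 100 kW rack row, 10 K rise
air = flow_m3_per_s(heat_kw, 1.2, 1005.0, delta_t)        # ~8.3 m^3/s of air
water = flow_m3_per_s(heat_kw, 1000.0, 4186.0, delta_t)   # ~0.0024 m^3/s of water
print(f"air: {air:.2f} m^3/s, water: {water:.4f} m^3/s, ratio: {air / water:.0f}x")
```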

Liquid cooling has been around for ages, but has not been attractive from a cost perspective. Power costs are rising, and liquid cooling options are now becoming available and (almost) commodity. The argument from HP is that this is what we will be using in our data centres in the immediate future, although nothing is readily commercially available to the higher-ed market space.

Last year, ASHRAE issued a whitepaper on liquid cooling guidelines. It includes standard architectures and the metrics that must be measurable to achieve success with liquid cooling in your data centre. The specs are rated across five levels of increasing efficiency.

Large improvements have been made in energy efficiency over the past ten years, and the focus will now turn to total sustainability. This means looking at metrics for carbon footprint, water usage, and energy consumption.
A key consideration is the location of the data centre, in particular the temperature and humidity of the locale. Based on all these factors, Canada is actually the best location for efficient data centres in North America. Yet all the new data centres are being built in US locales where land or power is cheap, but the overall efficiency is poor.

An example of a great pilot site is the cow-manure-powered data centre outside of Calgary, AB. The discussion moved to total carbon footprint, and the Greenpeace "So-Coal Network" video is shared as an example of two things: the poor decision-making around coal-powered data centres, and the pressure that can and should be put on North American (and global) organisations to make the right decisions.

The challenge is that it is short-term profitable to pollute. We're posed with the idea of using our campus data centres as a carbon-offsetting tool. By heating buildings with the warm water returned from computer cooling, we can claim not only reduced natural gas use, but also carbon-credit compensation under a cap-and-trade scheme, reducing operating costs.
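
A hypothetical sketch of that offset argument, with an assumed boiler efficiency and natural-gas emission factor (round numbers of my own, not figures from the session):

```python
# Hypothetical sketch of the offset argument above: if waste heat from computer
# cooling displaces natural-gas heating, estimate the gas and CO2 avoided.
# The boiler efficiency and emission factor are assumed round numbers,
# not figures from the session.

def heat_reuse_offset(reused_heat_mwh, boiler_efficiency=0.85,
                      gas_co2_tonnes_per_mwh=0.18):
    """Return (gas avoided in MWh thermal, CO2 avoided in tonnes) for reused heat."""
    gas_avoided_mwh = reused_heat_mwh / boiler_efficiency
    co2_avoided_t = gas_avoided_mwh * gas_co2_tonnes_per_mwh
    return gas_avoided_mwh, co2_avoided_t

gas, co2 = heat_reuse_offset(reused_heat_mwh=1000)  # a year of recovered heat
print(f"~{gas:.0f} MWh of gas avoided, ~{co2:.0f} t CO2 as potential credits")
```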

Lake Mead's loss of water, and its potential complete loss by 2021, is cited as part of the challenge pushing us to think hard about evaporative cooling solutions. Stay tuned for the site www.top50DC.org, which will create a level playing field and a world stage for accountability in truly green computing and data centres.

- Posted using BlogPress from my iPad

Location: W Hastings St, Vancouver, Canada
