
1.10. Energy Efficiency – Initiatives and Organizations

There are numerous measures for increasing energy efficiency in data centers. The chain of cause and effect starts with the applications, continues with the IT hardware and ends with the power supply and cooling systems. A very important point is that measures taken at the very beginning of this chain, i.e. at the causes, are the most effective ones. When an application is no longer needed and the server in question is switched off, less power is consumed, losses in the uninterruptible power supply decrease, and so does the cooling load.

Virtualization – a way out of the trap

Virtualization is one of the most effective tools for a cost-efficient green computing solution. By partitioning physical servers into several virtual machines to process applications, companies can increase their server productivity and downsize their extensive server farms.

This approach is so effective and energy-efficient that the Californian utility corporation PG&E offers incentive rebates of 300 to 600 U.S. dollars for each server that is eliminated thanks to Sun or VMware virtualization products. These rebate programs compare the energy consumption of the existing systems with that of the systems in operation after virtualization. The refunds are paid once the qualified server consolidation project has been implemented. They are calculated on the basis of the resulting net reduction in kilowatt hours, at a rate of 8 cents per kilowatt hour. The maximum rebate is 4 million U.S. dollars or 50 percent of the project costs.
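The rebate arithmetic lends itself to a short sketch. The 8 cents per kilowatt hour rate and the two caps come from the description above; the server counts, power draws, project cost and helper names in the example are assumed values used purely for illustration.

```python
# Illustrative sketch of the rebate arithmetic described above.
# The 8 cents/kWh rate and the 4 million USD / 50% caps come from the text;
# the server counts, power draws and project cost are assumed example values.

HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.08
MAX_REBATE_USD = 4_000_000

def annual_kwh(num_servers: int, avg_power_watts: float) -> float:
    """Annual energy consumption of a group of servers in kilowatt hours."""
    return num_servers * avg_power_watts / 1000 * HOURS_PER_YEAR

def rebate(kwh_before: float, kwh_after: float, project_cost_usd: float) -> float:
    """Rebate based on the net kWh reduction, capped at 4 million USD
    or 50 percent of the project cost, whichever is lower."""
    net_reduction_kwh = max(kwh_before - kwh_after, 0.0)
    uncapped = net_reduction_kwh * RATE_USD_PER_KWH
    return min(uncapped, MAX_REBATE_USD, 0.5 * project_cost_usd)

# Example: 100 physical servers at 350 W consolidated onto 10 hosts at 600 W.
before = annual_kwh(100, 350)   # ~306,600 kWh per year
after = annual_kwh(10, 600)     # ~52,560 kWh per year
print(f"Estimated rebate: {rebate(before, after, 500_000):,.0f} USD")
```

In this assumed scenario, the net reduction of roughly 254,000 kWh per year would translate into a rebate of around 20,000 U.S. dollars, well below both caps.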

By implementing a virtual abstraction layer to run different operating systems and applications, an individual server can be cloned and thus used more productively. On the strength of virtualization, energy savings can in practice be increased by a factor of three to five – and even more in combination with a consolidation to high-performance multi-processor systems.

Cooling – great potential for savings

A rethinking process is also taking place in the area of cooling. The cooling system of a data center is turning into a major design criterion. The continuous increase in processor performance is leading to a growing demand for energy, which in turn leads to considerably higher cooling loads. It therefore makes sense to cool partitioned sections of the data center individually, in accordance with the specific way heat is generated in each area.

The challenge is to break the vicious circle in which an increased need for energy leads to more heat, which in turn has to be cooled away, again consuming a lot of energy. Only an integrated, overall design for a data center and its cooling system allows the performance requirements for productivity, availability and operational stability to be reconciled with an energy-efficient use of the hardware.

In some data centers, the construction design focuses more on aesthetics than on efficiency, one example being the hot aisle/cold aisle arrangement. Water or liquid cooling can have an enormous impact on energy efficiency: as a heat transport medium, water is around 3,000 times more efficient than air. Cooling is discussed further in section 3.5.2.

Key Efficiency Benchmarks

Different approaches are available for evaluating the efficient use of energy in a data center. The approach chosen by the Green Grid organization works with two key benchmarks: Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCIE). While the PUE determines the efficiency of the energy used, the DCIE value rates the effectiveness of the energy used in the data center. Both values are calculated from the total facility power and the IT equipment power: the DCIE is the quotient of IT equipment power and total facility power, and is the reciprocal of the PUE. The DCIE thus equals 1/PUE and is expressed as a percentage.

A DCIE of 30 percent means that only 30 percent of the energy entering the facility is used to power the IT equipment; this corresponds to a PUE of about 3.3. The closer the PUE gets to the value of 1, the more efficiently the data center uses its energy. Google, for example, can claim a PUE of 1.21 for six of its largest facilities.

Total facility power includes the energy used by the power distribution switchboard, the uninterruptible power supply (UPS), the cooling system, climate control and all IT equipment, i.e. computers, servers and the associated communication devices and peripherals.

PUE     DCIE    Level of efficiency
3.0     33%     very inefficient
2.5     40%     inefficient
2.0     50%     average
1.5     67%     efficient
1.2     83%     very efficient
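The relationship between the two benchmarks can be summarized in a small sketch; the power readings used in the example are assumed values chosen to reproduce the 30 percent / 3.3 case above, not figures from the handbook.

```python
# Minimal sketch of the PUE/DCIE relationship described above.
# The example power readings are assumed values for illustration only.

def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_power_kw / it_equipment_power_kw

def dcie(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Data Center Infrastructure Efficiency: IT power as a share of total
    facility power, expressed as a percentage (the reciprocal of the PUE)."""
    return it_equipment_power_kw / total_facility_power_kw * 100

# Example: a facility drawing 1,000 kW in total, of which 300 kW powers the IT equipment.
print(f"PUE:  {pue(1000, 300):.1f}")     # 3.3  -> very inefficient
print(f"DCIE: {dcie(1000, 300):.0f} %")  # 30 % -> only 30% of the energy reaches the IT load
```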

