The green data centre

Data centre managers face two major and related issues this year. First, they must acquire and pay for enough electricity to feed their increasingly densely packed server farms. Then, they have to spend more money, and find yet more power, to keep them cool.

Without a doubt, the data centre power and heat dilemma is rapidly reaching crisis proportions. For the past several years, data centre operators have been fighting to find the space needed to meet spiralling demand for processing capacity. Most have partially fixed this problem by using compact blade servers to boost their MIPS/square foot ratio, but this has created a new problem. Densely packed racks need more power to run, and each watt of electricity used by a processor means another watt of power must be used to keep it cool.

The consequences of this equation are hurting companies in a variety of ways, not the least of which is increased operating costs. According to a recent US study by Hewlett-Packard, a relatively modest 30,000 square foot data centre will now typically house 1,000 server racks. Depending on how many servers they hold, each rack will draw between 5 and 25 kilowatts, and once cooling power is factored in such a facility's total draw can easily reach roughly 10 megawatts.

In the US, with maintenance and amortisation charges added, such a data centre would run up an annual operating bill of around $4.2 million. Europe's higher electricity prices could add a further 30% to this bill.
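
To make the arithmetic behind those figures concrete, the short Python sketch below works through a similar calculation. The per-rack draw, cooling overhead and electricity price are illustrative assumptions chosen to land close to the figures quoted above, not numbers taken from the HP study itself.

# Back-of-envelope model of the data centre power and cost figures
# discussed above. Per-rack draw, cooling overhead and electricity
# price are assumptions for illustration only.

RACKS = 1_000            # racks in a ~30,000 sq ft facility
IT_KW_PER_RACK = 5.0     # low end of the quoted 5-25 kW range
COOLING_FACTOR = 2.0     # one watt of cooling per watt of IT load
HOURS_PER_YEAR = 8_760
USD_PER_KWH = 0.05       # assumed US electricity price
EU_PRICE_UPLIFT = 1.30   # the article's ~30% European premium

total_kw = RACKS * IT_KW_PER_RACK * COOLING_FACTOR   # ~10 MW facility load
annual_kwh = total_kw * HOURS_PER_YEAR               # ~87.6 GWh per year
us_bill = annual_kwh * USD_PER_KWH
eu_bill = us_bill * EU_PRICE_UPLIFT

print(f"Facility load:        {total_kw / 1_000:.1f} MW")
print(f"Annual energy:        {annual_kwh / 1_000_000:.1f} GWh")
print(f"US electricity bill:  ${us_bill / 1_000_000:.1f}m per year")
print(f"EU electricity bill:  ${eu_bill / 1_000_000:.1f}m per year")

Run as written, this yields a load of about 10 MW, roughly 87.6 GWh of energy a year and a US bill in the region of $4.4 million, in line with the study's figures once rounding and the maintenance and amortisation charges are allowed for.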

So what is to be done? Can companies really be digital and green? The simple answer is, yes – but in practice it will be more difficult than a lot of companies have yet realised.

Although ideally all heat and power issues would be solved by building new, state-of-the-art data centres designed from the ground up to accommodate modern IT capacity demands, this isn’t going to happen.

New data centres require cheap real estate and access to high-capacity supplies of both electricity and communications bandwidth; if they are to house servers supporting real-time applications, they must also be within a few miles of users’ offices. Where such premises exist close to business centres like the City, service providers can easily charge $50 per square metre simply to house equipment.

For the majority of organisations this means making do with existing resources. There is no silver bullet here; instead, data centre owners must build new parameters into their capacity management equations.

One starting point may be to install more “intelligent” air-conditioning systems. Companies such as APC and Hewlett-Packard report growing interest in next-generation AC regimes that use sophisticated matrices of sensors and fans.

Such “intelligent” AC regimes make it possible to run more servers in less space without a complete rebuild, but to be effective they will need to be complemented by new rack infrastructure, including air-tight blade chassis such as IBM’s BladeCenter series and HP’s new c-Class range.

HP’s c-Class series optimises the efficiency of air cooling by managing air flow inside as well as outside chassis enclosures. With the addition of complementary power management technology, HP says its c7000 chassis will cut as much as 60% from data centre operating costs. With BladeCenter, IBM has recently taken a different tack, reintroducing water to the data centre for the first time since its 360-series mainframes dispensed with copper-piped cooling systems.

As global warming becomes less and less deniable, and as energy prices inevitably rise in response, many more techniques will be brought to bear on keeping data centres running at optimum capacity, but without costing the earth.

Lee Biggins

Lee Biggins developed as an entrepreneur, from selling cold cans of fizzy drinks to fishermen on hot days to running his own car washing empire as a teenager. This spirit carried through to the developmental...
