The art of cool

There has been a mindshift – maybe even a testosterone shift – in how problems of power and cooling in the data centre are perceived.

“Four or five years ago, we couldn’t get anyone to talk to us about any of this,” says Neil Rasmussen, CTO of data centre infrastructure provider APC. “We’d go into a data centre and talk about energy efficiency, and they’d say, ‘Real men don’t talk about data centre efficiency; we talk about availability, we talk about MIPS [millions of instructions per second], but we don’t talk about efficiency.’”

They do now. The energy IT systems consume, the heat they generate and the energy used to get rid of that heat have become not just a headache for business but “one of the big public policy issues our industry faces today,” says Rasmussen.

What has changed is that the many stakeholders involved, from the vendors to the end users, have come to understand at least some of the issues of what is happening in today’s data centres and how that impacts them – within their organisation and as individuals, says Yogendra Joshi, a professor at Georgia Institute of Technology’s School of Mechanical Engineering.

These issues go far beyond the electricity costs; they are now directly relevant to corporate social responsibility. More and more companies are making public commitments to reduce their carbon footprint, and data centres, with their growing, multi-megawatt profile, stand in the way of that.

There is a big change here, says Rasmussen. Data centres, long regarded as making a highly positive contribution to society – through their enhancement of business productivity and the automation of processes – are now being looked at in a more negative light.

But the issue is not how much energy they use in processing, but how much they waste.

Today, the numbers don’t look good. Anything between 40% and 70% of the energy used by a modern data centre never touches an IT job. Recent figures from the US Department of Energy break down how the energy delivered to a typical data centre is used: about 20% goes into power conversion and distribution; 40% into cooling equipment; and just 40% into the server load and computing operations.
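That DoE breakdown implies a power usage effectiveness (PUE) of about 2.5 – for every watt reaching the IT load, another 1.5W goes on conversion and cooling. A minimal sketch of the arithmetic; the percentage split is the DoE figure quoted above, the PUE framing is added here:

```python
# Shares of total facility energy, per the DoE breakdown quoted above.
power_conversion = 0.20  # UPS, PDU and distribution losses
cooling = 0.40           # chillers, CRAC units, fans
it_load = 0.40           # servers, storage, networking

total = power_conversion + cooling + it_load  # the whole facility, = 1.0

# PUE = total facility energy / energy delivered to the IT load.
pue = total / it_load
print(f"PUE = {pue:.1f}")            # PUE = 2.5

# Equivalently, the fraction of energy that never touches an IT job:
overhead = 1 - it_load
print(f"Overhead = {overhead:.0%}")  # Overhead = 60%
```

On these figures the 60% overhead sits comfortably inside the 40%–70% range cited above.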

“Definitely we want to change that,” says Rasmussen. And he knows that if data centre owners don’t take the initiative, then others will take it for them.

Regulators have been alerted to the high levels of waste. Indeed, some local and central authorities have already put in place incentives for greater energy efficiency, while others are talking of punitive measures and are already banning the construction of new data centres in certain areas.

Wasting a thousand cars

More than 50% of the power going into a typical data centre does not go to the IT load; it goes to power conversion and cooling. That is the reference point that APC CTO, Neil Rasmussen, likes to start with when he describes – with some disgust – the level of waste that is currently a feature of most data centres.

Here is his reckoning:

“A typical 1 megawatt data centre (and that’s not particularly big any more) takes about 177 million kW hours of electricity over its 10-year life, worth about $17 million at a cost of 10¢ a kW hour (the unit cost of electricity in the US varies from 3¢ to 20¢ a kW hour). Each data centre megawatt is equivalent to about 4,300 cars on the road. So if I took out a data centre from a carbon perspective, it is taking out 4,300 cars.

“That 1MW data centre is continuously wasting the equivalent of about 1,000 cars worth of carbon – that is just the waste – due to poor design. I am not even talking about theoretical waste that might be eliminable [by virtualisation or server consolidation or some other approach]. I am talking about stupid waste that is happening every day.

“And there are thousands and thousands of these data centres around the earth. So there is a lot of opportunity here to make savings.

“Various estimates – by APC, the US Department of Energy and others – all confirm that we are looking at savings in the order of millions of cars’ worth of electricity by better data centre design.”
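Rasmussen’s headline figures can be sanity-checked with some simple arithmetic. The 2MW total grid draw assumed below – a “1MW” data centre running at roughly 50% overall efficiency – is an assumption made here purely to reproduce his numbers:

```python
# Sanity check of Rasmussen's figures. The 2 MW total grid draw for a
# "1 MW" data centre (roughly 50% overall efficiency) is an assumption.
HOURS_PER_YEAR = 24 * 365.25   # about 8,766 hours
total_draw_mw = 2.0            # assumed total facility draw from the grid
years = 10
price_per_kwh = 0.10           # 10 cents, his reference price

energy_kwh = total_draw_mw * 1000 * HOURS_PER_YEAR * years
cost_usd = energy_kwh * price_per_kwh

print(f"{energy_kwh / 1e6:.0f} million kWh")  # 175 million kWh
print(f"${cost_usd / 1e6:.1f} million")       # $17.5 million
```

Both results land close to the quoted “about 177 million kW hours” and “about $17 million”.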

Inside businesses, though, there is an understanding that inefficiency in the data centre is costing them dear. “They are starting to realise that inefficiency itself is forcing them to build new data centres that they might not actually need,” says Rasmussen. “If they could pick up 10% to 20% efficiency in that data centre, they could actually put in up to about a fifth more IT equipment in the existing facility, deferring a new data centre build by one or two years.”
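The capacity arithmetic behind that claim is straightforward: if the facility’s grid feed is fixed, every point of efficiency recovered becomes power available for IT load. A sketch, assuming a 2MW feed and a 50% baseline efficiency (both figures illustrative):

```python
# Illustrative figures: a facility with a fixed 2 MW grid feed.
feed_mw = 2.0
baseline_efficiency = 0.50   # half the feed reaches the IT load
improved_efficiency = 0.60   # after a 10-point efficiency gain

baseline_it_mw = feed_mw * baseline_efficiency   # 1.0 MW of IT load
improved_it_mw = feed_mw * improved_efficiency   # 1.2 MW of IT load

extra = improved_it_mw / baseline_it_mw - 1
print(f"{extra:.0%} more IT equipment in the same facility")  # 20% more
```

A 10-point efficiency gain thus accommodates a fifth more IT equipment, matching Rasmussen’s estimate.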

Processes and practicalities

There are two parts to tackling this problem: technical ways of improving data centre efficiency, and processes that can be applied to improve electrical efficiency – internal energy policies and procedures that need to be adopted.

Setting arbitrary targets for improving data centre efficiency may be counter-productive – at least at this stage. With a paucity of information on the energy efficiency of different equipment and the lack of any benchmark data, working out the overall energy efficiency of any data centre is no small undertaking. Moreover, different industries have different IT models: a high availability data centre for financial transactions, where redundant features are used to increase resilience, is going to be less energy efficient than, say, a small university data centre.

What companies need to do, says Rasmussen, is focus on their process capability – to make sure they start systematically going after the problem.

Not many have done so to date. “The status of virtually every data centre today is level zero: they don’t know what their current efficiencies are and they don’t have an active programme to change it.”

On the IT side, key strategies centre around technologies such as virtualisation to optimise server utilisation levels and blade technologies, which can be used in place of multiple, standalone servers.

But that leaves the other area of consumption – power and cooling – where 40% to 70% of the power goes.

On the power and cooling side, technologies are available today that promise significant efficiency gains:

• In-row cooling  By placing cooling systems within the rows of rack units instead of at room level, hot air can be extracted directly as it emerges from the IT equipment, cooled and returned to the servers at ambient air temperature. That reduces the power used by fans by as much as 50%, enables companies to pack systems more densely, and allows cooling capacity to ‘follow’ virtualised IT loads;

• Ultra-high efficiency UPSs  Data centres should upgrade their uninterruptible power supplies, as power losses from UPSs have dropped by a factor of two to three in the past two years;

• High voltage AC power  By switching to European-standard 230/240V power from 120V, US data centres can eliminate significant power losses from their PDU transformers and associated copper;

• Scalable power and cooling  Organisations need to match their power and cooling to their changing needs by selecting equipment that is scalable;

• Capacity planning and management  Software is emerging for optimising the siting of power and cooling equipment in the data centre – of particular importance when virtualisation is being used.

Together, these can cut energy use by 25%, Rasmussen estimates. And the pressure to take such action will only grow. “If within a few years you do not have a large meter on the wall with a real-time read-out of data centre efficiency, then you will not be managing your data centre effectively.”
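Rasmussen’s 25% estimate is consistent with a simple model built on the DoE breakdown cited earlier. The per-measure reductions below – halving conversion losses through better UPSs and distribution, cutting cooling energy by over a third with in-row cooling and right-sizing – are illustrative assumptions, not published figures:

```python
# Baseline shares of facility energy (US DoE breakdown cited earlier).
baseline = {"power conversion": 0.20, "cooling": 0.40, "it load": 0.40}

# Illustrative reductions from the measures listed above (assumptions):
improved = {
    "power conversion": 0.10,  # high-efficiency UPSs, 230/240V distribution
    "cooling": 0.25,           # in-row cooling, scalable capacity, planning
    "it load": 0.40,           # the IT load itself is unchanged
}

saving = 1 - sum(improved.values()) / sum(baseline.values())
print(f"Overall energy saving: {saving:.0%}")  # Overall energy saving: 25%
```

Under these assumptions, the facility-level saving comes out at the 25% Rasmussen estimates.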

David Cliff
