Smart power

Lay a map of a national electricity grid and the plans for a data centre side by side and there is more than a passing resemblance. This is hardly a surprise: a typical data centre, covering some 5,000 square feet, can easily consume five megawatts of power – around the same as a small town.

Nor do the parallels end there. Just as householders have faced rising energy bills over the past few years, companies running large data centres have become more and more concerned about both the cost and availability of energy.

In some cases, a scarcity of electricity supply is preventing enterprises from running their data centres at full capacity, and some local authorities are reluctant to grant consent for new data centre construction. 

In the UK, for example, it is reported that there will not be enough electricity available to support building new data centres within the M25 until after the 2012 Olympics. At the same time, of course, demands on businesses to store and process data continue to grow.

Happily, IT equipment has become dramatically more power-efficient per unit of processing power over the past decade. But as computing capacity has increased, so too has the absolute amount of energy used by IT equipment. A four-core server might be more power-efficient per core than a single-core, single-processor box, for example, but in absolute terms it might still use more power than the older, less powerful equipment.

Then there are the additional power demands that stem from the move to more compact form factors, such as rack-dense servers and blades, as well as larger and more powerful storage arrays and faster networks. The Department of Energy and Climate Change calculates that data centres use 3% of the UK’s energy. Unchecked, that proportion could double by 2020.

A lot can be achieved in data centre efficiency with simple techniques such as refreshing hardware and consolidating IT systems. But IT equipment accounts for only part of the data centre’s infrastructure. IT directors looking to make a dramatic improvement in their energy use need to look at the overall overhead of running the facility, especially power and cooling costs.

The precise power profile of individual data centres varies greatly, with age, construction and even geographical location all affecting efficiency. But industry figures put IT power consumption – servers, storage and networking – at around 40% of total energy use. Cooling, meanwhile, consumes another 40%.

Around 15% of power is lost as it circulates around the power infrastructure, and through equipment such as power strips and UPS (uninterruptible power supply) systems.

The remaining five to six per cent of data centre power goes on support systems such as building management, security, telecoms, lighting and even the administrators’ coffee maker. The cost of the electricity needed to accommodate human workers has prompted some companies to move to “lights out” locations that are managed entirely remotely.
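
Put together, those figures make a rough energy budget. The sketch below scales the approximate percentages above to the five-megawatt facility mentioned at the outset – illustrative arithmetic, not measured data:

```python
# Rough energy budget using the approximate shares cited above, scaled
# to the five-megawatt "typical" facility. Illustrative figures only.

TOTAL_LOAD_KW = 5_000  # five megawatts

shares = {
    "IT equipment (servers, storage, networking)": 0.40,
    "Cooling": 0.40,
    "Power distribution losses (UPS, power strips)": 0.15,
    "Support systems (lighting, security, telecoms)": 0.05,
}

for category, share in shares.items():
    print(f"{category}: {share * TOTAL_LOAD_KW:,.0f} kW")

# Only ~2,000 kW of the 5,000 kW reaches IT equipment, so every watt of
# useful IT load drags roughly 1.5 watts of overhead along with it.
it = shares["IT equipment (servers, storage, networking)"]
print(f"Overhead per watt of IT load: {(1 - it) / it:.2f} W")
```

The striking point is the last line: on these figures, less than half the power entering the building does useful computing work.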

As these figures reveal, the energy footprint of the typical data centre is disparate and complex. To date, that complexity has prevented data centre managers from achieving a detailed view of energy consumption and also from managing that consumption as effectively as possible.

In this, there is yet another parallel with the electricity grid. Just as utility providers are experimenting with so-called Smart Grid initiatives – in which energy consumption and input are monitored in real time and managed dynamically – some of the same concepts, and in some cases the same technologies, are being discussed in a data centre context.

Lessons from the Grid 

The starting point for Smart Grid initiatives is the smart meter, which provides more detailed measurement of power consumption than a conventional meter, particularly of how consumption varies over time.

According to Simon Mingay, research vice president at Gartner, “smart meters are incredibly useful in the context of the data centre, to gain greater granularity of what is being consumed and by what.”

Once energy consumption can be monitored at a more granular level, data centre operators can begin to manage that consumption in a similar fashion. “One of the biggest opportunities to take data centres forward and where best practice is today is [to use] dynamic energy management,” says Mingay. “Historically, data centres have been run full on 24/7 because performance trumps everything. But with good instrumentation it is possible to shut equipment down and change power use based on the load.”

To do this effectively means tying sensors in servers, storage systems and network hardware into a common management tool so that data centre managers have an accurate picture of, and control mechanism for, power use.
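
In skeleton form, such a tool is an aggregator: it polls each device’s power sensor and normalises the readings into one picture. The sketch below assumes a hypothetical read_power_watts() function standing in for whatever each vendor’s management interface (IPMI, SNMP or a proprietary API, in practice) actually exposes:

```python
# Minimal sketch of pulling per-device power readings into one common view.
# read_power_watts() is a hypothetical stand-in: a real estate would query
# each vendor's management interface (IPMI, SNMP or a proprietary API).

import random
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    kind: str     # "server", "storage" or "network"
    watts: float

def read_power_watts(device: str) -> float:
    """Placeholder sensor query; returns a simulated reading."""
    return random.uniform(150.0, 450.0)

def snapshot(inventory: dict[str, str]) -> list[Reading]:
    """One consistent picture of power draw across the whole estate."""
    return [Reading(dev, kind, read_power_watts(dev))
            for dev, kind in inventory.items()]

def totals_by_kind(readings: list[Reading]) -> dict[str, float]:
    """Roll readings up by equipment type for the data centre manager."""
    totals: dict[str, float] = {}
    for r in readings:
        totals[r.kind] = totals.get(r.kind, 0.0) + r.watts
    return totals

if __name__ == "__main__":
    inventory = {"web01": "server", "san01": "storage", "core-sw1": "network"}
    print(totals_by_kind(snapshot(inventory)))
```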

But it is not just hardware that needs to be monitored if a full picture of energy consumption at any moment is to be achieved. IT departments also need detailed information on software workloads, so they can match this to energy use data from the hardware, and establish where the peaks lie. Armed with this information, data centre managers can identify devices that are using more power than they need.

Richard Lanyon-Hogg, chief technology officer for IBM UK, points out that some systems, especially networking hardware, run around the clock when they might only be needed during office hours. With a better picture of which systems use power and when, those systems can run at lower performance levels, cutting power demands without affecting application performance. 
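
In code, that policy amounts to little more than a clock check. In the sketch below, set_power_profile() is a hypothetical hook for whatever control a given device actually exposes, and the office hours are an assumption:

```python
# Sketch of the office-hours pattern described above: run always-on kit
# at a lower performance level when it is not needed. set_power_profile()
# is a hypothetical hook for a vendor's power-management control.

from datetime import datetime

OFFICE_HOURS = range(8, 18)   # assumed 08:00-17:59 working day

def set_power_profile(device: str, profile: str) -> None:
    """Placeholder: a real implementation would call the device's
    management interface. Here we simply record the decision."""
    print(f"{device}: switching to {profile!r} power profile")

def apply_schedule(device: str, now: datetime | None = None) -> str:
    """Full performance in weekday office hours, low power otherwise."""
    now = now or datetime.now()
    in_hours = now.weekday() < 5 and now.hour in OFFICE_HOURS
    profile = "full" if in_hours else "low"
    set_power_profile(device, profile)
    return profile

if __name__ == "__main__":
    apply_schedule("core-switch-2")
```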

It is not just IT equipment that is getting smarter: vendors of supporting infrastructure, including power distribution units, UPS systems and switches, are starting to incorporate sensors too. These systems help managers form a much more detailed picture of energy use across the data centre aisles, not merely from the equipment within the racks.

Hewlett-Packard defines this concept of sensor-driven data centre energy management as a “data centre Smart Grid”. According to Doug Oathout, vice president for green IT at HP’s Enterprise Business division, the company has built a “sea of sensors” to provide highly detailed, as well as timely, energy use information. 

“Our software can control power as well as manage servers,” he explains. “Servers today are better at energy management and we are opening the management interfaces to third parties, so they can do capacity planning. In order to manage the data centre from a capacity standpoint, we need accurate power [information] on an ongoing basis.”

Most data centres, he suggests, overprovision both power and cooling systems because managers have no real idea of actual usage. 

Smart buildings

The move to smart management of data centre power may not originate in the data centre itself. An admittedly limited number of organisations have begun to adopt smart building management technologies – where computers control heating, lighting and ventilation – and these can be extended to their computer rooms and data centres. 

“Some customers are starting to use smart management systems they have been using in their other buildings in the data centre,” says Marc Donnelly, who heads the UK utilities business at Cisco. “We are certainly seeing it being adopted first in the ‘carpeted office’, but it is much more business critical in the data centre, and there is more regulation.”

Indeed, by managing data centres in the context of their total consumption, organisations might be able to circumvent the restrictions on energy supply that currently constrain data centre expansion. Already, some larger businesses are using advanced power management techniques to negotiate lower power tariffs, or even to secure a supply when the local utility provider might otherwise be reluctant to allocate electricity to a large site.

These deals involve tying IT electricity use and building control systems together to form a better picture of real-world energy use. Businesses then use this picture of their internal “grid” to negotiate agreements with electricity suppliers, where they agree to cap their consumption in times of peak demand in return for price reductions. 

“Companies are aggregating multiple buildings, in order to agree with power companies that their buildings will shed a certain load [at peak times] in return for lower energy rates,” says Oathout. “If you amalgamate, there might be an opportunity to shed half a megawatt or a megawatt.”
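
The shape of such a deal is easy to model. In the sketch below, only the one-megawatt shed comes from the quote above; the tariff, consumption and peak-hour figures are invented purely for illustration:

```python
# Illustrative arithmetic for a load-shedding agreement. Only the
# one-megawatt shed comes from the article; every other figure here
# is an assumption made up for illustration.

SHED_KW = 1_000            # load shed at peak times (from the quote)
PEAK_HOURS_PER_YEAR = 200  # assumed hours of peak demand per year
STANDARD_RATE = 0.12       # assumed standard tariff, £ per kWh
DISCOUNTED_RATE = 0.11     # assumed negotiated tariff, £ per kWh
ANNUAL_KWH = 40_000_000    # assumed total annual consumption

# Energy deliberately forgone at peak times...
forgone_kwh = SHED_KW * PEAK_HOURS_PER_YEAR
# ...against the saving from the lower tariff on all consumption.
saving = ANNUAL_KWH * (STANDARD_RATE - DISCOUNTED_RATE)

print(f"Peak energy forgone: {forgone_kwh:,} kWh a year")
print(f"Tariff saving:       £{saving:,.0f} a year")
```

On these made-up numbers, the business gives up 200,000 kWh of peak consumption in exchange for a £400,000 annual saving – which is why the aggregation Oathout describes can appeal to both sides.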

Lack of trust

Despite the ample opportunity for greater efficiency through greater visibility, the take-up of ‘smart data centre’ technologies remains limited so far, analysts say.

“Almost no-one is doing anything with the intelligence available to them,” says Gartner’s Simon Mingay. “IBM, HP and Sun have gone to significant lengths, as have the rack vendors, to deliver information about how much a rack or server or blade is consuming. But very few people are using it.”

“There is fantastic technology out there but no-one is using it, because they don’t trust it. [IT managers] want control, and nobody wants to give control to software, potentially putting performance at risk.”

Mingay predicts that if energy costs rise more companies will start to adopt more advanced power management technologies. But, he suggests, most of the companies using these technologies today do so for reasons of corporate social responsibility or brand management, rather than IT efficiency.

In the short term, businesses appear to be more willing to upgrade or replace IT equipment with more energy-efficient models than to undertake the large-scale improvements needed to improve power and cooling. 

One reason is the life cycle of data centres. Power and cooling systems are commonly designed to last 25 years. IT equipment has a more rapid replacement cycle, making it feasible to introduce energy-efficient replacements as part of the overall update cycle.

Arguably, this bodes well for the introduction of IT equipment with smart energy monitoring functionality. But the prospect of the supporting power infrastructure gaining these features looks less certain, at least among mainstream businesses.

The companies most likely to deploy building-wide energy management technologies in the data centre are those that run very large installations, such as hosting or cloud computing providers.

Google, for example, has spoken publicly about its energy use, and says that its best data centres achieve a PUE (power usage effectiveness) ratio of 1.13 – meaning that every watt of power delivered to IT equipment carries an overhead of just 13 per cent – well below the industry average of between 1.5 and 1.8. But even Google admits these levels of efficiency are only possible in its most modern data centres.
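
The metric itself is simple arithmetic: total facility power divided by the power that reaches IT equipment, so the overhead fraction is the PUE minus one. A minimal worked example using the figures quoted here:

```python
# PUE = total facility power / power delivered to IT equipment,
# so the overhead fraction is simply PUE - 1.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# Google's best figure: a facility drawing 1,130 kW to support 1,000 kW
# of IT load has a PUE of 1.13, i.e. a 13 per cent overhead.
print(pue(1_130, 1_000))       # 1.13
print(pue(1_130, 1_000) - 1)   # 0.13

# The industry average of 1.5 to 1.8 implies 50 to 80 per cent overhead.
```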

The applicability of smart energy management will also vary according to the operating environment. For example, providers such as Google, which design and assemble their own systems, are better placed to optimise their set-up than data centre operators that manage heterogeneous IT environments.

The example of hosting and business continuity provider SunGard Availability Services proves the point. “Every component of the data centre is measured and monitored,” explains Dave Gilpin, the company’s chief strategy officer. “We have a sophisticated building management system that shows every power load and cycle, down to fan speeds and return temperatures, and how many chillers I am using. We can measure this for customers at the rack level.” 

Even in an environment such as this, the best possible PUE is between 1.6 and 1.7 for an air-cooled data centre in most of the UK, Gilpin says, although a PUE of 1.4 is possible in Scotland or with water cooling. This level of efficiency is about as much as can be achieved in most data centres, given the heterogeneous environments they contain and the need to support several generations of IT equipment.

Clearly, then, the efficiency improvements made possible by sophisticated monitoring technology are secondary to characteristics that are harder to address, such as the data centre’s size and location, and the nature of the IT environment it houses.

Furthermore, prevention is always better than cure, and there are a number of data centre efficiency measures that could provide more immediate improvements than installing a smart monitoring infrastructure, such as introducing hot and cold aisles or running hardware at a higher temperature. “Even cleaning the data centre makes an enormous difference,” says Gilpin. “We spend £50,000 every six months on a deep clean of our data centre, and that brings efficiency right up.”

Nevertheless, IT departments tend to be advocates of the power of computing to analyse and manage business functions more effectively, and there is ample opportunity for them to apply that to one of their most pressing concerns, namely data centre power.
