Fighting the fire

Power-saving chips

For most of its life the chip industry has relentlessly pursued ‘bangs for bucks’, and thanks to Moore’s Law it has succeeded. But, today, ‘more MIPS for your money’ is no longer what every customer wants to hear. Instead, what many server buyers are asking for – indeed, screaming for – is more MIPS from fewer watts, and the industry (realising the commercial imperative) is falling over itself to respond.

Intel and AMD have both taken on-device power management to new levels in their 64-bit development programmes, and IBM has gone to more fundamental levels – exploiting the better conductivity of copper to build more power-efficient buses into its chipsets, and deploying its silicon-on-insulator (SOI) technology to reduce wasteful current leakage in chip circuits. Both these technologies are now used in the company's X3 chipset.

Although AMD has consistently declined to confirm or deny it, Google's decision to favour AMD over Intel chips is widely thought to have been motivated by Opteron's then-superior performance-per-watt profile as much as by the processor's price/performance.

More recently, the launch of what Sun claims to be "the world's first eco-responsible processor", the UltraSPARC T1, appears to have breathed new life into an architecture many believed to be nearing the end of its commercial life.

The UltraSPARC T1 (codenamed Niagara) uses multi-threading technology to keep its instruction pipelines busy in heavy transaction-processing environments – ensuring that more of the power it draws does real work. Sun claims mass adoption of its new chip would eliminate the need for half the world's web servers, "slashing power requirements and having the same effect in reducing carbon dioxide emissions as planting a million acres of trees."

This commitment to saving the world appears to be good business: since launching the UltraSPARC T1 last November, Sun claims to have attracted more new customers to its SPARC architecture than at any other time in the last seven years.

Cooling

Data centres were once home to complex and potentially leaky tangles of copper tubing which pumped cool water through the arteries of hot mainframes. Today, the most practical way of keeping servers cool – at least up to about 10kW per rack – is air.

Unfortunately, as well as being a big part of the solution, air conditioning is also a big part of the data centre cooling problem. According to Chandrakant Patel, head of Hewlett-Packard's thermal management research group – the HP 'Cool Team': "There is now a one-to-one relationship between the two. For every watt of power your processors consume, you need another watt to keep them cool."
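To put numbers on that rule of thumb, here is a minimal sketch in Python. The one-to-one ratio is Patel's; the wattages and the function itself are invented for illustration:

```python
# Illustrative arithmetic for the one-to-one cooling rule of thumb
# quoted above. The 10kW rack figure echoes the article; nothing
# here is a measurement.

def total_draw_kw(it_load_kw: float, cooling_ratio: float = 1.0) -> float:
    """Total facility draw: IT load plus cooling overhead.

    cooling_ratio is watts of cooling needed per watt of IT load
    (1.0 under the one-to-one rule of thumb).
    """
    return it_load_kw * (1.0 + cooling_ratio)

rack_kw = 10.0  # a fully loaded air-cooled rack near the practical limit
print(f"{rack_kw:.0f} kW of servers -> {total_draw_kw(rack_kw):.0f} kW at the meter")
# prints: 10 kW of servers -> 20 kW at the meter
```

In other words, every watt saved at the processor is worth roughly two at the meter.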

The fact is that most air conditioning systems are inherently inefficient. In older data centres, air is expected to circulate in a relatively unmanaged way, rising through the vents in raised floors to be sucked into hot server racks, and then blown out at the top to make its own way out of the server room. This isn’t always, or even usually, what happens.

According to Tikiri Wanduragala, a senior consultant with IBM's Systems and Technology Group, traditional air conditioning systems are not dynamic enough to cope with today's constantly changing data centre environments. Whereas servers installed in one corner of a room could once be relied on to still be in the same place several years later, doing much the same job as when they were installed, today's rack-mounted and blade-based machines are constantly being repurposed or even physically relocated, with the result that "the air-flow you planned on Monday is different from the one you need on Friday," says Wanduragala.

For racks heading above 10kW, many companies are looking at water cooling, but below that threshold most are likely to install new blade and server enclosures that offer greater potential for monitoring, analysing and dynamically managing airflow.

Hewlett-Packard's recently announced HP BladeSystem c7000 has a good claim to being the current state of the art in data centre architecture. At the heart of this "data centre in a box" is a hermetically sealed blade chassis that prevents air from leaking randomly from places where it needs to be to places where it doesn't. With air controlled in this way, its cooling potential can be maximised by an internal network of so-called 'Active Cool' fans, which dynamically redirect air around the enclosure as the workloads – and hence the thermal profiles – of the servers it contains change.
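HP has not published how the c7000's thermal logic actually works, so the following is purely a sketch of the general principle – a proportional controller that spins a zone's fans only as fast as its inlet temperature demands. The zone names, setpoint and gain are all invented:

```python
# Hypothetical sketch of closed-loop fan control, illustrating the
# general idea only; this is not HP's algorithm. Setpoints, gains
# and zone names are assumed for the example.

SETPOINT_C = 25.0             # target inlet temperature (assumed)
MIN_RPM, MAX_RPM = 2000, 10000
RPM_PER_DEGREE = 400          # proportional gain (assumed)

def fan_rpm(inlet_temp_c: float) -> int:
    """Spin faster only in proportion to how far a zone runs hot."""
    excess = max(inlet_temp_c - SETPOINT_C, 0.0)
    return int(min(MAX_RPM, MIN_RPM + RPM_PER_DEGREE * excess))

# Cooler zones tick over at minimum speed; the hot zone gets the airflow.
for zone, temp in {"zone-1": 24.0, "zone-2": 31.5, "zone-3": 27.0}.items():
    print(zone, fan_rpm(temp), "rpm")
```

The point of such a loop is that fan power tracks actual heat load rather than a worst-case assumption, which is where the claimed savings come from.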

HP claims that the thermal logic technology contained in the c7000 will allow it to draw 30% less power than equivalently configured 1U rack systems.

Virtualisation

Direct, physical approaches to improving the performance-per-watt of the data centre will continue to slice away at the problem, but for some organisations a faster and more cost-effective solution has been found in a logical one: virtualisation.

Conventionally deployed applications generally require the resources dedicated to them to be significantly over-provisioned as a hedge against infrequent, but business-critical, workload peaks. This wastes power, because even when a system is running at 15% of its capacity (which is not uncommon) it may still be consuming 50% of the power it would need at peak performance.
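That relationship between load and draw can be captured in a simple linear power model. The shape of the curve, the peak wattage and the idle fraction below are assumptions, chosen only to reproduce the paragraph's 15%-load example:

```python
# A simple linear power model: an idle server still draws a large
# fixed fraction of its peak power. The idle fraction and peak
# wattage are assumptions for illustration, not measurements.

P_PEAK_W = 400.0       # hypothetical server peak draw
IDLE_FRACTION = 0.45   # fixed draw at 0% utilisation (assumed)

def power_w(utilisation: float) -> float:
    """Draw rises linearly from the idle floor to peak."""
    return P_PEAK_W * (IDLE_FRACTION + (1.0 - IDLE_FRACTION) * utilisation)

u = 0.15
print(f"At {u:.0%} load: {power_w(u):.0f} W ({power_w(u) / P_PEAK_W:.0%} of peak)")
# prints: At 15% load: 213 W (53% of peak)
```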

In a virtualised environment, multiple applications share a common set of resources, accessing as much or as little of the aggregate capacity available as required. Managed properly, virtualised servers can be safely run at 60% or even 80% of their peak capacity, with far less wasteful idle time. Put simply, fewer servers can cope with the same workload, and so consume less power even though, in operational terms, performance is often improved.
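Under the same assumed model, the consolidation arithmetic looks like this – again a sketch with invented figures, not a benchmark:

```python
import math

# Consolidation arithmetic under the same assumed linear power model.
# All figures (host counts, utilisations, wattages) are invented.

def power_w(utilisation: float, peak_w: float = 400.0, idle_frac: float = 0.45) -> float:
    """Draw rises linearly from a fixed idle floor to peak."""
    return peak_w * (idle_frac + (1.0 - idle_frac) * utilisation)

before_hosts, before_util = 10, 0.15
target_util = 0.70  # a safe operating point for well-managed virtualised servers

# The same aggregate work needs far fewer hosts at higher utilisation.
work = before_hosts * before_util
after_hosts = math.ceil(work / target_util)

before_w = before_hosts * power_w(before_util)
after_w = after_hosts * power_w(work / after_hosts)
print(f"{before_hosts} hosts -> {after_hosts} hosts; "
      f"{before_w:.0f} W -> {after_w:.0f} W ({1 - after_w / before_w:.0%} saving)")
# prints: 10 hosts -> 3 hosts; 2130 W -> 870 W (59% saving)
```

Because each retired host takes its idle floor with it, the saving comfortably exceeds what the raw utilisation figures alone would suggest.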
