According to some server vendors, customers mount as many as 280 blade servers in a single 42U rack. That may seem appealing to organisations with high real estate costs and a desire to manage fewer units. But the downside is that systems packed so densely generate so much heat that internal components will automatically shut down – before they literally melt.
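The scale of the problem is easy to see with some back-of-envelope arithmetic. The per-blade wattage and rack cooling capacity below are illustrative assumptions, not figures from any vendor:

```python
# Rough rack heat-load estimate. WATTS_PER_BLADE and
# TYPICAL_RACK_COOLING_KW are illustrative assumptions.
BLADES_PER_RACK = 280          # upper figure cited by some vendors
WATTS_PER_BLADE = 250          # assumed draw per blade (hypothetical)
TYPICAL_RACK_COOLING_KW = 10   # assumed cooling capacity (hypothetical)

heat_load_kw = BLADES_PER_RACK * WATTS_PER_BLADE / 1000
shortfall_kw = heat_load_kw - TYPICAL_RACK_COOLING_KW

print(f"Estimated heat load: {heat_load_kw:.0f} kW per rack")
print(f"Cooling shortfall: {shortfall_kw:.0f} kW")
```

Even with conservative per-blade figures, the heat load of a fully packed rack lands an order of magnitude beyond what conventional room cooling was designed for.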
This re-emerging issue of heat came to the fore at Information Age's most recent roundtable debate. Every month the magazine's editors gather 20 senior IT executives to share their views and experiences on strategic IT issues, with the debates run under the Chatham House Rule, which enables delegates to speak freely without fear of being quoted directly.
At the December lunch, sponsored by data centre infrastructure protection specialist APC, most delegates reported a growing concern over a lack of understanding about data centre design and the need for adequate cooling and powering of server and storage racks.
Density has been a goal for years. One delegate, a data centre manager at a UK-based utility company, said that for him the pressure in data centres has always been footprint. Now, with developments such as blade computing, devices have become ever more concentrated, allowing organisations to cram scores of blades into a single rack. "Little mention is made of power or cooling in the vendor's specification. But in fact these two factors are likely to have the most impact on the cost of data centres in the future," he said. Much of the skill set required to design a modern data centre builds on an understanding of ‘old-fashioned' mainframe environments, said one IT services consultant.
But while power and cooling are becoming important considerations in data centre design, many organisations struggle to make a business case for more reliable – and therefore more expensive – power and cooling facilities in data centres.
That means that IT has to have a clear say in the design of the physical facilities that will house its equipment. One delegate from a financial services firm recounted how his department had been expected to move into a data centre designed by the building facilities management unit without much consultation. "They'd given no consideration as to how we were going to cool or even get power to the racks," he said. "We were lucky we could get it to work. Now we always get involved in the design stages."
Where investment becomes a critical issue, the most logical option is to outsource the data centre, leaving the power and cooling issues to third-party hosting companies, he added.
But many appeared reluctant to go down that route. "You become reliant on other people's pipes and communications – we know the consequences of that," said one. The head of IT at a multinational electronics manufacturer said the best way to free up budgets for such infrastructure investment was to fine-tune the message. "You have to present it to the board as a risk management problem. Show them the business impact of an overheated data centre."
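Framed as risk management, the argument reduces to an expected-loss calculation. The probabilities and costs below are hypothetical placeholders, not figures from the debate:

```python
# Illustrative expected-loss framing for a board presentation.
# All three inputs are hypothetical assumptions.
OUTAGE_PROBABILITY_PER_YEAR = 0.10  # assumed chance of a heat-related shutdown
MEAN_OUTAGE_HOURS = 4               # assumed time to recover service
COST_PER_DOWNTIME_HOUR = 50_000     # assumed business impact per hour (GBP)

expected_annual_loss = (OUTAGE_PROBABILITY_PER_YEAR
                        * MEAN_OUTAGE_HOURS
                        * COST_PER_DOWNTIME_HOUR)
print(f"Expected annual loss from heat-related downtime: "
      f"£{expected_annual_loss:,.0f}")
```

Set against that figure, the capital cost of better power and cooling becomes an insurance premium rather than an IT overhead.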