IT analyst group Gartner has a well-known model for the adoption of technology and related services: a brief flurry of over-excitement is swiftly followed by disenchantment, before the technology painstakingly gains widespread acceptance. The co-location market – where companies rent out purpose-built data centre space to corporate customers and telecoms vendors – has bucked that trend: after hitting the first two markers on the curve, the usually gradual pace of mainstream adoption has been skipped as demand has gone into overdrive.
Facilities built in the dot-com era that only a few years ago were sitting largely empty are now full to bursting, and consolidation has swept through the industry as vendors have realised just how much money there is to be made.
The clamour for UK data centre space – especially in and around London – is fierce. As a sellers’ market has emerged, co-location providers have increasingly been able to dictate terms – and prices – to customers.
The distortion of the Gartner Hype Cycle – indeed, the market transformation itself – has not been spurred by any extraordinary surge in applications demand: hunger for new applications carries on rising, but without any dot-com or Y2K-style explosion. Rather, the co-location sector has been set on fire by one overriding factor: the increasing power demands of modern IT equipment.
In that regard, though, the co-location industry faces the same issue as its customers. And some service providers are much better equipped than others to deal with their new-found popularity.
For the foreseeable future, that popularity is not going to wane. The amount of new, prime data centre space that will become available to businesses over the coming years is going to fall well short of demand, warns Philip Low of data centre sector analyst BroadGroup – in London, space will increase by just 2% per year. That has given some providers confidence: BT, for one, predicts 20% growth for its data centre business in each of the next three years.
Furthermore, says Low, that confidence looks well placed. The need for businesses to lower their operational costs, combined with more onerous regulatory environments, will continue to drive “the need to migrate to third-party outsourcing”, he says.
And while normal market dynamics might dictate that the number of service providers (from those simply renting cages in purpose-built space through to those offering fully managed services) would be surging at this point, that is not the case. In fact, the "significant" barriers to entry, says Anthony Foy, group MD of co-location provider InterXion, mean that few new players are likely to emerge at this stage – and cut-throat competition for sites with suitable communications and power feeds makes entry harder still, he adds.
“Volatility in the energy markets resulted in electricity prices rising by 70% overnight.”
Dave Gilpin, SunGard Availability Services.
With suppliers in the driving seat, there is some talk among customers that they are not getting the service they would like. As one delegate to a recent Information Age event complained: “We’re tied into a contract based on the floor space our systems occupy, but in an era of denser kit, that is no longer our chief concern. Square footage should not be the metric we are charged on.”
While selling data centre services on the basis of footprint made perfect sense historically, the new breed of small, computationally powerful – but electricity-hungry – servers has rendered that model less than perfect. The rising price of power and pressure to combat global warming have heightened the understanding of the relationship between IT and energy.
Co-location providers have certainly felt the consequence of rising energy prices. Until two years ago, SunGard Availability Services enjoyed a fairly ‘generous’ contract with its electricity provider. When that expired, however, the renewed deal involved power charges rising by 70% practically overnight, says Dave Gilpin, product development director at SunGard. And ultimately, that has to be passed on to customers.
Increases in wholesale electricity prices have persuaded some co-location providers to invest in ways of shielding customers from the volatility of the global energy markets. Both Telehouse Europe and Global Switch have engaged in the energy futures markets, allowing them to set prices as far as 24 months in advance, avoiding short-term price spikes. This ensures that customers are charged at a fixed rate, and protected “as far as possible from ad hoc price fluctuations,” says Robert Harris, technical services director of Telehouse Europe.
Such changes reflect the degree to which power has become a significant factor in calculating the total cost of ownership of a data centre. And while not all co-location providers were quick to adjust, most now include some form of power calculation in their pricing, reducing the primacy of footprint, says BroadGroup's Low.
Nevertheless, this change has highlighted inequity in some of the more traditional co-location pricing models. “Traditionally, in co-location facilities, power costs are shared across customers depending on their infrastructure and space requirements,” says Telehouse’s Harris.
As power becomes a more significant aspect of charges, customers are keen to see that the amount they are charged accurately reflects their usage. The result has been the development of rack-level metering.
But even that presents some challenges to providers. It may wrongly give the impression that power can simply be paid for as needed – but of course, the amount of power facilities can deliver is finite, and so some constraints on the power a rack can draw are inevitable.
For SunGard Availability Services this has meant developing charging models that are both transparent and workable from a client perspective, insists Gilpin. Customers' energy requirements vary throughout a month, so power limits have to include some flexibility: usage is calculated as an average over the month. On some days a customer may exceed the notional limit its racks can draw; on others it may fall well under. "We're not trying to say, 'You can't rob Peter to pay Paul'," he says.
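An average-based charging model of the kind Gilpin describes can be illustrated with a simple calculation: bill on the month's total usage, and apply a surcharge only if the monthly average breaches the contracted cap. This is an illustrative sketch only – the function, rates and limits are invented for the example, not SunGard's actual billing logic.

```python
# Illustrative sketch of average-based power billing; the figures and the
# surcharge mechanism are assumptions, not any provider's actual model.

def monthly_power_bill(daily_kwh, contracted_avg_kwh, rate_per_kwh, surcharge_rate):
    """Bill on total usage; surcharge only if the monthly AVERAGE exceeds the cap."""
    avg = sum(daily_kwh) / len(daily_kwh)
    base = sum(daily_kwh) * rate_per_kwh
    # Individual days above the cap are tolerated, as long as the
    # average across the month stays within the contracted limit.
    if avg <= contracted_avg_kwh:
        return base
    excess_kwh = (avg - contracted_avg_kwh) * len(daily_kwh)
    return base + excess_kwh * surcharge_rate

# A sample month: some days spike over a 100 kWh/day cap, but the
# average holds, so no surcharge applies.
usage = [90, 110, 95, 105, 100]
bill = monthly_power_bill(usage, 100, 0.12, 0.30)
```

The design point is the one Gilpin makes: a hard per-day cap would penalise normal workload variation, whereas averaging tolerates daily spikes while still bounding what the facility must supply over the billing period.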
But delivering that flexibility still leaves service providers with a potential problem: if all of their customers were to exceed their notional limits at precisely the same moment, a service provider would encounter problems with the generating companies. To date, this has been a theoretical problem, but it remains the sort of uncertainty that customers prefer to avoid, says Gilpin. "We have yet to reach the point where we're having brownouts," he says.
Meanwhile, some co-location providers are keen to see the power problem removed from customers’ hands entirely. They can achieve this by moving to a managed services model. Under such a model, providing power is just one of the many services that go into the overall charges for housing, operating and maintaining an organisation’s server estate.
“Managed services is very much the Holy Grail” for co-location service providers, says Low, and where space is at a premium, co-location providers may be able to entice additional service spending out of customers. However, it is also the most competitive segment of the market. Specialist business continuity and disaster recovery companies are pushing aggressively into this market, he says, and the telecoms operators have traditionally been very strong there too.
Many of the managed service offerings have been seen as “a form of insurance”, says Paul Myerson, an analyst at the Enterprise Strategy Group: there is a clear distinction between those willing to pay for that insurance and those willing to bear the risk.
In effect, companies were traditionally less sensitive to price than may otherwise have been the case: once continuity became a business imperative, company purse-string holders had to bite the bullet. However, today the energy question is pressing even here, says Martin Lynch, CEO of high-availability data centre operator, Infinity. Electricity is the single biggest data centre cost, so calculating the detailed requirements and allowing for scalability – rather than over-specification – is becoming an increasingly important facet of delivering continuity services.
The notion that survival relates directly to the amount of electricity consumed by and available to an organisation’s critical IT infrastructure is appreciated by only a fraction of the wider business community. Nevertheless, that situation highlights the stakes involved in modern data centre economics.
Dead on arrival
Robust; fault-tolerant; high-availability: systems vendors are forever keen to emphasise the reliability of their equipment. But moving that kit around – especially bringing it into data centre or co-location facilities – seems to test some of these claims.
Anecdotal evidence from data centre operators suggests that when customers move their systems between sites, a surprisingly high proportion of the equipment turns up ‘dead on arrival’.
One director of operations at a large investment bank with experience of several data centre moves says his company now budgets for 30% of its servers to fail irrevocably during transfer.
Estimates such as that are “reasonably consistent” with the experience of data centre operator Global Switch. “Many [customers] haven’t considered the risk until we flag it up to them,” says Luke Mann, operational project manager at Global Switch.
It remains unclear why equipment failure rates are so alarmingly high. Asked why equipment commonly fails during a move, Mark Jarvis, chief marketing officer at computer maker Dell, said he had not seen evidence that it was a widespread problem.
Nevertheless, the dangers of transportation to computer equipment are well understood. Systems and software giant Sun Microsystems spent the summer of 2007 marketing its Project BlackBox – a data centre packed into a shipping container. Inside the container, racks were supported by a sophisticated bed of shock absorbers, supposedly enabling the equipment to survive even if the container were dropped from a height of four feet.
Martijn Lohmeije, a project manager at a Dutch IT services company who was recently charged with moving a client's data centre, describes the difficulties that equipment failure can cause in a virtualised environment. He had to move 150 virtual servers running on just 30 physical servers (indeed, 120 of those virtual servers resided on just six physical machines). The big risk was that "servers would not survive the transport phase and we would be left with too few cluster hosts to run all the servers," says Lohmeije.
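The risk Lohmeije describes can be framed as a simple capacity check: given how many virtual servers each surviving host can run, how many hosts can die in transit before the remainder can no longer carry the full load? The sketch below is hypothetical – the per-host capacity figure is an assumption for illustration, not a detail from the actual move.

```python
# Hypothetical capacity check for a clustered-VM move: how many host
# losses can the migration tolerate before VMs have nowhere to run?
# The per-host VM capacity is an assumed figure, not from the real move.

def max_tolerable_host_failures(total_vms, total_hosts, vms_per_host_capacity):
    """Largest number of dead hosts that still leaves room for every VM."""
    for failed in range(total_hosts + 1):
        survivors = total_hosts - failed
        if survivors * vms_per_host_capacity < total_vms:
            return failed - 1
    return total_hosts  # capacity never exhausted even with no survivors

# Lohmeije's move: 150 VMs on 30 hosts. If each host could run, say,
# six VMs, the cluster could absorb the loss of five hosts.
tolerance = max_tolerable_host_failures(150, 30, 6)  # -> 5
```

At a failure-budget assumption like the investment bank's 30% (nine of 30 hosts), a cluster with that capacity would be over the line – which is why Lohmeije rented additional hardware rather than relying on the existing configuration's headroom.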
Nevertheless, Lohmeije’s experience is testament to the benefits of careful planning. The move he oversaw took place over the course of a weekend, using two transport companies, each with two trucks; the second truck was not allowed to depart until the first had called to confirm its safe arrival. The ISO-certified transport companies used special boxes with shock-damping packing material in the base. Snapshots were taken prior to the move, and additional hardware was rented in case equipment failure rates exceeded what the redundancy in the existing configuration could cover.
Ultimately, only two machines died in transit, says Lohmeije. Curiously, both were less than a year old, and neither showed any outward sign of damage sustained in transport.