The future of the data centre

Before he jumped ship and formed his own technology company, Vern Brownell was becoming increasingly uncomfortable with his job.

As the chief technology officer of Goldman Sachs, Brownell had presided over a huge expansion in the international investment bank's IT capacity.

Over time, a handful of mainframes was replaced, first by tens, and then by thousands of Unix and Windows servers. Thousands of other devices compounded that complexity. Complicated systems management tools were added to try to manage the mess.

"I looked at all this and thought: 'I am the guy who created this environment," he says, "I felt responsible for creating a nightmare." Brownell knows that, like his peers running complex, IT-dependent organisations, he had no real choice. But, he says: "I never felt it was right.

There was something fundamentally wrong with the technology." Why, he asked himself, was it so expensive to manage? Why so difficult to change? And why was the utilisation of all this power so low? Brownell, of course, was not alone in his frustrations. Big data centre operators regularly complained about the growing complexity, and industry leaders, among them Larry Ellison, CEO of Oracle, and Scott McNealy, CEO of Sun, often promised a future nirvana, when computer power would be available on tap, on a "pay as you go" or "utility model" basis.

Whatever they said didn't make much difference: all the while the software and servers, storage and networks proliferated.

Now, all that is starting to change – dramatically. Experts agree that computing is on the cusp of one of its periodic seismic revolutions, and this one, they say, will have profound implications.

"We're at one of those rare points in time when we universally agree," says Bill McColl, professor of computer science at Oxford University and the CTO of Sychron, a provider of provisioning tools for automated data centres. "We're in transition from proprietary, expensive and manually intensive computing to a more commoditised, more automated, virtualised and cheaper model," he says.

It is a model in which all the devices in a data centre – processors, storage, networks – are virtualised; in which services are paid for on demand; in which business needs dictate which resources are used; and in which automation finally drives down IT costs (see box).

"[This] will have a huge impact on the technology industry. [And] it will bring about a profound change in the way organisations source, use and pay for technology," said Gary Barnett, an analyst at technology research company Ovum, in a recent report.

Sceptical customers may think they have heard some of this before. But the weight of investment is undeniable: "There are 20,000 young software companies out there, and at least 40% of these are aimed at virtualisation," says Nora Denzel, senior vice president and general manager for Adaptive Enterprise at Hewlett-Packard (HP).

Brownell left Goldman Sachs to set up one of those companies, eGenera, a pioneer of blade servers – a new technology that simplifies and reduces the costs of server-based computing. Bill McColl set up Sychron to address another area – automatically linking business demands with underlying virtualised resources.

Executives at HP say they are betting the company on the inevitability of this new model of computing: it has so far invested $3 billion in R&D and $100 million in acquisitions in this area. IBM, Sun and Microsoft are all doing likewise.

Driving down costs

The IT industry is frequently accused – deservedly – of creating solutions to solve non-existent problems. But not this time.

In 1990, IDC said that 20% of all IT costs were in operations. That figure, considered too high then, is now more like 60%.

"Spending on servers is down to $55 billion a year. But users are spending twice that on managing them," said Tim Howse, chief technology officer of Opsware, a software company set up to automate data centre operations, speaking at the recent IDC conference on dynamic computing.

The opportunity for businesses to slash their IT costs is huge – even without adopting full utility computing. "Through adaptive computing, CIOs can flip the ratio from spending heavily on operations to spending heavily on innovation," says Mark Potts, CTO of the Management Software Organisation at HP's Software global business unit.

HP estimates that 80% of IT spending goes on infrastructure and application maintenance. Its target is to get this down to single digits for customers.

"Around 70% of what CIOs are now spending is just on treading water," concurs Rob McCormick, CEO of Savvis, one of the world's biggest data centre operators.

How long will this momentous change take? Most analyst firms see adoption of the basket of technologies involved rising rapidly over the next two to five years.

Gartner, the IT advisory company, has a six-stage adoption model that stretches from the preparatory phases of consolidation through to virtualisation, automation and, ultimately, business process- or policy-driven IT (see box). Gartner analysts think businesses will eventually be able to benefit from highly flexible, lower-cost, business-driven IT.

They don't put a timetable on it, but warn that the shift is complex and could take years.

BT, one of the leading data centre operators and services suppliers, takes a similar view. "You shouldn't underestimate the difficulties involved here. The shift should be done in a componentised way – virtualise the risk," says Colin Hopkins, director of BT's data centre services and telehousing operations. BT offers a virtualised storage service today, and will soon introduce a virtualised network service. But virtualised processing, especially in complex environments, is further out.

The wave is already building, however. Blade servers, for example, are now being used by thousands of companies, and some are beginning to use virtualisation.

Dynamic provisioning is also catching on fast – regardless of how virtualised the underlying technology is. Opsware, for example, cites one customer, Fox News, that was able to more than double the number of servers it had available for the day of the US Super Bowl.

Another example is Savvis, which uses blade servers, dynamic provisioning and virtualisation technologies to provide resources on demand to its customers – among them several Wall Street banks.

Grid computing is also beginning to win acceptance at service providers and users, giving dramatic results to companies such as Acxiom, Bristol Myers Squibb, Société Générale, JP Morgan and Peugeot – among others. These early adopters demonstrate that grid is not just about numerically intense computing. "They don't want references from mad scientists," points out Martin Hingley, VP of IDC's European systems group.

These dramatic cases are just the start, says Rich Friedrich, director of Internet Systems and Storage Labs for HP. "We're setting out to re-invent the economics of IT. We're in the very early days."

Data Centres: Five issues for the next five years

1. Concentrated, centralised, commoditised

Today: Data centres are jammed full of servers, sometimes clustered, mostly standalone. Some are monoprocessor, some are multiprocessor, some run Unix, some Linux, some Windows. Others run IBM's MVS or z/OS. The cabling is nightmarish. The heating and cooling costs more than some of the kit.

Managing this cat's cradle of resources is complex and expensive, whilst optimising its utilisation and adapting it to changing business needs is near impossible.

Tens of thousands of desktop PCs add to the problem. PCs are hugely expensive to manage, and their costs often exceed the development and server management budgets of large organisations. They are failure-prone, insecure, and open to user tampering that frequently demands expert intervention.

Emerging: Highly standardised, commoditised, mostly Intel-based servers running Linux or Windows. Many of these will be blades – single-board servers that can be densely packed into chassis which share common resources such as cabling and power supplies.

These will be complemented by SMP (symmetric multiprocessor) computers, with many standard processors sharing one pool of memory 'bricks'. Result: more power, less space, easier manageability, lower prices.

Thin clients – or indeed blade PCs – will start to replace the desktop PC. Individual users' software and data 'profiles' will be stored and managed centrally on secure servers. Management costs will plummet. Business continuity and compliance demands will be easier to meet.

One drawback: densely packed processors eat electricity and generate tremendous heat. Blade server customers are struggling to keep their systems cool. "Most data centres were built on the assumption you didn't need water, but we might be moving that way again," says Colin Hopkins, director of BT data centre services.

Pioneers: Blade servers: eGenera, IBM, HP, RLX, Sun; Thin clients: Wyse, Sun, Citrix (software); Power management: HP, IBM, APC.

2. Automated and autonomic

Today: Data centres still rely on manual intervention to accomplish routine tasks. The necessary expertise is expensive, scarce and getting scarcer, and systems administration is now thought to account for around 70% of all corporate IT costs.

Emerging: Dynamic tuning and configuration tools will shrink the time needed for tasks such as server provisioning from days to hours or minutes.

Increasingly, low-level hardware and software interfaces will emerge that enable applications to manage themselves – calling on new system resources as required, and responding to change according to pre-defined business policies.

Ultimately, virtualised autonomic systems will automatically align themselves with fluctuating business conditions – fulfilling the on-demand computing promise of the utility computing visionaries.
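As a rough illustration of that kind of control loop, the Python sketch below grows or shrinks a server pool as measured utilisation crosses pre-defined thresholds. The pool, the policy values and the load samples are all invented for the purpose and stand in for what real autonomic tools would provide.

```python
# Toy sketch of a policy-driven control loop: a pool of servers is grown or
# shrunk automatically as measured utilisation crosses pre-defined thresholds.
# All names, thresholds and load samples are invented for illustration.
from dataclasses import dataclass


@dataclass
class Policy:
    scale_up_at: float    # average utilisation that triggers provisioning
    scale_down_at: float  # average utilisation that triggers release
    min_servers: int
    max_servers: int


class ServerPool:
    def __init__(self, servers: int):
        self.servers = servers

    def provision(self, n: int) -> None:
        print(f"provisioning {n} server(s)")
        self.servers += n

    def release(self, n: int) -> None:
        print(f"releasing {n} server(s)")
        self.servers -= n


def autonomic_step(pool: ServerPool, avg_utilisation: float, policy: Policy) -> None:
    """One pass of the loop: compare measured load against the business policy."""
    if avg_utilisation > policy.scale_up_at and pool.servers < policy.max_servers:
        pool.provision(1)
    elif avg_utilisation < policy.scale_down_at and pool.servers > policy.min_servers:
        pool.release(1)


if __name__ == "__main__":
    pool = ServerPool(servers=4)
    policy = Policy(scale_up_at=0.75, scale_down_at=0.30, min_servers=2, max_servers=20)
    for load in (0.82, 0.90, 0.40, 0.20):  # simulated utilisation samples
        autonomic_step(pool, load, policy)
    print("servers now in pool:", pool.servers)
```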

Pioneers: IBM, HP, BladeLogic, Veritas, Computer Associates, Microsoft, Opsware.

3. All in the cloud: Virtual and infinite

Today: Most applications run on one computer, and most computers run one operating system. Such monolithic systems are vulnerable to overloading, and are typically configured to run at no more than 30% to 40% of their peak capacity – a huge waste of resources. Utilisation of PCs, which spend 50% of their time switched off, is even worse, and even ultra-expensive number crunchers are routinely left to gather dust when not in use. When big jobs do need to be done, however, they can take hours because the available individual computers are not powerful enough.

Storage is similarly inefficient. Data is still usually held on disks dedicated to one machine, or one group or cluster of machines. Capacity has to be managed to ensure applications always have enough storage room, and accessing data stored on disks has to be planned or negotiated through the associated server. Avoiding bottlenecks is a perpetual challenge.

Networked storage has created some flexibility, breaking the direct link from server to disk, but applications must still read and write from designated disks, and performance and administration can be a problem.
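As a rough sketch of where that is heading, the hypothetical pool below breaks the server-to-disk link completely: applications name a logical volume, and the virtualisation layer decides which physical disk actually holds it. Disk names and capacities are invented.

```python
# Toy sketch of storage virtualisation: applications write to named logical
# volumes and never see which physical disk is used. Disk names and sizes are
# invented; real products also handle striping, replication and failover.


class VirtualStoragePool:
    def __init__(self, disk_capacities_gb: dict[str, int]):
        self.free = dict(disk_capacities_gb)   # free space per physical disk
        self.placement: dict[str, str] = {}    # logical volume -> physical disk

    def write(self, volume: str, size_gb: int) -> str:
        """Place a logical volume on whichever disk has the most free space."""
        disk = max(self.free, key=self.free.get)
        if self.free[disk] < size_gb:
            raise RuntimeError("pool exhausted")
        self.free[disk] -= size_gb
        self.placement[volume] = disk
        return disk

    def locate(self, volume: str) -> str:
        return self.placement[volume]


if __name__ == "__main__":
    pool = VirtualStoragePool({"disk-a": 500, "disk-b": 750})
    pool.write("payroll-db", 200)
    pool.write("web-logs", 400)
    # The application only ever names the volume; the pool knows the disk.
    print("payroll-db lives on", pool.locate("payroll-db"))
    print("free space left:", pool.free)
```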

Networking is also a complicated affair. Security, bandwidth allocation, quality of service, business rules, user profiles, and even some routing are handled by individual applications, or by middleware programmed to link to particular applications. Management is complex, software expensive, and changes difficult.

Emerging: Virtualisation offers a solution to many performance and management problems. Individual computers increasingly run multiple instances of operating systems like Linux and Windows. This allows systems to be 'partitioned' – preventing faults in one from impacting another, and allowing underlying hardware capacity to be dynamically re-allocated as different applications make different demands on resources such as memory or processor capacity.
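The re-allocation idea can be sketched in a few lines of Python, assuming for simplicity that capacity is handed out as whole CPUs in proportion to each partition's measured demand; real hypervisors schedule far more finely.

```python
# Toy sketch of dynamic partitioning: a host's CPUs are re-divided among
# virtual partitions in proportion to their measured demand. Partition names
# and demand figures are invented; real hypervisors schedule far more finely.


def reallocate(total_cpus: int, demand: dict[str, float]) -> dict[str, int]:
    """Share whole CPUs among partitions in proportion to current demand."""
    total_demand = sum(demand.values()) or 1.0
    shares = {name: int(total_cpus * d / total_demand) for name, d in demand.items()}
    # Hand any CPUs lost to rounding to the busiest partition.
    shares[max(demand, key=demand.get)] += total_cpus - sum(shares.values())
    return shares


if __name__ == "__main__":
    # Overnight the batch partition is busy; by day the web partition is.
    print(reallocate(16, {"web": 0.2, "batch": 0.9, "test": 0.1}))
    print(reallocate(16, {"web": 0.8, "batch": 0.1, "test": 0.1}))
```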

Independent and 'heterogeneous' computers may be linked together in so-called 'grids', allowing work to spread from one overworked machine to the spare capacity of one or more others. Grids can be shared out to run many applications or configured as virtual supercomputers to run big compute-intensive jobs.

Whether built privately or as shared resources, grids and other virtualised resources hold the promise of almost infinite capacity at peak times and near 100% resource utilisation. However, a raft of technology standards will be needed to make them work and, even then, they will suit only certain categories of applications.
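How a grid spreads work can be illustrated with a simple greedy scheduler, sketched below: each job goes to whichever node currently has the most spare capacity. Node names, loads and job sizes are invented, and real grid middleware adds data movement, security and fault handling.

```python
# Toy sketch of grid scheduling: each job is dispatched to whichever node
# currently has the most spare capacity, so idle machines soak up work that
# would otherwise queue on an overloaded one. All figures are invented.
import heapq


def schedule(jobs: list[float], node_loads: dict[str, float]) -> dict[str, list[float]]:
    """Greedily assign each job (expressed as units of load) to the least-loaded node."""
    heap = [(load, name) for name, load in node_loads.items()]
    heapq.heapify(heap)
    placement: dict[str, list[float]] = {name: [] for name in node_loads}
    for job in sorted(jobs, reverse=True):  # biggest jobs first
        load, name = heapq.heappop(heap)
        placement[name].append(job)
        heapq.heappush(heap, (load + job, name))
    return placement


if __name__ == "__main__":
    jobs = [4.0, 2.5, 2.0, 1.0, 0.5]
    nodes = {"node-1": 3.0, "node-2": 0.5, "node-3": 0.0}  # current load per node
    print(schedule(jobs, nodes))
```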

Pioneers: Virtualisation: VMWare (EMC), Sun, Microsoft, IBM; Grid: Oracle, Platform Computing, DataSynapse, Intel, Gigaspaces; Storage: Hitachi, NetApp, EMC, DataCore, Falcon and Softek.

4. Outsourcing: Flexible, aligned, value-based

Today: Classic outsourcing practice involves large, carefully planned upfront commitments. It is neither cheap nor flexible and rarely offers more than a "your-mess-for-less" solution to problems that are not so much fixed as contained. Other forms of outsourcing – co-location (data centre space rental) or managed services – offer tactical, but rarely dramatic or transformational, improvements.

In most cases, billing and fee structures replicate the inflexibility of the IT infrastructure. Outsourcers or data centre operators charge customers in a variety of ways but these rarely accurately reflect real business demands.

Emerging: Conventional data centre outsourcing will be supplanted by more flexible, value-added services. Managed services – such as networking or storage – will become more common and the underlying infrastructure less important.

Virtualisation and dynamic provisioning will make it easier for businesses to access extra capacity on demand and on a metered-use basis. Ultimately, the price of IT capacity will be driven down and may even come to be traded like other commodities.
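To illustrate the metered-use idea, the sketch below compares a week billed purely on consumption with the cost of owning enough servers to cover the busiest day; the rate and usage figures are invented.

```python
# Toy sketch of metered, pay-per-use charging: the customer pays only for the
# server-hours actually consumed, rather than for an estate sized to the peak.
# The rate and the usage figures are invented for illustration.


def metered_bill(usage_server_hours: list[float], rate_per_server_hour: float) -> float:
    """Price the consumption actually recorded at the agreed rate."""
    return sum(usage_server_hours) * rate_per_server_hour


if __name__ == "__main__":
    weekly_usage = [96, 120, 240, 180, 110, 40, 40]  # server-hours used per day
    rate = 0.85
    metered = metered_bill(weekly_usage, rate)
    # Owning enough servers to cover the 240-hour peak day costs the same every
    # day of the week, whether or not that capacity is used.
    provisioned_for_peak = (240 / 24) * 24 * 7 * rate  # 10 servers, all week
    print(f"metered: ${metered:,.2f} vs peak-provisioned: ${provisioned_for_peak:,.2f}")
```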

Pioneers: HP, Sun, BT, Savvis.

5. Aligned, agile & automated: Business-driven IT

Today: There are tools for reporting the status of IT systems, there are tools for controlling and managing the systems, and there are emerging tools that link and manage the two. But they use proprietary models and architectures that do not communicate well, if at all.

Until products emerge that explicitly link systems resources to business policies, true IT/business alignment will remain a black art practised by an increasingly scarce cadre of capacity planners.

Emerging: Businesses will increasingly have one information model that all participating systems can use – enabling far greater responsiveness and better control of resources by business managers.

Planning tools will increasingly measure and model business demand and help planners understand how changes in business services affect the underlying IT systems – and vice-versa. Once this is linked with virtualised resources and dynamic provisioning, it will be possible to meet demand with resource in near real time – eliminating the need for long-term and often inaccurate model building.
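A minimal sketch of that business-to-resource link, assuming a hypothetical figure for how many orders per hour one server can handle: the forecast arrives in business terms and leaves as a provisioning action, with no long-range capacity model in between.

```python
# Toy sketch of business-driven provisioning: a forecast expressed in business
# terms (orders per hour) is converted straight into a server count and a
# provisioning action. The per-server capability and the forecasts are invented.
import math

ORDERS_PER_SERVER_PER_HOUR = 500  # assumed, measured capability of one server


def servers_needed(forecast_orders_per_hour: int, headroom: float = 0.2) -> int:
    """Translate a business-level forecast into a resource requirement."""
    return math.ceil(forecast_orders_per_hour * (1 + headroom) / ORDERS_PER_SERVER_PER_HOUR)


if __name__ == "__main__":
    current = 4
    for hour, forecast in [("09:00", 1800), ("12:00", 4200), ("20:00", 900)]:
        target = servers_needed(forecast)
        action = "add" if target > current else "release"
        print(f"{hour}: forecast {forecast}/hr -> {target} servers ({action} {abs(target - current)})")
        current = target
```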

Pioneers: HP (model-based automation), Microsoft, Cassatt, Sychron.
