The new virtual platform

For as long as Intel-based computers have been fulfilling local file and print duties, the market for ‘commodity servers’ has not been an especially sophisticated one. Faced with finding a simple and cost-effective way of meeting their users’ insatiable appetite for new applications, commercial IT buyers have ordered Wintel and Lintel boxes by the dozen, and vendors have happily met this demand with no-frills machines priced on a ‘bang for buck’ basis.

Now, though, the Intel server community is waking up to the virtues of virtualisation, and the commodity server market may never be the same again.

In fact, harsh necessity has been forcing the x86 computing world to reinvent itself for several years. The process began in the immediate post-dotcom era, when suddenly cash-strapped businesses discovered that years of buying budget boxes had created a sea of widely dispersed, poorly optimised servers that were cripplingly expensive to manage and maintain.

The almost universal response to this ‘server sprawl’ crisis has been to consolidate server resources by ‘retiring’ as many distantly located machines as possible and relocating applications to data centres where they can be managed more cost-effectively. However, this solution has shortcomings of its own, contributing to the creation of overcrowded, overheated data centres that consume more energy than can readily be supplied, and waste more of that energy than is ecologically acceptable.

Fortunately for Intel server buyers (although perhaps not immediately for manufacturers of Intel-based systems), at about the same time that they began to notice how expensive their Intel systems were, VMware released what has since been recognised as the market’s first practical server virtualisation software for x86-based systems.

Capacity planning

Indeed, as mainframe and high-end Unix system users have long known, and Intel users are now finding out, virtualisation is a very effective way of optimising server utilisation. By using logical virtual machine images, multiple applications can safely and transparently be deployed on the same underlying physical resources, and those shared resources can be run at between 60% and 80% of their full capacity.

Conventional Intel-based systems are typically devoted to supporting a single application, and are rarely run at more than 30% of their peak capacity. In simple terms, this means that a virtualised server system will run more applications per square metre than a conventional Intel-based system, and do so more efficiently and economically.
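To put a rough figure on that claim, the sketch below works through the consolidation arithmetic in Python. The 15% average utilisation assumed for standalone servers is illustrative only; the 75% target sits within the 60-80% band quoted above. Under those assumptions, the sums produce the sort of 5:1 ratio that early adopters report.

```python
import math

# Back-of-the-envelope consolidation arithmetic. The 15% average utilisation
# for standalone servers is an illustrative assumption; the 75% target sits
# within the 60-80% band quoted in the article.
def consolidation_ratio(avg_utilisation=0.15, target_utilisation=0.75):
    """Roughly how many lightly loaded physical servers could be folded
    onto one virtualised host of comparable capacity."""
    return target_utilisation / avg_utilisation

fleet_size = 200                    # existing single-application servers
ratio = consolidation_ratio()       # 5.0:1 with the assumptions above
hosts_needed = math.ceil(fleet_size / ratio)

print(f"Consolidation ratio ~{ratio:.0f}:1; "
      f"{fleet_size} servers -> roughly {hosts_needed} virtualised hosts")
```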

The power savings alone are so great, in fact, that Pacific Gas & Electric, California’s largest power utility, hopes to reduce the pressure on its power plants by paying businesses $300 for each server they decommission and replace with a virtual image on another machine. More to the point, as far as most IT professionals are concerned, early adopters of VMware software, or of rival virtualisation products from companies such as XenSource and SWsoft, routinely report consolidating their server resources by a factor of 5:1, among other benefits.

With such powerful testimony in its favour, it is no surprise that server virtualisation is rapidly becoming a mainstream practice. According to IDC, fewer than 1% of today’s 24.6 million servers support virtualisation, but the analyst house predicts that by 2010, 17% of all new servers will have virtualisation software pre-installed. Other market watchers, such as The 451 Group, think such estimates are too conservative: over a similar time frame, it expects “most” new servers to ship with virtualisation software pre-loaded.

It already seems as if the question for corporate IT buyers is no longer whether to equip new or existing servers with virtualisation software, but which servers will be best suited to supporting tomorrow’s virtual machine estates.

Just at the moment, says Jerry Walsh, a data centre systems consultant with Hewlett-Packard in the UK, the answer to this question is a pretty straightforward one: “They buy bigger ones.”

Old favourites

Walsh has spent much of the last year demonstrating virtualisation software in action at Hewlett-Packard’s City of London data centre, where HP is able to show blade servers based on both Intel Xeon and AMD Opteron processors. In most cases, he says, customers’ choice between Intel and AMD tends to be dictated by their existing architecture. The common feature of most purchases, however, is that customers order the highest-specification blades they can get: “They maximise the memory, using the highest capacity DIMMs [dual inline memory modules]. That can be very expensive, but it is what will give them the most virtual machine capacity,” says Walsh.

This demand for more powerful servers with the capacity to support large numbers of virtual machines is only to be expected, given that most companies’ initial Intel virtualisation deployments are motivated by the desire to centralise and consolidate server resources. However, it is beginning to look like a mixed blessing for server vendors.

As the trend towards virtualisation for consolidation has accelerated, IDC has warned that it expects the industry to sell 4.5 million fewer servers than it had originally forecast between now and 2010 – representing a potential revenue shortfall for the sector of $2.4 billion. The implications of this for Dell, HP, IBM and Sun are stark: if they wish to maintain their server revenues in a shrinking market, they must compete to build the best server virtualisation platforms, and they recognise as much. “We know this [virtualisation] is coming, and we know it could mean we sell fewer machines,” says HP’s Walsh, “but we’ve got to do it or somebody else will.”

Certainly, all the major server manufacturers are gearing themselves up for a new battle for market share based on their ability to support virtualisation – although not necessarily virtualisation based on Intel. IBM has already identified the new interest in server virtualisation as an opportunity to promote the perennial virtues of mainframe servers. Similarly, Sun has been quick to play up the virtualisation credentials of its heavily multi-threaded family of SPARC processors and its Containers technology, which provides operating system-level virtualisation within Solaris. However, in taking such “retro” stances, both companies run the risk of diluting their credentials as x86 suppliers.

Andy Butler, vice president of research at Gartner, is sceptical that these arguments will divert customers from the prospect of virtual machine servers at commodity server prices. Of course, he says, the mainframe’s strengths are well known: “It was built to support mixed workloads and diverse deployments. They [IBM] didn’t give us virtual machine technology to [just] play with. It is what has always made the mainframe the gold standard.”

“But it is very difficult to draw a direct comparison between a T-Rex mainframe that has exploited virtualisation since we were in short trousers, and an x86 machine running VMware. It’s not an apples-to-apples comparison,” says Butler, who is equally sceptical of the ability of Unix vendors to piggy-back on the x86 virtualisation trend.

The problem the Unix vendors face is that they have spent most of their energy attacking the mainframe by concentrating on building more reliable clustered systems. They have been taken by surprise by the sudden surge in the development of virtualisation technology for Windows and Lintel, so that, by comparison, the ability of both AIX and Solaris to support virtualisation looks “immature”. As far as virtualisation is concerned today, says Butler, “x86 is where the action is.” The challenge facing server manufacturers is to harness this action to their advantage before their competitors do.

Naturally, proponents of x86 virtualisation, such as the chief technology officer of XenSource, Simon Crosby, argue that commodity hardware vendors have never had a better opportunity to differentiate themselves from the competition through technical innovation.

“One of the key things about the virtualisation layer [the hypervisor] is that it isolates hardware from the operating system,” says Crosby. This benefits the hardware vendors by enabling them to introduce new features on a schedule that is not dictated by the time it takes the operating system to support them. Instead, thanks to the hypervisor, “there is now a run-time that lets you do innovation outside the operating system.”
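As a minimal illustration of that isolation, the sketch below uses the open-source libvirt Python bindings (an assumption about the tooling in use; it is not XenSource’s own API) to ask a Xen hypervisor which guests it is running and what resources it has allocated to each, without consulting any guest operating system.

```python
import libvirt  # open-source virtualisation library; assumed installed on the host

# Connect (read-only) to the local Xen hypervisor; 'xen:///' is libvirt's
# URI for a local Xen host.
conn = libvirt.openReadOnly('xen:///')

# Enumerate running guests and report the resources the hypervisor has
# allocated to each -- no guest operating system is consulted.
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, {mem_kb // 1024} MB allocated")

conn.close()
```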

For a company like NEC, one of XenSource’s OEM partners, working with the Xen hypervisor enables it to build services and features that aren’t supported by the operating system. In this way, suggests Crosby, “NEC could make it possible for customers to define a security policy that is independent of Microsoft.”

It remains to be seen how many customers will be attracted by the prospect of defining a security policy independently of Microsoft. However, there seems little doubt that agreements such as NEC’s with XenSource, which bundles XenEnterprise as the default virtual machine management environment on NEC machines, are set to proliferate.

Dell, for example, is thought to be preparing an entirely new line of made-for-virtualisation machines. These “Veso” servers are expected to feature boards capable of mounting 256GB of memory, together with an accelerated bus and expanded PCI support. It is also rumoured that Dell’s new virtual machine server line will feature ‘ESX-lite’, a cut-down but licence-free version of VMware’s market-leading server product.

Alliances like NEC’s with XenSource and, potentially, Dell’s with VMware will make it easier for businesses to begin the virtualisation of their server resources. As far as driving innovation into the commodity server market is concerned however, they are unlikely to be as influential as the ongoing and intensifying competition between Intel and AMD.

Virtualisation 2.0

VMware undoubtedly deserves the credit for kick-starting the x86 virtualisation revolution, and owes its dominant market position to its early-mover status. However, since the announcement of their virtualisation technologies, codenamed Vanderpool and Pacifica (since renamed Intel VT and AMD-V respectively), it is the two chip companies that have made the most fundamental contributions to bridging the gap between mainframe virtualisation and commodity server computing.

To some extent, both companies have had to support virtualisation out of necessity. With Moore’s Law fraying around the edges in terms of their ability to keep doubling device density every 18 months, Intel and AMD have both embraced a future based on multi-core designs, which enables them to continue increasing performance without also doubling power consumption.

Chip-level virtualisation technology plays a central role in making multi-core designs practical. Without it, in a device such as a quad-core Intel Xeon, software hypervisors or operating systems would be faced with dynamically allocating tasks across what are essentially four individual CPUs; it is unlikely that they would be able to cope. However, in Intel and AMD’s latest products, support for virtualisation technology is not treated merely as a background engineering requirement; it is emphasised as a source of competitive differentiation between their two product sets.

This became obvious most recently during AMD’s pre-launch promotion of its much-delayed quad-core Opteron, Barcelona. For once, Bruce Shaw, the company’s director of worldwide commercial and enterprise marketing, wasn’t concentrating on how much faster Barcelona would be than equivalent Intel chips, although he did have grounds for doing so. Instead, he said: “Barcelona is a different kind of value proposition. It’s not just about speed – although that’s still important. It’s about being more energy efficient, and it’s about being a smarter platform for virtualisation.”

By a “smarter platform for virtualisation”, both Intel and AMD mean accelerating the rate at which virtualisation capabilities developed in software, such as VMware’s VMotion live migration of running virtual machines, can be reimplemented as features embedded in the underlying processors, such as Intel’s FlexManagement.
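For buyers who want to know what their existing hardware already offers, the chip-level extensions advertise themselves on Linux through CPU flags: ‘vmx’ for Intel VT-x and ‘svm’ for AMD-V. The short sketch below (which assumes a Linux host) simply reads /proc/cpuinfo and reports which, if either, is present.

```python
# Report whether the CPU advertises hardware virtualisation extensions.
# On Linux, /proc/cpuinfo lists 'vmx' for Intel VT-x and 'svm' for AMD-V.
def hardware_virtualisation():
    try:
        with open('/proc/cpuinfo') as cpuinfo:
            flags = set()
            for line in cpuinfo:
                if line.startswith('flags'):
                    flags.update(line.split(':', 1)[1].split())
    except OSError:
        return "unknown (no /proc/cpuinfo on this system)"

    if 'vmx' in flags:
        return "Intel VT-x"
    if 'svm' in flags:
        return "AMD-V"
    return "none advertised"

print("Hardware virtualisation support:", hardware_virtualisation())
```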

For the next several years at least, commodity server buyers would be well advised to keep an eye on this particular race. Although AMD and Intel are straining to reach the same finish line – an entirely virtualised silicon infrastructure comprising CPUs, memory and ultimately I/O – they appear to be taking different routes to get there, and those routes will suit some applications better than others.

At the end of the race, though, when AMD and Intel have created a standard means of allocating any logical task to any element in a seamless physical fabric, virtualisation will have ceased to be an issue, since it will be an integral part of the corporate IT infrastructure. At about that time, perhaps towards the end of the next decade, it may also be time to stop differentiating between commodity systems and the mainframe, because there will be no point of differentiation left.


Pete Swabey
