Hypervision

At a recent meeting with some of Microsoft’s major customers in New York, Steve Ballmer was asked about his company’s strategy in the booming new market for server virtualisation technology. The Microsoft CEO’s response was characteristically blunt: “Compete aggressively with VMware.”

Microsoft, of course, has a deserved reputation for aggressively confronting any player that challenges its authority. For three decades it has decisively beaten off threats to its office software dominance, fought hard (and sometimes dirty) to defend its near-monopoly in desktop operating systems, and seized and defended server software ground from Unix.

Thanks to this relentless determination to win, Microsoft’s towering presence has never come close to being toppled. Why, then, is Ballmer so worried about VMware, a company less than 2% of Microsoft’s size?

The answer, in the opinion of a growing number of observers – and perhaps even in the mind of Microsoft’s CEO – is that VMware is starting to show the qualities of a giant-slayer.

Its weapon of choice is a ‘type 1 hypervisor’ – a thin layer of software that sits between a computer system’s raw hardware resources and the software ‘stacked’ above it: the operating system, the user interface software and applications. Its potency stems from its ability to make optimal use of the underlying hardware by running multiple virtual machines on a single physical machine. Alternatively, it can be used to treat dozens of computers as a single pool of resources.

Since the hypervisor essentially masks the underlying hardware from the operating software above, it also makes the deployment and reconfiguration of applications far easier – reducing, say, the time required to build new application servers from days to a few hours or even allowing pre-configured application stacks to be provisioned automatically.
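One way to picture that automated provisioning is the minimal sketch below, written against the Python bindings of the open-source libvirt management toolkit, which can drive Xen guests. libvirt is not mentioned in the article, and the guest name, disk image path and resource sizes are purely hypothetical – a real deployment would need a prepared disk image at that path.

```python
# A minimal, illustrative sketch of automated guest provisioning on a
# Xen host, using the open-source libvirt toolkit's Python bindings.
# All names, paths and sizes below are hypothetical.
import libvirt

# Connect to the local Xen hypervisor via libvirt.
conn = libvirt.open("xen:///")

# A pre-configured guest definition: name, memory, a virtual CPU and a
# disk image that already holds a built application stack.
domain_xml = """
<domain type='xen'>
  <name>app-server-01</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os><type>linux</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/app-server-01.img'/>
      <target dev='xvda' bus='xen'/>
    </disk>
  </devices>
</domain>
"""

# Register the definition and boot the virtual machine - the step that
# replaces days of physical server build time with minutes of scripting.
dom = conn.defineXML(domain_xml)
dom.create()
print("Started guest:", dom.name())

conn.close()
```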

Hypervisors are hardly a new idea. As early as the late 1960s, IBM was using a hypervisor to partition its mainframes into virtual machines, and a hypervisor became an intrinsic feature of its CP/CMS and VM mainframe operating systems. However, VMware and, more recently, XenSource – the commercial offshoot of the open-source Xen virtualisation project – have brought hypervisor-based virtualisation to commodity Intel- and AMD-powered systems, and that is having an explosive effect.

“In many ways,” said Ian Pratt, the leader of the Cambridge University Xen initiative, and founder and chief scientist of XenSource, “virtualisation is even more needed in the commodity world because of the huge diversity [of applications and underlying servers]. I think people are realising that virtualisation is going to be ubiquitous on commodity processors.”

Pratt is certainly not alone in his optimism for the future of commodity server virtualisation. Nathaniel Martinez, a senior analyst with IDC’s European Systems and Infrastructure Service, agrees that “it really is ‘the next big thing’, and it is a trend that is not just for today.”

Of the 7 million to 8 million servers that analysts believe will be sold this year, between 6% and 7% are expected to ship preloaded with virtualisation software. By 2010, market watchers say, that figure will have soared to 35%. That is a significant chunk of total world server sales, but it is still likely to underestimate virtualisation’s penetration of large server estates. According to Martinez, a recent IDC survey found that 55% of the larger European companies questioned expect to deploy virtualisation widely; in the US, respondents to the same question expressed a full 100% commitment.

In the x86 server market – as well as the overall enterprise IT sector – the impact of this widespread deployment of virtualisation is likely to be profound. In the short term, according to Martinez, there is already evidence that server virtualisation may cause a dip in sales, as virtual server deployment enables consolidation and eats into demand for new boxes.

In the longer term, virtualisation promises to reduce the cost of server ownership, accelerate the deployment of new systems and, ultimately, play a pivotal role in enabling the next generation of agile, service-oriented infrastructure. This should mean new business opportunities and increased sales for all participants – with one significant exception, perhaps: the incumbent operating system vendor.

Hasta la vista, Vista?

For as long as there has been an x86-based systems market, Windows has been the de facto standard operating system. This has undoubtedly been a good thing. On the desktop, in departmental server rooms and, latterly, in the data centre, Microsoft has provided a consistent target for software developers to build on top of and, in partnership with Intel, helped to push down costs and make it possible to speak of a commodity server market.

Nevertheless, after more than two decades of Windows-driven x86 computing, VMware and Xen’s hypervisor technology has the potential to be nothing less than “another operating system”, observers such as IDC’s Martinez believe. As such, it provides an alternative to Windows that its proponents believe the industry at large has good reason to adopt.

Essentially, those who support the idea of hypervisor technology argue that Windows’ benefits are starting to be outweighed by its shortcomings – both technical and in the influence it enables Microsoft to exert on the industry.

This was certainly the subtext of VMware president Diane Greene’s keynote address to her company’s user conference in December. “Our entire industry is marching to the cadence of the operating system,” she said. That beat, in her telling, is a plodding rhythm that ties customers’ applications to their hardware and forces innovations in either hardware or applications to queue up in front of the ever-narrower portal that is the operating system vendor’s schedule.

By contrast, virtualisation allows customers, hardware vendors and developers to bypass this bottleneck by adopting a “thin layer of software” that lets innovation on either side of it pass through unhindered. In the future, says Greene, “there will be no more arbitrary reasons for purchasing software. Virtualisation will let our customers choose software based on functionality, reliability and price.”

So far, no other vendor has grasped the tiger’s tail as boldly as Greene and suggested that the hypervisor could supplant the operating system as the defining feature of a computer system – and, in so doing, disintermediate the world’s biggest software vendor from its core power base. But she is not alone in believing that the operating system is facing a potential crisis.

“The operating system is a large body of code – a large body of interfaces into the user space, the applications programs, and also a large body of interfaces into the hardware,” says XenSource’s Pratt. “The trouble is that hardware [development] is moving pretty quickly. New hardware advances like multicore, new I/O devices and other advances are evolving too quickly for the operating system to keep up.” And, even when the OS developers do catch up, it forces customers to deploy major upgrades, “so there is a lot of pain involved,” he adds.

Testing times

The truth of this will be tested later this year when Microsoft releases its next-generation server operating system, Longhorn. Microsoft has worked hard to make deployment of both Longhorn and its new Vista desktop operating system a far less disruptive process than earlier operating system upgrades were.

Still, it is the nature of big, monolithic code bases such as Windows that they carry a lot of baggage with them. In Longhorn’s case, says Pratt, Microsoft is preparing “a build of Windows that has everything in it, and for most applications probably too much”. This ‘excess’ code is what stops Windows from being as responsive to innovation as some customers would like, and provides a potential source of system and security vulnerabilities that most could do without. A hypervisor-based alternative might remove those limitations.

Of course, there is no evidence that customers or third parties intend to abandon Windows, but that isn’t stopping some from at least experimenting with virtual alternatives. Ingres, for instance, now offers its open source database on an “optimised” subset of Linux running on the Xen hypervisor – allowing more of the underlying hardware resources to drive database queries rather than serve the operating system.

VMware has similar stories among its own technology partners, including major enterprise software players such as BEA Systems and a variety of smaller players, such as network traffic management provider Zeus Technology, which are busily populating an online market for virtual appliances. Such products, like the Ingres database, can be deployed as pre-configured stacks incorporating an optimised operating system element, which can then be ‘dropped’ directly onto VMware’s ESX Server in a fraction of the time conventional deployment would take.

All of this activity speaks of an alternative future for the way that application software is packaged, deployed and licensed (see box). It isn’t clear how distant this future may be, or exactly what role will be left for the operating system to play when it does arrive.

It may be, as VMware’s Greene suggests, that the OS will cease to be the key layer of the software stack. Alternatively, it may be, as Ballmer told his customers in New York, that the destiny of the hypervisor is to become part of the operating system, rather than its replacement.

This is certainly what Microsoft is hoping will happen. So far, the company has responded to the growth of virtualisation with a series of tactical moves, including the purchase of specialist vendor Softricity, whose desktop application virtualisation technology radically improves how desktop images are managed and deployed in large organisations. The launch of Microsoft’s own virtual server extension to Windows, meanwhile, will allow the operating system to manage virtual images of itself.

However, as yet, Microsoft has no direct answer to the hypervisor. For that, customers will have to wait until the company unveils its own hypervisor development, code-named Viridian. This is scheduled to arrive six months after Longhorn ships at the end of 2007, suggesting that it may be between 18 months and two years before Microsoft can counterattack any potential giant-slayers.

By then, both VMware and the Xen community will have made greater inroads into corporate software stacks that have hitherto relied on Windows to manage their hardware requirements. As Ballmer himself conceded to his customers in a recent briefing, owning the lowest layer of software in the stack is the sweet spot for any vendor. As he says, “everybody in the operating system [space] wants to be the guy at the bottom.”

The case for virtual licences

The spiralling popularity of virtualisation is having a profound impact on many areas of established IT industry practice – not least in the area of licensing.

In the past, software licensing has been governed by two key measures: the size of the processor resource driving the application or system software, or the number of users required to access it. Both are rigid charging mechanisms that normally require customers to subscribe to more capacity than they may ever actually use. Where operating systems for commodity servers are concerned, customers typically require a new operating system licence each time they provision a new application server.

In a virtual server environment, there is no longer a rigid link between applications, the operating system and an individual processor. Instead, new software stacks can be provisioned and deprovisioned on the fly – a level of potential flexibility that is incompatible with conventional licence terms.
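A toy calculation makes the mismatch plain. The sketch below uses entirely hypothetical numbers and licence models – not any vendor’s actual terms – to contrast a rigid per-instance licence count with one based on peak concurrent use:

```python
# Purely illustrative arithmetic - not any vendor's actual terms.
# Contrast rigid per-instance licensing with a hypothetical model
# that charges only for virtual machines running concurrently.

# Hypothetical hourly counts of running VMs over one day, on an estate
# that spins up test servers by day and retires them at night.
vms_running_per_hour = [4] * 8 + [20] * 8 + [4] * 8

total_vms_provisioned = 30          # every stack ever provisioned
peak_concurrent = max(vms_running_per_hour)

print("Licences under rigid per-instance terms:", total_vms_provisioned)
print("Licences under peak-concurrency terms:  ", peak_concurrent)
# Rigid terms demand 30 licences for a workload that never exceeds
# 20 simultaneous machines - the over-subscription described above.
```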

As virtualisation becomes more widespread, and concepts such as virtual appliances more commonplace, new licence models will be needed that allow customers to enjoy the full potential of virtual technology.

The importance of this issue was demonstrated in February by VMware’s decision to publish a white paper criticising Microsoft’s approach to virtual machine licensing. The paper listed seven ways in which VMware believes Microsoft is obstructing the growth of the market for virtualisation software.

“Microsoft is trying to restrict customers’ flexibility and freedom to choose virtualisation by limiting who can run their software and how they can run it,” the paper said. It went on to accuse Microsoft of deliberately using licence restrictions to discourage customers from buying other vendors’ virtualisation products. In particular, it cited Microsoft’s insistence that its operating system licences be tied to specific physical servers as an attempt to block the virtual machine mobility capabilities provided by VMware software.

Microsoft has rejected VMware’s accusations, and independent voices have questioned VMware’s “naivety” in suggesting that any software company would not try to protect the value of its products by placing restrictions on their use. But others, including some virtualisation users, have echoed VMware’s view that it is no longer reasonable for software vendors to ignore customers’ legitimate need for greater freedom and flexibility in how they operate their systems.

It is possible that, as virtualisation matures, customers will come to value innovative licensing as much as, if not more than, innovative technology.
