It is the single clearest success story in enterprise IT in the past five years. The virtualising of servers – specifically, those based on the x86 architecture – has been as close to a magic bullet as one would dare imagine in IT.
By liberating the logical image of a server from the hardware on which it runs and so allowing multiple servers to run on a single piece of kit, the near-criminal under-utilisation of data-centre computing resources that was standard across the industry has been relieved – almost overnight.
The explosion in the popularity of server virtualisation is all too evident among the Information Age readership. Over the last two years, in its annual Effective IT survey, the proportion of readers who have adopted virtualisation has surged from 35% to 56%. In both years, over 75% of adopters described server virtualisation as either ‘effective’ or ‘very effective’, making it arguably the single best technology investment they have made this decade.
But if there is a complaint to be made about server virtualisation, it is that it has been too successful. Virtualisation has added a new conceptual layer to the enterprise IT stack, a layer that has its own rules. Indeed, the biggest names in virtualisation are making credible claims that the virtual layer may become the new operating platform for corporate IT.
That has introduced a fresh set of issues into the IT department’s already sizeable systems management workload. Given the ease with which virtual servers can be created, it is now not unusual for companies to have thousands of virtual servers spread across hundreds of physical machines.
But those virtual environments introduce unique problems to the IT infrastructure, and they also demand new approaches to familiar problems. As an organisation’s use of virtualisation matures, the requirements for virtualisation management tools and for greater discipline in systems management processes increase correspondingly.
Those IT organisations that successfully evolve their systems management capabilities to effectively manage virtualised environments will see the promised benefits of efficiency, flexibility and scalability become reality. Those that don’t, meanwhile, will soon find those benefits gravely threatened by complexity and by poor visibility into the IT infrastructure.
The most common virtualisation management issue, and for most organisations the first sign that all is not necessarily going to be plain sailing with virtualisation, is ‘virtual sprawl’. When servers can be provisioned at the click of a button, unconstrained by physical realities, the number of virtual machines in a given environment can quickly get out of hand.
According to a survey published this year by IT consultancy Morse, 67% of IT directors at large organisations did not know how many virtual machines existed in their environment, despite the fact that 56% reported having a system for tracking them.
Information Age has heard of one large technology company which found that it had deployed around 5,000 virtual machines, fewer than half of which were doing anything useful. It is not an atypical predicament, and is one that jeopardises the very efficiency benefits that most organisations have used to justify investment in virtualisation.
“Unused machines take up a lot of space,” explains Jim Houghton, CTO and founder of Adaptivity, an IT consultancy that specialises in flexible computing platforms. “One of the problems companies see is the proliferation of redundant data. You may be improving CPU and storage utilisation, but in reality it may be less efficient than you think, because you will have all these virtual machines that are not doing anything useful.”
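The kind of sprawl audit Houghton describes can be sketched in a few lines. Everything below is illustrative: the VM records, names and the 10% CPU threshold are assumptions for the sake of the example, not any real inventory tool’s API.

```python
# Hypothetical sprawl audit: flag VMs whose recent CPU usage suggests
# they are doing nothing useful. All records and thresholds are illustrative.

IDLE_CPU_THRESHOLD = 0.10  # assumed cut-off: <10% average CPU over the window

def audit_sprawl(vms):
    """Split an inventory into active and suspect (idle) virtual machines."""
    idle = [vm for vm in vms if vm["avg_cpu"] < IDLE_CPU_THRESHOLD]
    active = [vm for vm in vms if vm["avg_cpu"] >= IDLE_CPU_THRESHOLD]
    return active, idle

inventory = [
    {"name": "web-01", "avg_cpu": 0.42},
    {"name": "build-archive", "avg_cpu": 0.01},  # forgotten clone
    {"name": "db-02", "avg_cpu": 0.63},
    {"name": "test-temp", "avg_cpu": 0.02},      # left over from a test
]

active, idle = audit_sprawl(inventory)
print(f"{len(idle)} of {len(inventory)} VMs look idle:",
      [vm["name"] for vm in idle])
```

Even a crude report like this surfaces the “fewer than half doing anything useful” problem; real tools would draw on hypervisor performance counters rather than static records.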
But the threat of unchecked virtual complexity is not simply a matter of limited efficiency. Perhaps more importantly, that complexity blurs the connection between computing resources and the applications and business processes that rely on them. Understanding that connection is vital if an IT department is to manage its resources according to business priorities.
T-Systems, the IT services division of Deutsche Telekom, is a sophisticated adopter of virtualisation. The computing platform it uses to support customer systems is entirely virtualised, to improve both utilisation and flexibility of compute resources. But the increased complexity introduced by this move drove a wedge between hardware and applications, the company found.
“In traditional systems management environments, there is a close link between hardware and applications,” explains Jörn Kellerman, vice president of application line at T-Systems. “In a virtual environment there is a greater distinction between the two. With an install base of hundreds of virtual machines, you need very sophisticated systems management tools and processes.”
One reason why it is currently difficult to manage virtual environments in a way that is tied to business functionality is the existence of a fundamental technical blind spot, according to Simon Crosby, CTO of the virtualisation and management division at Citrix. “In general, at the virtualisation layer, we get to see the virtual machines, how much memory they have, their CPU requirement etc,” he explains. “But we don’t know how the applications are performing on top of the virtual machines.”
Even at a small scale, simply adding another layer in the IT stack has increased the management workload.
Although very pleased with a virtualisation deployment that has cut his server farm from 21 machines to two, Gordon Paterson, director of information technology at trade union PCS, has found that diagnosing problems has become more time-consuming.
“We were getting poor performance on a virtual NetWare (Novell’s ageing network operating system) machine, and it was impossible to tell whether it was a VMware problem or a Novell problem,” he recalls. “So I was spending half my time on Novell forums, and half my time on VMware ones trying to find out where the problem was. Overall, virtualisation has reduced our management overhead, but it has also increased the problem resolution time.”
But there are further dimensions of management complexity introduced by virtualisation, notably software licensing and security. In the case of licensing, virtual sprawl can mean not just software sprawl but a loss of visibility into licence compliance requirements. The challenge of keeping tabs on licence usage is not helped by the fact that some software vendors are still themselves trying to figure out how to charge customers when metrics such as CPU usage are no longer meaningful.
In the case of security, the concept of a virtual machine is as useful to hackers, saboteurs and intellectual property thieves as it is to data centre administrators. A disgruntled employee might copy a virtual machine onto a USB device and run it at home, for example, or set up a virtual machine that spreads malware throughout the entire organisation. In either case, the typical IT department would struggle to trace the employee’s actions.
Crosby reports that while there is some interesting work being done on virtual machine encryption, the field of virtual security is still immature. “One thing that scares me is the lack of a standard way to encrypt a virtual machine,” he says. “If some vendor went off and encrypted all your VMs, and you broke off the relationship, you could be locked out.”
As well as making day-to-day operations more complex, these virtualisation management challenges threaten to cap one of the more significant promises of virtualisation, namely the ability to deliver highly scalable systems.
In theory at least, if an organisation’s IT infrastructure comprises a large number of standardised virtual machines, rather than a small number of siloed systems each with its own idiosyncratic management requirements, then it can be maintained, updated and managed with repeatable – and, more importantly, automated – systems management processes, allowing it to scale.
That promises to radically lower the staffing overhead of corporate IT operations. “At Google’s highly automated data centres, there is one admin for every 20,000 machines,” explains Citrix’s Crosby. “In the typical enterprise, there are between 50 and 100 servers per admin.”
And while there is some evidence that virtualisation is already improving that ratio, the potential improvements are as yet unrealised, he says. “We haven’t seen the order-of-magnitude improvement that is needed.”
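The argument for standardisation can be made concrete with a toy desired-state loop. The template fields and VM records below are simplified illustrations, not any vendor’s API: the point is that an idempotent pass over identical machines is the same process at ten machines or ten thousand.

```python
# Toy desired-state management: one template, applied repeatably to every VM.
# Because each pass is idempotent, the same automated process scales.
# All names and fields here are illustrative assumptions.

TEMPLATE = {"patch_level": "2009-06", "monitoring_agent": "enabled"}

def enforce(template, vm):
    """Bring one VM's config in line with the template; report what changed."""
    changes = {k: v for k, v in template.items() if vm.get(k) != v}
    vm.update(changes)
    return changes

fleet = [
    {"name": "app-01", "patch_level": "2009-06", "monitoring_agent": "enabled"},
    {"name": "app-02", "patch_level": "2009-01", "monitoring_agent": "enabled"},
    {"name": "app-03"},  # freshly provisioned, nothing configured yet
]

for vm in fleet:
    changed = enforce(TEMPLATE, vm)
    if changed:
        print(vm["name"], "updated:", sorted(changed))

# A second pass is a no-op: the fleet already matches the template.
assert all(not enforce(TEMPLATE, vm) for vm in fleet)
```

The siloed alternative – a different template, or none, per system – is exactly what defeats this kind of automation.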
Tools and process
So how can organisations tackle these management challenges? Any technical challenge is an opportunity for the IT industry, and an ecosystem of vendors selling virtualisation management tools has arisen, especially around the VMware platform.
One such vendor is Veeam, which sells a variety of tools including backup and systems monitoring for VMware environments. For its founder and CEO Ratmir Timashev, at the tools level at least, the unique demands of virtual environments will usher in a new generation of vendors with technology specifically tailored to the virtual world.
“The laws of physics work differently in the virtual world,” he explains. “With backup, for example, in a physical environment you can upload an agent onto every machine. If you did that in the virtual world, you’d be taking up storage with redundant data. You need to do things differently.”
Timashev acknowledges, though, that the existence of these vendors is precarious. “If VMware buys one of the VM lifecycle management vendors, the other ones will go bust.”
Further up the systems management food chain, however, the big names are more entrenched. While VMware itself sells a wide range of management tools, it acknowledges that getting customers to change their strategic systems management platform to cope with virtualisation is too big an ask.
“We want to make it so that our customers who have systems management environments [from the likes of IBM, BMC, HP or CA] don’t need to replace them to manage virtual environments, and we partner with all those companies,” explains Melinda Wilken, senior director of product marketing for VMware’s management range. Indeed, all of those are working to deliver virtualisation management toolsets.
That doesn’t mean it won’t come at a price, she says. “There are rich opportunities for more productised integrations.”
Given that an important problem is tying virtual infrastructure to applications and business functionality, Citrix’s Crosby believes that the business service management vendors, such as BMC and CA, whose tools are designed to do just that, have an important role to play. But so too do the applications manufacturers, he adds.
“SAP has developed the ability for an application workload to request information from the virtualisation layer,” Crosby explains, “so the workload knows how to scale itself.”
From a tools perspective, the Holy Grail is a single system from which one can fully manage virtual and physical environments. But while most systems management tool vendors have plug-ins for virtual systems, enabling truly flexible virtual environments requires systems management tools that interact in a fashion far closer to real time than is currently the norm.
“When you’re taking down and putting up new virtual machines on a week-to-week basis, you’ve got to keep all your management systems synched in real time, or you have a lot of problems,” explains Mary Johnson Turner, research director at IDC. “They need to be able to update and to proliferate the relevant information through the stack at a greater speed. The industry as a whole is still figuring out how to do that.”
Turner adds a familiar refrain in the systems management world: “And it is as much a process issue as a question of having the right tools.”
Role of ITIL
Fortunately for enterprise IT shops, the systems (or rather, service) management processes outlined in the industry-standard ITIL framework are sufficiently decoupled from specific technical tools to accommodate virtual environments. “The processes recommended by ITIL are all perfectly applicable to virtual environments,” says Turner.
Indeed, recent IDC research has found that greater virtualisation deployment drives organisations to apply best practice processes to their virtual environments. Nearly 80% of US organisations that have more than 50 virtual machines say they currently apply or plan to apply ITIL or other best practice process models to managing their virtual infrastructure, the survey found. Among those organisations with fewer than 50 machines, that figure was just 48%.
The ITIL processes of configuration and change management are of particular importance in virtual environments, adds Turner. Adaptivity’s Houghton agrees: “Configuration management and understanding what those virtual machines are doing is vitally important,” he says, “and if you don’t change the processes when you change your architecture, you are going to get in a lot of trouble.”
Unfortunately, for the same reason that they are important, those processes can be difficult to execute in a virtual environment. “They can start to feel a little stressed as virtualisation progresses,” says Turner.
For most organisations, then, as virtualisation proliferates they will have to step up their systems management processes, and intensify their efforts to tie those processes to business requirements.

But by the same token, if an organisation has already achieved a mature level of business service management, then virtualisation need not prove too difficult to integrate into existing processes.
UK retailer Tesco, for example, is currently preparing to virtualise 1,500 of its servers onto 120 HP blades using Citrix’s XenServer product. But Tesco’s IT director Nick Folkes is unfazed by the associated management challenge.
“We have hundreds of thousands of physical devices in our IT environment, and adding virtual machines on top of that could have caused even greater complexity,” he explains. “But we spent a good year changing our development and support processes in line with managing business processes.” That introduces a ‘layer of abstraction’ that in fact makes it simpler to manage IT effectively, he says.
But no one is losing sight of the fact that the benefits far outweigh the management overhead. At pensions company Standard Life, a radical programme of systems refresh, consolidation and server virtualisation has dramatically reduced its data centre, server and energy requirements. By virtualising 70% of its Intel servers, it has been able to decommission 143 of its 400 production machines. Along with consolidation onto a small number of more powerful platforms, virtualisation has cut its occupied floor space from 1,400 sq m to 500 sq m, and taken its annual power consumption from 11 million kilowatt hours to seven million – a saving of £300,000 a year in energy costs alone, says Neil McPherson, Standard Life’s data centre manager.
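Those energy figures are internally consistent: the four-million-kWh annual saving at £300,000 a year implies a unit cost of about 7.5p per kWh. A quick check (the figures are from the article; the implied tariff is derived, not reported):

```python
# Sanity check on Standard Life's reported energy savings.
before_kwh = 11_000_000       # annual consumption before virtualisation
after_kwh = 7_000_000         # annual consumption after
annual_saving_gbp = 300_000   # reported annual energy cost saving

saved_kwh = before_kwh - after_kwh
implied_price_per_kwh = annual_saving_gbp / saved_kwh

print(f"Energy saved: {saved_kwh:,} kWh/year")
print(f"Implied tariff: £{implied_price_per_kwh:.3f} per kWh")  # £0.075
```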
It is typical of the contradictory nature of virtualisation that it can simultaneously confound organisations’ ability to tie IT resources to business functionality while also allowing them to dynamically allocate those resources according to demand.
But for many, capitalising on the potential of virtualisation will require more disciplined systems management processes and additional tooling. The future is looking increasingly virtual, and those organisations that bring the new wave of virtual processes and tools on board sooner rather than later will be in the best shape to benefit from it.