Virtual Safety Net

It is an uncomfortable fact for business leaders to live with: incidents they are powerless to influence can kill their companies. Fires, hurricanes and malicious attacks can cripple operations, and, as analyst group Gartner is wont to point out, 40% of businesses that suffer major interruptions are dead within two years.

Historically, managers have had two options to mitigate the risk: invest in redundant infrastructure that can be brought online instantly in the event of a disaster, or invest in resilient systems that fail over automatically. Both options share the same downside: the investment adds nothing to business profitability – a fact that reduces their appeal to management boards.

Today, however, IT professionals are working with a third option – virtualisation technology – that allows them both to protect the business at a much lower cost and to add usable, non-redundant processing horsepower.

Low-cost resilience

Virtualisation technology, which analyst group Gartner defines as “the pooling of IT resources in a way that masks the physical nature and boundary of those resources from the user”, is hardly new – IT staff have been partitioning mainframes and creating virtual machines on them for years. More recently, however, a new generation of virtualisation products has been aimed at low-end servers.

At US-based car manufacturer Subaru of Indiana Automotive, managers realised that systems downtime was hurting profitability. They calculated that the company spends $20,000 an hour on wages alone when its inventory tracking system is unavailable.

The company has used virtualisation technology to create a pool of virtual machines that are not tied to any one physical server, so workloads now fail over automatically if one of its physical servers runs into a problem. “Our downtime number last year was under three hours. The year before, it was 28 hours,” says Jamey Vester, a production controller at Subaru.
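Taking those figures at face value – $20,000 an hour in wages, 28 hours of downtime before the change and at most three hours after it – a back-of-the-envelope calculation illustrates the scale of the saving. The short Python sketch below is purely illustrative and assumes wages are the only cost of an outage:

# Back-of-the-envelope downtime cost, using the figures Subaru quotes above.
# Assumes lost wages are the only cost of an outage (an understatement in practice).
HOURLY_COST = 20_000  # dollars per hour the inventory system is unavailable

downtime_hours = {"before virtualisation": 28, "after virtualisation": 3}

for period, hours in downtime_hours.items():
    print(f"{period}: {hours} hours, roughly ${hours * HOURLY_COST:,} in wages")

# before virtualisation: 28 hours, roughly $560,000 in wages
# after virtualisation: 3 hours, roughly $60,000 in wages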

Some customers are happy to use virtualisation technology to provide high availability across all their servers, but others prefer to use it tactically, says Richard Garsthagen, European technical marketing manager at VMware. “In a typical business, there are probably only between 5% and 10% of servers where high availability is critical. That means you can do it at a much lower cost.”

Telecoms equipment maker Qualcomm introduced virtualisation technology primarily to ease the burden of server provisioning within the company. But the move has also dramatically improved uptime, enabling the company to upgrade or replace hardware in the event of a systems failure without any impact on the applications running on it.

Such examples show how real-time application clustering is being used to provide high levels of availability without having to double up on hardware investment, says Neil Macehiter, founder of analyst house Macehiter Ward-Dutton. “But this is hardly a risk-free approach. You need to consider what happens in the event of multiple failovers.”

But virtualisation need not happen only at the server layer, says Rich Lechner, vice president in charge of virtualisation technology at IBM. “It is possible to provide that layer of abstraction right across the entire technology stack: server, storage and networks,” he says. That would effectively decouple IT operations from any one piece of the physical infrastructure, potentially making it far easier to build in resilience throughout, he adds.

Road to recovery

As well as spreading the business continuity risk, virtualisation technology is also playing a key role in disaster recovery. Typically, disaster recovery sites are expensive to maintain because they need to house exact copies of the live workload they are designed to pick up in the event of a major failure, says VMware’s Garsthagen. That level of mirroring can dictate every aspect of the infrastructure, from the version of the operating system being used to the system BIOS.

By using virtual machines, however, the production environment can be accurately replicated, without having to allocate specific physical machines to undertake the processing. “That represents an enormous opportunity to cut costs,” says Garsthagen.

The vision is not without its problems, though. As Frank Gillett, principal analyst at IT advisory group Forrester Research observes: “If you put ten virtual servers on a single machine, you eliminate nine pieces of hardware, but you still have ten operating systems to maintain.”

And when it comes to mission-critical applications, most people in charge of business continuity will still demand that dedicated hardware is available at the recovery site.

For non-critical systems, there is a greater appetite for virtualisation technology. IT staff at Foxwoods, a major casino and resort in Connecticut that attracts up to 40,000 customers a day, back up 2.5 terabytes of data every night. By introducing a virtual tape library – a technology that allows hard disks to behave as though they were tape drives – casino managers have seen a 200% improvement in data restoration speeds.
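The idea behind a virtual tape library is simple: the backup software carries on addressing what it believes is a tape drive, while the data actually lands on fast disk. The Python sketch below is a hypothetical illustration of that abstraction – it is not Foxwoods’ system or any vendor’s actual API:

# Minimal sketch of the virtual-tape-library idea: a disk file that presents
# a sequential, tape-like interface to backup software. Hypothetical example.
class VirtualTape:
    def __init__(self, path):
        self._f = open(path, "ab+")  # a disk file stands in for a tape cartridge

    def write_block(self, data: bytes):
        self._f.write(data)          # append-only, like streaming to tape

    def rewind(self):
        self._f.seek(0)              # "rewinding" is just a disk seek

    def read_block(self, size: int) -> bytes:
        return self._f.read(size)

    def eject(self):
        self._f.close()

# Restores are fast because locating data is a disk seek,
# not a mechanical tape operation.
tape = VirtualTape("backup_0001.vtl")
tape.write_block(b"nightly backup data...")
tape.rewind()
print(tape.read_block(22))
tape.eject()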

Despite such successes, Macehiter says that virtualisation technology is only likely to be used for business continuity by companies that already take such processes seriously. “The cost savings are probably only going to impress those that already embrace business continuity. There are still some who will resist being converted.”

 

[Chart: the number of respondents using mirroring, disaster recovery and high availability technologies shows that virtualisation could play a major role in reducing the costs of business continuity provision.]
