From contingency to continuity



Business continuity planning is all about being prepared for unplanned, and sometimes extraordinary, events. But who could possibly have anticipated the rapid and dramatic changes that have swept through business continuity itself in the past four years?

In early 2000, business continuity, and its associated field of disaster recovery, was important, even essential, for many organisations, especially in financial services. But, as frequent CIO surveys showed, it was not a top priority.

As one manager put it, when the board had a choice of investing in a new dot-com fund, or in upgrading the company’s disaster recovery facilities, there was only ever one winner.

All that has now changed – rapidly, radically and permanently. Today, business continuity is not only at or near the top of the corporate IT agenda in almost every analyst survey (Gartner, Meta Group and IDC, for example), but it is expected to remain there for years to come. IDC forecasts that spending in this area, already at $66 billion in 2001, will jump to $155 billion in 2005.

One example of how much things have changed: Cris Conde, the CEO of business continuity specialist SunGard, has spent much of his career trying to persuade companies to do more in this area. But in January 2003, he found himself on four separate panels at the World Economic Forum in Davos, even addressing a private meeting of 50 government ministers. The practice may often be detailed and difficult, but business continuity is now, truly, a top-level issue.

We’re ready… or at least, we think we are

Most research shows that businesses, especially large businesses, are well prepared for a disaster. But there is clear evidence that their business continuity plans are often either insufficient, or insufficiently exercised.

A recent study by Macarthur Stroud International, for example, found that 83% of medium to large companies in the major European economies had a disaster recovery plan. But 14% had not reviewed their strategies in the previous two to three years. Gartner Group, meanwhile, found that only 50% of global 2000 enterprises have ‘fully tested’ disaster recovery plans.

Small and medium-sized enterprises (SMEs) are not nearly as well protected. Research commissioned by Dell found that many have no disaster recovery plan at all – despite being worried about the risks.

Why? Four separate events have played a key role in pushing the issue up the agenda: the terrorist attacks of 2001; the dramatic corporate failures of Enron, WorldCom and others; the rolling wave of power cuts in North America and Europe in 2003; and the impact of a few powerful computer viruses that spread across the world.

All of these threatened, for a while, to dangerously destabilise business and disrupt civil services. Governments have responded with a slate of initiatives and legislation, ranging from the US Sarbanes-Oxley Act, designed to improve corporate governance and resilience, to the planned UK Civil Contingencies Act, which will mandate local authorities and other organisations to put improved disaster recovery plans in place.

All of these have made global headlines and have changed the perceptions of legislators and business executives. But most experts see these events as powerful catalysts, helping to remind leaders of their responsibilities, rather than the true reason why IT business continuity has suddenly become such an important issue.

That can be put down to a wider, long-term trend: IT systems are now at the heart of almost everything any organisation does – and are becoming more critical, and more central, by the day.

Whatever it is called – business-on-demand, the real-time enterprise, or the Internet-enabled business – it all means much the same thing: real-time, automated business processes, executed and recorded electronically, now span not just entire organisations, but whole industries.

It is these business processes – rather than merely the hardware, software or the data per se – that are most vulnerable, and which cause the most problems when they are disrupted. Whether it is trading a government bond, or booking a patient into hospital, or buying the weekly groceries online – all these increasingly rely on whole networks of computers performing without failure – and on the ability to smoothly bypass problem nodes if failures do occur.

As SunGard vice chairman Till Guldimann put it in a recent white paper: “The contingency challenge has shifted from disaster recovery – cleaning up and getting back to work after a cataclysmic event – to operational resilience – designing your enterprise to operate effectively, right through a disruption.”

The Basel II Accord, which is currently driving a huge wave of investment in financial services, is presenting banks with exactly this challenge. If they can show that their operational systems are both accurate and resilient, they will be allowed to operate with fewer capital reserves – a financial reward that mirrors the move in retail and manufacturing to operate with less capital tied up in inventory.


How is all this manifesting itself on the ground – out in industry and business? Here, the answer is surprisingly patchy.

Whereas some industries (financial services stands out), and some individual companies, have begun huge and radical investment and re-education programmes, many others have clearly not acted fully on their growing awareness, nor made the necessary investments or strategic changes.

Downtime costs money

In a recent survey, Infoconomy, in association with APC (American Power Conversion), asked 230 organisations “What would be the cost of downtime at your company?” One-quarter did not know, but one-quarter said it would cost a minimum of £10,000 an hour.

There are certainly some signs that the message has sunk in at the top tier of management. One US survey reported, for example, that more than 80% of US CEOs now directly and regularly review IT business continuity and corporate governance systems; and spending intention figures put business continuity at or near the top of the list.

But other indicators, such as actual spending figures and CIO surveys, are much less clear. IBM, SunGard, Hewlett-Packard and others in the business continuity planning business, for example, all reported an upturn in business in 2003, in some cases in double digits. But these figures, while above IT industry growth rates, are not as dramatic as they might be, given the apparently extraordinary demand for better business continuity.

One reason for this is that the improvement in the reliability of IT systems over the past decade has effectively provided customers with a business continuity premium. RAID (redundant arrays of independent disks) devices and storage area networks, for example, drastically reduce the threat of data loss due to systems failure. Equally, products and services from storage companies, such as Hitachi and EMC, allow for real-time or near real-time data replication over a network to a remote location.

“Users can use these network technologies in a very effective way. Many have gained efficiencies of a factor of three or four to one. And there is a fallout – the systems become more resilient and the continuity factor improves,” says business continuity consultant Hamish Macarthur of Macarthur Stroud International. But using these products is only a means towards improved continuity – it does not eliminate the need for detailed and practised planning.

Lack of review

A more disturbing pattern, however, has been uncovered by repeated surveys of executives at businesses heavily dependent on IT. These surveys – from the Gartner Group, the Business Continuity Institute and others – suggest that business continuity strategies and plans are not thorough enough, are not reviewed frequently enough, and are certainly not practised enough.

“More companies are trying to become a real-time enterprise (RTE), but in the race to get there, many enterprises are not implementing critical business continuity plans,” according to Gartner. In a recent study, it found that less than one-quarter of Global 2000 enterprises have invested in comprehensive business continuity planning, and only half have what can be termed ‘fully tested’ disaster-recovery plans.

If such findings present a disturbing picture about large companies, mention of small and medium-sized enterprises (SMEs) is likely to send the specialists into hand-wringing contortions. “SMEs cause me real concern. I’d be surprised if many SMEs have any recovery capability in place at all,” says Philip Carter, head of SunGard Professional Services for SunGard Availability Services. He points out that after the World Trade Center attacks, several significantly sized SMEs went out of business. Similarly, after the New York power blackouts, smaller firms were most affected.

One reason why SMEs are often less prepared is price – so the emergence of specialist disaster recovery services that target SMEs may eventually help. But over time, a more significant driver will encourage all organisations – large and small – to improve their business continuity planning: interdependence.

As the age of real-time business grows, most end-to-end business processes will involve many organisations. And, on the principle that any process is only as strong as its weakest link, the most diligent – and most powerful – will force the rest to improve their systems and their practices.

It may take a few years, but it cannot be too long before all organisations will need to provide publicly available evidence of their business resilience.


Where businesses are vulnerable
Function/system affected | Most common causes | Effects
Headquarters | Fire, power loss, weather, major incident | Loss of productivity, revenue and customers. Danger to employees’ lives.
Production server | Disk drive failure, network outage, underpowered server programs | Inability to access data and to process transactions.
Back-up server | Related to production server failure (above) | Loss of data, programs and transactions.
Ecommerce systems | Software failure, server failure, security flaws | Loss of revenues, transactions, customers and market share.
Internet access | Network outage, server failure | Loss of revenues, transactions, customers and market share.
Secure systems | Hacking, denial of service, viruses | Loss of trust, confidential information, customers and productivity.
Call centre | Server failure, denial of access, software failure | Loss of revenue, customers and customer satisfaction.
Billing centre | Application failure, operator errors, hardware failure | Slowed or interrupted cash flow, loss of productivity.


Ben Rossi

