He who laughs last, so the saying goes, laughs longest. In the world of business continuity planning there is an essential extension to this truism: he who laughs longest is generally the person who last completed a back-up.
Indeed, although modern business continuity strategies are more comprehensive and sophisticated than ever – capable of countering virtually any eventuality from a severed leased line to a full-scale terrorist attack – the starting point for any effective strategy is the same as it has always been: protect corporate data assets by practising back-up and other forms of copying such as replication.
Just how central those remain to the task of preserving the overall integrity of corporate IT infrastructures is illustrated by the unchecked growth in demand for back-up and replication technologies.
The most recent figures, from Dataquest, Salomon Smith Barney and Harris Information Systems, show that network and host-based back-up and replication software accounts for more than 60% of today’s $10.4 billion storage management software market, still easily leading sales of arguably more sophisticated storage software such as information lifecycle management (ILM), virtualisation and other storage resource management (SRM) products.
To some extent, the unexpected resilience of back-up and replication sales reflects a recent shift in the priorities of corporate buyers when it comes to storage procurement. Three years ago, when companies were still struggling to keep up with demand for data storage capacity, priorities were firmly focused on bringing greater efficiencies to storage operations. That desire to optimise existing infrastructures sharpened demand for storage area networking and network attached storage technologies.
Since then, however, the demands of regulatory compliance and the intensifying threats of cyber-crime and political extremism have refocused attention on data assurance basics – and back-up and replication investments have subsequently climbed the corporate procurement priority list.
However, buyers have by no means abandoned networked storage. Having embraced SRM and virtualisation, they are looking at options that provide all of the data assurance qualities of traditional back-up solutions, but without compromising on the efficiency and flexibility that networked storage has to offer. Far from standing still, back-up and replication practices have themselves become more sophisticated.
Traditional data assurance regimes, based on overnight batch back-up, are now no longer sufficient for many businesses. Where applications are designed to be available 24×7, organisations are using online back-up tools and applying ILM software that prioritises back-ups based on the importance of information stored.
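As an illustration of the prioritisation such ILM software performs, the following toy sketch maps a data set's declared importance to a back-up regime. The tier names and schedules here are invented for illustration, not taken from any particular product:

```python
# Illustrative sketch only: a toy ILM-style policy that assigns a back-up
# regime by the declared importance of each class of information.
# Tier names and schedules are hypothetical.

BACKUP_POLICY = {
    "critical": "continuous replication",
    "important": "hourly online back-up",
    "routine": "nightly batch back-up",
    "archive": "weekly back-up to tape",
}

def backup_schedule(importance: str) -> str:
    """Map a data set's importance class to a back-up regime."""
    # Unknown classes fall back to the traditional overnight batch run.
    return BACKUP_POLICY.get(importance, "nightly batch back-up")

print(backup_schedule("critical"))  # continuous replication
```

The point is not the specific tiers but the principle: the back-up regime follows the value of the information, rather than treating all volumes alike.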
“When you start talking about information lifecycle management and the role that back-up plays in that, you are not really talking about what people used to call back-up,” says Andy Cleverly, director of technology marketing for Oracle EMEA.
Modern back-up systems are deployed not as discrete adjuncts to specific applications or data volumes, but as elements in a richer and increasingly virtual pool of storage resources. The challenge that this poses to storage managers is not trivial, though. In theory, there are no longer any real technological barriers to companies backing up and replicating all transaction data without compromising systems performance.
In practice this belt-and-braces approach to data assurance is rarely affordable. Instead, says Chris Stuart, a technical consultant with storage giant EMC, companies are learning to take a ‘top down’ approach to data assurance, which starts with a business impact assessment. “This measures the risk that data loss presents to an organisation, and leads to an estimate of how much can reasonably be spent on preventing it.”
Following this, says Stuart, they should then make an assessment of what existing back-up resources are already available to them, and only then begin to consider what new systems they need to procure and how best to deploy them.
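In caricature, the first step of that assessment reduces to an expected-loss calculation: the chance of a loss event multiplied by its cost bounds what can reasonably be spent on prevention. This is a hypothetical sketch with invented figures, not EMC's methodology:

```python
# Hypothetical worked example of the 'top down' sizing described above:
# estimate the expected annual cost of data loss, which bounds what can
# reasonably be spent on preventing it. All figures are invented.

def expected_annual_loss(outage_probability: float, cost_per_outage: float) -> float:
    """Expected yearly loss = chance of a loss event x cost if it happens."""
    return outage_probability * cost_per_outage

# e.g. a 5% yearly chance of losing a day's transactions worth 200,000
budget_ceiling = expected_annual_loss(0.05, 200_000)
print(budget_ceiling)  # 10000.0
```

A real business impact assessment weighs many loss scenarios and intangibles such as reputation, but the arithmetic above captures why the exercise yields a spending ceiling rather than a blank cheque.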
This measured approach sounds reassuringly straightforward. In practice it is a lengthy and complex process. Storage managers need to work closely with line-of-business professionals to establish crucial agreement on data back-up and replication principles, such as the recovery time objective (RTO) – the maximum time that data can be offline before it is recovered – and the recovery point objective (RPO) – the extent to which an organisation can afford to lose differing types of data.
Most back-up policies are based on a trade-off between these two parameters. For companies whose business depends on a reputation for reliability and availability, for instance, the balance of their data assurance strategy will be geared to a tight RTO: some degree of data loss is acceptable, so long as systems availability is optimised. In systems terms, this may translate into a back-up strategy founded on networked storage systems that are themselves backed up relatively infrequently to supporting archive systems.
Alternatively, where companies place a greater premium on avoiding data loss than on availability, recovery times may be sacrificed to allow more data to be backed up more frequently.
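The trade-off between recovery speed and tolerable data loss can be sketched as a simple policy function. The tiers and thresholds below are illustrative assumptions only, not a vendor's decision matrix:

```python
# A minimal sketch of the trade-off described above: pick a back-up
# approach from the required recovery time (RTO) and the tolerable data
# loss (RPO), both expressed in minutes. Tiers and thresholds are
# illustrative only.

def choose_strategy(rto_minutes: int, rpo_minutes: int) -> str:
    if rpo_minutes == 0:
        return "synchronous replication"          # no data loss tolerated
    if rto_minutes <= 60:
        return "online back-up with hot standby"  # fast recovery required
    if rpo_minutes <= 24 * 60:
        return "nightly batch back-up"            # up to a day's loss tolerable
    return "weekly back-up to near-line storage"

print(choose_strategy(30, 15))  # online back-up with hot standby
```

Tightening either parameter pushes the answer towards the more expensive end of the scale, which is exactly why the business impact assessment has to come first.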
Ultimately, most organisations will implement a variety of back-up strategies, founded on a mix of online and near-line systems, and on differing storage media offering different cost-of-ownership profiles. The only option unavailable to the continuity-conscious company is to decide that back-up is not an option.