Even a back-up policy that looks sound on paper can be found wanting in practice.
CTD, a German call centre operator working for the likes of Deutsche Telekom and Computer Associates, suffered a
fire in its air conditioning system in March 2003. Within minutes smoke and fumes had spread through the building.
The fire started at night, so there were no casualties, but the company is still feeling the financial pain today. Although technical staff at CTD took daily back-ups to tape and stored them in a fire safe, on the day of the fire a couple of tapes were in an ordinary cupboard, not the safe. They were lost.
Executive manager Marcus Hoppe estimates that the lost data – and the time it took to restore CTD’s systems – has cost the business between 5% and 10% of its €12 million turnover.
Now, as part of a more robust back-up and business continuity plan, CTD replicates its data between its two sites, at Dusseldorf and Gelsenkirchen. And instead of writing back-up data directly to tape, CTD’s technicians first copy it to a network attached storage device, so that the information can be backed up and replicated far more quickly. Should disaster strike twice, CTD is now far better prepared.
CTD is not an isolated example. A survey carried out by industry researcher Winmark suggests that only 35% of companies are more than 90% confident in their back-up systems.
Growing data volumes are the root cause: the survey found that, among companies with more than 500 staff, data storage requirements are growing at 21.5% annually. And the challenge is made more complex because there is less time to back up information. Banks and financial services are a case in point: extended branch hours, phone banking and now Internet banking services have narrowed – or even closed – the window for back-ups.
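A quick compound-growth calculation puts the survey’s 21.5% figure in perspective (the 10TB starting volume below is a hypothetical example, not a figure from the survey):

```python
import math

growth_rate = 0.215  # 21.5% annual growth, from the Winmark survey

# Years for a data estate to double at this compound rate
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"Doubling time: {doubling_time:.1f} years")  # roughly 3.6 years

# A hypothetical 10TB estate after five years of such growth
volume_tb = 10 * (1 + growth_rate) ** 5
print(f"10 TB today -> {volume_tb:.1f} TB in five years")
```

In other words, at the surveyed rate a company’s storage estate doubles in well under four years, while the nightly window available to copy it all is shrinking.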
Most banks in Europe now aim to have no downtime at all. “We have become horribly reliant on the integrity of our data,” says David Weymouth, CIO of Barclays Bank. “Developments such as Internet banking also create more single points of failure.”
This is putting a strain on conventional storage management techniques, which rely on taking applications offline during the back-up process.
At the technical level, IT managers are looking to disk-based back-up either to complement or replace tape-based systems. Tape retains some important advantages: it is still a low-cost medium; it is portable (tapes can be transported off site or put into a fire-proof vault); and the software for managing tape libraries is tried and tested.
Research by the storage division of Sony suggests that tape costs for the latest generation of cartridges (SAIT) will continue to be lower than those for disk. However, the company predicts that low-cost disk (ATA RAID) will be cheaper per gigabyte than ‘conventional’ tape by 2006.
“Businesses are tackling growing data volumes by resorting to new devices, such as disk-based systems, or staging their back-ups to disk,” explains Claus Egge, storage analyst at IDC. “They are also moving to new storage software that recognises different devices, in order to make the technology as automated as possible.”
Tapes, Egge points out, create a single point of failure because the medium itself is relatively fragile. “If you want to make sure you can restore from tape, you need the same data on at least two tapes,” he says.
This is becoming harder for companies to handle, as data volumes grow. Barclays Bank estimates that it has around 500TB of data, with 100TB being backed up each day. Its back-up provider, Enable, holds some 80,000 physical tapes, amounting to 2.5 petabytes (PB) of data; Barclays alone accounts for the equivalent of 8,000 physical tapes.
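A back-of-envelope cross-check of those figures (using only the numbers quoted above, with 1PB taken as 1,000TB) runs as follows:

```python
# Figures quoted in the article (decimal units: 1 PB = 1,000 TB = 1,000,000 GB)
total_tapes = 80_000       # physical tapes held by Enable
total_data_tb = 2_500      # 2.5 PB of data across those tapes
barclays_tapes = 8_000     # Barclays' equivalent share

gb_per_tape = total_data_tb * 1_000 / total_tapes
print(f"Implied average per tape: {gb_per_tape:.2f} GB")

barclays_tb = barclays_tapes * gb_per_tape / 1_000
print(f"Implied Barclays data on tape: {barclays_tb:.0f} TB")
```

The implied ~250TB on tape is well below the bank’s own 500TB estimate; the article does not say why, though compression or data not yet archived to tape would account for a gap of that kind.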
All but the latest tape systems are linear, so the time it takes to write back-up data and to read it back is governed by the physical speed of the tape drive, and by how quickly tapes can be changed. Disks are clearly more flexible: they allow random access to data, are less prone to deterioration and can read and write information almost instantly.
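The practical difference between linear and random access is easiest to see as a toy restore-time model (the drive figures below are hypothetical illustrations, not vendor specifications):

```python
def tape_seek_seconds(offset_gb, wind_speed_gb_per_s=0.5):
    """Linear medium: the drive must wind past everything before the target.
    wind_speed_gb_per_s is a hypothetical figure for illustration."""
    return offset_gb / wind_speed_gb_per_s

def disk_seek_seconds(offset_gb, avg_seek_ms=9.0):
    """Random access: positioning time is (roughly) independent of offset.
    avg_seek_ms is likewise a hypothetical figure."""
    return avg_seek_ms / 1000.0

# Restoring a file that sits 100 GB into the medium
tape_time = tape_seek_seconds(100)   # 200 seconds of winding
disk_time = disk_seek_seconds(100)   # a few milliseconds
print(tape_time, disk_time)
```

The point is not the exact numbers but the shape of the curve: tape access time grows with how far into the medium the data sits, while disk access time does not.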
Unlike tapes, disk-based data is typically online all the time, and adding more storage capacity is simply a matter of adding more, or larger, drives. But it is greater back-up speed that prompts most interest in disk-based systems.
“The primary back-up for large enterprises is moving to disk, with the data then moving off to tape. That is the way the industry is going,” says Bob Hammer, chief executive of CommVault Systems, a storage company that specialises in back-up for the Windows market. “As many as 70% of our customers are now backing up to disk rather than tape.”
At the highest level, servers, storage arrays and arrays in storage area networks (SANs) make use of RAID (redundant array of inexpensive disks) technology to provide near-continuous access to data. In a mirrored configuration, the data is written simultaneously to two physical drives, so if one fails the data is still available on the other. In such circumstances, users of the application may never even notice a failure.
RAID alone cannot provide a full back-up, however, as a data error will affect all copies of the data within a RAID device. A physical disaster, such as a fire, could also destroy both the server and its storage devices. As a result, companies in environments where high availability is critical, such as financial services or ecommerce, are using data mirroring to ensure continuous availability. Mirroring simultaneously copies all the data for a critical application to a second, separate hardware installation.
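The availability guarantee mirroring provides can be sketched in a few lines (a simplified illustration, not any vendor’s implementation): every write lands on both copies, so a read still succeeds after the loss of either one.

```python
class MirroredStore:
    """Toy sketch of mirroring: identical writes go to two 'drives'."""
    def __init__(self):
        self.drives = [{}, {}]            # two independent copies

    def write(self, key, value):
        for drive in self.drives:         # every write hits both drives
            drive[key] = value

    def read(self, key):
        for drive in self.drives:         # any surviving copy will do
            if key in drive:
                return drive[key]
        raise KeyError(key)

    def fail_drive(self, index):
        self.drives[index] = {}           # simulate losing one drive entirely

store = MirroredStore()
store.write("ledger", "balance=100")
store.fail_drive(0)                       # one drive dies...
print(store.read("ledger"))               # ...the data is still available
```

Note the sketch also shows mirroring’s limitation, as described above: a bad write (a data error) would be faithfully copied to both drives.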
This can be handled in-house, but faster and cheaper bandwidth is encouraging more businesses to mirror data off-site – either at another company facility or with an outsourcer for greater protection against physical threats. Barclays, for example, mirrors its data to an IBM data centre.
The cost of the high performance storage systems needed for mirroring makes them less suited to long-term data back-up. But a multi-tiered strategy, with mirroring reserved for the most critical data and cheaper disk or tape handling the rest, balances performance against cost.
Another disk-based technology, snapshotting, plays a critical role in protecting data. Snapshot software makes a near instantaneous copy of data and then saves the copy either to a local disk or to a storage area network device.
With snapshot technology, there is no need to take the main application offline to make the initial copy. IT managers can then either keep the snapshot as a back-up in its own right, or archive it to lower-cost disks or to tape. Storage analysts say that snapshotting is giving new life to older tape systems as it takes away the time constraints that come with copying straight to tape.
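One common way snapshot products achieve that near-instantaneous copy is copy-on-write: taking the snapshot records almost nothing, and a block’s old contents are only physically preserved at the moment the live application first overwrites it. A minimal sketch of the idea (a generic illustration, not how any particular product works):

```python
class CopyOnWriteVolume:
    """Toy copy-on-write snapshot: taking the snapshot is near-instant
    because no data is copied up front; old block contents are preserved
    lazily, just before the live volume overwrites them."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live data: block_id -> contents
        self.snapshot = None         # preserved pre-snapshot contents

    def take_snapshot(self):
        self.snapshot = {}           # near-instant: nothing copied yet

    def write(self, block_id, data):
        if self.snapshot is not None and block_id not in self.snapshot:
            # first overwrite since the snapshot: preserve the old contents
            self.snapshot[block_id] = self.blocks.get(block_id)
        self.blocks[block_id] = data

    def read_snapshot(self, block_id):
        # changed blocks come from preserved copies; the rest are shared
        if self.snapshot is not None and block_id in self.snapshot:
            return self.snapshot[block_id]
        return self.blocks.get(block_id)

vol = CopyOnWriteVolume({"a": "v1", "b": "v1"})
vol.take_snapshot()
vol.write("a", "v2")                 # the application keeps running
print(vol.read_snapshot("a"), vol.blocks["a"])  # snapshot sees v1, live v2
```

Because the snapshot stays consistent while the application carries on writing, it can then be streamed to tape at leisure, which is exactly what removes the time pressure described above.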
Direct Line, the insurance company owned by financial services giant HBOS, is one business that uses snapshotting in this way. Using EMC Symmetrix arrays and TimeFinder software, the company takes snapshots of its production data and then copies it to tape overnight. According to Bhaktesh Patel, head of infrastructure development, this has cut back-up time by four hours. As that shows, the back-up headache may not be going away, but there are techniques for taking away the pain.