Are enterprises taking business continuity seriously?

 

Disasters can strike at any time, whether by human error, cyber-attack or natural events like earthquakes, fires, floods and hurricanes.

Even so, it is tempting to sit back, relax and not worry about the consequences – often for cost reasons. But investment in business continuity is like an insurance policy: the best way to prevent downtime is to stay a step ahead of any potential disaster scenario.

When unforeseen incidents do occur, the organisation’s disaster recovery plan should kick in instantly, so that business continuity is maintained with little or no interruption.

An e-commerce firm, for example, could lose sales to its competitors if its website goes down, and the downtime can damage the company’s brand reputation. For these reasons alone business continuity can’t wait – yet large volumes of data have traditionally required a batch window for back-up and replication.

This becomes increasingly challenging with the growth of data volumes: the industry is no longer talking mere zettabytes, but is now into the realms of yottabytes and brontobytes.

Avoiding complacency

So are organisations taking business continuity seriously? Apparently they are – but how they handle disaster recovery is another matter. The difference lies in how companies manage disaster recovery and business continuity, and in the emphasis they place on time: putting solutions in place to ensure that a high level of uptime and network performance is maintained.

More to the point, Gartner says that disaster recovery and business continuity are merging into IT service continuity. The analyst firm has reported that 34% of its inbound calls come from corporate customers asking for analyst assistance.

The focus therefore needs to be on how companies improve their IT service continuity, as the two disciplines become, in many respects, synonymous. Phil Taylor, director and founder of IT consultancy firm Flex/50, concurs, stating that a high percentage of organisations are taking disaster recovery and business continuity seriously.

“Businesses these days can’t afford to ignore business continuity, particularly because of our total dependence on IT systems and networks,” says Taylor. The ongoing push for mobile services and media-rich applications will also generate increasing transaction rates and huge data volumes.

The problem is that most businesses think they are ready for business continuity – but it is only when disaster actually strikes that the real problems appear. Just as an insurance policy protects individual policyholders or organisations from unplanned events, enterprises need to implement disaster recovery and business continuity plans to minimise the reputational and financial risks associated with downtime. A failure to do so could cost not only time and money, but also customers and competitive advantage.

Budgetary challenges

Bryan Foss, a visiting professor at Bristol Business School and fellow of the British Computer Society, says, “Operational risks have often failed to get the executive and budgetary attention they deserve as boards may have been falsely assured that the risks fit within their risk appetite.” Another issue is that you can’t plan for when a disaster will happen – but you can plan to prevent it from causing loss of service availability, or financial and reputational damage.

To prevent damaging issues from arising, organisations need to support end-to-end applications and services whose availability is unaffected by disruptive events. When disruptions do occur, the end user shouldn’t notice what’s going on – it should be transparent. The consequences of failing to achieve this were clear when Hurricane Sandy struck in October 2012, damaging a number of data centres in New York and taking websites offline.

Traditionally, back-ups are performed overnight, when most users have logged off their organisation’s systems. With today’s expectation that services will be available around the clock, every day of the week, and with ever-increasing data volumes, that back-up window is being squeezed more than ever before. This has led to solutions being chosen according to an organisation’s recovery point objective (RPO) – how much data it can afford to lose – and recovery time objective (RTO) – how quickly service must be restored.
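
As a minimal illustration of how the two objectives constrain a back-up strategy – using hypothetical figures rather than anything from a specific organisation – the following Python sketch checks a nightly batch back-up against RPO and RTO targets:

    # Illustrative sketch with hypothetical figures, all in hours.
    rpo_target = 4.0   # recovery point objective: maximum tolerable data loss
    rto_target = 2.0   # recovery time objective: maximum tolerable downtime

    backup_interval = 24.0  # nightly batch back-up
    restore_time = 6.0      # time to retrieve and restore the last back-up

    # Worst-case data loss is the gap since the last back-up;
    # worst-case downtime is the time needed to restore it.
    print(f"RPO met: {backup_interval <= rpo_target} "
          f"({backup_interval}h between back-ups vs {rpo_target}h target)")
    print(f"RTO met: {restore_time <= rto_target} "
          f"({restore_time}h to restore vs {rto_target}h target)")

With a nightly batch window, neither objective is met in this example – which is why organisations with tighter objectives turn to replication.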

For some organisations, such as financial services institutions, these objectives are ideally set at zero, so synchronous replication is employed – which typically means the data copies sit in the same data centre, or in data centres located only a few miles or kilometres apart.

Keeping the sites close is the standard way to minimise the latency of synchronisation, and it is what most organisations have done in the past. Yet placing data centres within the same circle of disruption can be disastrous when a flood, terrorist attack or power outage occurs.
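
To see why synchronous replication forces the sites close together, a rough latency calculation helps. Light travels through optical fibre at roughly 200,000km per second – about 5 microseconds per kilometre – and every synchronous write must wait for a round trip to the remote copy before it is acknowledged. The distances below are illustrative only:

    # Rough sketch: write latency added by synchronous replication over distance.
    FIBRE_US_PER_KM = 5.0  # approximate propagation delay in optical fibre

    for distance_km in (5, 50, 500):
        # Each acknowledged write waits for a round trip to the remote site.
        round_trip_ms = 2 * distance_km * FIBRE_US_PER_KM / 1000.0
        print(f"{distance_km:>4} km apart -> ~{round_trip_ms:.2f} ms added per write")

Even before switching and protocol overheads, a few hundred kilometres adds milliseconds to every write – hence the traditional compromise of keeping both sites within the same circle of disruption.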

For other organisations, a small lag in the recovery point – a few milliseconds or more – is acceptable, so the data centres can be placed further apart and replicated asynchronously. This replication doesn’t, however, remove the need for back-ups, using modern technologies that allow machines to be backed up while they are still operational.

Time is the ruler

Time is the ruler of all things. The challenge for organisations is to achieve more than 95% bandwidth utilisation from their networks – something that is difficult because of the way TCP/IP works: a single connection can carry at most one window of data per round trip, so throughput falls away as distance and latency rise.

Customers reportedly use only around 15% of their available bandwidth. Some try to run multiple streams, but to attain 95% utilisation those streams have to be run down the physical connection from ingress to egress.
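
The arithmetic behind those low utilisation figures is the bandwidth-delay product: a single TCP stream can send at most one window of data per round trip. The sketch below uses assumed figures – a 10Gbit/s WAN, a classic 64KB window and a 20ms round trip – purely to illustrate the effect and to estimate how many parallel streams it would take to fill the link:

    # Illustrative bandwidth-delay product calculation (assumed figures).
    link_bps = 10e9            # 10 Gbit/s WAN
    window_bytes = 64 * 1024   # classic 64 KB TCP window (no window scaling)
    rtt_seconds = 0.020        # 20 ms round-trip time

    # A single stream can send at most one window per round trip.
    per_stream_bps = window_bytes * 8 / rtt_seconds
    utilisation = per_stream_bps / link_bps
    streams_needed = int(0.95 * link_bps / per_stream_bps) + 1

    print(f"Single stream: ~{per_stream_bps / 1e6:.0f} Mbit/s ({utilisation:.1%} of the link)")
    print(f"Parallel streams needed for ~95% utilisation: ~{streams_needed}")

Real-world figures vary with window scaling and packet loss, but the principle is the same: on a long, fat pipe a single stream simply cannot keep the link full.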

Smart technology is available that uses parallelisation techniques and machine intelligence to create virtual connections that fill the physical link, delivering upwards of 95% utilisation.
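
As a minimal sketch of the parallelisation idea only – it makes no attempt at the machine-intelligence side, and send_chunk is a hypothetical stand-in for a real per-stream sender – the following splits a payload across several concurrent worker streams:

    # Minimal sketch: splitting one transfer across parallel worker streams.
    from concurrent.futures import ThreadPoolExecutor

    def send_chunk(chunk: bytes) -> int:
        # Hypothetical stand-in: a real sender would push this chunk down
        # its own TCP connection; here we simply report the bytes handled.
        return len(chunk)

    def parallel_send(payload: bytes, streams: int = 8) -> int:
        chunk_size = -(-len(payload) // streams)  # ceiling division
        chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
        with ThreadPoolExecutor(max_workers=streams) as pool:
            return sum(pool.map(send_chunk, chunks))

    print(parallel_send(b"x" * 10_000_000, streams=8), "bytes sent across 8 streams")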

For example, one organisation needed to back up 70TB of data over a 10Gbit/s WAN, a process that took 42 days to complete. The firm was looking to replicate its entire environment, which was going to cost up to £2 million.

The company worked with a vendor to install a SCION solution as a proof of concept. Because of the company’s other data requirements, the 10Gbit/s bandwidth was throttled back to 200MB, yet the customer was still able to complete the entire back-up within just seven evenings – leaving around 80% headroom on the connection and clawing back around 75% of the elapsed days. The customer has since been able to increase its data volumes while saving time and money.
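
A back-of-envelope check puts those numbers in context. It assumes the quoted “200MB” means 200MB per second, which the example doesn’t state explicitly:

    # Rough transfer-time arithmetic for the 70TB example (assumptions noted above).
    data_bytes = 70e12      # 70 TB to back up
    link_bps = 10e9         # 10 Gbit/s WAN
    throttled_Bps = 200e6   # assuming "200MB" means 200 MB/s

    ideal_hours = data_bytes * 8 / link_bps / 3600
    throttled_hours = data_bytes / throttled_Bps / 3600
    observed_utilisation = (data_bytes * 8 / (42 * 86400)) / link_bps

    print(f"Fully utilised 10Gbit/s link: ~{ideal_hours:.0f} hours")
    print(f"At 200MB/s: ~{throttled_hours:.0f} hours (~{throttled_hours / 7:.0f} per evening over seven evenings)")
    print(f"The original 42-day run implies ~{observed_utilisation:.1%} effective utilisation")

The gap between roughly 16 hours in theory and 42 days in practice is precisely the utilisation problem described above.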

The problem is that, with outdated technology, CEOs and decision-makers have had no real choice over the distance between their data centres without having to consider the impact of network latency. By employing machine learning, SCION technology gives decision-makers the power to make a different choice from the one that has historically been made.

Taylor nevertheless concludes with some valid advice: “People need to be clear about their requirements and governing criteria because at the lowest level all data should be backed up… and business continuity must consider all operations of a business – not just IT systems”.

To ensure that a disaster recovery plan works, it has to be tested regularly. Time is of the essence, so back-ups need to be exercised regularly and with continuous availability, in a way that ensures the maintenance itself doesn’t prove disruptive.

 

Sourced from Claire Buchanan, CCO of Bridgeworks
