How to reduce the business risk created by distance

Geographic distance is a necessity because disaster recovery (DR) data centres have to be placed outside the circle of disruption.

The meaning of this term depends on the type of disaster. It could be a natural phenomenon like an earthquake, volcanic eruption, flood or fire. But calamities are caused by human error too, so the definition of the circle of disruption varies.

In the past, data centres were typically kept around 30 miles apart, as this was the received wisdom at the time. Today, however, the circle’s radius can be 100 miles or more.

In many people’s view – including that of auditors – a radius of 20 or 30 miles is too close for comfort, putting business continuity at risk.

So what constitutes an adequate distance between data centres to ensure that business goes on, regardless of what happens in the vicinity of one of an organisation’s data centres?


Many CIOs face a dilemma: they need two data centres located within the same metro area to ensure synchronisation for failover capability, yet in their hearts they know that both sites will probably sit within the same circle of disruption.

So to ensure their survival, they should be asking what the minimum distance beyond the edge of the circle should be for a tertiary DR site.

After all, Hurricane Sandy ripped through 24 US states, covering hundreds of miles of the East Coast of the USA and causing approximately $75 billion worth of damage. Earthquakes are a major issue throughout much of the world too – so much so that DR data centres need to be located on different tectonic plates.

A lack of technology and resources is often the reason why data centres are placed close together within a circle of disruption. There are, for example, green data centres in Scandinavia and Iceland that are extremely energy efficient, but organisations are put off because they don’t believe the technology exists to transfer data to them fast enough – and yet these data centres are highly competitive.

Risk matrix

Because of the effects of latency, too many data centres are placed within a circle of disruption, but there are solutions that reduce the need to choose sites that are, in many respects, too close together.

This doesn’t mean that organisations should relax and feel comfortable if their data centres are located far from each other. The risks need to be taken seriously, and they can be analysed by creating a risk matrix to assess the issues that could cause any disruption.

This allows red flags to be addressed before, and as, they arise. Even if a data centre happens to sit within a circle of disruption, it’s advisable to situate another one at a distance elsewhere. Japan is prone to earthquakes, for example, so it would be a good idea to back up Japanese data to a data centre as far away as New York.

With regards to time and latency created by distance, Clive Longbottom, client service director at analyst firm Quocirca, says, “The speed of light means that every circumnavigation of the planet creates latency of 133 milliseconds. However, the internet does not work at the speed of light and so there are bandwidth issues that cause jitter and collisions.”
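Longbottom’s 133-millisecond figure can be sanity-checked with simple arithmetic: the Earth’s circumference divided by the speed of light. A minimal sketch, with the slower speed of light in optical fibre shown for comparison (the fibre figure is an approximation, not from the article):

```python
# Propagation latency from distance alone, ignoring switching and queuing.
EARTH_CIRCUMFERENCE_KM = 40_075   # equatorial circumference
C_VACUUM_KM_S = 299_792           # speed of light in a vacuum
C_FIBRE_KM_S = 200_000            # approx. speed of light in optical fibre

def propagation_ms(distance_km: float, speed_km_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

# One full circumnavigation at the speed of light:
print(round(propagation_ms(EARTH_CIRCUMFERENCE_KM, C_VACUUM_KM_S), 1))  # → 133.7
# The same distance in fibre takes roughly half as long again:
print(round(propagation_ms(EARTH_CIRCUMFERENCE_KM, C_FIBRE_KM_S), 1))   # → 200.4
```

As Longbottom notes, this is only a floor: real networks add queuing, switching and retransmission delays on top of propagation.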

Longbottom explains that active operations are performed on the packets of data, which increases the latency within a system, and says it is impossible to predict exactly what level of latency any data centre will encounter in all circumstances, as there are far too many variables.

He also thinks that live mirroring is now possible over hundreds of kilometres, so long as the latency is controlled by using packet shaping and other wide area network (WAN) acceleration approaches.

Longer distances, he says, may require a store-and-forward multi-link approach, which will need active boxes between the source and target data centres to “ensure that what is received is what was sent”.
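The article doesn’t specify how those intermediate boxes verify integrity; a toy sketch of the store-and-forward idea, with a digest comparison standing in for that verification (the chunk size and helper names are illustrative, not from the article):

```python
import hashlib

def split_into_chunks(data: bytes, chunk_size: int = 4) -> list[bytes]:
    """Split a payload into fixed-size chunks for forwarding."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def send_and_verify(data: bytes) -> bool:
    """Forward chunks and confirm the target received exactly what was sent."""
    source_digest = hashlib.sha256(data).hexdigest()
    chunks = split_into_chunks(data)
    received = b"".join(chunks)   # in reality, chunks traverse separate links
    return hashlib.sha256(received).hexdigest() == source_digest

print(send_and_verify(b"disaster recovery payload"))  # → True
```

The point of the digest check is that corruption or reordering introduced anywhere along the multi-link path is detected at the target rather than silently written to the DR copy.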

Jitter – packets of data arriving slightly out of time – is another problem. It is caused, Longbottom says, by data passing through different switches and connections, and it can degrade performance in the same way that packet loss does.

Packet loss occurs when the line is overloaded – more commonly known as congestion – and it causes considerable performance drop-offs that don’t necessarily diminish when the data centres are positioned closer together.
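The jitter Longbottom describes can be quantified. A minimal sketch of the smoothed interarrival-jitter estimator defined in RFC 3550 (the transit times below are illustrative figures, not measurements from the article):

```python
# Interarrival jitter in the style of RFC 3550: a running average of the
# variation in packet transit time, smoothed by a factor of 1/16.

def interarrival_jitter(transit_times_ms: list[float]) -> float:
    """Smoothed jitter estimate from a sequence of one-way transit times."""
    jitter = 0.0
    for prev, curr in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(curr - prev)        # variation between consecutive packets
        jitter += (d - jitter) / 16 # RFC 3550 smoothing
    return jitter

# A perfectly steady stream shows no jitter; a variable one does.
print(interarrival_jitter([86.0, 86.0, 86.0, 86.0]))      # → 0.0
print(interarrival_jitter([86.0, 90.0, 84.0, 95.0]) > 0)  # → True
```

WAN acceleration gear uses estimates like this to decide how aggressively to buffer and reorder packets before they reach the application.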

The solution is the ability to mitigate latency and handle jitter and packet loss – intelligently and without human intervention, to minimise the associated costs and risks. That gives IT executives freedom of choice in where they place their data centres, protecting their businesses and the new currency of data.

Mitigating latency

Self-configuring optimised networks (SCIONs) use machine intelligence to mitigate latency. The software learns and makes the right decision in microseconds according to the state of the network and the flow of the data, day or night. A properly architected solution can remove the perception of distance as an inhibitor to DR planning.

“At this stage, be cautious,” warns Longbottom. “However, it does have its place, and making sure that there is a solid Plan B behind SCIONs means that they can take away a lot of uncertainty in existing, more manual approaches.”

One company that has explored the benefits of a SCION solution is CVS Healthcare. The problem was that CVS could not move its data fast enough: instead of a full 430GB back-up, it could manage just 50GB in 12 hours, because its data centres were 2,800 miles apart – a latency of 86 milliseconds. That distance was putting its business at risk.
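The article doesn’t explain why 86ms throttled the transfer so severely, but the classic reason is that a single TCP stream can carry at most one window of unacknowledged data per round trip, regardless of line rate. A sketch, assuming an illustrative 64KB window (the window size is not a figure from the article):

```python
# Single-stream TCP throughput is capped at window / round-trip time.

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput in megabits per second."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# With a classic 64 KB window over an 86 ms round trip:
print(round(tcp_throughput_mbps(64 * 1024, 86), 1))  # → 6.1 (Mb/s)
```

At around 6Mb/s per stream, a 600Mb/s link is almost entirely idle – which is why mitigating latency, rather than buying more bandwidth, was the lever that mattered.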


Machine intelligence has enabled the company to use its existing 600Mb/s network connectivity and to reduce the 50GB back-up from 12 hours to just 45 minutes, irrespective of the data type. 

Had this been a 10Gb/s pipe, the whole process would have taken just 27 seconds. This magnitude of change in performance enabled the company to do full 430GB back-ups nightly in just four hours. The issues associated with distance and latency were therefore mitigated.
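The before-and-after figures can be sanity-checked by converting them into average throughput – a rough sketch that ignores protocol overhead and treats 1GB as 1,000MB:

```python
# Average throughput implied by a transfer of a given size in a given window.

def effective_mbps(gigabytes: float, hours: float) -> float:
    """Average throughput in megabits per second, with 1 GB = 8,000 Mb."""
    return gigabytes * 8 * 1000 / (hours * 3600)

# 50 GB in 12 hours versus 45 minutes:
print(round(effective_mbps(50, 12), 1))    # → 9.3  (before)
print(round(effective_mbps(50, 0.75), 1))  # → 148.1 (after)
# 430 GB nightly in 4 hours:
print(round(effective_mbps(430, 4), 1))    # → 238.9, well within a 600 Mb/s link
```

The jump from roughly 9Mb/s to roughly 148Mb/s shows the link itself was never the bottleneck: the existing 600Mb/s connection had ample headroom once latency was mitigated.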

Machine intelligence will have its doubters, as does anything new. But in a world of increasingly available bandwidth, enormous data volumes and a need for velocity, organisations would do well to consider how technology can help them underpin a DR data centre strategy based on the recommendations and best-practice guidelines learnt from disasters like Hurricane Sandy.

Despite all mankind’s achievements, Hurricane Sandy taught us many lessons about the destructive and disruptive power of nature. The devastation it wrought across 24 states has dramatically challenged the traditional perception of a typical circle of disruption in DR planning.

Metro-connected sites for failover continuity are here to stay, because synchronous replication demands very low latency, but they are not a sufficient or suitable practice for DR. Sandy has taught us that DR sites must be located hundreds of miles away if businesses are to survive.


Sourced from Claire Buchanan, CCO of Bridgeworks


Ben Rossi

Ben was Vitesse Media's editorial director, leading content creation and editorial strategy across all Vitesse products, including its market-leading B2B and consumer magazines, websites, research and...
