The backup of data remains a basic IT safety net for any large organisation, but long gone are the days when traditional backup methods can be put in place to support recovery alone, then forgotten about.
If the latest reports from analyst house Forrester are to be believed, the market for backup and recovery software is undergoing a major shake-up as vendors struggle to catch up with other areas of IT in terms of performance and reliability.
‘From missed backup windows to slow or failed recoveries, many firms are weary of the constant battle with their backup software and are looking for a change,’ says Forrester’s Rachel Dines.
Last year, Gartner analysts stated that a third of organisations will change backup vendors by 2016 due to frustrations over cost, complexity or capabilities.
For many companies, the focus is now shifting from simple recovery and restore to all-singing and all-dancing suites that form the bedrock of their resilience and business continuity strategies. A lot of these companies, says Gartner, will scrap their traditional backup and recovery solutions by 2016 in favour of products that can perform a host of other functions such as archiving, replication, and creation of test data using a minimal number of copies.
But with all this added functionality, it is easy for enterprises to get bogged down with the costs and basic maintenance involved with in-house backup and disaster recovery.
As organisations move from tape to disk to cloud technologies, the rise of cost-effective virtualised solutions is starting to transform backup just as it has most of the storage world, as Oscar Arean, technical operations manager at disaster recovery specialist Databarracks, explains.
‘Within a few years, we’ll see the adoption of DRaaS (disaster recovery as a service) offerings rise,’ he says. ‘While cloud-based backup and disaster recovery can seem expensive initially, they are far more cost effective than an entire hardware refresh of your primary and secondary site.’
Cloud seems to be the most obvious answer to many problems relating to capacity and, most crucially, getting that data out of ever-growing systems and into the storage layer.
A dirty secret of big databases is that they are often too large and cumbersome to back up during a nightly batch window. When traditional de-duplication is applied, the process is even slower. So the challenge is extracting the data for backup – a process that requires an agile approach.
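The window problem described above is, at heart, simple arithmetic: dataset size divided by effective throughput must fit inside the batch window. A minimal sketch, using entirely hypothetical figures rather than vendor benchmarks:

```python
# Rough, illustrative check of whether a full backup fits a nightly window.
# All figures below are hypothetical examples, not measured benchmarks.

def backup_hours(dataset_tb: float, throughput_mb_s: float,
                 dedupe_overhead: float = 1.0) -> float:
    """Hours needed to move dataset_tb terabytes at throughput_mb_s MB/s.
    dedupe_overhead > 1.0 models extra processing time from inline
    deduplication, which the article notes slows the process further."""
    seconds = (dataset_tb * 1_000_000 / throughput_mb_s) * dedupe_overhead
    return seconds / 3600

# A 50 TB database streamed at 500 MB/s, with a 30% dedupe penalty:
hours = backup_hours(50, 500, dedupe_overhead=1.3)
print(f"{hours:.1f} h")  # ~36 h - far beyond any nightly window
```

Even generous throughput assumptions leave a dataset of this size unable to complete inside an eight-hour window, which is why incremental and agile extraction approaches matter.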
> See also: How To Improve Enterprise Backup and Recovery
‘Virtualisation is making services more portable, and SAN and application replication gives better options and more reliable recovery methods than tape,’ says Richard Blanford, managing director of infrastructure services company Fordway. ‘There is also a range of hosted and cloud disaster recovery options available.’
Added to this, the cost and complexity of implementing suitable disaster recovery to meet sub-24-hour recovery times and recovery points are falling.
‘In the case of a disaster,’ says Eran Farajun, executive vice president at cloud backup company Asigra, ‘cloud backup provides a faster recovery time objective as there is no need to manually find and transport tapes from an off-site location.’
According to many of Asigra’s customers, cloud backup provides them with a cost-effective, secure, reliable alternative to traditional tape backup.
‘It can take up to a week to retrieve a tape,’ says Farajun. ‘Cloud backup enables organisations to resume business operations within hours. The automation of cloud backup reduces the risk associated with manual efforts that rely heavily on human intervention.’
But there is arguably still a trade-off between cost reduction and agility that is yet to be fully addressed with cloud backup solutions. Iain Chidgey, VP of EMEA at Delphix, argues that while cloud’s impact on backup has been mostly in the area of cost reduction, with many companies flocking to take advantage of it for that reason, cloud has yet to affect the agility considerations of backup and recovery in a significant way.
‘For example,’ he says, ‘if a key application and its data suffer an outage, having the data stored remotely for 40% less capital cost doesn’t really help with fast recovery.’
Many experts agree that a solid business continuity strategy should include multiple backups, including local backup and replication to an off-site data centre. This ensures that businesses can recover and resume operations quickly and efficiently should the worst happen.
But as Chidgey points out, many enterprises do not have regular backups for key databases, let alone copies in multiple places. Companies should stay on the lookout for the next generation of cloud and virtualisation solutions that will enable greater agility, in addition to cost savings.
‘CIOs should be looking for fresher technologies that can tackle the challenge of backing up ever-growing data systems,’ he says, ‘as well as efficiently migrating these backups to multiple centres for disaster recovery purposes.’
Database virtualisation is especially useful in this area but, as with much in IT, it is not a one-size-fits-all solution – the level of backup that a particular IT function requires depends on an assessment of the impact on different parts of the business. For many businesses, if a disaster is survivable for three or four days without an IT service, then simple recovery from tape may well be all that is needed.
For some people, the main disadvantage of cloud-based backup is the bandwidth limitation for recovery.
‘Include physical recovery as part of the service,’ suggests Arean. ‘If you suffer a loss of terabytes of data, it is far faster for one of our engineers to copy the data from our data centre and physically take it to the customer to restore, rather than for it to trickle back over the internet.’
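Arean's point about data 'trickling back over the internet' is easy to quantify. A minimal sketch, with hypothetical link speeds, comparing a WAN restore against a same-day courier:

```python
# Illustrative comparison of restoring data over a WAN link versus
# physically couriering a copy, as Arean describes. Figures hypothetical.

def wan_restore_hours(data_tb: float, link_mbit_s: float,
                      utilisation: float = 0.7) -> float:
    """Hours to pull data_tb terabytes over a link_mbit_s megabit/s line,
    assuming only a fraction (utilisation) of the line is usable."""
    bits = data_tb * 8e12
    return bits / (link_mbit_s * 1e6 * utilisation) / 3600

# Restoring 10 TB over a 100 Mbit/s line at 70% utilisation:
print(f"{wan_restore_hours(10, 100):.0f} h")  # ~317 h, nearly two weeks
# versus a same-day courier plus a local restore at disk speeds.
```

At multi-terabyte scale, the courier wins comfortably, which is why physical recovery as part of the service matters.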
Others recommend a tiered approach when looking at whether to implement backups in multiple locations.
‘Firstly, for critical business processes, there needs to be a level of inherent resilience in the core infrastructure so that equipment failure within the infrastructure does not impact the available services,’ Blanford says.
Then, data replication to a second location, which may be cloud backup, provides data in a more accessible format than tape for system recovery.
Before deciding on a hybrid solution, it is vital that businesses ask the right questions first, looking at the order in which services should be restored and how quickly they are needed.
Local backups do not protect against more serious incidents that can affect an entire site, but different types of data require different types of recovery, with local backups offering the fastest recovery times. And they tend to be the recoveries that businesses will make most often – for corrupted databases, deleted files or even problems affecting an entire VM or storage volume. For some, this is where backup and DR diverge.
It seems that advances in technology and new ways of securing and backing up data are keeping pace with data growth, and cost per storage unit is reducing. Nevertheless, the assumption that cloud equals cost effective may still need to be challenged.
‘As with all other things IT, we’re in a Moore’s law situation,’ says Blanford. ‘The issue is that the total units are increasing faster than the efficiencies reduce the costs, so overall costs go up.’
For companies to properly benefit from the cost effectiveness of cloud backup, intelligent retention and deletion policies should be enforced to archive any data that is not critical to the business or required for compliance reasons. However, a third of organisations that Databarracks asked in a recent survey said they had no policies in place, or simply kept data forever.
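What such a retention policy looks like in practice can be sketched with a classic grandfather-father-son scheme. The tier counts below are illustrative assumptions, not a recommendation – real policies come from the business's compliance requirements:

```python
# A minimal sketch of a grandfather-father-son retention policy: keep the
# last week of dailies, a month of weeklies and a year of monthlies, and
# let everything else expire. Tier counts are purely illustrative.
from datetime import date, timedelta

def keep(backup_date: date, today: date) -> bool:
    age = (today - backup_date).days
    if age < 7:                                    # daily tier
        return True
    if age < 28 and backup_date.weekday() == 6:    # weekly tier (Sundays)
        return True
    if age < 365 and backup_date.day == 1:         # monthly tier (1st of month)
        return True
    return False

today = date(2015, 6, 30)
backups = [today - timedelta(days=d) for d in range(400)]
retained = [b for b in backups if keep(b, today)]
print(len(retained), "of", len(backups), "backups retained")
```

Even this crude policy expires the vast majority of copies, which is exactly the storage-cost discipline the 'keep data forever' organisations in the survey are missing.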
As Chidgey emphasises, CIOs need to look beyond storage costs to the ‘total cost of data’, which includes speed and flexibility benefits. In some cases, cloud backup makes sense, but in others – when the impact on application projects is considered – data centre migrations and cloud do not make as much financial sense. ‘Organisations need to look beyond the capex cost of storage only,’ he stresses.
Companies should also look for a vendor or service provider that offers flexibility with different pricing options, such as recovery performance pricing that allows them to pay for what they recover.
Research done by Enterprise Strategy Group in 2013 found that 84% of the companies polled recover less than 20% of their data, but with current industry-standard pricing models they are paying for 100% recovery.
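The gap that the ESG figures expose is easy to illustrate. In the hypothetical comparison below (all rates invented for illustration), recovery-performance pricing charges a lower base rate on protected data plus a premium only on what is actually restored:

```python
# Illustrative cost comparison (all prices hypothetical) between a flat
# "pay to protect everything" model and recovery-performance pricing.

def flat_cost(protected_tb: float, rate_per_tb: float = 50.0) -> float:
    """Flat model: every protected terabyte is billed at full rate."""
    return protected_tb * rate_per_tb

def recovery_based_cost(protected_tb: float, recovered_tb: float,
                        base_rate: float = 20.0,
                        recovery_rate: float = 100.0) -> float:
    """Lower base rate on protected data, premium only on data restored."""
    return protected_tb * base_rate + recovered_tb * recovery_rate

protected, recovered = 100, 15   # ESG: most firms recover under 20%
print(flat_cost(protected))                       # 5000.0
print(recovery_based_cost(protected, recovered))  # 3500.0
```

At typical recovery volumes the usage-based model is cheaper; it only overtakes the flat model if a company routinely restores a large share of its estate.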
The right pricing model should definitely be top of the checklist for things to look for when considering cloud backup providers. And when trying to find a service that is best suited to their needs, there are a few simple factors that organisations should not overlook, such as whether the provider’s environment is compatible with an organisation’s IT systems.
‘If you need to keep month-end and year-end copies of data for compliance reasons, can your provider accommodate this while making sure that your data storage doesn’t explode?’ questions Arean.
‘Consider the level of support you require from your CSP. Do you just want somewhere else to store your data (a big cloud bucket somewhere) or do you actually want to outsource the function of backups as well as the checking of logs, management and maintenance that goes with it?’
The key is not to fall into the trap of forgetting about it once DRaaS is invoked. ‘The DR plan should take into account the same protection that you are accustomed to in-house so that you remain compliant,’ Arean adds.
This is especially important when the business is within a sector that is obliged to follow certain guidelines, such as the legal or financial sectors.
It is clear that we are moving into an era of hybrid computing, and backup is no exception – companies are finding it easier not just to juggle on-premises infrastructure with cloud services but to pick and mix several different cloud services for their backup.
Most importantly, these should be flexible and interoperable so that organisations can tailor them exactly to their needs.