One technology that has seen massive growth in recent years is cloud computing. Bloomberg Intelligence predicts that cloud IT spending will increase from £54 billion in 2015 to almost £144 billion in 2020, while, according to IDC, global investment in IT is expected to hit nearly £3 trillion in 2018 – with growth driven by investment in cloud and hybrid IT.
For cloud companies, this is excellent news. Businesses are fully embracing the widespread benefits of cloud, with what appear to be strategic migration and adoption plans. But it’s also good news for the businesses themselves. As the cloud market matures, the cost of cloud will continue to drop and the strength of service will increase. This is happening already: since 2008, Amazon Web Services (AWS) has launched over 1,000 new features and has reduced its prices more than 60 times.
Over the past couple of years, the cloud adoption driving this spending has focused on public and hybrid cloud models. The former is open to anyone. The latter is a more nuanced approach to cloud provision that grants businesses greater control over their sensitive data through a private model, while still allowing them to benefit from public cloud features and working modes such as ‘cloud bursting’ or ‘split systems’.
These are different approaches, but the unifying factor between the two is that by moving to the cloud, businesses place their trust in cloud companies, relying on the providers’ security and backup services rather than thinking solely about their own.
With that comes an expectation that sensitive data and critical processes will stay safe and remain available, especially now that GDPR has come into force. But, as we know, this isn’t always the case, and downtime does occur. In 2017, every major public cloud provider – AWS, Microsoft and Google – experienced some type of service interruption, in some cases lasting hours and causing significant disruption for their customers. Even major hardware vendors experienced issues that led to data corruption or loss for some of their customers. There are three simple truths about computing: software has bugs, hardware fails, and people make mistakes. The challenge is minimising the impact when one (or more!) of those things happens.
Availability – the need is growing with each outage
Today, every time cloud services are interrupted in this way, a domino effect of consequences ensues. First, business operations halt, with the immediate result that workers are unable to do their jobs. Then come the customers experiencing poor service, and the time needed to fix the problem. Finally, there is the aftermath of reputational damage and financial consequences to consider. Set against the current state of IT spending and a growing dependency on cloud resources, major outage and downtime incidents highlight the overwhelming need for IT availability.
A robust and reliable cloud availability and data protection solution goes largely unnoticed by the user when everything is working well. Yet had the businesses affected by the outages mentioned above been protected by a cloud data protection solution working alongside their core infrastructure, a significant amount of lost time, energy and money could have been saved. The outlay for such a service is comparatively small, but the savings it can deliver are huge.
With Gartner citing revenue growth as the key CEO objective, cloud companies need to define and measure clear digital business value to deliver digital business transformation, and availability is going to be key to making that case. The spending power is there – it’s just a matter of who’s going to take advantage of it.
Sourced by Mark Adams, Regional VP, UK & Ireland at Veeam