Ask business leaders why they’re moving, or planning to move, IT workloads from the data centre to the cloud, and many will say it’s to get a lower TCO by ‘renting’ rather than having to buy, install and maintain infrastructure of their own.
Push them further on whether that objective is being met, however, and you’ll find that, contrary to expectation, many are having as much trouble keeping costs under control in the public cloud as they did with on-premises IT.
The causes are easily understood and common across organisations, but they can only be addressed with a combination of technology, process change and expertise in these new environments.
The first misconception to lay to rest is the received wisdom that renting infrastructure and services in the public cloud is not only cheaper but, somehow, also a lot simpler than owning and running your own infrastructure. It can be – when using services like Office 365 and Salesforce.com, for example – but not when it comes to hosting applications on public cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure.
A quick glance at the AWS website is all that’s needed to confirm its complexity: there are hundreds of ‘products’ to choose from. It’s great that customers have access to these capabilities, but far less so when it comes to managing and optimising them proactively, 24/7.
With services grouped under 20 distinct categories, each with a seemingly endless array of options, purchase decisions quickly become complicated, and finding the right combination of services for application workloads at the best price becomes a daunting task.
And it’s not just AWS: the other platforms can be just as complex and equally difficult to cost and budget for. Nor is the complexity of the offering the only issue; finding the right fit for a particular application workload, with fluctuating demand and differing usage patterns, is not for the faint of heart. Indeed, for many, deciding which public cloud product to use can morph from a fiendishly challenging task into something of a dark art.
Managing a public cloud, or multiple public clouds, requires a deep understanding of detailed application workload patterns, along with a system that can absorb and truly understand the complex offerings from vendors and so help make the right choices.
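To make the matching problem concrete, here is a minimal sketch of the kind of logic such a system applies at its simplest: given a workload’s peak resource demand, pick the cheapest instance type that covers it. The catalogue below is entirely hypothetical and hugely simplified – real vendor catalogues run to hundreds of products with reserved, spot and on-demand pricing – but it illustrates why doing this by hand across fluctuating workloads quickly becomes impractical.

```python
# Hypothetical, simplified instance catalogue. Real cloud catalogues
# contain hundreds of instance families and several pricing models,
# which is exactly what makes manual selection so error-prone.
CATALOGUE = [
    {"name": "small",  "vcpu": 2,  "ram_gb": 4,  "usd_per_hour": 0.05},
    {"name": "medium", "vcpu": 4,  "ram_gb": 16, "usd_per_hour": 0.17},
    {"name": "large",  "vcpu": 16, "ram_gb": 64, "usd_per_hour": 0.80},
]

def cheapest_fit(peak_vcpu, peak_ram_gb, catalogue=CATALOGUE):
    """Return the lowest-cost instance that covers the workload's peak demand."""
    candidates = [i for i in catalogue
                  if i["vcpu"] >= peak_vcpu and i["ram_gb"] >= peak_ram_gb]
    if not candidates:
        return None  # no single instance fits; the workload must be split
    return min(candidates, key=lambda i: i["usd_per_hour"])
```

For a workload peaking at 3 vCPUs and 8 GB of RAM, this would select the ‘medium’ instance. The hard part in practice is not this comparison but knowing the peaks in the first place, which is why detailed workload data matters.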
Lack of governance makes the problem worse
Added to this challenge is the fact that many cloud projects are led by DevOps teams who, while they may understand the technology, are more focused on the speed and ease with which they can bring applications online than on keeping on top of long-term operational costs.
Many sidestep the need to factor TCO into their deliberations altogether, simply by over-provisioning public cloud capacity and functionality, or even by leaving old projects live in the cloud as deadwood that drives up opex.
Compounding the issue further are complex billing schemes and a lack of visibility into how cloud resources are being used. It therefore falls to IT organisations to put in place processes and systems that can seek out this information and work out the actual spend. This is the only way to ensure cloud resources are used effectively, turned off when they should be, and that unsuitable purchase decisions are avoided.
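The kind of review such a process performs can be sketched in a few lines. The thresholds and resource records below are hypothetical, and a real system would draw utilisation and billing data from the provider’s APIs, but the classification logic – flag long-idle resources as deadwood, flag barely-used ones as over-provisioned – captures the basic idea.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    monthly_cost: float          # hypothetical billed cost per month, USD
    avg_cpu_utilisation: float   # 0.0-1.0, averaged over the billing period
    last_used_days_ago: int      # days since the resource served any work

def review_spend(resources, idle_days=30, low_util=0.10):
    """Classify each resource as 'deadwood', 'over-provisioned' or 'ok'.

    Thresholds are illustrative assumptions, not vendor recommendations.
    """
    report = {}
    for r in resources:
        if r.last_used_days_ago >= idle_days:
            report[r.name] = "deadwood"           # candidate for shutdown
        elif r.avg_cpu_utilisation < low_util:
            report[r.name] = "over-provisioned"   # candidate for downsizing
        else:
            report[r.name] = "ok"
    return report
```

Running this over a small hypothetical fleet – a busy production server, a demo left running for three months, and a heavily over-sized batch node – would flag the demo as deadwood and the batch node as over-provisioned, which is precisely the visibility the billing data alone does not give you.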
The good news is that the public cloud can be tamed. Organisations can and should manage public cloud services, and the applications running on them, with the same scrutiny and due diligence they apply to their own data centres, ensuring they have the right visibility and analytics to optimise their use of these resources.
The challenge, of course, lies in finding not just the tooling but the expertise to optimise these new environments, which is where a new breed of service comes into the equation. These services pair software with expertise in cloud optimisation, delivering the detailed analysis that empowers businesses to meet the TCO objectives they expected from moving to the cloud in the first place, and all without having to assign a team to train on yet another new product.
Sourced by Yama Habibzai, chief marketing officer, Densify