How can businesses pay for what they use in the cloud?

Cloud computing has seen rapid growth in the past few years, and IT analyst firm Gartner predicts the public cloud market will reach $411.4 billion by 2020. What drives most organisations to the cloud is elasticity and agility: the ability to instantly provision and deprovision resources according to the needs of the business.

However, once companies are on the cloud, the vast majority report paying two to three times what they expected. They are enticed by the promise that they will only pay for what they use, but in reality they pay for what they allocate.

Managing IT cost is one problem for most organisations; a related challenge in a public cloud environment is that cost and performance are closely tied. Many would like to optimise for cost alone, but that would mean taking a performance hit, and no one wants that.

Around the clock application availability

To guarantee their service level agreements (SLAs), applications need access to all the resources they require at all times. In response, developers allocate resources based on peak demand. The human aspect, though, also needs to be taken into account.

Developers can’t stay at work all day, every day, monitoring and adjusting resources; when the clock strikes six, they want to get home to their families and dinner. The common way to assure application performance at all times is therefore to over-allocate. It is little surprise, then, that over 50% of the world’s data centres are over-allocated.

Away from the cloud, over-allocating resources on-premises is costly, but significantly less damaging to the bottom line: the over-provisioning is masked by over-allocated hardware and by hypervisors that allow resources to be shared. In the cloud, where resources are charged by the second or minute, over-provisioning becomes extremely costly.
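As a rough illustration of the difference, the sketch below compares paying for peak-sized capacity around the clock with paying for it only during peak hours. The per-hour prices and the number of peak hours are hypothetical assumptions, not real cloud rates:

```java
// Illustrative only: the prices and hours below are hypothetical assumptions,
// not actual cloud provider rates.
public class OverProvisioningCost {
    public static void main(String[] args) {
        double peakSizedPerHour  = 0.40;   // hypothetical price of an instance sized for peak demand
        double rightSizedPerHour = 0.20;   // hypothetical price of an instance sized for average load
        int hoursPerMonth = 730;
        int peakHoursPerMonth = 4 * 22;    // assume ~4 peak hours per working day

        // Paying for peak capacity around the clock:
        double alwaysPeak = peakSizedPerHour * hoursPerMonth;

        // Paying for peak capacity only when it is needed, right-sized otherwise:
        double matchedToDemand = peakSizedPerHour * peakHoursPerMonth
                               + rightSizedPerHour * (hoursPerMonth - peakHoursPerMonth);

        System.out.printf("Always peak-sized:        $%.2f/month%n", alwaysPeak);
        System.out.printf("Matched to demand:        $%.2f/month%n", matchedToDemand);
        System.out.printf("Over-allocation premium:  $%.2f/month%n", alwaysPeak - matchedToDemand);
    }
}
```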

For organisations, the solution is to match supply and demand: to find the ideal state in which applications have the resources they need to perform, a state of trade-offs across many dimensions. Only then are organisations truly cost efficient, paying for the resources they need when they are needed.

But we do not live in a world of infinite resources or budget. Continually working out those trade-offs, the state the system needs to be in and the actions to take is beyond manual effort; the only way to achieve it is through automation.
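At its core, such a system continually compares observed demand with current allocation and decides on an action. The snippet below is a minimal, hypothetical sketch of that loop; the metrics type, thresholds and actions are illustrative assumptions rather than a description of any particular product:

```java
// Minimal sketch of an automated supply/demand matching loop.
// ResourceMetrics, the thresholds and the actions are illustrative assumptions.
import java.util.List;

public class RightSizingLoop {

    record ResourceMetrics(String workload, double cpuUsedPct, double memUsedPct) {}

    enum Action { SCALE_UP, SCALE_DOWN, NO_CHANGE }

    static Action decide(ResourceMetrics m) {
        // Hypothetical trade-off rules: scale up when demand nears allocation,
        // scale down when the allocation is mostly idle.
        if (m.cpuUsedPct() > 85 || m.memUsedPct() > 85) return Action.SCALE_UP;
        if (m.cpuUsedPct() < 30 && m.memUsedPct() < 30) return Action.SCALE_DOWN;
        return Action.NO_CHANGE;
    }

    public static void main(String[] args) {
        List<ResourceMetrics> observed = List.of(
                new ResourceMetrics("front-end", 92, 60),
                new ResourceMetrics("back-end", 20, 25));

        for (ResourceMetrics m : observed) {
            System.out.println(m.workload() + " -> " + decide(m));
        }
    }
}
```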

The complexity of managing cloud and on-premises applications

First, businesses must look at all resources each application requires and match them to the best instance type, storage tier and network configuration, in real time. This is no easy feat.

Consider an application running a front end and a back end on Amazon EC2 with EBS storage. There are over 70 instance types available, each defining the CPU and its expected benchmarked performance, the available bandwidth for network and input/output (IO), the amount of local disk available and more. On top of this sit five EBS storage tiers that further define the IOPS available to the application, resulting in over 350 options for each component of the application.
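To get a feel for that search space, the sketch below enumerates a handful of made-up instance types against five storage tiers and filters the combinations for one component’s requirements. The names, specifications and requirements are illustrative assumptions; the point is how quickly the combinations multiply:

```java
// Sketch of the instance-type x storage-tier search space.
// The instance specs, tier IOPS and requirements below are illustrative assumptions.
import java.util.List;

public class InstanceSelection {

    record InstanceType(String name, int vcpus, int memGiB, double netGbps) {}
    record StorageTier(String name, int maxIops) {}

    public static void main(String[] args) {
        List<InstanceType> instances = List.of(
                new InstanceType("general.large",  2,  8, 10),
                new InstanceType("general.xlarge", 4, 16, 10),
                new InstanceType("compute.xlarge", 4,  8, 12),
                new InstanceType("memory.xlarge",  4, 32, 10));
        List<StorageTier> tiers = List.of(
                new StorageTier("magnetic", 200),
                new StorageTier("cold-hdd", 250),
                new StorageTier("throughput-hdd", 500),
                new StorageTier("gp-ssd", 16000),
                new StorageTier("provisioned-iops", 64000));

        // Hypothetical requirements for one application component.
        int needVcpus = 4, needMemGiB = 16, needIops = 3000;

        System.out.println("Total combinations: " + instances.size() * tiers.size());
        for (InstanceType i : instances) {
            for (StorageTier t : tiers) {
                if (i.vcpus() >= needVcpus && i.memGiB() >= needMemGiB && t.maxIops() >= needIops) {
                    System.out.println("Candidate: " + i.name() + " + " + t.name());
                }
            }
        }
    }
}
```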

The problem is just as complicated on Azure, with over X instance types, different levels of premium and standard storage tiers, and the recent introduction of availability zones.

Controlling the cloud

A business’s decision making over where data will be backed up is important as well. Looking at monitored data at the Infrastructure-as-a-Service (IaaS) layer alone, neither performance nor efficiency can be guaranteed.

Take a simple Java Virtual Machine (JVM) as an example. Memory monitored at the IaaS layer will always show the JVM using 100% of its allocated heap, but is the application actually utilising it? Is it collecting garbage every minute or once a day? The heap itself should be adjusted based on that, to make sure the application gets the resources it needs, when it needs them.
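The gap between what the IaaS layer sees and what the application actually uses is visible from inside the JVM itself. The snippet below uses the standard java.lang.management API to compare used, committed and maximum heap and to report garbage collection activity; how to translate the numbers into heap sizing is, of course, application-specific:

```java
// Compare the heap the JVM has been allocated with the heap the application actually uses,
// and report how often garbage collection is running, using standard JMX beans.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapVisibility {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

        long usedMiB      = heap.getUsed() / (1024 * 1024);      // what the application is using
        long committedMiB = heap.getCommitted() / (1024 * 1024); // what the OS/IaaS layer sees as taken
        long maxMiB       = heap.getMax() < 0
                ? -1                                             // -1 means no explicit ceiling is set
                : heap.getMax() / (1024 * 1024);                 // otherwise the -Xmx ceiling

        System.out.printf("Heap used: %d MiB, committed: %d MiB, max: %d MiB%n",
                usedMiB, committedMiB, maxMiB);

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```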

CPU is no better. If the IaaS layer reports an application consuming 95% of a single CPU core, most would argue it needs to be moved to a two-core instance type. Only by looking into the application layer can you understand how the application is actually using that CPU.

If a single thread is responsible for the bulk of the resource consumption, adding another core won’t help; moving to an instance family with stronger single-core CPU performance would be a better solution.
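One way to check this from inside the application is to ask the JVM for per-thread CPU time: if a single thread accounts for most of it, extra cores will largely sit idle. A minimal sketch using the standard ThreadMXBean:

```java
// Report per-thread CPU time to see whether one thread dominates consumption.
// Uses only the standard java.lang.management API.
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuProfile {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time not supported on this JVM");
            return;
        }
        threads.setThreadCpuTimeEnabled(true);

        long[] ids = threads.getAllThreadIds();
        long[] cpuNanos = new long[ids.length];
        long totalNanos = 0;
        for (int i = 0; i < ids.length; i++) {
            cpuNanos[i] = Math.max(0, threads.getThreadCpuTime(ids[i]));
            totalNanos += cpuNanos[i];
        }

        for (int i = 0; i < ids.length; i++) {
            ThreadInfo info = threads.getThreadInfo(ids[i]);
            if (info != null && totalNanos > 0 && cpuNanos[i] > 0) {
                System.out.printf("%-30s %5.1f%% of observed CPU time%n",
                        info.getThreadName(), 100.0 * cpuNanos[i] / totalNanos);
            }
        }
    }
}
```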

To sum up, ensuring application performance while maintaining efficiency is more difficult than ever. To truly pay for what you use, you must match supply and demand across multiple resources, from the application layer down to the IaaS layer, in real time.

 

Sourced by Mor Cohen, senior product manager of Cloud at Turbonomic
