How CIOs can tackle the challenge of big software

 

Today's CIOs are told that hundreds of people are coming for dinner, all wanting different meals at different times of the day.

This is the challenge of big software: CIOs must run multiple applications across multiple platforms, where scalability is critical and costs must be kept down.

To address the realities of big software, companies need to think differently. Traditional enterprise applications were monolithic in nature, procured from best-of-breed providers and installed on a relatively small number of large servers.

However, modern application architectures and capacity requirements now force companies to roll out many applications, components and integration points, spread across potentially thousands of physical and virtual machines on premises or in a public cloud.

Organisations must have the right mix of products, services and tools to match the requirements of the business. Yet many IT departments are tackling these challenges with their traditional enterprise vendors, who are themselves trying to repurpose technologies designed decades ago for a fast-emerging new world.

>See also: Software verification: the first step towards safe and resilient systems

Some IT directors have turned to public cloud providers like AWS (Amazon Web Services), Microsoft Azure and Google Cloud Platform (GCP) as a way to offset much of the CAPEX (capital expenditure) of deploying the hardware and software needed to bring new services online.

They wanted to consume applications as services and shift most of the cost to OPEX (operating expenses). Initially, public cloud delivered on the CAPEX-to-OPEX promise: Moor Insights & Strategy analysts note that providers touted capital reductions of upwards of 45% in some cases. But organisations needing to deploy solutions at scale found themselves locked into a single cloud provider, exposed to fluctuating pricing models and unable to take advantage of the economies of scale that come from committing to a platform.

Furthermore, managing very large numbers of virtual machines running in the cloud the same way as physical servers or VMware virtual machines does not scale.

Forward-thinking IT directors know they must disaggregate their current data centre environments to support scale-out private or hybrid clouds.

Operations: the most expensive piece in the OpenStack puzzle

OpenStack is a way for organisations to deploy open source cloud infrastructure on commodity hardware.

Customers look at OpenStack as an opportunity to reduce the cost of application deployment whilst increasing the speed with which they can bring new application services online.

While the cost to deploy OpenStack is relatively low, the ongoing investment in maintenance, labour and operations can be high, as some OpenStack solutions are unable to automate basic tasks such as updating and upgrading the environment.

People are the expensive piece of the puzzle: a smaller but more experienced team is the key to keeping costs down.

One of the main challenges with OpenStack is determining where the year-over-year operating costs and benefits of managing the solution reach parity, not just with public cloud, but with software licensing and other critical infrastructure investments.

In working with many of the largest OpenStack deployments, it has become clear that in a typical multi-year deployment labour can make up more than 40% of the overall costs; hardware maintenance and software licence fees combined are around 20%; and hardware depreciation, networking, storage and engineering make up the remainder, according to HDS.

>See also: Software is redefining IT infrastructure

Whilst the main advantages of moving to public cloud remain the short-term reduction in cost per head and a speed of application deployment unhindered by organisational inflexibility, year-over-year public cloud expenses can exceed those of an automated, on-premises OpenStack implementation.
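To make the parity question concrete, a rough back-of-the-envelope comparison helps. The figures in the sketch below are purely illustrative placeholders, not data from this article or from HDS; the point is only to show how cumulative public cloud OPEX can overtake an upfront-plus-operations private cloud over a multi-year horizon.

```python
# Illustrative sketch only: hypothetical placeholder figures, not measured costs.
# Compares cumulative spend on public cloud (pure OPEX) with an on-premises
# OpenStack cloud (upfront CAPEX plus lower annual operations) to find the
# year in which the two curves cross.

PUBLIC_CLOUD_ANNUAL = 1_200_000      # assumed yearly public cloud bill
PRIVATE_CLOUD_UPFRONT = 1_500_000    # assumed hardware + deployment CAPEX
PRIVATE_CLOUD_ANNUAL = 700_000       # assumed yearly labour, maintenance, power


def cumulative_public(years: int) -> int:
    return PUBLIC_CLOUD_ANNUAL * years


def cumulative_private(years: int) -> int:
    return PRIVATE_CLOUD_UPFRONT + PRIVATE_CLOUD_ANNUAL * years


for year in range(1, 8):
    pub, priv = cumulative_public(year), cumulative_private(year)
    marker = "  <- private cloud now cheaper" if priv < pub else ""
    print(f"year {year}: public {pub:>10,}  private {priv:>10,}{marker}")
```

With these assumed inputs the crossover arrives around year four; automation that reduces the private cloud's annual labour cost pulls that break-even point earlier.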

OpenStack is big software and it needs a new model

Building a private cloud infrastructure with OpenStack is an example of the big software challenge.

Significant complexity exists in the design, configuration and deployment of any production-ready OpenStack private cloud project.

While the upfront costs are negligible, the true costs are in the ongoing operations; upgrading and patching of the deployment can be expensive.

Canonical's big software solutions address these challenges with a new breed of tools designed to model, deploy and operate big software. Canonical's OpenStack Autopilot enables the deployment of revenue-generating cloud services by implementing a reference cloud that is flexible whilst minimising operational overhead.

Application service components and the accompanying operations required to run them are encapsulated in code, enabling organisations to connect, integrate, deploy and operate new services automatically, without the need for consultants, integrators, additional costs or resources.
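The idea can be sketched as a small, declarative service model: describe the applications, how many units of each to run and how they relate, and let the encapsulated operations code do the wiring. The class and method names below are hypothetical and for illustration only; they do not represent Canonical's actual tooling or APIs.

```python
# A minimal, toy sketch of "operations encapsulated in code": services and
# their relations are declared as data, and deployment is driven from that
# model rather than from per-host manual steps. Names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Application:
    name: str        # e.g. "keystone", "mysql"
    charm: str       # the reusable package of operations code
    units: int = 1   # how many instances to run


@dataclass
class ServiceModel:
    applications: dict[str, Application] = field(default_factory=dict)
    relations: list[tuple[str, str]] = field(default_factory=list)

    def add(self, app: Application) -> None:
        self.applications[app.name] = app

    def relate(self, left: str, right: str) -> None:
        # A relation tells the operations code how two services integrate,
        # so connecting them needs no manual configuration on each host.
        self.relations.append((left, right))

    def deploy(self) -> None:
        # A real tool would drive provisioning here; this sketch only prints
        # the plan derived from the declared model.
        for app in self.applications.values():
            print(f"deploying {app.units} unit(s) of {app.name} from charm {app.charm}")
        for left, right in self.relations:
            print(f"integrating {left} <-> {right}")


if __name__ == "__main__":
    model = ServiceModel()
    model.add(Application("keystone", charm="keystone"))
    model.add(Application("mysql", charm="mysql", units=3))
    model.relate("keystone", "mysql")
    model.deploy()
```

Because the model, not the operator, carries the integration knowledge, scaling a service or rebuilding the environment becomes a change to the declaration rather than a new round of manual work.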

>See also: 7 things to look for in a software vendor

Companies can choose from hundreds of micro-services covering everything from cloud communications and IoT enablement to big data, security and data management tools.

Defying the laws of economics

Just like gravity, the laws of economics are very hard to defy. Workloads will naturally go where the infrastructure is most cost effective.

CIOs know they must have cloud as part of their overall IT strategy, and that OpenStack is a key driver and enabler of hybrid cloud adoption.

IT organisations that take a traditional approach will continue to struggle with service and application integration while working to keep their operational costs from rising too much.

The good news is that companies are developing software to give organisations the insight, solutions and leadership they need to engage in the big software era.

 

Sourced by Mark Baker, OpenStack product manager at Canonical

