Risky business: the CIO war on application risk

Over the last few years, we’ve seen the devastating effects software failures can have on organisations. High-profile IT glitches at companies like RBS and NatWest have led to service outages with embarrassing repercussions.

The outage at NASDAQ alone paralysed trading in stocks worth more than $5.9 trillion and shut the market down for three hours.

Little has been done to address the underlying technical issues that cause these problems – and until that changes, we can expect many more of these glitches, with an ever-increasing impact on consumers and corporations worldwide.

Most worryingly, these problems will disproportionately affect the more developed economies, such as the UK, whose early adoption of IT has left a legacy of older, more complex systems. The finger now points squarely at CIOs to ensure these glitches are addressed effectively.

Nature of the beast

Let’s set the scene: IT systems are growing in complexity, IT staff turnover remains high, sourcing decisions are made with little regard for the long term – and often without the expertise or tacit knowledge to inform them – while the pace of competition continues to increase.

All the while, organisations are having to support a multitude of new channels – including mobile, cloud and the Internet of Things – as well as traditional applications where investment has been neglected during the recession.  

This scenario ultimately results in a complex cobweb of patched-together systems that any single group of engineers would struggle to understand as a whole. Is it any wonder many organisations are failing to keep hold of their systems?

The quality of the disparate software components that make up these systems is starting to come into focus, and the developers of those components are leading that charge. But it is the quality of the integrated system those components form that is the single biggest contributor to the glitches we’re seeing.

The reason? Most organisations are not well equipped to manage the structural quality of their IT systems – that is, how the whole system is constructed – and still rely heavily on traditional testing approaches to solve extremely complex quality problems.


The right tools for the job

Typically, CIOs approach application risk through quality testing that only addresses whether an application completes a certain function, but doesn’t broadly assess the quality of a live system.

In contrast, software risk prevention is about ensuring that an application doesn’t do what it shouldn’t do when it is up and running – a subtle but important difference. It may sound intuitive, but unfortunately it’s an expertise many organisations have not yet developed.
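To make the distinction concrete, here is a minimal sketch in Python (all names and the example flaw are hypothetical, not drawn from any particular toolset): a functional test confirms the code does what it should, while a crude structural check looks for something it shouldn’t do in production – in this case, a database query assembled by string concatenation.

```python
import ast

# --- Functional testing: does the application do what it should? ---
def transfer(ledger, src, dst, amount):
    """Move funds between two accounts (hypothetical business logic)."""
    ledger[src] -= amount
    ledger[dst] += amount
    return ledger

def test_transfer_moves_funds():
    ledger = {"a": 100, "b": 0}
    assert transfer(ledger, "a", "b", 40) == {"a": 60, "b": 40}

# --- Structural checking: does it avoid what it shouldn't do? ---
# A classic injection risk: SQL built by concatenating user input.
RISKY_SOURCE = 'cur.execute("SELECT * FROM accounts WHERE id = " + user_id)'

def has_concatenated_sql(source: str) -> bool:
    """Crude static check: flag string concatenation inside execute()."""
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and getattr(node.func, "attr", "") == "execute"
                and any(isinstance(arg, ast.BinOp) for arg in node.args)):
            return True
    return False

test_transfer_moves_funds()                # the functional test passes...
print(has_concatenated_sql(RISKY_SOURCE))  # ...yet a structural flaw remains: True
```

The point is not this particular check, but that the two views answer different questions about the same code.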

Reading the signs

CIOs need visibility into the structural quality of their most mission-critical systems. IT departments use plenty of metrics – budgets, project plans, burn-down charts, on-time percentages, defect rates and incident stats – but rarely have effective metrics for measuring the quality of their systems.

This is a fundamental blind spot for most CIOs, because it’s the structural characteristics of a system that truly affect these other metrics over time. It’s like trying to diagnose a patient using height, weight and temperature, without conducting a blood test or an MRI to find out what’s happening inside.

The first step for CIOs is to establish a means of measuring the structural quality of their systems and of quantifying the hidden cost of poorly engineered code – often called ‘technical debt’.
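As an illustrative sketch of what such a measure might look like (the thresholds and cost-per-finding here are assumptions for the example, not an industry standard), even a simple scan for oversized, deeply nested functions can put a first rough price on technical debt:

```python
import ast
from pathlib import Path

# Assumed, illustrative thresholds and costs -- not an industry standard.
MAX_LINES, MAX_DEPTH = 50, 4
HOURS_PER_FINDING = 2.0

def nesting_depth(node, depth=0):
    """Deepest level of nested control flow inside a function body."""
    kids = [nesting_depth(c, depth + 1)
            for c in ast.iter_child_nodes(node)
            if isinstance(c, (ast.If, ast.For, ast.While, ast.Try, ast.With))]
    return max(kids, default=depth)

def debt_estimate(root: str) -> float:
    """Very rough technical-debt proxy, in remediation hours."""
    findings = 0
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for fn in ast.walk(tree):
            if isinstance(fn, (ast.FunctionDef, ast.AsyncFunctionDef)):
                too_long = (fn.end_lineno - fn.lineno) > MAX_LINES
                too_deep = nesting_depth(fn) > MAX_DEPTH
                findings += too_long + too_deep
    return findings * HOURS_PER_FINDING

print(f"Estimated technical debt: {debt_estimate('src')} hours")
```

A real assessment would weigh many more characteristics, but even a crude figure like this makes the hidden cost visible and trackable.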

Winning the battle

To reduce application risk and technical debt, CIOs need to take action. They need to communicate to their teams that KPIs around software quality will be highly visible on the CIO dashboard and constantly tracked.

The senior technical team then needs to choose the software practices and flaws it wants to eradicate, and the CIO should introduce a zero-tolerance policy on those characteristics as the systems evolve.
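In practice, that zero-tolerance policy is usually enforced as a quality gate in the build pipeline. A minimal sketch (the rule names and report format here are hypothetical; in reality the findings would come from a static analysis tool) fails the build the moment a banned flaw appears:

```python
import json
import sys

# Flaws the senior technical team has chosen to eradicate (assumed rule names).
ZERO_TOLERANCE = {"sql-injection", "unclosed-resource", "empty-catch-block"}

def gate(report_path: str) -> int:
    """Fail the build if any zero-tolerance flaw appears in the analysis report."""
    with open(report_path, encoding="utf-8") as report:
        # Expected (assumed) format: a JSON list of {"rule", "file", "line"} objects.
        findings = json.load(report)
    banned = [item for item in findings if item["rule"] in ZERO_TOLERANCE]
    for item in banned:
        print(f"BLOCKED {item['rule']}: {item['file']}:{item['line']}")
    return 1 if banned else 0  # a non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "analysis.json"))
```

Because the script exits non-zero whenever a banned finding is present, any standard CI system will mark the build as failed.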


This will result in an environment where new software introductions can no longer corrode business systems, helping organisations to avoid the notorious IT glitches that have plagued them.

Winning the war

The trouble with IT failures is that there is no easy solution, simple fix or single explanation – they are systemic.

If we are to safeguard against these problems in the future, there has to be a radical change in the culture around software construction: a shift in focus away from ‘immediate functionality today at all costs’ towards ‘sound structure that enables immediate functionality’ – an investment driven by CIOs to help already overstretched IT departments regain control.

Until then, we are likely to see an ever-increasing number of software-driven failures – failures less likely to happen in isolation and more likely to have a serious impact on the business.


Sourced from Lev Lesokhin, EVP Strategy and Market Development at CAST
