The science of splitting up IT projects

The IT management community needs no reminder that large IT projects often fail spectacularly.

Failed government IT projects – from the Child Support Agency’s disastrous and much-delayed case management upgrade to the NHS’s abandoned National Programme for IT – gain the most attention, but mammoth IT failures afflict the private sector too.

There are many theories on why large IT projects fail: the technical complexity of large systems; the employee churn that inevitably occurs in lengthy projects; the simple fact that an unforeseen catastrophe is more likely to occur during a longer period of time. All point to a seemingly self-evident conclusion – that splitting large projects into smaller chunks reduces the risk of calamitous failure.

This is easier said than done, of course. Dividing a single functioning system into smaller subsystems can, if done inexpertly, increase technical complexity and make the overall project harder to manage.

Furthermore, sticking to small, bite-size IT projects can prevent an organisation from effecting substantial business change.

A critical IT management capability, therefore, is to be able to split IT projects into manageable chunks without increasing complexity, and without sacrificing the ability to innovate through significant, IT-led change.

Reducing interdependence

If simply dividing IT systems into smaller, manageable subsystems guaranteed success, then IT projects based on service-oriented architecture (SOA) – in which application functionality is split into discrete web services – should have a 100% success rate.

However, as consultant and IT complexity expert Roger Sessions found when working on SOA projects, this was far from the case.

“It seemed like SOA should just work, because you’re splitting the system down into smaller and smaller pieces,” he says. “So I became interested in why so many SOA projects were failing.”

Sessions came to the realisation that, while the individual components of an SOA might be simple, the interdependencies between services were often highly complex.

These interdependencies would manifest themselves as floods of messages between services that degraded system performance, or as conflicts over data shared by two or more services.

Organisations that have tried to split large IT projects into smaller components have failed, Sessions argues, because they have neglected the complexity of interdependencies between subsystems.

He points to the FBI’s Sentinel intelligence management system, which was criticised by an official audit in 2010 for being two years behind schedule and $100 million over budget.

“They split the project into a number of different segments, with many iterations of each one,” Sessions says. “But they had no concept of the overall complexity they were creating by splitting it up.”

Unfortunately, predicting the interdependencies between subsystems is a ‘chicken and egg’ problem, says Sessions. “You need to figure out how you’re going to split the project up before it has even begun, but you won’t know what the interdependencies will be until much further into the project.”

Seeking synergy

Sessions has developed a methodology to address this problem. It proposes that the key to minimising interdependencies between subsystems is to identify functions that have ‘synergy’.

“Two functions are synergistic if, from the business’s perspective, one is not useful without the other,” he says. “And if two functions are synergistic, they are highly likely to be interdependent. Therefore, identifying synergistic functions allows you to predict where the interdependencies will lie.

“You can identify synergistic functions before you know how the technical architecture is going to work, which means you can predict interdependencies very early in the design phase.”

Ideally, he says, synergistic functions should be delivered by the same subsystem, minimising the interdependencies between subsystems.
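To make the idea concrete, the sketch below is a minimal, hypothetical illustration of synergy-based grouping – it is not Sessions’ own tooling or methodology. Functions the business judges synergistic are clustered together, and each resulting cluster becomes a candidate subsystem. The function names and synergy pairs are invented for the example.

```python
# Illustrative sketch only: group business functions into candidate subsystems
# by clustering pairs judged "synergistic" (neither is useful without the other).
# Function names and synergy pairs are hypothetical examples.
from collections import defaultdict


def partition_by_synergy(functions, synergy_pairs):
    """Union-find clustering: synergistic functions end up in the same subsystem."""
    parent = {f: f for f in functions}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]  # path halving keeps lookups fast
            f = parent[f]
        return f

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b

    for a, b in synergy_pairs:
        union(a, b)

    subsystems = defaultdict(list)
    for f in functions:
        subsystems[find(f)].append(f)
    return list(subsystems.values())


functions = ["take order", "check stock", "bill customer", "issue refund",
             "pick items", "ship items"]
synergy_pairs = [("take order", "check stock"),      # an order is useless without stock
                 ("bill customer", "issue refund"),  # refunds only make sense with billing
                 ("pick items", "ship items")]       # picking exists only to ship

for subsystem in partition_by_synergy(functions, synergy_pairs):
    print(subsystem)
# ['take order', 'check stock'], ['bill customer', 'issue refund'], ['pick items', 'ship items']
```

Any relations still crossing cluster boundaries after this grouping become the candidates for the small set of essential, explicitly defined interdependencies Sessions describes next.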

Of course, there must be some interdependencies or the subsystems will be siloed. But the key to reducing complexity – and therefore project risk – is to keep them to a minimum, and to define essential interdependencies as clearly as possible.

“In most systems, I would estimate, 5% of the dependencies are essential and 95% of them are due to bad design, and those are the ones that are causing IT projects to fail,” Sessions says.

He has recently started teaching a graduate course in IT complexity analytics at Colombia’s University of the Andes. He believes that understanding the complexity of IT systems is crucial if the notorious rates of IT project failure are to improve.

“Every architectural decision we make should be seen through the lens of system complexity,” he says. “Unfortunately, most organisations don’t understand complexity in terms of the number of functions and interdependencies within a system. And until they do, they’re going to continue wasting hundreds of millions of dollars on failed IT projects.”

Technical complexity is not the only source of risk in a large IT project. According to Alexander Budzier, a PhD student at the BT Centre for Major Programme Management at Oxford University’s Saïd Business School, the ‘social complexity’ of an IT project is a more powerful predictor of IT failure than technical complexity.

“Social complexity includes things like user resistance, stakeholder turnover, team turnover and, with outsourced projects, knowledge transfer between the parties,” he explains.

Budzier and Professor Bent Flyvbjerg, founder and chair of the Centre, have studied a corpus of over 5,000 IT projects to identify the factors that contribute to project failure.

Their data confirms that long-lasting IT projects are risky. “Our data shows that after 30 months, projects face high variability [of success].”

That is not to say IT projects are more prone to cost or schedule overruns than any other type of project, on average.

The problem is that they are peculiarly vulnerable to ‘black swans’ – unpredictable, ‘random’ events that have a catastrophic impact on project success.

“Originally, everybody thought IT was just difficult, that you’d have higher average cost overruns,” explains Flyvbjerg.

“But we actually found that this is not the case. It’s not the average that is the problem, it’s the outliers.”

Budzier and Flyvbjerg’s research has found that one in five IT projects will suffer a black swan event – four times the rate of construction projects. “If a CIO greenlights five projects, one of them will have a huge cost overrun, going 200% over budget and 60% over schedule,” says Budzier. “But you can’t tell upfront which one it’s going to be.”
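A back-of-the-envelope illustration of why the outliers dominate: assume a portfolio of five projects with invented £10m baselines, where one suffers the 200% overrun quoted above and the rest land on budget.

```python
# Back-of-the-envelope illustration of the figures quoted above.
# The £10m baselines are invented; the 200% overrun and the one-in-five
# rate come from Budzier's quote. Even with four projects on budget,
# the single outlier drags the whole portfolio 40% over plan.

budgets = [10.0, 10.0, 10.0, 10.0, 10.0]    # £m, hypothetical baselines
actuals = [budgets[0] * 3.0] + budgets[1:]  # one black swan: 200% over budget

planned, actual = sum(budgets), sum(actuals)
print(f"Planned £{planned:.0f}m, actual £{actual:.0f}m "
      f"-> {actual / planned - 1:.0%} portfolio cost overrun")
# Planned £50m, actual £70m -> 40% portfolio cost overrun
```

A single outlier of that size swamps whatever the ‘average’ overrun happens to be, which is exactly Flyvbjerg’s point.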

There are a number of reasons for this, says Flyvbjerg. “IT is a much younger field than construction, so there isn’t as much accumulated knowledge. Also, it’s much less tangible. If you are building a bridge, you can see if something is going wrong with your own eyes. With IT projects, you might be building in the wrong direction for months before you realise.”

There are ways to reduce the risk of black swans, they have found. Project management, for example, should be focused on managing social complexity.

“If you have a large IT project that involves multiple organisations in many countries, the role of the project manager should be to focus on outside political alliances, network building and liaising with the right stakeholders in the right countries.”

Interestingly, Budzier and Flyvbjerg have found that IT projects focused on data systems, such as business intelligence and reporting, are the most prone to black swan events, a fact that Budzier attributes to their social complexity. “Stewardship of data is often very guarded within organisations, and breaking up departmental turfs is a big political issue.”

Time limits

When it comes to splitting IT projects up to reduce the risk of failure, Budzier and Flyvbjerg say the guiding principle should be to minimise the duration of component projects. “The longer you take to do a project, the bigger the window you open for a black swan event to occur,” says Flyvbjerg. “It’s more important to minimise the duration of a project than the cost or the number of function points.”

The ideal is therefore to split a long project into shorter projects that can be run in parallel. Budzier and Flyvbjerg also advise minimising functional interdependence between projects, as well as minimising their interdependence in terms of benefits realisation.

“If you split a project into five smaller projects, you don’t want the benefits of one sub-project to rely on the successful completion of all the other ones. That puts the final project at very high risk,” says Budzier. “They each should deliver value to the organisation in their own right.”
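One way to check that principle on paper is simply to map out which sub-projects’ benefits depend on which others. The sketch below is a hypothetical illustration, not a method prescribed by Budzier and Flyvbjerg; the project names and dependencies are invented.

```python
# Hypothetical illustration: flag sub-projects whose benefits cannot be
# realised until other sub-projects also complete. Project names and
# dependencies are invented examples.

benefit_dependencies = {
    "online ordering":   [],                    # pays off on its own
    "warehouse upgrade": [],                    # pays off on its own
    "loyalty scheme":    ["online ordering"],   # needs ordering live first
    "unified reporting": ["online ordering",    # only pays off once all the
                          "warehouse upgrade",  # other sub-projects are
                          "loyalty scheme"],    # delivered
}

for project, prerequisites in benefit_dependencies.items():
    if prerequisites:
        print(f"AT RISK: '{project}' delivers no benefit until "
              f"{', '.join(prerequisites)} complete")
    else:
        print(f"OK: '{project}' delivers value in its own right")
```

On Budzier’s reasoning, ‘unified reporting’ is the sub-project to worry about here: its benefits are hostage to everything else in the programme.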

Black swans are by their nature difficult to predict, but Budzier and Flyvbjerg have identified some warning signals that CIOs should look out for. Chief among these is if project managers believe that what they are working on is unique.

“If a project manager thinks their project is unique, they don’t look for previous projects to learn from,” says Flyvbjerg. “That’s very risky, and in fact projects are very rarely unique.”

In fact, learning from prior experience is the best way to analyse project risk, they argue – much better than trying to calculate the risk of failure based on the components of the project. Unfortunately, despite documenting projects in extreme detail, few organisations have an easily accessible record of prior projects and their relative success.

Integrating Agile

Large organisations are increasingly adopting Agile software development techniques, such as delivering functioning code every two weeks, testing throughout the development cycle and involving the customer in the process. Many see in Agile the solution to the IT failure rates that have dogged large IT projects in the past.

“There are two things that we can say about Agile development,” says Roger Sessions. “Firstly, it’s a great idea. Secondly, it doesn’t scale.”

Budzier agrees. “Agile does a lot of good things that reduce IT project risk: it involves the user, it involves the decision-maker, it creates tangible outcomes.

“On the other hand, from a decision-maker’s perspective, it requires a lot of trust for the business to sign a £500 million cheque before the IT department has a plan for how they are going to deliver. That’s why we don’t see Agile being used for projects worth more than £50 million.”

Both Sessions and Budzier agree that a development methodology that combined the forward visibility of traditional ‘waterfall’ projects with the flexibility and responsiveness of Agile would remove much of the risk of large IT programmes.

The key to such a technique, both agree, would be the ability to split longer projects into smaller pieces effectively. “The key complexity would be how you parallelise the Agile projects,” says Budzier. “One approach is to think of them as cells: you don’t specify the precise technical solution you are after, but what it needs to do and how it needs to interact with other systems. You can then let the developers figure out how best to achieve that through rapid iterations.”
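Read as code, that ‘cell’ idea resembles an interface contract: the programme specifies what each cell must do and how it talks to its neighbours, and each Agile team is free to decide how to build it. The sketch below is a hypothetical illustration in Python, not a technique either researcher prescribes; the cell names and methods are invented.

```python
# Hypothetical illustration of a "cell" contract: the programme defines what
# each cell must do and how it interacts, not how it is built internally.
from typing import Protocol


class StockCell(Protocol):
    """Contract for the stock-checking cell, however it is implemented."""

    def reserve(self, sku: str, quantity: int) -> bool:
        """Reserve stock; return True if the reservation succeeded."""
        ...


class OrderCell:
    """One team's implementation; another team could build it differently."""

    def __init__(self, stock: StockCell) -> None:
        self.stock = stock  # depends only on the contract, not the implementation

    def place_order(self, sku: str, quantity: int) -> str:
        if self.stock.reserve(sku, quantity):
            return "order accepted"
        return "out of stock"


class InMemoryStock:
    """A minimal stand-in that satisfies the StockCell contract."""

    def __init__(self, levels: dict[str, int]) -> None:
        self.levels = levels

    def reserve(self, sku: str, quantity: int) -> bool:
        if self.levels.get(sku, 0) >= quantity:
            self.levels[sku] -= quantity
            return True
        return False


orders = OrderCell(InMemoryStock({"SKU-1": 5}))
print(orders.place_order("SKU-1", 2))  # order accepted
print(orders.place_order("SKU-1", 9))  # out of stock
```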

This blended approach may have to be the future of enterprise IT development because, as Budzier points out, if large organisations wish to make significant changes to their IT systems, large IT projects are unavoidable.

“In one organisation we looked at, the CIO had introduced a policy of never doing projects that lasted more than a year,” he says. “But that stifled all kinds of innovation. It created such complexity within the company that rolling out change is now extremely hard and very expensive. And they have to do large projects again because they have lost their ability to innovate.”

Pete Swabey
