Are we any better than our competitors at IT? Do IT organisations in other industries perform better with similar levels of investment? How can we show the business that it is getting value for money?
Isolated from the inner workings of other organisations, IT executives are often unsure about how well their operations stack up against those of their peers. And that opacity can have serious consequences. In the absence of direct comparisons, it can result in project cancellations, staff reductions and budget cuts even when there is no clear justification for them. And, ultimately, it can lead to the IT organisation, or key parts of it, being outsourced to a third party armed with hard numbers showing superior efficiency and costs.
As such, benchmarking key elements of the IT function and weighing its strengths and weaknesses against similarly sized companies – either within the same industry or elsewhere – has become an increasingly vital tool for senior IT management, especially as IT budgets come under intense scrutiny.
Regular users of benchmarking services argue that they achieve a much better understanding of the quality of their service delivery and where it might be improved – whether measuring applications development performance, data centre efficiency, network infrastructure quality of service or some other key IT component.
But getting hold of that sensitive information on competitors’ IT in order to make meaningful comparisons is not something that can be done alone.
Management consultants and auditors might seem the obvious source of such information, but not only have few built the models and back-end databases to support benchmarking, they are not always the most enthusiastic about sharing client information.
McKinsey and a few industry-focused consultants, such as The Hackett Group and Greenwich Associates, do perform this type of cross-company analysis at the business level, but the bulk of IT benchmarking is the domain of sector analysts (notably Gartner and Forrester Research) and specialist benchmarking companies (Compass, Metri, TPI and a few others).
Alongside them, and often working with them, are IT services companies such as Accenture, Fujitsu Services, Atos Origin and Capita, which provide assessment benchmarks more than comparison benchmarks, often tapping into the specialists’ knowledge bases so that they know the going rates for their own outsourcing services.
What they all have in common is the promise of telling organisations whether they lead or trail their peers in key areas. And sometimes that amounts to a wake-up call.
At the turn of the last decade, Deutsche Bank’s IT department was operating at the extremes of inefficiency. “It was a mess,” says Rich Murphy, former CFO for IT at the bank. It took years of concerted effort, he says, but the bank eventually reached a point where it was indisputably one of the most efficient. “How did we know that? All the benchmarking we did. We knew where we were, we knew where we were coming from, and we knew where we had to get to.”
For seven years, he was able to assess Deutsche Bank’s IT costs against its financial services peers. “We had information on the IT operations of all the other big banks around the world and reports that showed where we sat relative to those in all key areas of IT. It was a blind benchmark: You don’t get to see where Deutsche Bank sits relative to UBS or Citi,” says Murphy, but it did provide a starting point for radical improvement and a gauge for progress thereafter.
That is not an uncommon response; a bad set of benchmark results often acts as a stimulus for continuous improvement.
“Benchmarking identifies pockets of great strength and pockets of difficulty. The real key is not where you sit in absolute terms, but what progress you are making,” says Murphy, now executive in residence at project software company Planview.
While typically blind within a given industry sector, such reports do reveal the state of IT in different parts of larger organisations. In the case of Deutsche Bank, regular benchmarking showed comparisons between its different IT functions – investment banking, retail banking, asset management and so on.
One division might be investing, say, 7% of its revenue in IT, while another is investing 12%. “It allows you to ask questions like, ‘Why can unit A do it with 7% yet B needs 12%?’,” says Murphy. “There might be good reasons, but it allows you to ask the question.
“There is nothing more powerful than sitting in a meeting with three [divisional] CIOs putting up their benchmark data, one with his head proudly up knowing he is the best of them. Internal benchmarking is a wonderful management tool,” says Murphy. It can be used to cross-reference IT effectiveness across business units, subsidiaries, geographical areas, data centres, and more, he points out.
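The divisional comparison Murphy describes boils down to a simple ratio – IT spend as a share of revenue. A minimal sketch, using invented figures purely for illustration (these are not Deutsche Bank numbers):

```python
# Hypothetical divisional figures (in $m) - invented for illustration,
# not real Deutsche Bank data. The internal benchmark is simply
# IT spend expressed as a percentage of each division's revenue.
divisions = {
    "investment banking": {"revenue": 9_000, "it_spend": 630},
    "retail banking":     {"revenue": 5_000, "it_spend": 600},
    "asset management":   {"revenue": 2_000, "it_spend": 150},
}

for name, d in divisions.items():
    ratio = d["it_spend"] / d["revenue"] * 100
    print(f"{name}: {ratio:.1f}% of revenue spent on IT")
```

A gap between, say, a division at 7% and one at 12% does not prove inefficiency – as Murphy notes, there may be good reasons – but it tells management where to ask questions.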
But benchmarking IT departments has not always been regarded so positively. In the 1990s, the initial use of extensive benchmarking was predominantly in outsourcing situations, says Paul Michaels, director of consulting at Metri in the UK. “The client was using it to beat up the service provider in order to get a cost reduction. That history of being a stick to beat people up with means benchmarking is a bit tainted.”
Art of benchmarks
Getting to the necessary level of benchmarking granularity takes a mix of specialist skills and data: gathering and analysing qualitative information from interviews and systems analysis and, as a robust basis for comparison, building a quantitative resource – a database of accumulated case study material.
“The challenge is to deconstruct complex operations into a series of metrics,” says Nigel Hughes, market and service director at Compass, the largest IT benchmarking specialist, which draws on a database that holds the results of almost 10,000 client engagements.
Its rival Metri, which focuses largely on IT infrastructure benchmarking and outsourcing contracts, uses a methodology called component-based service measurement. This breaks IT down into its parts – helpdesk, transaction processing and so on – to determine how those components work together when delivering a service, and what quality levels the organisation is trying to deliver with that set of services.
“Once you understand that, then you can compare it to others who are delivering to the same level of services,” says Michaels. “So in the case of something like a helpdesk run out of Malaysia, you will want to know about the quality of service you are getting, what kind of response rates and volumes are being handled.
“That then makes it easy to determine the price you should be paying for that level of service.”
Metri makes that determination across five parameters – cost and price of the service; the complexity of the environment; the productivity of the people and their skill sets; the maturity of the processes in place; and the quality of what is being delivered and its appropriateness based on service levels, customer satisfaction and the like.
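A rough sketch of how a service might be scored across those five parameters and compared against a peer group. The 0–10 scales, the equal weighting and every figure below are assumptions made for illustration; Metri's actual model is proprietary and not described in detail here:

```python
# Illustrative only: the five parameters come from the article, but the
# 0-10 scales, equal weights and sample scores are invented assumptions.
PARAMETERS = ["cost", "complexity", "productivity", "process_maturity", "quality"]

def overall_score(scores: dict) -> float:
    """Equal-weight average of per-parameter scores on a 0-10 scale."""
    return sum(scores[p] for p in PARAMETERS) / len(PARAMETERS)

# A hypothetical offshore helpdesk service versus a peer-group median.
helpdesk = {"cost": 6.0, "complexity": 5.0, "productivity": 7.0,
            "process_maturity": 4.0, "quality": 8.0}
peer_median = {"cost": 7.0, "complexity": 5.0, "productivity": 6.0,
               "process_maturity": 6.0, "quality": 7.0}

print(f"overall: {overall_score(helpdesk):.1f} vs peer median {overall_score(peer_median):.1f}")
for p in PARAMETERS:
    gap = helpdesk[p] - peer_median[p]
    flag = "lag" if gap < 0 else ("lead" if gap > 0 else "par")
    print(f"  {p}: {flag} ({gap:+.1f})")
```

The per-parameter gaps, rather than the single overall number, are what point to the "pockets of strength and pockets of difficulty" the benchmarkers describe.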
As that might suggest, firms should not only look within their own industry for comparisons. “People shouldn’t always want to do apples-to-apples comparisons,” says Compass’s Hughes.
Best practice can come from any industry. Indeed, by restricting benchmark comparisons to immediate peers, IT organisations often miss the real competition. If a bank’s IT organisation sees a fellow bank as its peer, it might be blind to the fact that the real competitors for its services are IT services companies – such as IBM, HP or an Indian outsourcer. “Those are who you must demonstrate you are doing a good job against,” says Michaels.
Indeed, companies such as IT services provider Cognizant use their own benchmarking techniques – for example, Return on Outsourcing, which draws on Forrester Research intellectual property – to identify the long-term benefits of outsourcing, measuring aspects such as the client’s technology complexity, the criticality of applications to the business, the data security in place and critical source dependencies.
Pressure to outsource is just one of many triggers for engaging benchmarking services. In current economic conditions, it typically revolves around a desire to gauge IT costs and see where costs can be taken out of the business.
Indeed, most benchmarking exercises start when organisations feel their IT is underperforming – often brought on by the arrival of a new CIO, a merger or some other catalyst.
Further reasons for benchmarking are defensive, with the data allowing the IT organisation to counter any perception – within the company or among customers – that it delivers poor value for money. Indeed, benchmarking is increasingly being used to support the internal IT market. “We are doing a lot of work around service catalogues,” says Michaels, “with organisations asking ‘What is it we offer, how do we offer it and how do we charge for it?’.”
But the more progressive companies are not on the defensive; they see benchmarking as a tool to use on a regular basis to help them improve and demonstrate the quality of their IT delivery. “On average, clients make a 17% saving on improvements in processes and costs as a result of their benchmarking,” says Hughes of Compass.
One of its clients, Lancashire County Council, for example, decided to use benchmarking as a solution for long-term performance management of ICT. “The benchmarking exercise was the first phase of a change process and provided managers with clear measures on current service delivery against a peer group of comparable public and private sector organisations. This external perspective and the calibration of the council’s performance against best practice was particularly important in the initial benchmarking exercise in showing what was possible to achieve and setting targets for the future,” says the council. Over time, it analysed application development, network infrastructure, servers and the data centre and desktop infrastructure.
But increasingly, benchmarking initiatives are not just being driven from inside the IT department – especially when IT is engaged in business services management. At one large oil company, for instance, the HR department brought in a benchmarking specialist to examine what it regarded as the excessive cost of its IT services.
That level of scrutiny means IT has to treat benchmarking as an ongoing discipline.
Michaels uses the analogy of an Olympic athlete. Organisations often have internal change programmes, technology implementations, reorganisations, cost containment exercises and so on – the equivalent, he says, of the athlete’s training programme.
“But the benchmark is like the race. Only when you compare yourself with the others in your field do you really know if your training is working and you are keeping pace with the competition.”