Remodelling the enterprise

The basis for a consolidated IT infrastructure will be fundamentally the same whatever the specific needs of a company. Stuart Murray, solutions architect at IT infrastructure services provider Computacenter, says, “An ideal consolidated infrastructure should involve a single data centre – with a hot back-up site – consisting of a small number of highly scalable servers servicing a large application pool.”

Implementing such a set-up does not necessarily mean that companies have to make large investments in new kit – something that is frequently cited as a concern by the boards of companies considering consolidation, says Neil Meddick, enterprise services director at Computacenter.

Meddick’s experience is echoed by other consultants in the field. They all point out that, where new equipment is required, this is often balanced by savings on hardware and software elsewhere. “Often the way forward is actually a process of rationalisation and restructuring. This involves a lot of planning and auditing, but the view that organisations always need to buy more hardware and software to perform better is false,” says Meddick.

The initial consolidation audit should reveal many opportunities for redeploying and re-utilising existing resources. Rationalising is then a case of examining exactly what the business has on its IT inventory and putting the pieces of the puzzle together to create a coherent and scalable architecture – from servers and storage through to applications at every level of the business. The resulting architecture is more efficient, easier to manage and a lot more flexible.

To keep overall costs down, organisations have found significant scope for re-utilising existing resources in three areas: storage, servers and systems, and software.

Storage
Storage offers perhaps the greatest possibilities in terms of rationalisation and re-utilisation. As data volumes have spiralled over the past several years, companies have simply added more and more storage capacity.

In many cases, implementing a storage area network (SAN) has proved to be the most efficient way of managing storage resources. As a SAN is an open environment, with an independent operating system and processor-independent storage devices, any vendor’s storage or server can be added or taken away without standards or compatibility issues. This means existing storage devices can be redeployed as part of an interoperable storage network, allowing data to be freely exchanged between all devices on a centrally managed network.
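As a loose illustration of that pooling model – not any particular vendor's SAN software; the vendor names and capacities below are invented – a centrally managed, vendor-independent pool might be sketched in Python like this:

    # Illustrative sketch of a centrally managed, vendor-independent storage
    # pool. All vendor names and capacities here are invented.

    class StorageDevice:
        def __init__(self, vendor, capacity_gb):
            self.vendor = vendor
            self.capacity_gb = capacity_gb

    class StoragePool:
        """Central management point: any vendor's device can join or leave."""
        def __init__(self):
            self.devices = []

        def add(self, device):
            self.devices.append(device)

        def remove(self, device):
            self.devices.remove(device)

        def total_capacity(self):
            return sum(d.capacity_gb for d in self.devices)

    pool = StoragePool()
    pool.add(StorageDevice("VendorA", 500))   # existing device, redeployed
    pool.add(StorageDevice("VendorB", 1200))  # newer device, different vendor
    print(f"Pooled capacity: {pool.total_capacity()} GB")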

US retail giant Sears, Roebuck & Co is just one company currently working towards implementing a very large SAN, which it will use to link data residing on various Unix and Windows NT servers. The SAN will support applications in areas such as human resources and enterprise resource planning.

Entertainment Partners, a company that provides production services to the entertainment industry, also found that adding more storage to its systems was simply creating capacity and manageability problems. The company has now implemented a SAN based on Sun Microsystems’ storage devices, allowing it to attach multiple existing servers to the system. As a result, Entertainment Partners says it has reduced processing time by nearly 80%, taking a two-day process down to 10 hours.

Servers and systems
Once storage systems have been consolidated, the process of reducing servers becomes much easier. Given that a typical server runs at only 17% to 22% utilisation, there is considerable scope for making more effective use of existing machines.

The first step, in most cases, is to standardise all servers so that they are running the same versions of each software component – operating systems, applications and system management tools. Server loads should then be examined throughout the day to see which servers are running which applications and when peak loads are occurring.

Once that is done, it may become clear that spare capacity can be freed up by putting certain applications on the same server – for example, it may be possible for an email server whose peak load generally occurs at the start of the working day to share a machine with an application whose load is greater during the evening. Clearly, there is scope to reduce licence fees by consolidating duplicated applications onto one machine.
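As a rough sketch of that analysis – all utilisation figures below are invented; real ones would come from monitoring tools – the following Python example flags application pairs whose combined hourly load never exceeds a single server's capacity:

    # A sketch of the load-profile analysis described above. The hourly CPU
    # figures are invented; in practice they would come from monitoring.
    from itertools import combinations

    CAPACITY = 100  # percent of one server

    # Hypothetical 24-hour utilisation profiles, one figure per hour.
    profiles = {
        "email":    [80 if 8 <= h <= 10 else 15 for h in range(24)],   # morning peak
        "batch":    [70 if 18 <= h <= 23 else 10 for h in range(24)],  # evening peak
        "intranet": [60 if 9 <= h <= 17 else 20 for h in range(24)],   # office hours
    }

    def can_share(a, b):
        """True if the two profiles never jointly exceed one server."""
        return all(x + y <= CAPACITY for x, y in zip(profiles[a], profiles[b]))

    for a, b in combinations(profiles, 2):
        if can_share(a, b):
            print(f"'{a}' and '{b}' could share a single server")

Here the morning-peaking email server pairs cleanly with the evening batch workload, exactly the kind of complementary match described above.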

Consolidation can create technical issues, however. One common problem is processor hogging, where one application’s demands on system resources degrade the performance of other applications running on the same server.

Such problems can, of course, be overcome with the right systems management applications. “Even if an application asks for more resources, it will be ‘throttled out’, leaving resources free for the other applications,” says Chris Franklin, enterprise class marketing manager at Hewlett-Packard, provider of the OpenView systems management suite.
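Suites such as OpenView implement resource control at a far deeper level than any short example can show, but the throttling principle itself can be sketched in a few lines of Python. This illustration uses the third-party psutil library, and the 40% limit is an invented policy value:

    # Purely an illustration of the throttling principle, not how OpenView
    # works internally: watch per-process CPU use and lower the Unix
    # scheduling priority of anything exceeding its share.
    import psutil

    CPU_SHARE_LIMIT = 40.0  # percent; assumed policy value

    def throttle_hogs():
        for proc in psutil.process_iter(["name"]):
            try:
                usage = proc.cpu_percent(interval=0.5)  # sample this process
                if usage > CPU_SHARE_LIMIT:
                    proc.nice(10)  # raise nice value = lower priority (Unix)
                    print(f"Throttled {proc.info['name']} at {usage:.0f}% CPU")
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue  # process exited, or we lack rights; skip it

    if __name__ == "__main__":
        throttle_hogs()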

Organisations consolidating their systems also need to assess which systems can be re-used. “The newer the asset, the more attractive retention is,” says Murray. If, however, the lease is due to expire, then it may make sense to write off the asset as part of the consolidation. “Consolidation provides a good opportunity to define the IS/IT strategy, and it makes little sense to retain assets not in the strategy,” says Murray.

Software
Getting rid of non-core assets is one thing; discovering completely unused ones is another.

Consultants advise that IT departments considering consolidation should first stop any further software purchases; second, find out what has already been bought; and third, find out what is actually being used. This usually reveals that a lot of packages have become ‘shelfware’ – either the software was never installed, or it was installed but is never really used.

Once a software audit has been undertaken, an effective enterprise-wide software policy can be put in place, reducing software licence costs and generally improving efficiency. Many systems management packages can keep an inventory of the software in use and, in some cases, track how heavily it is actually used.
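In essence, identifying shelfware is set arithmetic on the audit's three inventories. A minimal Python sketch, assuming the purchase, installation and usage data are already to hand (all package names here are invented):

    # The audit reduces to simple set arithmetic. Package names and the
    # source of the usage data are invented for the example.
    purchased = {"crm_suite", "report_writer", "cad_tool", "email_client"}
    installed = {"crm_suite", "report_writer", "email_client"}
    in_use    = {"crm_suite", "email_client"}  # e.g. from usage-tracking logs

    print("Shelfware (never installed):", sorted(purchased - installed))
    print("Shelfware (installed, unused):", sorted(installed - in_use))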

“When a company moves from an environment where there have been maybe 30 different lines of business – all of which, to a certain extent, have been doing their own thing – to an environment with a centralised IT function, you will almost certainly discover that the different lines of business have implemented different applications to do the same job. So by centralising, you can also rationalise the applications used,” says Franklin.

Consolidating software also provides an opportunity to review operating system use. Suppliers warn, however, that customers should not always scale down operating system use just for the sake of it. As Franklin says, “There’s nothing to stop a customer going solely with one operating system, say, one vendor’s particular Unix variation. But although that would certainly simplify things, it may not necessarily maximise IT performance or capabilities.”

Software consolidation should be driven by objectives. An organisation may want to rationalise primarily to gain better control of its software resources, or it may do so in order to eliminate the proliferation of expensive software licences. Whatever the reason, the end result will almost certainly be a productivity increase within the IT department and across the company as a whole: staff will be using fewer systems and fewer applications, and those will be more tightly controlled. This will also enable the IT department to support and update the software more easily.

Virtual gains
Many companies embarking on consolidation projects are now looking at moving to a utility-style model, something that is increasingly being advocated by several suppliers. Hewlett-Packard believes it has a clear lead in this respect: “We are now looking at the idea of not just pooling storage virtually, but actually pooling processor and memory resources virtually. You start to see a logical map, rather than a physical map, of the IT environment,” says Franklin.

This means that, for example, if an application needs a certain amount of processor resource, an administrator can simply allocate the necessary resources from the pool. It won’t matter whether those processors are all in one mainframe or spread across 15 single-processor servers.
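A toy Python sketch of that logical map – host names and processor counts are invented – might grant an application CPUs from the pool without regard to their physical location:

    # Illustrative sketch of the 'logical map': a pool that grants an
    # application processors without regard to which physical box they
    # sit in. Host names and CPU counts are invented.
    class ResourcePool:
        def __init__(self):
            self.free_cpus = []  # (host, cpu_index) pairs: the logical map

        def register_host(self, host, cpu_count):
            self.free_cpus += [(host, i) for i in range(cpu_count)]

        def allocate(self, app, needed):
            if needed > len(self.free_cpus):
                raise RuntimeError("not enough free processors in the pool")
            grant = self.free_cpus[:needed]
            self.free_cpus = self.free_cpus[needed:]
            hosts = {host for host, _ in grant}
            print(f"{app}: {needed} CPUs granted across {len(hosts)} machine(s)")
            return grant

    pool = ResourcePool()
    pool.register_host("mainframe-1", 8)
    for n in range(15):
        pool.register_host(f"blade-{n}", 1)  # 15 single-processor servers

    pool.allocate("payroll", 10)  # spans the mainframe and two blades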

HP argues that this will liberate the IT department from historical constraints. “Historically, if a business has a problem that it wants to solve with IT, it will look for an application to solve that problem. That application will then be linked to an operating system, which in turn has always tied a company into a hardware preference – a Windows environment or a Unix environment, for example. Over the next few years, technologies such as Itanium will enable businesses to use the same physical hardware with different operating systems running on top. The same machine can be a Windows server one day and a Unix server the next,” says Franklin.

Eventually, that may go even further. Some researchers are working on ways of spreading any computing load across any machine or group of machines, both within the data centre and beyond. These systems will be able to grab resources at a machine-cycle level, according to need specified in a service-level agreement.

But such advances will take time. Meanwhile, data centre consolidation today already offers a degree of virtualisation, with considerable savings in cost and support effort.
